Business and Society with Michigan Ross

#108 - The Rise of AI

Episode Summary

Our panel discusses artificial intelligence—AI for short—and what its rise means for our personal and business lives now and in the future. In this episode, two University of Michigan professors—Nigel Melville from the Ross School of Business and Shobita Parthasarathy from the Ford School of Public Policy—look at what is bad and at least potentially good about AI. They discuss what they would do about it if appointed government science and technology policy czars, including the prospect of a moratorium, regulations or risk assessments. Most importantly, both say we need to be less in awe of the technological advancements and more focused on the societal risks and benefits.

Episode Notes

AI discussion: 01:05-32:52

More information about some of the topics discussed on today’s episode:

Study: What's in the Chatterbox? by Johanna Okerlund, Evan Klasky, Aditya Middha, Sujin Kim, Hannah Rosenfeld, Molly Kleinman and Shobita Parthasarathy.

Study: Putting humans back in the loop: An affordance conceptualization of the 4th industrial revolution by Nigel Melville, Lionel Robert and Xiao Xiao.

And to learn more about other work being done by Michigan Ross faculty, visit our website.

Have thoughts about topics we should cover or just want to get in touch? Send us an email at baspodcast@umich.edu.

Episode Transcription

[music]

0:00:12.5 Jeff Karoub: Hello and welcome to Business and Society with Michigan Ross. My name is Jeff Karoub, and I'm coming to you from the Ross School of Business at the University of Michigan. On this podcast, we consider some of the ways the business world interacts with our broader society, for better and sometimes for worse. Today our panel of professors will talk about artificial intelligence, AI for short, and what its rise means for our personal and professional lives now and in the future. Spring is doing its thing here in Ann Arbor, so we're glad you're here. Before we get started, I'd like to encourage our listeners to subscribe, rate, and review the podcast. It helps other people find us, and we'd love to hear what you think of the show. You can also reach out via email if you have a question or want to say hi. Send us a note at baspodcast@umich.edu.

0:01:05.5 JK: On today's episode, we're discussing AI, which for good or bad has implications for us all. As the New York Times' Ezra Klein put it, "We are at the dawn of a new technological era, but it's easy to see how it could turn dark and quickly." We're joined by two professors who will provide some insights, Nigel Melville from the Ross School and Shobita Parthasarathy from the Ford School of Public Policy. I'll ask them to introduce themselves. Shobita? 

0:01:32.8 Shobita Parthasarathy: Hi, Jeff. I'm Shobita Parthasarathy. I'm a professor at the Ford School of Public Policy. I also direct the Science, Technology and Public Policy program on campus. I study the intersections between science, technology, policy, and society. I do a lot of work on politics and policy related to innovation, but I tend to be attracted to questions related to the governance of particularly controversial emerging technologies, ones that raise particular ethical, social, equity, environmental, and health impacts.

0:02:06.7 JK: That's great. Nigel? 

0:02:08.2 Nigel Melville: Hi, Jeff. Good to be here. My name is Nigel Melville. I'm an Associate Professor of Information Systems in the Ross School of Business. I'm also the Program Director for Design Science, and I have an appointment also to the faculty in the College of Engineering. My area of expertise is the organizational and environmental implications of digital technologies like AI. Especially in the last five or six years, I've been studying the implications of AI for society, for the environment, and for organizations. So I'm super excited to be here today.

0:02:41.7 JK: This is great. Welcome to you both. Let's start with a question, Shobita, posed recently on social media. What is the first thing you would do to regulate AI if appointed a government science and technology policy czar? 

0:02:56.9 SP: Well, after celebrating that I had gotten such an appointment, I would do two things. The first is to issue a moratorium. It's something that a number of tech giants have been talking about recently, and I think that's a good idea. And then I would start to develop a risk assessment system. This is similar to something that I know the European Union has been considering. They've been talking about a tiered system where they classify AI in terms of its level of social importance as well as where the AI is being used, in terms of the kinds of impacts it might have on marginalized communities. And then they basically say, you know, if it's a low-risk technology, where maybe it doesn't have a significant social impact and isn't really going to be used significantly among marginalized communities, perhaps you wouldn't have extensive regulation; you might have transparency and disclosure requirements. And then you ramp up the level of oversight based on the importance of the technology to society and, in particular, the direct impacts it might have on already marginalized communities.

0:04:05.8 SP: So I like that kind of tiered approach. And I should say, it wouldn't be the first time that we've done something like that in the US. We've developed regulatory frameworks like that, for example, in the case of biotechnology, so it wouldn't be wholly new.
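[Editor's note: For readers who think in code, here is a minimal, hypothetical sketch of what such a tiered risk classification might look like. The tier criteria and oversight levels below are illustrative assumptions, not the EU's actual categories or any proposed regulatory text.]

```python
# Hypothetical sketch only: illustrates the *idea* of a tiered, risk-based
# classification like the one described above. The tiers, criteria, and
# oversight requirements are invented for illustration and do not reflect
# the EU AI Act or any actual regulation.

from dataclasses import dataclass


@dataclass
class AISystem:
    name: str
    social_importance: str       # assumed scale: "low", "medium", or "high"
    affects_marginalized: bool   # direct impact on already marginalized communities?


def required_oversight(system: AISystem) -> str:
    """Map social importance and impact on marginalized communities to an oversight tier."""
    if system.social_importance == "high" or system.affects_marginalized:
        return "extensive oversight: pre-deployment risk assessment, audits, ongoing review"
    if system.social_importance == "medium":
        return "moderate oversight: transparency and disclosure requirements"
    return "minimal oversight: basic disclosure"


# Example usage with made-up systems:
print(required_oversight(AISystem("email spam filter", "low", False)))
print(required_oversight(AISystem("pre-trial risk scoring", "high", True)))
```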

0:04:20.1 JK: Great. Nigel. So now you're czar. She was let go.

0:04:21.8 NM: Sure.

0:04:23.5 SP: You took my job.

0:04:25.2 JK: You take it over.

0:04:26.4 SP: It's okay.

0:04:27.3 NM: So I would think about the question in a slightly different way. Instead of thinking first about regulation, I would think about how we get to regulation that makes sense. Being the program director for design science, of course, I would think of design and of how we design regulations. And I would think about design principles, three design principles in particular. One is protecting vulnerable populations first and foremost. The second is holding those responsible for harms and damage liable for that damage and harm. And the third is transparency. So vulnerability, liability, and transparency. Regulations, I think, could be built based on a set of principles, and that might be the set.

0:05:14.3 JK: Great. Well, I'll play devil's advocate. What have you observed or seen that's good about AI? Or put another way, if you had your way as czar in terms of regulation, what positive developments could result? 

0:05:31.6 SP: Yeah. I guess I would think about a couple of things. Specifically when it comes to generative AI, like ChatGPT or GPT-4, one of the things that you're already perhaps starting to see, and I think you could definitely see more of, and it's something that we've talked about in our research, is that there's a way in which it continues to democratize expertise. One thing that I think the internet has done is really allow people to access information in all sorts of ways. Clearly that's been a benefit. And I think that these kinds of new generative AI technologies have that potential as well, to digest information and to present it to the user in a way that's potentially comprehensible in all sorts of technical areas. And I think that's empowering. It's important. It's wonderful. The second thing that I think is potentially useful, though I would have to say that despite the promises we haven't really gotten there, is potential standardization in decision making.

0:06:34.9 SP: So the hope of AI is that it takes human bias out of the equation and ensures standardization across a range of really important decisions, whether that's in healthcare or social services or criminal justice. The problem is that in reality, it's really just baking in the biases that already exist and making them difficult for us to root out. So in the promise, in the abstract, that standardization is great. But we have to think far more carefully about how to actually achieve it than we're doing right now.

0:07:13.1 JK: Right. Baking in biases is not an ideal place to be.

[chuckle]

0:07:15.9 SP: No.

0:07:18.4 JK: Nigel, what about you? So maybe some of the good, or some of the theoretically, potentially good, aspects of AI.

0:07:22.3 NM: Sure, Jeff. I would look at two sectors in particular that I think hold great promise for AI, generative AI in particular. One would be pharmaceutical and drug development. For example, coming up with new treatments for Alzheimer's disease. I mean, these are monumental problems and challenges facing millions of people throughout the world. AI can, and is now starting to, address those problems, working with scientists, so this is a sort of collaborative enterprise. The second one is the immense opportunity to help educators, K-12 and beyond, remake the basic idea of learning. How is learning done, how should it be done, and to what end? So for example, I think it's a great opportunity to help our students focus first and foremost on critical thinking skills: how to gather data, how to evaluate information for its veracity, and then how to make proper judgments based on that data. So I would say those two areas are two examples.

0:08:27.3 JK: Good. That's good to know. Now, let's talk about your respective research. Nigel, you and your colleagues have found much greater emphasis being placed on the technological benefits rather than the social implications. And as you said recently, we're talking about AI the wrong way. What jumped out at you from your research that you might want to share? 

0:08:48.7 NM: Sure. A couple of things. We started doing this work about five or six years ago, before the big ChatGPT flurry. And so in a way, we had some years to really think carefully about these ideas. One of the things we found is that, at least in a lot of academic disciplines, the focus really does tend to be on technology as this magical black box, like wizardry that can have these incredible benefits for society and business especially; that was really the piece there. So the technology and then the business benefits seem to be the focus, at least in journals in fields like, how would you call it, business and management, things like that, right? And so that really jumped out at us, because there are many, many more types of impacts: societal, environmental, and so forth. So that was one. Probably the second one was this general sense of what I call inevitable-ism. In other words, there is a future dominated by powerful, all-knowing AI, and we have very little to say about it as a society; this is inevitable. But there are infinite futures, and we all have a role to play in creating, through our actions, whatever future we want. So one of the things that I've been discussing with many groups as a result of this research, here in the US and abroad, is the idea that we have in our hands the responsibility, the agency, to create the future we want with AI, but it's going to take some work to do that.

0:10:29.4 JK: Great. Shobita, as director of the Ford School's Science, Technology and Public Policy program, STPP as it's known, you and your team have looked specifically at large language models such as ChatGPT. And you found they're likely to reinforce inequality and social fragmentation, as you alluded to earlier, remake labor and expertise, and accelerate environmental injustice. What else would you like to add or amplify from your research? 

0:10:57.7 SP: I think our findings in many ways resonate with what Nigel was just talking about. Our analysis used what I call an analogical case study methodology. What we do there is look at previous technologies that are similar in some way to the emerging technology, either in terms of their function, so in the case of large language models, that they aggregate data and engage in some sort of prediction, or in terms of their projected implications, in terms of accelerating environmental injustices, for example. And what we find, and I think this is really important, is that this new technology is not the sea change that we think it is. We tend to think that because it's a technology, a new technology that is also, of course, artificially intelligent, it's wholly different than the technologies that we've managed before. But if we think about it as a social phenomenon rather than a technical phenomenon, then we can actually think in much more nuanced and critical ways about the potential implications that might result, because we can look at the history of how we've managed similar technologies and then predict what might happen, anticipate what might happen, which would allow us to achieve the kinds of alternative futures that Nigel was talking about or the kinds of regulatory options that we've both been talking about.

0:12:29.9 SP: So I think that emphasis on not being so stunned and in awe of the technology, but actually saying, you know what, we have social tools to deal with this, we have a history that we can use to inform our actions, is important. That, I think, is the first thing I would say. The second thing I would say is that it's quiet. And perhaps that follows from my first point: these technologies, AI in general, large language models in particular, are both quietly shaping our world and quiet in their implications. So there's been a lot of attention, for example, to the way it's going to require us to build all these data centers that are going to really tax the environment. There's been a lot of attention, for example, to the fact that the early large language models were spewing out all kinds of hateful speech. And, you know, at least the latter has been dealt with to a degree. But one of the things that we found is that you have to think about this technology not, again, as something that comes from on high, but as a technology that's built by people, in societies, in context.

0:13:45.0 SP: And what that means, and this was really, frankly, the most surprising thing to me when we learned it, and now it's obvious, is that of course these are primarily English data sets built on Anglo-American literature, and therefore they assume Anglo-American ways of thinking about the world. That's hugely important at a moment when we're starting to think very differently about our relations with one another. I've seen people talking about this in various blog posts as they played with these new technologies. So even a Norwegian was like, this is weird, you know, this technology is making all kinds of assumptions about Norway, because it's not about the way Norwegians see themselves. The output is about how Americans and Brits see Norway. Another fascinating example that's come out recently, which we anticipated in our report, is that if you ask ChatGPT-4 to generate images of, say, a group of indigenous people, or a group of people in the early 20th century, they're all group pictures with smiles. Americans smile in group pictures. Most of the rest of the world doesn't, right? So those are kind of flags that the output is created in a particular context, and yet we don't understand that to be the case, right? 

0:15:17.6 SP: When we see it, we see this as an output that's produced by an objective technology, but we would do so much better to remember the context and the people that are making these technologies and frankly the biases that they have. And when I say biases, I don't necessarily just mean the ways in which they are bad, but rather just the way the social context, right? 

0:15:46.5 JK: Shortcuts, social shortcuts.

0:15:46.7 SP: Social shortcuts that if this becomes ubiquitous in our lives is going to reinforce particular cultural supremacy, frankly, at a moment that, as I was saying before, we've actually sort of been moving towards a much more egalitarian way of seeing different people.

0:16:06.2 JK: Yeah, we don't, like you said before, I love the baking in the bias. It's one thing if you're in a conversation with someone and you can push back when they talk about everyone's smiling.

0:16:14.6 SP: Exactly.

0:16:15.0 JK: You can say, well, you know, that's an American thing, but it's not something you would see elsewhere. But once it's out there for everyone to find or have pushed upon them in some, you know, ChatGPT or its variant, it becomes, this is it. This is the way.

0:16:28.0 SP: And you lose particular histories, right? You lose particular cultures that way.

0:16:33.3 JK: Right. Well, let's see what's actually being said or done on the policy side. We've talked a little bit about policy. You mentioned the EU. Italy has temporarily blocked ChatGPT over data privacy concerns, and EU lawmakers have been negotiating the passage of new rules to limit high-risk AI products. Here in the United States, President Joe Biden says AI could be dangerous, but that remains to be seen. His administration has unveiled a set of far-reaching goals aimed at averting harms caused by AI, but stopped short of any enforcement actions. What do you make of all this? We'll start with Nigel this time.

0:17:09.8 NM: Sure. I think these are positive first steps. I'm not aware of the details of these policies and I'm not a law professor, so put that into context. But I do think, from what I am aware of, that we might be looking at regulations and moratoriums in an overly narrow way. What I mean by that is, one of the findings of my research is that we need to think more broadly about what AI really is and get away from technology language. So in our paper, we reframe that and talk about machine capabilities that emulate human capabilities in two areas. One is cognition, which is decision-making and generativity, but the other is communication. And this is something that people don't think about too often, but machines are doing two kinds of communication that are like humans. One is they're working and collaborating among themselves to enable all the services, including AI. The second is they are developing relationships, a sort of relationship of a certain type, with humans. And that has really major implications. So, in my opinion, we should think more broadly about this emulation of human capabilities so that we can develop regulations that have the appropriate footprint.

0:18:42.0 NM: I'll give you one specific example of this. Recently, a popular social media platform for sharing photos introduced ChatGPT as one of the friends of everyone on the platform. They pinned it. So one day, you look at your phone and you see your friends. You wake up the next morning and you see this new friend that's pinned, and that's ChatGPT. Very little was said about this, but here's the problem, and this is where the relationship with humans comes in: it was shown by some of the founders of what's called the Center for Humane Technology, including Tristan Harris, through a real demonstration, that if you pretend that you are a teenage girl, this would give you harmful advice. This is not right. This is outrageous. It's completely unacceptable. And so that's why I say, thinking just about AI, we have to go beyond that and think about what sort of capabilities they are emulating and what we can do now. This is happening now.

0:20:01.3 JK: Yeah, that's stunning. So Shobita, we can talk a little bit about this from your perspective. I know you mentioned the EU earlier in our talk, and you mentioned a moratorium if you were czar, but I'm curious about what you're seeing. Maybe you can talk a bit more about here in the United States and the president stopping short of saying this is a danger. Or any aspect of that you want.

0:20:25.9 SP: Yeah, well, I 1000% agree with what Nigel just said. It's odd to me that the president is talking about how AI could be dangerous as though we don't know. What I think that plays into, which is something that I have found really interesting over the last six months or so as the discussions about ChatGPT have exploded, is that we're perhaps... And I think that this, in part, was a marketing ploy. ChatGPT emerges, it's released to the public, frankly, in a pretty irresponsible way, and, of course, all of the other kinds of large language models are released as consumer technologies. So there's a lot of interest and curiosity among the public, and people are engaging with it, and they are sort of framing it. It's being framed in the pages of the newspaper and, frankly, at my faculty meetings, as though it's something new that we have to contend with: "Oh no. Students are gonna cheat and they're gonna use these technologies." But the bottom line is that they're just one flare from a suite of technologies of the kind that Nigel talked about. Over the last five to 10 years, we have increasingly seen machine learning algorithms being used across social services: pre-trial risk assessment, assessing risk for children in foster care, determining medical leave, the famous controversial case in Michigan itself around the unemployment fraud scheme that led to people losing their homes as a result of a crazy algorithm.

0:22:13.5 SP: I mean, the set of places where algorithms are being used, facial recognition technology, which we've also done research on, it's pervasive, and we know the consequences and, frankly, they're not good. But there's something, to my mind, perverse about the fact that the conversation about what could be dangerous, or what we don't know, is restricted to the thing everybody's paying attention to and not the full suite of technologies. And it's in fact become mundane in those other cases. As a colleague of mine said, procurement has become policy, in the sense that you have state government administrators saying, "Hey, look, this... " They are confronted with a company offering this great algorithm that looks like it's gonna make their lives easier and the service more effective, when in fact they don't even know the questions to ask to avoid the kinds of problems that we saw in Michigan with MIDAS. And so that's, I think, a really dangerous situation that we're in, and that kind of construction of novelty, from the side of the president, is disturbing.

0:23:26.7 SP: All of that said, I will say that I know there are in fact experts within the government who are taking these things seriously. The White House Office of Science and Technology Policy has been doing some important work around AI. Last year, they released an AI Bill of Rights. The Federal Trade Commission, with Lina Khan at the helm, has been doing incredibly important work. They now have an office of technology dedicated to this. And in fact, just yesterday, the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Consumer Financial Protection Bureau all basically said, "Oh, by the way, you know we have those laws around disparate impact. Don't think that technology is somehow exempt from that." And what that means is that the folks who are purveying these technologies and using these technologies need to be asking those questions, and they're going to be held liable. Again, back to Nigel's point, they're saying, "You're not gonna be exempt from liability. Even if you unknowingly engage in discriminatory practices, we're gonna be looking for you." And so that's a step. I'm happy to see that they're taking that step. I don't think it's adequate, but I'm happy that people are at least starting to think about it. I think we need to be doing a lot more, a lot more quickly.

0:24:56.0 JK: Well, I think the next question touches on a lot of things that we've talked about, including this idea of inevitable-ism, the word you used, Nigel; we need to get away from that idea. I do worry about the whole 'putting the genie back in the bottle.' We've talked about the evolution here. Things like ChatGPT and its variants didn't come out of the blue. They were iterations or evolutions, as you alluded to, Shobita. We find it incredibly hard to draw ethical lines in the sand, especially when it's the proverbial frog in the pot of boiling water. I don't mean to justify horrible outcomes as inevitable, because as you say, we can chart our future at least to a certain degree. But how do you keep innovation free of unintended side effects? What can you do as the snowball gains velocity going down the mountainside or the hill? 

0:25:50.5 NM: Great question, Jeff. And I'm also a fan of the Socratic method. So now I'm gonna turn the tables on you. Would you drive down the beautiful Pacific Coast Highway on a sunny day, 1,000 feet above the beautiful Pacific Ocean, without a guardrail? Would that make you a little nervous? 

0:26:06.9 JK: It would make me a little nervous.

0:26:07.9 NM: Okay, alright, second question, does having a guardrail there stop you from enjoying that view and driving down the Pacific Coast Highway? 

0:26:18.8 JK: No, I think if it's designed properly, so I can see over the guardrail, but the guardrail still keeps my car from going over, it's a safety and design win.

0:26:27.2 NM: Great. We need guardrails for AI. We need to decide collectively, as a society: what are the appropriate guardrails? We've always had guardrails. I mean, here we are in the center of the automotive innovation revolution of 100-plus years ago. What happened before we had road signs and street signs? Like, what did we do? There were a lot of accidents. What happened in the early aviation days, before we had the FAA and rules about who goes where? The industry itself, in the case of aviation, came together and said, "We've got a problem here. People are scared to get on our commercial flights because they think they're gonna fall out of the sky. And it has happened. So we actually should be proposing and developing regulations for safety." That is the model for AI, in my opinion.

0:27:26.1 JK: And it could start with the AI Bill of Rights, but as Shobita says, it has to go further.

0:27:32.7 NM: Yes. And to follow up, I think it will only happen if we hold those responsible for harms and damages liable. In my opinion, that is, in large part, the only language that business leaders understand. There are good people who have ethics, yes, but by and large, business leaders are okay with guardrails, with a level playing field that is fair and applies to everyone.

0:28:05.8 JK: What would you like to add? 

0:28:06.8 SP: Well, yeah, I wanna return to this point about responsibility, which I think is really, really important. One of the challenges that I see is that tech leaders are punting to government and saying, "Oh yeah, you should regulate some things, you should do something. We are not responsible. We are just producing this technology." I don't like the language, I have to say, of unintended or unanticipated side effects, because I think at this point, no, you can't say that. You know that there are these kinds of potential risks, and you need to be doing things differently. So you have the tech folks who are like, "Oh, not my problem." And on the other hand, you have the government folks who are saying, "Oh, that's technical, it's not our problem, or at least we don't know what to do."

0:28:54.6 SP: And so here, I'm gonna take the privilege and advertise the Science, Technology and Public Policy Program and what we do, because, and I agree with what Nigel is saying, I think the language of liability is really important, but I also think education is really important here. At STPP, we have a graduate certificate program that's open to any student across the university, and we have scientists and engineers and policy students, a lot of folks from computer science and engineering and our School of Information, so a lot of folks who are developing these technologies or will develop them someday. We're teaching them about how values are embedded in design, how political context matters when you're shaping technologies, the ways in which you can anticipate the implications of technology and then build better technologies, and what kinds of policies you might develop.

0:29:47.9 SP: And my hope, and this program has been around since 2006, so I know we can get there, we've done so with our previous students, is to actually train new generations of students, that is, train budding scientists and engineers as well as budding policymakers, both of whom sit in our classrooms, to not only think about the technology and the public interest in a more sophisticated way, but also to engage with and learn from one another. So that neither group can absolve themselves of responsibility and blame the other, but rather those kinds of connections get developed early on in their careers, so that they can build on them as they go. And so I think we need programs like that everywhere. You shouldn't be able to graduate with either a degree in computer science or a degree in public policy without having those sensitivities, because you are going to be increasingly confronted with these questions in your career, and it's our job as educators, I think, to prepare students for that world.

0:30:56.1 JK: Great. Is there anything else you'd like to add or amplify to our discussion? 

0:31:02.4 NM: Yeah, I guess I would just end with a question. Here we are, sitting in a room in a great university of the world, and we have not talked about the implications, well, we have, actually. Shobita did, and I completely agree with the ethos of the program, so we talked about that. But more broadly, I would end with the question: how can we adapt, as a great university of the world, with sufficient speed to leverage both the opportunities that we have in front of us today, now, and to address the risk, the real risk, to our students, faculty, and staff? We're all in this together. So how are we gonna do that? So that's my final thought; that's how I would end, with a question.

0:31:54.1 JK: How about you? 

0:31:55.5 SP: I like Nigel's question, but I would also maybe just end with a call to nuance. None of these technologies is either evil or a panacea. As Nigel said, we have a lot of potential futures ahead of us, but we also can do something to shape those futures, and we need to be thinking about them in a sophisticated way so that we can produce the kind of future that we want.

0:32:27.6 JK: Well, this has been enlightening, and I know we could be talking for hours longer, but I think we hit a lot of... made a lot of great points here. We brought the Socratic method into it, which I think is fantastic. So I'm really grateful to you both for taking the time and giving us some great insights and asking some great questions that I hope people will take forward and try to answer.

0:32:49.6 SP: Thank you.

0:32:49.8 JK: Thank you.

0:32:50.6 NM: Thanks, Jeff.

0:32:54.5 JK: That wraps up our eighth episode of Business and Society with Michigan Ross. If you'd like to know more about the subjects we discussed today, look for the links in the episode notes, and for more news about our faculty's work, check out the new section of our website, michiganross.umich.edu. Thanks for listening, and we hope you'll come back for more. Business and Society is brought to you by the Ross School of Business and Michigan News. Thanks again to our guests today, Nigel Melville and Shobita Parthasarathy. Our audio engineer and editor is Jonah Brockman; the executive producer is Jeff Karoub, who also composed and performed the theme music, "Lost Einsteins." Special thanks as well to Bob Needham, Business and Society's creator and founding host, for his invaluable help. Until next time, this is Business and Society.

[music]