Episode 9

December 01, 2023


The Future Impact of AI on Society Panel | Live from Datapalooza

UVA Data Points

Dec 01 2023 | 00:58:09


Show Notes

Artificial intelligence has the potential to change our societies, economies, and political systems in both intentional and unintended ways. While it is difficult to understand the full extent of what the long-term impacts may be, we have enough shared knowledge and expertise to predict the likely shapes that these changes may take—both for better and for worse. More importantly, we should ask ourselves what kind of future we want AI to help us create: what we want from the future of AI should ultimately determine the future of AI. This panel will bring together experts to discuss the intersection of AI and society and offer suggestions for how AI might work within a just, inclusive, sustainable, and fair digital future. 


  • Farhana Faruqe, Assistant Professor of Data Science
  • Sarah Lebovitz, Assistant Professor of Commerce
  • Larry Medsker, Research Professor, George Washington University 
  • Mar Hicks, Associate Professor of Data Science (moderator)

Episode Transcript

My name is Mar Hicks. I'm an associate professor of data science here at UVA. I'm going to be moderating, and we have some wonderful panelists today who are going to be discussing this topic. Those panelists are Professor Farhana Faruqe, also from the UVA School of Data Science; Professor Sarah Lebovitz, from the School of Commerce here at UVA; and, joining us remotely, Professor Larry Medsker, who's a research professor at George Washington University. To give you just a tiny bit of context for our panel today: I feel like our panel is perhaps one of the most blessed and cursed, because we've all been seeing and hearing in the news all of this stuff about how generative AI has recently made giant leaps and bounds forward as a consumer product. And at the same time, we're also seeing, again and again, how problematic a lot of these purportedly revolutionary generative AI tools and models can be, whether that's due to inbuilt bias, or being trained on materials that infringe copyright, or tending to produce misinformation, or using a lot of energy. All of these things have come up. So we have a situation where we know that artificial intelligence has the potential to change our societies, our economies, and our political systems in both intentional and unintended ways. But we also have to be very proactive in this moment about figuring out what we want to happen, and we have to think long and hard about whether what we want, collectively and individually, is actually a good idea or not. So with that, I'd like to transition things over to the panelists. I'm going to start with Farhana, then we'll go to Larry online, and then over to Sarah. I'd like each of you, if you could, to introduce yourselves and your work, and then say a few words for a few minutes
on how the recent surge in interest regarding, quote unquote, generative AI has started intersecting with your work, how it may have changed your thinking about what the future of AI in society might look like, and potentially what you hope it will look like. So with that, would you like to start us off, Farhana?

Hello, everyone. I'm very excited to be here today, and welcome to the session. I'm Farhana Faruqe, assistant professor at the School of Data Science. Before academia, I worked in technology for about 12 years. Academia is now my full-time job, and I'm loving it because I get the opportunity to work on my research. My research focuses on human-centered design and on AI acceptance and responsibility, so you can see the connection with responsible AI and the topic we're going to discuss. Before I go deeper into those directions, I would definitely say there are changes coming, and they will continue. In terms of general AI, we are still in the phase of narrow AI, I would say, but think of generative AI: there is a big hype around it, especially ChatGPT. As an educator, I can definitely see the blessings and opportunities; at the same time, it's very challenging to cope with all the changes. First of all, as a researcher, I can see that these AI tools can help with brainstorming, the way you generate ideas with your colleagues; with generative AI you can go back and forth, and that's helpful, which makes it a good tool. From a student's perspective, it could be a great learning tool if it is used properly. I'll talk about the challenges, and what needs to be done on that part. So I am definitely for generative AI, because as a data scientist in my past job, I know how much effort it takes to build an app or a system.
But we need to be careful not just to build an AI system that works perfectly fine, but to think about the ethical aspects. That's the challenging part I'm going to dive into now, being an editorial board member of the journal AI and Ethics. We see a lot of research in this area on how to make AI not just AI, but responsible AI. And to do that, we need to think through everything from design to deployment, not just build a machine learning model, deploy it, and leave it at that. That model is going to interact with society: people are going to interact with it, and if there is bias and a feedback loop, the machine can learn it from there. So it's an ongoing process, and we definitely need to be careful about that. So that's one aspect. As educators, we can teach our students not just to become data scientists, but to become data scientists who build responsibly and bring positive impact to society. Now, the second point: there is a lot of discussion about how generative AI is going to change how students learn, because they could use it just to complete their assignments and whatnot. But if we train them properly, in terms of what ethical questions they need to consider before they use it and how to use it properly, I think it could be a really strong learning tool, because when they interact with generative AI, they need to think computationally: they need to give a command to get the output they want, and that's a skill. When students join the future workforce, they will certainly need to work with AI systems, so we cannot really prevent them from using these tools. But it is definitely our duty to train them properly, and to help them understand right uses from wrong ones, so that they can be educated when they use these tools.
And definitely the societal impact is there. So I think I will stop here for now, and I can come back to it.

All right. Well, I think we have Larry joining us online, so let's go over to him.

Okay. Hi. Thank you for inviting me to this year's event. I'm currently a research professor at George Washington University in the Human-Technology Collaboration doctoral program, and, by the way, we just recently graduated a fantastic cohort. I was the founding director of the GW data science master's program with my colleague, and now yours, Brian Wright. I'm also a research professor at the University of Vermont. The other thing that's very relevant to today's topic is that I'm a founding editor of the journal AI and Ethics, and I'm chair of the ACM US Technology Policy Committee. So I'm constantly thinking about the impact of AI on society and how we might have some control over it, or some influence on it. As for my view of the future of AI: I'm influenced by having lived a long time and experienced the development of computer science, neural networks, and the traditional AI field. Of course, today the term AI is used so broadly that we may have to get rid of the term. But anyway, there is a lot of overlap between AI as a real field and the way that people talk about it today, and that's what we're focusing on. One of the things I've seen is the peaks and valleys of disciplines, of tools and products; programming languages have come and gone. It was surprising to my students when I said, well, in ten years people may not even use Python, but things like that happen. So change is one thing I anticipate, in the way AI is used and the way it's monitored.
So I'm anticipating change. Data science is kind of the newest phase of these disciplines, and it has helped blur the traditional definition of AI, along with the popular press and society. Another thing I find myself thinking about these days is how the perception of these different technologies changes, sometimes not for the good, and sometimes as a matter of people learning more. With ChatGPT coming out, the adoption, the number of people using it, was just amazing; it was astonishing how fast it happened. It probably caused a lot of people to think about AI in ways they hadn't before. So that's one kind of change: a product comes out and people use it. The other thing is that a lot of things are fun or interesting at first, and then people see flaws in them, or they just get tired of using them. Usually there's big excitement and then a dying off. It doesn't have to go that way, but with what we're seeing, the hype over generative AI, for example, may well diminish before long. So anyway, this rise and fall of technologies and products is one of the things that sometimes makes me feel better about the future of AI, and sometimes makes me a little scared. But the big point is that sometimes people, my family, ask me: are you for or against AI? And it's just not really a very good question. There are a lot of good things about AI, and there are things that are bad already and could get worse. What I'm in favor of is trying to make things happen for the good of society, and that means smart and timely policies, careful consideration of regulation, and worrying about who gets to decide the future of AI: is it society in general, or only certain parts of society?
The last thing I'll say is that I'm impressed and hopeful about the movement that's afoot around what's called human-centered AI, or human-centered design: that we incorporate the human, and human welfare, into a product as we design it, as we implement it, and, of course, after it comes out, to make sure it doesn't have unintended flaws. So human-centered AI is what I would say, at least what I hope, the future is.

All right, great. Thank you for that. Over to you, Sarah.

Great. Happy to be with you all and with my colleagues up here. I am Sarah Lebovitz. I'm an assistant professor over at the School of Commerce, and I study how professionals adapt and change as a result of using AI, looking at the organizational structures and processes that change around new technology adoption. My research over the past five years has focused primarily on AI adoption in health care, but I think a lot of the lessons are broader than health care. I study diagnosis specifically, which is one of the top fields predicted to be replaced by AI, because of the advances in image recognition and the ability of models to predict and classify into diagnostic categories. What I'm really interested in around AI is the difference between the hype in the media and in common conversation and what's actually happening on the ground. I'm a field researcher: I watch people in hospitals use AI, I sit alongside them, I talk to them, and I come to understand what's working and what's not working. That is the source of my research. And there are three themes that have come up. A lot of times we hear that AI will make you faster, more accurate, and, in general, more fair; there are conversations around less bias, more fairness, that category.
I would say I've found that it's much more complicated than those narratives suggest, because of the messy organizational realities, where we have humans with professional norms, with standards they have to be held to, and with technical and cognitive limitations. On the "faster" front: the tools are mostly black boxes that don't offer explanations, and most professionals require some sort of explanation to stand behind when they communicate their decisions to an audience, in this case a patient, with life-or-death diagnosis decisions. If the tool doesn't explain itself and gives a conflicting answer, then the professional in most cases spends a longer time trying to reach a conclusion they can report as their final answer. So "faster," in what I see in these critical decisions: not always. It can actually be the opposite, slower. Then on the accuracy front, it's really, really challenging to know who is more accurate in knowledge contexts. In radiology, it really comes down to the ground truth. And this is Datapalooza, so we're talking a lot about data; I care a lot about data, and about the quality of the data that goes into the machines and trains them. With diagnosis, the data that trains the machines is sometimes simply what's available to developers, and not necessarily the highest-quality data out there. For example, take training a machine learning model to predict cancer in mammography, in breast cancer. Developers may train on single mammogram images, and the model gets really good at predicting based on that one image. Then they evaluate those predictions against a human looking at just one mammogram, and they say, hey, our tool can do better than the radiologist. But in reality, the radiologist is looking at much more data, much more change over time.
They're communicating with the patient, they're using physical touch, they're looking at other imaging to make their assessment in practice. So even though in that kind of controlled experiment the tool may look more accurate, when you implement it in practice you may find it doesn't live up to its research benchmarks. That's accuracy. And then around fairness, and the idea of data as this thing you can just train the model on so it does a great job and you can control for bias: again, it's just more complicated. I'm doing a study where a team thought they could simply go out and collect ultrasound data to train a model to predict fetal and maternal health outcomes during pregnancy. In fact, they have to create the data. And that involves so many decisions about whom to involve and how to collect it; there are so many layers, and you realize how subjective the data creation process is. It often gets conveyed as simply "data collection," in this very objective and clean way that removes bias, but really humans are involved throughout the whole process. So yes, I'm interested in speed, in augmentation, and in the explainability questions. And I think all of these come up with general AI, and with generative AI as well; it just adds to the complexity when the model can produce really compelling verbal evidence. It looks like evidence, but it takes a lot of expertise to know whether it's trustworthy or not. So I'm curious where we're going to go from here, and I think this is a great forum for these discussions.

So one thing that strikes me, listening to all of the comments so far, is that there's a lot of optimism in the room, as might be expected at a conference like this.
But historically, one of the ways that I, at least, like to look at technologies, maybe because I'm a cynic, is at the points of failure and breakdown, because sometimes that's when you learn really interesting things about a given system: how it breaks down, and just what it can withstand. Especially with infrastructural technologies, whether those are roads or bridges or information infrastructure, when they break down, that's when we really start to notice specific problems, problems that maybe had been noticed before but weren't addressed because the system was chugging along well enough. The bridge didn't fall down; the informational infrastructure continued to do what it needed to do for people to get their work done. So I wonder, and maybe you could start us off with this question, Sarah, if the three of you might spend a couple of minutes each talking about any specific failures, or potential failures, that you've maybe already seen, or have been trying to mitigate, or are concerned about seeing in the near future. As you were speaking, Sarah, about your work with health care technology, I was wondering a lot about the American medical system. Malpractice insurance is a huge thing, and a huge expense, and I was thinking: with more automation in the mix, is anything changing in the financial structures of the health care industry, which is a very, very big and lucrative industry? And does that have anything to show us about regulation, or ways in which regulation could proceed, that might help us get to these better futures we're imagining? And if that isn't something that is really of interest, then please take it in whatever direction you want to.
But I'd love to just get some specifics from all three of you about the specific potential problems and failures that you see coming up in the very near future, that you feel need to be addressed, and that you're potentially even starting to address in your own work and your own teaching.

Really fascinating and important. A hundred things you've said could fuel the rest of the hour. But in health care specifically, thinking about regulation: it's actually encoded in FDA approvals that these tools have to be used as an aid, which has to support an independent judgment by the physician. As a patient, as a health care consumer, I think that's great: having a second opinion, having my doctor spend longer on every diagnosis they make, come to their own assessment, and then turn to the tool. But that is not helping the speed and efficiency that hospital leadership wants to see. So the regulatory piece is important, but I think it might be temporary. It could be that as soon as that mandated second opinion starts to stretch and fall away, there is potential for harm: if the tool is not up to what we would consider the physician standard in every case, for all types of people, then we may lose there. There are risks to that. But for now, the regulation is there. And then the financial piece is another fascinating and complicated situation, because the hospitals don't want to pay for the tools, because they're expensive, and the vendors are trying to arrange with the insurance companies for every use of the tool to be a billable expense that can get reimbursed. So in mammography, the existing detection software for breast cancer is billable, and it's required to be covered by insurance and by Medicare and Medicaid.
So that has created an incentive to adopt massively, everywhere; this is widely adopted in the US, and only because of that financial incentive. And now every physician faces the decision of, well, I have to use this because I have to bring in revenue for my unit, even though it may not be adding value to my practice. So there's the financial piece. And then you mentioned breakdowns and failures, and the malpractice question is really interesting. I don't know what's to come, but what I do know is that if the physician makes a misdiagnosis and the tool was correct, that gets recorded in the patient's file, in the electronic records, just like any other information, and it can be used against them in a lawsuit. So far, that hasn't been compelling enough evidence to drive a malpractice suit. But that could change if these tools become more commonly accepted, if more tools are widely seen as better than humans; then, when physicians disregard them for what seems like good reason on the ground, it could be used against them in court proceedings and ultimately hurt their reputation. And this could be a good thing, encouraging more use of AI that we trust, but there are caveats to whether the technology is really what we want to depend on.

Thank you for that. It's very, very complex. I see you nodding, Farhana, so would you like to take this next, and then we'll patch in Larry?

Yes. I really liked a couple of those points, especially acceptance; acceptance and adoption are kind of the same thing. In terms of acceptance, it's really important. When you work on a team, you and I are a team, right? If I do not trust you, then as a team we're not going to be successful, for sure. It's the same thing if I'm a doctor using an AI system and I don't trust its judgment, or even stop to think: okay, let me think.
Why is it saying that? Is there anything I'm missing? If that synergy is there, then definitely the situation is going to be much better. Now, the question is why certain groups of people, or individuals, don't trust these systems. That's a really important question, and to answer it, we need to do several things. First of all, you mentioned adoption earlier: you have talked to doctors, or you have seen, that AI is not really explainable enough, or transparent enough, for them to understand why a decision has been made. I had a similar experience. When I joined as a data scientist at the National Hospital, I was very excited to build certain machine learning models, but the first thing I heard was: no, we are not using that. And why is that? Can you tell me why this value is here? Can you tell me why this prediction is what it is? All I could say was, well, this is statistically significant, that's why you should use it. I'm talking about around 2017 at that time. So definitely, transparency matters. Explainability and transparency may sound like just features of a machine learning model. Sure, they are. But their impact is huge in terms of buy-in from the user. The doctor is a user; they're using the system for their work, and that's before you even get to the end user: they need that. Otherwise, you cannot really expect someone without technical knowledge to blindly trust whatever you're saying. They have these questions: why is that? So there's a significant difference between technology acceptance and AI acceptance, because technology acceptance is a very well-established field; there is a model, the Technology Acceptance Model, that has been around for a long time. Now we need to work on AI acceptance, because the same model cannot fit AI. AI is dynamic; it's not just technology. It's like a living, breathing thing: it can interact, it can learn from the environment. So that's definitely another of my research directions.
That's why I research how we can get acceptance not just from the end user or the practitioner, but from all the different levels of users. We have to think through their needs and make the system trustworthy, so that AI and humans can work together to reach a better solution and reduce failure. So that's the concept.

It's fascinating that you had that same experience. All right, let's go over to Larry.

Okay, and thanks for giving us permission to be negative, so I've got some examples here. In the news, we hear of failures of what people call autonomous vehicles; one tragic event recently led to the recall of lots of cars. That shapes our acceptance, our trust, in these things; that's one example. But it's interesting also that regular, non-automated cars are pretty much accepted; some people are still scared to drive in some parts of New Jersey, but they get used to it. And actually, the number of people injured or killed in regular auto accidents is large, and some of those are failures of the cars. But with the new thing, it's a real problem for the people making it first; they have to be really careful to check it out, so that they're not, in effect, debugging the system after it's sold. So that's interesting for two reasons: the downside and the danger, but also that this is how people become untrusting. There are people who say, I'll never want an autonomous vehicle; but then others try them, like them, and are willing to take a chance on the probably small risk of a problem with the ones that are established. And that's one reason, too.
In our little world of human-centered technology, we don't like to use the word "autonomous." "Automated vehicles" is a better term: we add aspects to cars that give them some automatic features, but there's also human involvement. And I think that's a theme that runs through a lot of products. I would say generative AI is like that too. Narrow uses of generative AI in a professional setting, maybe with a wall so that the data they have isn't hacked into or reused by other people, are probably where it's really useful right now. The general public using ChatGPT or something, without really any training or knowledge of how to make the best use of it, makes for a lot of misinformation being shared with other people based on the chat, because people didn't really know how to use it. So that might be an example, too, where regulation comes in. It was noted, you pointed out, that in health care things are regulated to a large extent, and there are other fields where regulation was accepted. So why not AI? I wonder if either of you two would want to speculate on why we as a society are so reluctant, why a lot of people are reluctant, to have regulations when it comes to AI, even though it can produce dangerous outcomes.

Yes, let's jump off that point regarding regulation, because obviously that's something that is going to be an ever larger part of any conversation around AI, whether it's applied narrowly or more broadly. One of the things we've been seeing in the past few months in particular is a race to get products, almost in a beta-test stage, out in front of the public, to wow people and say: look, isn't this fun, isn't this nice, it's not going to hurt you. And that's backfired in some ways, but it's also worked pretty well as a marketing campaign.
Now, all three of you are experts in your fields who have done a lot of specific research on more narrowly applied AI systems. And I wonder, given this moment that we're in, how do you foresee, or how would you like to see, regulation proceeding in the United States? We're in a fraught moment: there are narrow uses of AI that are being productively explored, and should continue to be productively explored, and then the broader context is this rush to market for much more general systems, where there isn't necessarily a good framework for how they're going to be used. Really, it's a sort of sleight of hand, or an effort at, honestly, engineering the environment so that there is a large future market for such generalized AI tools. I see you nodding a lot, Sarah, so if you want to take it first, we can go to you first, or anybody can jump in.

Yeah, I just share your concerns; I'm nodding in solidarity. I'm not a regulation expert, and policy isn't my area. But I will say that this rush to market is something that resonates a lot, and the accelerated pace of development can lead to overlooking a lot of the things that are really important for a high-quality and safe system. And when I think back to Larry's question about why we're not regulating, what makes sense in my mind, and there are probably people here smarter about policy, is to focus on the context of use. In medicine: what is the tool doing, what are the criteria we care about, and how do we measure, in an ongoing way, what safe use looks like over time? I know that when I look at the FDA approvals, what they consider a high-quality tool is not as robust as you might hope it is.
So relying on domain-specific regulation seems to be the model going forward, as in driving, or in other contexts, like weapons. That's just my initial reaction. But I do think the speed is a big concern: the more excitement it creates, the more incentive there is to build the tools, build them quickly, and get them into the hands of users. So yes, I just share your concern.

Yeah, and I see Larry was writing as you were speaking, so maybe he wants to go next, and we'll go over to Farhana after.

Okay. What Sarah said made me think about how, with autonomous vehicles, or automated vehicles, or other vehicles, you have to do inspections of the cars to be permitted to drive. So the idea of auditing, of reporting failures, of having a database where people can read about failures of systems, would be, well, not exactly regulation; it's maybe a government requirement, but not a regulation of how the systems are made. So I think if you look at other fields, other domains, you'll find that people will accept regulation. I'm speaking partly because we talk with congressional leaders and staffers in the tech policy area: you can imagine that at least half of Congress doesn't want to regulate anything, so it's really a tough battle. So you really have to use a spectrum of risk. If you're worried about getting the wrong recommendations for movies to watch, that will sort itself out. But when it comes to life-threatening systems, more people will be in favor of regulation.

Yes, so I can add my point of view as a technologist, of course, not a perspective from regulation and policy.
I really think there are two things in my head when we are talking about regulation and policy. It kind of depends on what type of product we're talking about: there are definitely high-risk and low-risk products. If it is a low-risk product, based on entertainment, say, I think we could say, okay, not too much regulation to worry about. But for a high-stakes product, we have to be very careful about what we're doing. First of all, we have to have a process where the product goes through certain regulation- or policy-based steps; we should have that. And after it has gone through all the required steps, then approve it, or even approve a beta version first, get feedback from users, look at the data, and then, as a final phase, deploy it at a bigger scale, if it is a higher-stakes application that could bring a really negative impact to human lives. So that's my two cents. As a technologist, I definitely want to see, in the future, that we have things like that in place.

I would just jump in and say maybe there are also other opportunities beyond government. We have standards created by members of the academy, and by certain companies, that give these checklists or give a kind of stamp of approval. So if we can develop standards for what it takes to be a human-centered design, or for whether a system is using good, fair practices, and then have a stamp of approval, and know that systems with it get adopted more often than the ones without it, then maybe we don't have to look only to the government, which is typically slow, bureaucratic, and not a field of technical experts.

Now, since you mentioned human-centered design, I would like to add a little bit on that. Think about the whole AI lifecycle; it's not just the model-building part.
Most of the time people think that when you're going to build the model or algorithm, that's when you need to think about the ethical aspects, whether there is bias or anything else. But really it starts from when you first have a problem in your hand. Think about it from the design perspective: what data are you going to collect, and who are the people or things you're going to collect the data from? At that step, we need to think through how much we really need. As a researcher, I can tell you people have this tendency: let's gather all of this data, and we'll see what we can do with it later on. That's not really a good attitude. So in terms of design thinking, definitely think through what are the things you actually need, and collect those. That's the data collection part. Then the algorithm is definitely a huge part. Testing should be there. And for the user experience, we need to think about what is best for the human, whether they are going to benefit from it. We have to think about that throughout the process. So basically, we are embedding ethical practices throughout the lifecycle. That's really important, having these checks, like a first checkbox. If we do that, I'm sure the risk is going to be significantly minimized, and of course the regulator could be there on top of that as well. And most of the time, people think deployment is the last step: we have deployed our product, that's it. But to be honest, after deployment, depending on how the end user is using the system, bias can get in as a feedback loop, and we need to monitor how the system is behaving. And if we see something, we definitely need to fix it before it's too late. Yeah, so that's another point. Can I pick up on that? Sure, go for it. Okay. So I think it's an example of what can go wrong, but it applies to other products too.
So I was talking with a marketing person at a big tech firm, and I asked: are you worried about people taking your high-level products, where anybody can just take data and throw it in there and get a result, and misusing them, or not knowing how to use them? And they didn't really know what to say. It seems like it could be bad publicity, for one thing, if somebody blames the product they used for getting a bad result. But it seems like the companies don't want to give guidance to people on what data is appropriate for different tools. And that's one of the big problems. In data science, not just students, but all of us like to throw data at an application and see what comes out, and if we like it, it seems to work. That's partly a joke. But anyway, we do teach good practices in data science: having high-quality data, data that's applicable to what you're trying to do with the application. And I don't think that happens often enough in society, where people make sure the data is good and that it's being used with the right application. Well, I wonder at this point, since we're about 15 minutes from the end, if we could take some questions from the audience. It's always interesting to see where audience members are going with these things, and there are some microphones going around. So if you have a question, just wait for the mic to come to you. That will make sure the folks online can hear as well. Sounds great. Do you think the medical community is using AI right now because there's no staff, there are no providers, there's nobody in the radiology office? There's basically nobody behind the diagnosis. So that's one thing. And boy, they're making a lot of money not hiring a lot of people. And talk about regulations: there's nobody who can afford a lawyer to sue anybody for making a non-diagnosis.
And prior to this, we've had teams where nurse practitioners and others were involved, as well as different medical specialties like endocrinology, and institutions like Kaiser Permanente and Sloan Kettering and the Mayo Clinic, and that was supposed to support diagnosis, along with cloud sharing and things like that. But that is basically more expensive. So when you're talking about regulating, you're talking about the bottom line, and in the arts and entertainment and everywhere else, it's whatever people can get away with. What do you think about that? It's a classic question. Yeah, I agree with you. I do think it's complicated. I think that when hospital system administrators adopt tools with these promises of more accurate and faster, then the expectation is that they can replace humans. But they don't. I'm not sure what you mean by there's no one in the radiology rooms, or there's no one behind it, because I see there are people still there, for now. Maybe we're thinking about the future, but for now, it's still run by humans. There's a mess in health care right now and a lack of funds to hire people, to staff up, and to take care of the patients we do have. So I guess that's one reaction; I don't think I answered exactly your question. I do think that the teams that usually come together for diagnosis are completely lacking. This is something I wrote about, in terms of the ground truth that gets captured to train the machines. And there's a big difference between the ideal ground truth that Larry's talking about, what we teach in school, versus what's available, what the vendors can get, and what gets lost when you lose out on this rich dialog and communication.
And not only do you not have good ground truth, but you lose training and co-training and the building of this community of practice that we're used to facilitating in our health care environment. So I think there are a lot of risks, and you've named a few. I'm not exactly advocating for AI in this context without a lot of careful thought. So I'm sensitive to what you're saying as well, but I don't have the answers. Any other thoughts? I don't know, so just one comment. I think, as I mentioned earlier, AI and humans need to work together; the intention is not really that AI is going to replace humans in general. The nature of the job could change, but from my point of view, we will be very successful when we learn to work together as a team. So I'm really looking forward to seeing that in the future. As humans, we have very good intuition, and the machine can parse so much data and give solutions. Think about those two combined, because we cannot really process millions of data points in five or ten minutes; we need some time. So if you combine these two sets of strengths, it's really strong. Hopefully in the future we'll see this more and be successful, especially in health care. Yes. I think, since you mentioned it, this discussion touches on the workforce, at least that's the way I interpret some of it, and that's another big issue that we could spend quite a bit longer talking about. But I mentioned intention, and that's a really important thing: the people who develop systems, if they're human-centered, have a different intention than if they're purely profit-driven. So if you make a product with the idea that you will replace people, that's a whole different story, with a different impact on society.
Now, you'll hear big tech companies say: for all the jobs we're going to lose, we're going to produce even more. That sounds good, but the trouble is: are the new jobs going to be available to the people who lose their jobs, and at the same pay, or will they have to take a pay cut? That's not discussed very much, but it's a real problem. And it's a problem not only for universities, but for companies: do they have some responsibility to prepare workers for the new jobs of the future, maybe working with unions or community groups? So anyway, I just wanted to put in a plug for that big issue of the workforce, because that's a big part of society. Yeah, that reminds me a little bit of the memetic quip that has been going around, where people say: I don't think the future we wanted was machines writing poetry and doing art while people work in mines and meat processing plants. And so with that, let's get to the next question, so we have enough time. Okay. Yeah, that actually overlaps pretty heavily with the question I had, since AI is actually best at doing the jobs at the top of our society, and one of the earlier speakers today had talked about the deflationary effects of the adoption of this technology. How do you see incentivizing people who are in positions of authority, with pretty decent-paying jobs, to eliminate their role in society? And what kind of transition plan might society have for such people, so that their change in role isn't as contentious as the removal of, say, feudal management was the last time we had a major technological shift? What would you say to that? How do you get doctors and CEOs and lawyers and regulators to recognize that they can't do their jobs anymore, but computers could? And how do you retrain people like that to do something of value to society? Well, that's a really interesting question, and sort of two questions in one, right?
Because the first question, or I would say implied question, is: is AI actually better at doing a lot of these things at the top levels of society? That has not quite been proven yet. And the second part of the question is: assuming it is, or soon will be, how do you get people to essentially step away from their jobs and their livelihoods and their access to medical care and health insurance in the United States, because that's tied to our jobs, in favor of essentially more automation doing their work instead? We could also look around and say it's also professors; I mean, we have a lot of people. Yeah, yeah. On the first question, I don't think it's at that point yet. Getting back to Larry's point, think about the difference between task-specific expertise and a job, someone's whole profession. I would say there's a big difference between changing the nature of, you know, physicians' work or professors' work versus replacing the whole category of that job, at least not for the next generation, potentially beyond. I'm not a tech futurist. But I would say also to remember that we have these institutions in our world that have a lot of inertia, that have a lot of power. And if there is a contingent of people with a lot of power and a lot of historical inertia, it does take a lot to change those societal structures. So I think it would have to be something pretty revolutionary to convince certain groups to give up their seat at the table. And I don't have any answers for how to do that exactly; we can look to the others. Yeah, I can add a little bit on that. So I think, and this is just my point of view, the nature of the job may shift from one thing to another, but that person is going to hold the position.
For example, I think Georgia Tech, if I remember correctly, had a chatbot-type AI teaching assistant. So think about if I'm an instructor, especially one teaching in a school. You have a lot of students, right? Right now, instructors struggle all the time to prepare the questionnaires and quizzes, check papers, and do a lot of other administrative jobs, and then they really do not have much time to work with students, motivate them, or do the other things they're supposed to do; the majority of their time is taken up with all this other work. So think: if there is an AI-based robot or some sort of system there to help that person, then what happens? He or she can get time to work with their students and guide them when they need it. Right now we sometimes overlook this, because they're doing so many other things just to get to the class, to the lecture, and that's how it is. So definitely they will still be the teacher, but the nature of their work shifts a little bit. The same goes, I think, for doctors, in a way, because they're very busy with, again, administrative work: looking at reports, so many things like a patient's past history, and then coming up with a diagnosis, the verdict, right? But imagine a helping system that has already done a lot of the work of looking at the previous history, and suggests, based on that, and this is part of explainability, an explanation: because of this past history, I am saying this person has disease X, Y, or Z; this is my diagnosis. Now the doctor can think: let me see, does that make sense? If he's aligned with that, then he can go with it, and he can spare time to talk to the patient. Because a patient really needs that: if they have some really severe disease, they need that time, and while they require that time, the doctor may not necessarily be able to provide it.
But if a system can do all that other work, the doctor can definitely use the freed-up time to work with patients, to talk to the patients. So that could be another change. I can definitely see the nature of the work changing in the near future, but not really being replaced entirely. Of course, that's the perspective I have. Great. We're onto Larry's part. It's interesting that the technology here is changing so quickly. I mean, ChatGPT is not a year old yet, in terms of this kind of AI, but it still takes twelve years, or ten, depending on how you want to look at it, to produce a worker for the workforce. That number hasn't changed, but the technology keeps getting faster. So we're getting pretty close to the end here, so I want to thank you all for the conversation today, and maybe just offer a sentence or two of concluding thoughts. One of the things that came to mind as you were giving your last round of comments is that throughout this there's been sort of an unspoken assumption: we tend to assume rationality and linearity in our technologies and in our process of societal change, and that's very often not the case. I'm speaking as a historian, because that's my field, the history of technology. And I think maybe this is what I'm hearing from all of you: as we're thinking about the future, we really need to keep that aspect of the past and the present in mind as we attempt to supposedly outsource intelligence, or even try to speed up the pace of technological and social change, not necessarily knowing exactly where the endpoint, or even some of the waypoints along the way, are going to be. So thank you so much for being here today. I'll mention that immediately after this session there's a short break, and then there's going to be another session, a combined session here in the ballroom, which will be a discussion on generative AI in teaching and learning.
And I hope that all of you will join me in thanking our speakers, and thank you to the audience as well. Thank you.