Episode Transcript
Monica: Welcome to UVA Data Points. I'm your host, Monica Manney. In this episode, we're diving into one of the most timely and important topics in AI: trustworthiness. What does it really mean for artificial intelligence to be trustworthy? And why should it matter to you? To help us unpack these questions, we're joined by Farhana Farouk, a data scientist, researcher, and entrepreneur specializing in research related to trustworthy AI, and Dr. Larry Metzger, a leading expert in AI ethics and policy with experience in neural networks, AI systems, and policymaking. The two bring a wealth of insight into how we can and must develop artificial intelligence that is safe, ethical, and accountable.
[00:00:46] Farhana: Before we get started, I would like to acknowledge that this podcast is part of the Career in Trustworthy AI project funded by the Jefferson Trust at the University of Virginia.
Today I'm very excited to welcome our guest, Dr. Larry Metzger, a leading expert in AI ethics and policy.
He chairs the ACM US Technology Policy Committee and is a research professor of physics at the University of Vermont. He also founded George Washington University's Data Science Master's program, co-edits the journal AI and Ethics, and has authored four books and over 100 publications on neural networks and AI. Welcome, Larry.
[00:01:31] Larry: Thank you. Glad to be here.
[00:01:34] Farhana: So we are going to start with the very basics, so please help our audience understand: what is trustworthy AI, and why does it matter for their careers?
[00:01:44] Larry: It's really good to be talking about trustworthy AI, because I see it as one of the fundamental things about AI that raises other questions about what AI is and why we have it. We could just say trustworthy AI is reliable and responsible AI. It needs to be transparent, fair, you know, and aligned with human values. That would be one way to define it. But the other thing is that it raises a fundamental question from the very beginning of AI, back in the 1950s. One of the issues then was: are we modeling humans, or are we trying for something, I hate to say the word better, but something that has more capability than humans? And they decided to simulate, basically, you know, model what humans do, but realize that it's a computer, you know, it's not just like a human.
Then at the same time, people were looking at what turned out to be artificial neural networks, which is a different approach, where we're actually trying to make an AI system that is very similar to the brain. The brain has neurons and connections between neurons, and the structure of AI using that model is trying to make something that really is like a human.
But in either case, you know, it's modeling.
And if the human is what we're modeling, then we're going to have the bad with the good.
We have bias. And all those things I said about trustworthy AI are problems.
So trustworthiness is a problem if you're dealing with a human or an AI system that's trying to be like a human. So I think that's why I say it's a fundamental issue.
What is AI and what are we trying to do with it?
[00:03:56] Farhana: So in terms of a workforce standpoint, what changes should we expect as trustworthy AI matures over the next 5 to 10 years?
[00:04:08] Larry: Really, we should talk about scenarios. So let's say five years from now, what do we expect? Right now, I'd say we're in a five-year period where more AI is being used, but we don't know for sure what the impact of that is.
We know some things, but we need more data. That's the bottom line. We need data on what is happening to people right now and what happens over the next five years. That might help us predict what's going to happen later on. And we need to predict that, because there might be some systems that shouldn't be built if they're dangerous, or we might have ways to mitigate the dangers or difficulties posed by those systems and anticipate them by having regulations or economic changes that will help with the negative impacts.
So, you know, it's really about who's going to be impacted.
It's really mixed and unknown at the moment, and until we get better data, we won't really be able to say. Now, you'll hear people say a lot, especially if they make AI products, that, oh, it'll produce more jobs than it gets rid of.
But there's no data for that yet.
So I think that's one thing that people have to think about in the job market is that the rosy predictions that they may hear may not come about.
[00:05:45] Farhana: We keep hearing, as you have mentioned, that AI creates more opportunities than it eliminates.
So how should new graduates interpret that claim?
[00:05:55] Larry: That's a great question. Because if you're choosing your major, or whether to even go to college, you know, that makes it difficult to know what to major in and where to put your energy.
One of the things that's difficult at the moment is that the people hiring into AI change their minds, you know, regularly. I mean, they may want somebody, there was a little time there where they wanted people using ChatGPT-type systems to be prompt engineers. That means they know how to put input into those systems and get results back. But I don't think that's going anywhere, because just about anybody can learn to do that.
So it's not really a professional thing. But my point is that people may advertise jobs, you know, under some kind of new name for a new kind of job, but it might not work out. Especially because AI is getting easier to use and more powerful.
Not only do people not need to know some things they might be learning, but there also may not be as many jobs as we thought there would be.
So it's just the details that are the problem. You know, people can say there'll be more jobs, but on the other hand, who will be getting those jobs? Another problem we're seeing, and a prime example is in cybersecurity:
the degree is fine, but they're looking for people with experience.
Even though somebody might have, you know, several degrees in things related to cybersecurity, they can't get a job because they don't have any experience.
Now, there are ideas on what to do about that, like having internships and partnerships between industry and academia, and UVA and George Mason are both doing those kinds of things.
But it's not just a matter of getting a degree.
[00:08:04] Farhana: Since AI is everywhere, even students who don't necessarily have that kind of background, who aren't from a computer science or data science major, are definitely going to interact with AI systems.
So for those planning careers in AI, what do you think? What ethical considerations should be on their radar from day one?
[00:08:37] Larry: When you think about a new job in AI, I believe that what's going to be needed more, and this is sort of good news for people who aren't in the technology and maybe bad news for people who are, is the human element. That's my opinion, but a lot of people agree. And it's not just the AI system sort of acting like it's a human. You know, there are stories about people using ChatGPT or something where they get to thinking that it's really a person, which is a little scary. It's not that so much as designing the system for humans. So the whole process of human-centered design, and I know you're interested in that in your research, is really important, because if it's used fully, you would start at the beginning, before you've even designed an AI system, and say: who is this for, and what are we trying to accomplish? And if it was used objectively, people might decide they don't want to build an AI system that they're thinking about.
That would be really great, you know, if we could keep from building things that are going to be a problem for people.
So one thing we like to say is that just because you can do something doesn't mean you should do it or have to do it. Now, that comes up against people who need to make a lot of money.
So they don't care so much.
I'm not trying to push a political opinion or particular policies here. But let's say that people want to build things and make money, and that's a good thing. But if they don't think about who's going to use it and what problems might come up, that's not good. And there are some reports about AI systems for disabled people that haven't been designed properly because the developers didn't really talk with people who are disabled. There are even reports where it's been dangerous, you know, people can get hurt by using those systems, because they weren't really designed with them in mind.
So this new movement that I know you're involved in too is human-centered:
human-centered design and development, human-centered AI.
Remembering that, at least in the short and medium term, AI systems will be used by people to help people; they're not replacing humans completely.
There needs to be this human in the loop, we say, so that they can make sure the system is being used properly and that there's also this human element in interacting with the system.
[00:11:41] Farhana: So do you see any upcoming AI challenges that are not getting enough attention right now that future professionals should be thinking about?
[00:11:51] Larry: So one thing is having experience using AI systems, and that's starting a lot, you know, even in the elementary school time frame: getting used to having AI systems in your life and making sure you use them properly and all that.
If we come to the current day, with people who are graduating, the generative AI systems, the ChatGPT and Gemini types of systems, are really causing us as professors to think about how people are using those. And I would say not banning them, like people initially said, but asking how you actually use them. Of course, it would require faculty learning that themselves, and, you know, that's always a problem. The big thing there, and we've had this with computer science from the beginning, is the garbage-in, garbage-out kind of thing: you can't just ask a question and take the answer as correct. There's a good way, through a process of interacting with the system, chatting, which is what they call it with ChatGPT, to get to a point where you feel confident, where you trust the answer that comes out. And the danger is, I don't know the statistics on this, but I'm afraid there are a lot of people in general society, and even in the universities, students in our programs, who are just taking the output of these systems, especially generative AI, and assuming it's correct.
And so anyway, one of the ways this is a good thing for people who are not in computer science or data science is that, if you're an expert in a certain field, and there are so many different ways ChatGPT can be used, one of the things you can contribute, you know, is being able to see whether the answers make sense.
We need somebody who knows as much as, or more than, the generative AI system to be able to see that the results are correct. So as students, we need to learn to use it and, we could say, trust but verify, that expression, you know, you've got to check it. And the other thing is that even when you use something like ChatGPT, it tells you that you should really check the references it produces, for example, because it does make mistakes once in a while. So I think that's an area for, you know, arts and sciences and humanities majors to get involved in a really important way, because data quality and the relevance of the data you're using in an AI system are extremely important.
I think what they need to know is not so much how to code a system, but how to work with an AI system, how to know when something's not correct or is questionable, and then pursue that. And one of the areas where that's especially true is our area, AI and ethics, and how to make policies that keep people aware that there can be ethical impacts from the systems we produce and put out into the public.
[00:15:43] Farhana: So let's zoom out a bit, because the environment we are stepping into is really shaped by forces beyond individual companies.
So how is regulation, both existing and emerging, impacting the demand for AI skills?
[00:16:03] Larry: That's an interesting and active area for people in general.
Europe is ahead of us in terms of regulation; in the US there's a lot of resistance to regulation. But regulation does happen in the U.S. You know, if you look at the Food and Drug Administration, or the fact that people have to have a driver's license to drive a car, there are a lot of regulations that are needed in our society to make sure that technology is doing good things and not creating more problems. So one of the arguments people make, you know, is that regulation will make innovation more difficult: people won't want to build things because they won't be able to sell them, that kind of thing. Then there are people who argue that that really is not a problem, that we can have safe systems, ideally self-regulating. If companies are aware of human-centered design and make sure that systems are safe, then we wouldn't need regulation.
So that's sort of the other side of the argument.
[00:17:19] Farhana: So I'm just thinking about new graduates; things are changing continuously, every day. What new challenges in AI do you think future graduates will need to address that we maybe are not talking enough about right now?
[00:17:35] Larry: A general comment is that the speed of transition, you know, new systems appearing and being adopted into the workforce is something that new graduates have to think about.
You could still learn about AI as an undergraduate, and make sure you learn the basics, ethical AI and so on, as fundamental things. But if you go too much into the current language of building systems, or the current ways that systems are used, then it could become a problem, because in just a few years things can change: employers are looking for something different, or they don't need people's help anymore on certain topics. Specifically, though, there are some roles that I think everybody would agree are going to be there: people who can audit systems, find out if they're built properly, and make sure that they are safe. You and I know people whose job titles are now AI ethicists, and there are people who advocate that every development group should have an AI ethicist on their team.
[00:18:56] Farhana: Since so many things are happening around us, with AI evolving so fast, how do you personally stay current, and how can students and young professionals keep up? Do you have any suggestions?
[00:19:12] Larry: Yeah, I think there really is a certain type of educational program that emphasizes experience.
Even in my physics background, you know, I've been involved in physics education and research.
What we've discovered with active learning is that students learn more and retain it better if they don't just listen to lectures, but actually have opportunities to try things out.
And that really applies to all areas of education.
If people solve problems, work on projects, have internships, those kinds of things, they really learn how to use systems. And I think that's just more and more important.
[00:20:02] Farhana: Do you have any experience you want to share with how industry and academia can work together to make sure AI professionals are not just technically strong, but also ethically grounded?
[00:20:15] Larry: Yeah.
I'll use an example from my work at George Mason University, with partnerships with cybersecurity companies, where they take on students who are in that field, cybersecurity, through industry internships where they can go for the summer or for some period of time and gain the experience that's really necessary to go beyond the book learning and do the kinds of things in cybersecurity that are needed. And there are some other programs where people from the companies will come and spend a year or a semester on campus. I think those are really important, you know, because getting that experience just really rounds out the academic program of classes you take and everything.
So anyway, there are companies that do have programs like that. And, you know, it's in their interest too, because the people who have those internships may end up working there later on.
So it's a good thing all the way around.
[00:21:28] Farhana: One quick thing I definitely want to hear your thoughts on. We keep hearing a lot about lifelong learning, because things are changing.
Even after students finish, when they're working, everybody needs to learn new things to keep up to date and stuff like that.
So it's kind of overwhelming most of the time, especially for students who are about to join the workforce. What's your single most important piece of advice for them?
[00:21:58] Larry: Yeah, I think lifelong learning is the advice. And it's not just sort of an optional thing. I mean, it needs to be an ongoing habit, you know, that you want to keep learning. It's not like you learn things and then you're done learning; you stay curious, embrace change as part of your job, and look at new tools or challenges as an opportunity to expand your experience. And in my data science program, we really did a lot of things to make that happen, where people would not just learn a tool as if they were going to keep using it the rest of their life, but learn how to use a tool to develop a system, because the tools are going to change and there are going to be more things going on.
So, yeah, I mean, you know, I wasn't born yesterday, but I'm still learning stuff all the time. I've actually learned things from you.
So anytime I can learn from somebody, I'm happy. Podcasts like this, networking with people in a field you're interested in, and I think also, you know, looking at the impact on humans. If you never meet anybody who lost their job because of AI, you think it's not a big deal. But there are plenty of people out there who are doing things they don't really want to do because they didn't have any choice after they lost their job. So I just think, generally, working with people, getting to know people in the real world, is a good thing. And I think one thing is to not be afraid.
I started out in physics. I did a lot of work on data, and I loved that. Then I wandered off into computer science and worked in artificial neural networks, so they're not unrelated things.
And I've done physics education research, and I wouldn't do it differently. Doing different things, learning new things, and meeting new people is really exciting. So I think people shouldn't be discouraged by things people may say about the future of work, because you'll find a way, you know, if you keep an open mind and learn from everything that you can.
One of the things I did earlier in my career in neural networks was promote the idea of hybrid intelligence systems, and I did a lot of workshops and so on around it. What I mean by that is not trying to solve everything with a single tool, but seeing how combining different tools or different methods can produce something more powerful than using just one approach. I did that in the context of neural networks plus fuzzy logic, or other technologies. But I think in general there are a lot of ways to combine things, you know, different areas of interest.
So I think people should keep their eyes open for that.
[00:25:17] Farhana: That's a great suggestion. I think sometimes we don't think about that part. We just keep solving things by ourselves, and that is hard, and you're also not learning or exploring other avenues. Because, yeah, as you have mentioned, multiple sources of intelligence, whether human or getting help from AI tools, really help. I really appreciate your time, Dr. Metzger, and thank you for sharing your insights. You have painted a really nice, balanced picture of the opportunities and responsibilities in this new AI-driven world. That's super helpful.
I really hope the audience is going to benefit from that. And to our listeners: remember, the future of AI is not just about technology, it is about the people shaping it. And as the professor has mentioned, stay curious and keep learning.
Thank you.
[00:26:21] Monica: Thanks for listening to this episode of Data Points. More information can be found at datascience.virginia.edu. And if you're enjoying UVA Data Points, be sure to give us a rating and review wherever you listen to podcasts. We'll be back soon with another conversation about the world of data science.