Episode 5

June 20, 2023


A.I. Goes to School: The Future of Artificial Intelligence in Higher Ed

UVA Data Points

Jun 20 2023 | 00:40:49


Show Notes

In this episode we’re looking at the past, present, and future of artificial intelligence in higher education.

To explore this topic, we're featuring a conversation between Phil Bourne, the dean of the UVA School of Data Science, and Jeffrey Blume, the associate dean for academic and faculty affairs, also at UVA Data Science.

Jeffrey and Phil discuss the recent trends in artificial intelligence and they look at how this will impact the student experience, the faculty and staff experience, and the research landscape in higher education.


Episode Transcript

Phil Bourne: This is an aside and probably won't make the podcast, but you probably don't remember a fellow called Bob Newhart. He was an American comedian, initially on the radio, and he has a wonderful skit about the, you know, axiom that if you give an infinite number of monkeys typewriters, they'll eventually type Hamlet. He plays the person responsible for going around, looking at the infinite number of monkeys, and seeing what they've actually done. And he's reporting back, and he gets to a station, like number 15,003, and he says, I think we have something here: "To be or not to be, that is the..." and then it falls apart, because the monkeys don't know what comes next. With ChatGPT, that doesn't happen. It basically gets it right, to some degree.

Monica Manney: Welcome back to UVA Data Points. I'm your host, Monica Manney. In today's episode, we're looking at the past, present, and future of artificial intelligence in higher education. To explore this topic, we're featuring a conversation between Phil Bourne, the dean of the UVA School of Data Science, and Jeffrey Blume, the associate dean for academic and faculty affairs, also at UVA Data Science. Jeffrey and Phil discuss the recent trends in AI, and they look at how this will impact the student experience, the faculty and staff experience, and the research landscape in higher education. So with that, here's Jeffrey and Phil.

Phil Bourne: Hello, everyone. I'm Phil Bourne. I'm the dean of the School of Data Science here at the University of Virginia, and I'm also a professor of biomedical engineering.

Jeffrey Blume: And hi, everybody. I'm Jeffrey Blume. I'm the associate dean for academic and faculty affairs here at the School of Data Science. I'm a professor of data science, and I am a biostatistician by training.

Phil Bourne: So let's start off.
We're here to talk about the future of AI in higher ed, which is a pretty broad topic. So, Jeffrey, you're responsible for the programmatic activities of the School of Data Science. Let's start there. How is AI embedded in the curriculum, at the undergraduate level, the graduate level, or both? And how do you see that changing over time?

Jeffrey Blume: Yeah, that's a good question. I think there are two levels here that I can touch on very broadly. One is what our curriculum looks like: how we're training people to build AI tools, to interface with AI, to evaluate those tools, and to think about those things. The other is the impact that the advances in AI over this last year, for example ChatGPT, have had on how we deliver our material, how we run our classrooms, and how we evaluate and test students. The latter has been the big change. The school is still developing and putting its curriculum together in many ways, and we're trying to keep up. But since ChatGPT got here, I have never seen so much change in courses and assignments and in thinking about how we teach and deliver material. Virtually every instructor has stories about this. Students are openly asking how they can use ChatGPT, either in real time in the classroom, or on exams, or for homework, or to learn. And there's a wide variety of responses. It's just fascinating to see how people have dealt with it, and it's been really interesting to walk through this transformation. It's a major transformation in the classroom.

Phil Bourne: Yeah, and I think, more generally, it has created absolute ripples through the fabric of education. There's just no two ways about that. And I think it'll be interesting to see how it evolves.
I mean, from my point of view as a dean, at the sort of leadership level, the university and I have been talking about AI and the impact it's going to have for a number of years, and it just didn't have the urgency that it suddenly has now because of the likes of ChatGPT. When I think about why that is, I think it's really because it's there in everybody's face, and it's there in a way that is both exciting and disturbing. There's the generative part of it, the fact that it's generating language that can effectively pass the bar exam, or certainly undertake an AP Biology test, or write an admissions essay for university entrance. This is pretty groundbreaking. So I think that's part of it. The other, almost nuanced, piece is the way it acts: what you see appears to be typed out, letter by letter. It actually makes you think that behind that curtain,

Jeffrey Blume: There's somebody there.

Phil Bourne: There's someone there. And I think that leads to a level of disturbance. The good news is that it's got us as a university really trying to determine how best to utilize this. I think there's a general feeling across the board that this is an asset, not a threat. It could be a threat if used in nefarious ways, but that's been around for quite some time; we could certainly talk more about that. If you think about where it could be an asset, it opens up all sorts of possibilities. The trouble is that while it's an asset, it's also disruptive. There's no question that it's disruptive to all stakeholders.
You know, we talked about this on a different podcast with Ken Ono, the provost's advisor on STEM, and watching the chat as we were doing that, the kinds of questions coming in were from all stakeholders. Students were asking what the expectations were with respect to how these tools will be used. There were parents asking, how is that going to affect my child's chance to get admitted? There were faculty saying, well, how do I accommodate this in my classes? And then, how do I use it in research, and what does it mean for my research? So it affects every stakeholder in the higher ed ecosystem.

Jeffrey Blume: The amount of disruption is really incredible, even at the basic level of what you teach and how you teach it. I do want to tell one funny story. I think students feel like it's a little bit of cheating, because they're asking somebody else. When they had a take-home exam in my class, for example, they felt comfortable doing research with a Google search, looking for answers, pulling some things out, and distilling them, because I had told them they could do research. They didn't feel comfortable using ChatGPT, even though ChatGPT is doing almost the same thing. And part of that, I think, is because it's personified to some extent. So I had a chat with my class about what's in bounds and what's not, and I told them to go ahead, that they could use anything, that they could use ChatGPT as much as they wanted. Most of our faculty have actually taken the same stance. In fact, we have some faculty who actively gave homework to go to ChatGPT, ask it to solve the problems, and then go back and figure out whether or not the answer was correct, in what sense it was correct, and why or why not.
So faculty are already actively trying to work this new tool into their classes, seeing it not as a cheat sheet but as a new tool for doing research, for helping students develop skills, and for getting some instruction. But that requires revisiting all sorts of materials. The type of rote homework we used to give for practice, it's hard to figure out where to work that in now, because it can get solved very easily. Then the students don't gain the benefit of thinking the problems through and building that muscle memory, or brain memory, about how to solve them; they could just ask ChatGPT to solve it. They almost do better if you ask them to ask ChatGPT first, and then have them evaluate whether or not ChatGPT did a good job. So it's really changed things.

Phil Bourne: Yeah, I think that's really the right attitude for how to approach this, and it seems to be the consensus. As I said, within the university leadership, that's exactly how people want to approach it. We're waiting for the results of a task force that is going to report over the summer and has been interviewing a large fraction of the stakeholders of the university, but my suspicion is that this is what's going to come out of it. And as a tool, like any tool, you need to recognize what it can do and what it can't do. Like you, I was giving a class on responsible research, all about the ethics of research and so on, which of course this touches. I actually asked ChatGPT for a definition of what it thought responsible research was, and it gave one. If you highlight, which I did, the elements it came up with, they're exactly the things that were already in the syllabus, that other lectures talked about, and that I was talking about. So it was good at that.
It sort of created a new starting point. And I think that's what tools often do, right? They create a new starting point from which you can advance forward. So I think that is very exciting. The downside is the hallucination that everybody talks about. I asked ChatGPT to give me a set of references that are seminal in the study of this field. It gave me ten lovely references. I started Googling them to look them up, and not one of them was real. There was nothing real about what it pulled up as references to the work. Even though it has clearly gone through a lot of material, particularly the likes of Wikipedia, and has the general definitions, it wasn't scholarly in the way that you expect of your students. So it gives you a starting point, at which point the student is still left to go and research the literature around what ChatGPT said. That's the next incremental step. Of course, with every version it's going to get better. So a real challenge is going to be moving us as teachers, and the students, fast enough to keep up with the pace of improvement in these tools.

Jeffrey Blume: And thinking about interacting with it differently, right? Thinking of it as giving you a draft that you're then going to revise, because just because it shows up on the internet doesn't mean it's correct, or even what you want. The same hallucinations come up. So I tested my midterm. I gave a take-home midterm, and I tested it by running it through ChatGPT. And it did great, or I should say it looked like it did great, except that there were a lot of empty functions. Functions where I said, compute the James-Stein estimate, and it would say, okay, get that James-Stein estimate. But there's no such function. So it understands that you want a thing, and that you need a function to do it.
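As an aside, the shrinkage estimator mentioned in that exam question has a compact closed form. Here is a minimal sketch of the James-Stein estimator (assuming that is the estimator meant here, which the transcript garbles), which shrinks a vector of observed means toward zero:

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """James-Stein shrinkage estimate of a mean vector.

    Assumes x ~ N(theta, sigma2 * I) with dimension p >= 3;
    shrinks the observed vector toward the origin, which dominates
    the raw observation in total squared error.
    """
    x = np.asarray(x, dtype=float)
    p = x.size
    if p < 3:
        raise ValueError("James-Stein requires dimension p >= 3")
    shrink = 1.0 - (p - 2) * sigma2 / np.dot(x, x)
    return shrink * x

# Example: estimate 5 means from one noisy observation each
obs = np.array([2.0, 1.0, -1.5, 0.5, 3.0])
est = james_stein(obs)  # each component pulled toward 0
```

The point of the anecdote stands either way: the estimator is a few lines once written, but the generated exam answer named a helper that did not exist anywhere.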
And it can help you set up some structure, but beyond that, it can't. It actually took me longer to get ChatGPT to get the code right, by giving it very specific examples, than if I had just coded it myself; twice as long. So it's not quite at the point where it's super-efficient, though I think it will grow into that. I think ChatGPT is going to be what I think of as a research calculator, in the same way that we use a calculator to do arithmetic. We don't emphasize arithmetic as much now; we emphasize general concepts. ChatGPT is going to be more of a research calculator to help people distill information, to get a sense of things, or a place to start.

Phil Bourne: Yeah, I think that's true. And the interesting thing is that it touches every discipline. This isn't confined to STEM or anything else; it affects the humanities and social sciences as much as it affects what we've been doing traditionally. And that also, of course, relates to what we do in data science, and maybe we'll come back to that in a minute. But thinking about it as a tool in these less traditional fields, it clearly changes the way you learn. Already, if, say, you're a historian studying some aspect of history, you're going to use Google to find documents, but then you're actually going to read those, and because a lot of that material is still not digitized, you're going to have to go to special collections and all sorts of places to actually get to that content. Clearly, the inverse driver here is that the power of these tools means that more and more material is going to be digitized to be embraced within the tool. So the special collections will be too.
And there's all sorts of copyright and other things going on here, of course, but putting that aside, you're going to start seeing all of this, and you can do the most amazing things. Suddenly the starting point is not you poring over a manuscript on a very specific topic; it's an accumulation of manuscripts within the framework of a generative model. That is your new starting point, and then you might dig into specific aspects. So what's happened is that the efficiency with which you work, and presumably the rate of discovery, will increase, and the way you go about it will also change quite dramatically. And that, I think, is where the excitement is in all of this. When I started using various aspects of AI in biomedical research many years ago, that's essentially what happened. It's just that now it's suddenly on steroids.

Jeffrey Blume: Yeah, and I also think it has really accelerated the learning model. You might imagine that students would first need to take a class before they could be a TA and check other students' work. But what ChatGPT is doing is essentially turning every student into a TA. If you ask it a question, you now have to know enough to evaluate the answer. And the students aren't quite there yet; we haven't quite prepared them to switch into that role. For a lot of students, switching from being a learner to somebody who can be more of a teacher and help other students is a big step, and it takes a lot of synthesis of material, and confidence, and the ability to think through and practice: what is right, how do I evaluate this, I'm sort of flying without a net. The way we teach now, we sort of don't let students take that role on yet. And part of what we're doing in the school, in our active learning setting, is to try to do exactly that, to get people working in groups.
But ChatGPT almost forces this now. Students now need to do that practice, to learn the material as masters outside of class, and then come and do these things. So I find it's accelerating the demands on the student, relative to the subject material, very quickly. And it's totally changing how we do student evaluations, what materials we choose to teach, how we present that material, and all sorts of things.

Phil Bourne: It does make me think about the situation where, in some ways, the student is ahead of the faculty with respect to this kind of adoption. I think about active learning, which is, in a sense, what you've been describing, and how that has caught on. A lot of faculty are slow to change, and at the same time, the rate of change as a result of this kind of tool is pretty dramatic. In terms of who does best in higher ed going forward, it's really going to be about how we embrace this and stay prepared. You and I are already talking about this retrospectively; the tools are here, right? What we're not talking about yet is the next tool. What is the next thing? Of course, that's very hard, because we don't know what it is, and even those researching the field don't necessarily have a good handle on it. But clearly there will be other advances, and they won't just come from AI per se. They'll come from other types of technologies that feed into this: the use of virtual reality, for example, which still is not really used much in the classroom, but that's clearly going to change over time. So it's a new world, very exciting and somewhat daunting.

Jeffrey Blume: Yeah, I could totally imagine a virtual classroom where students could be home and could join a class, so they wouldn't feel like they're the only person; maybe there are other people.
And it's a short class, ten minutes, and they learn a particular topic. People are watching YouTube videos for this now; they might instead do it in a more social way, with everyone doing things through virtual reality, and the interactions might be run a little bit by tools like ChatGPT. I think the major advance for me in ChatGPT, the big stunning advance, is the ability to communicate in what seems like a very common language, an easier understanding. You used to go to Google, get a whole bunch of results, and have to go through them. Now it's all condensed in a way that you can read and understand, and feel closer to. There are trade-offs with that; you're trading some volume and some sensitivity for specificity in your results. But those sorts of things are now going to translate, probably, into the classroom and the student learning experience. So I think students will be learning more, or I hope students will be learning more, outside the classroom. And the classroom will be more about synthesis, how you figure out what a right answer is in a particular context, how you think about it, and less about rote practice. We're moving from rote practice to higher-level function in the classroom, and that's what's exciting. But, and we say faculty are slow to change, it's very hard to redevelop a course to catch up with that, because all your materials and frameworks, you're smiling because you've done it, are built as you move things through, to bring the students to a certain place where they understand everything. And now that's just been accelerated.
And I also think it forces the faculty to come out a little bit from behind the lectern, in a figurative sense. They have to do more of an apprenticeship and less of: here, just learn this, learn this, learn this, here are my slides. Instead: here, let me take you through the learning process, and let's evaluate this as a group. And that's a very different interaction as well.

Phil Bourne: I'm actually smiling not because of that; I'm smiling because you're defending the faculty, which you should. And of course I should too, but I don't do anywhere near as much teaching as they do, or you do. You're absolutely right: to be fair to the faculty, it's a lot of effort to refactor a course under these circumstances. We've talked a lot about how AI affects education in higher ed; let's talk a little about research, and then administration, which are the other two things we spend a lot of time on. So, research. In my own research, I'm particularly excited about what's happened. I work a lot at the molecular level, and about two years ago the breakthrough of the year, according to Science magazine, which is a reputable source, was an AI model that could actually predict the structure of a protein. This has enormous ramifications for how biology advances, because from those structures comes function, and so on, and that affects everything from food production to transport and, obviously, health. So it's really changed things. I wouldn't say it cast aside, but it requires significant retooling of, the experimental methodology that was in place to do exactly what an algorithm can do now. And the algorithm can only do it because it has so much data to learn from, data that was derived from experiment.
So in a way, to say that it put those folks out of business by virtue of what they'd already done isn't fair. But it changes the model of how things get done, and so it has really amazing ramifications. I think it changes the research method as well. You can look in any bibliographic database and see that the number of paper titles mentioning AI in some form has just exploded, as this takes hold across all of scholarship. So it's affecting all of our research.

Jeffrey Blume: I have some mixed feelings on how it affects research. As a statistician, I tend to be a little bit of a reductionist, and I find it helpful to have a simpler model that generally describes nature. But that may not always be correct, and I think that has its limitations, in your case for protein folding. When you take an approach that can really search the entire solution space and figure out all the solutions, that certainly opens up tremendous avenues for research, simply because we hadn't learned yet, even if there were rules that governed all those cases, we didn't understand them; you would know better than I. But now, all of a sudden, you have all these extra proteins and all these other structures to explore, whereas before you weren't able to get there from that smaller reductionist model. So there's a real trade-off in thinking, and we keep going to these bigger and bigger solution spaces. And I begin to wonder, can you even get through all these things? Can we test them all? What does the research look like now? Is every protein going to be generated in a lab and tested for different things? Are there too many?
Are we using it for guessing about what the right molecules are, here or there? It does change us from asking, can we do it, would this work, to: okay, if this should work, how many of them should we do, right?

Phil Bourne: This is a whole podcast unto itself. I will just say one thing, mainly, about this that I find really intriguing, and it relates to human versus machine in this field. Everybody recognizes the double helix of DNA, which of course is a precursor to protein. That is an iconic view, and the reason we have it is because that's how humans can really comprehend this very complex molecule; it's a very useful shorthand. The problem is that after a period of time, because you're so used to working with this kind of representation, you actually begin to think that's what it's really like. It's what I call the curse of the ribbon; proteins have more of a ribbon diagram, but it's the same concept. On the other hand, an algorithm is just absorbing many, many features, essentially exploring that feature space and coming up with answers. So in a way, and this is a theory, the human act of reductionism, creating an iconic and simplified view of a molecule, has actually made it harder for us to see the big picture of what's really going on. And then AlphaFold2, the algorithm that first really had the major breakthrough, is able to see it. So you start to question the scientific method, which very much aligns with reductionism. Under these circumstances, when you can absorb an ever-increasing amount of feature space by virtue of the amount of data you have and the forever-increasing computing power, it begins to change how you think about how you do science.
Jeffrey Blume: I'm thinking exactly that. What are the general lessons? What are the broad strokes? What are the rules that hold almost all of the time? That sort of goes away, right? It becomes a lookup table: if I want the answer to this case, you tell me the features, and I can find the answer exactly for that case. But there's no general pathway through the solution space from one place to another. Interestingly enough, the same thing is going on on the biomedical side with clinical prediction models. I do a lot of work there, in particular in lung cancer screening, where we want to know who to screen, based on who might develop lung cancer. We have some general rules: if you smoke a lot, you may get lung cancer; if you're older, you may be susceptible; if you have a genetic history, you may be susceptible. Broad rules. And we haven't quite gotten to the point of these very highly sophisticated, giant models; we can run them, but I don't know that they predict lung cancer risk all that much better, and we're also struggling with which features to put in those models. But clinical prediction models are getting more and more complex for exactly this reason, and you lose some of the medical flavor of, if you're older, you might be a little more at risk for cancer, because you're taking into account all sorts of other trade-offs. Maybe you live somewhere without a lot of air pollution, so you have less exposure to fine particulates in the air, and that reduces your risk. So we're still struggling on that side about putting those together. And those solution spaces are really ginormous, because everyone is their own case; you're almost trying to do personalized medicine when you're trying to predict people's risk.
And there you can see the shift coming, from having a few easy factors to check in a clinic, right? Are you over 55? Are you male or female? Do you smoke? Okay, I'm going to screen you. To: let me calculate your risk according to some model, and if that calculation is high enough, then I'll screen you. So we're going from that easier space to, what is your risk? And risks are hard to estimate and hard to get a sense about. It's interesting to see a similar phenomenon growing into other areas and having an impact. So all these advanced tools for prediction are really changing a lot of what we're doing on the research side, and there's a hunger for data. As soon as we figure out how to combine everyone's medical records from different places, which is a problem unto its own, I think you'll start to see much more advanced risk prediction models for the person, models that aren't necessarily disease specific, but that tell you what sorts of things you might eventually run into or be at high risk for.

Phil Bourne: As you were talking, I was thinking about what it is that drives us into research in the first place. In my own case, I don't know how I ended up in data science, but my PhD is in chemistry, physical chemistry. I think it was in high school that I was really taken by the periodic table. Here was this thing showing such order in the universe, and, you know, it had been discovered by a bunch of people, a Russian scientist in particular, poring over this, discovering new elements, and adding them to a model that worked. And then I think, well, how would that discovery happen now?
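Before moving on: the shift Jeffrey describes, from a clinic checklist to a model-based risk threshold, can be sketched in a few lines. The logistic coefficients and the threshold below are illustrative placeholders, not a validated clinical model:

```python
import math

def checklist_rule(age, smoker):
    """Old-style screening: a few easy factors checked in clinic."""
    return age >= 55 and smoker

def model_risk(age, smoker, pack_years):
    """Model-based screening: a logistic risk score.

    The weights here are hypothetical, chosen only to illustrate
    the structure of such a model.
    """
    z = -7.0 + 0.05 * age + 1.2 * smoker + 0.02 * pack_years
    return 1.0 / (1.0 + math.exp(-z))  # risk in (0, 1)

def screen(age, smoker, pack_years, threshold=0.015):
    """Screen when the estimated risk clears a chosen threshold."""
    return model_risk(age, smoker, pack_years) >= threshold
```

The contrast is the point: `checklist_rule` is a yes/no lookup on easy factors, while `model_risk` produces a continuous probability that someone must then threshold, which is exactly why risks become harder to communicate and get a sense about.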
I mean, if you knew the atomic numbers and the properties of all these elements, and you said, build me an organized chart of these, I wonder whether it would come up with a periodic table? The issue, of course, is that I'm sure it would now, because it already knows what a periodic table is. But this ability to be a facilitator in all of this, you know, I might have gone on to do something completely different if ChatGPT had been around when I was in high school. It would...

Jeffrey Blume: Have told you what it is. I wish I'd had ChatGPT around in high school, because I didn't like chemistry; I had a harder time in chemistry. But the organization that's there, right, and how chemistry advanced over time, you can see it too. Yeah, that's sort of fascinating.

Phil Bourne: It's kind of the building block of everything. You look at the properties of those elements as they sit in that table, and that defines so much of what actually happens in life, whether it's synthetic life, or something synthetic, or human life. So let's turn to...

Jeffrey Blume: Administration, yeah. So one thing that I noticed, for example: I use ChatGPT to help me write. I don't think of myself as a natural writer, and when I have really important things to present, or I want very clean language, or I want to make sure my purpose is plainly made, and sometimes, as administrators, you have to send emails that are very clear, I'll have ChatGPT edit them. And I'm not the only one, I know. It is fantastically helpful. It's on demand, it's fast. I used to use a thesaurus a lot. I don't use a thesaurus anymore.
I just use ChatGPT.

Phil Bourne: Yeah, it's on demand and fast. You know, the president of the university, Jim Ryan, was saying that part of what convinced him of these fundamental changes was that he asked ChatGPT to write a mission statement for the University of Virginia. And it did a pretty good job; actually, he said it did a better job than he did. Though, and this goes back to the periodic table example, I'm sure it pulled a lot from what he'd already said about the mission statement of the university. So from an administrative point of view, it's potentially facilitatory in defining the mission of the institution itself, all the way down to everything that can possibly happen. We're very much focused on generative models, but there are so many other aspects of what AI can do, particularly, as I already alluded to, in the data integration space. One example I like is the notion that there's an interrelationship between students' health and well-being and their grades, and trying to be predictive, catching early where they're having significant problems, through even minor changes in their grades. This opens up a huge ethical can of worms with respect to what should be looked at and what shouldn't. But the idea that you can potentially help the student and their performance comes into the whole precision education piece of it. And there's a lot of administrative function here, in the sense that student records and student health are all kept in administrative resources within the university. I remember saying to the president, again, Jim Ryan, when we were forming the School of Data Science and he was congratulating us, well, we only need a reason to hire the people
We're graduating. Because, and this is not a criticism of UVA, it's very general, we're not actually using a lot of these tools right now to actually improve the university experience for all stakeholders: parents, students, faculty. Clearly now, with this kind of interest in these tools, I think that's going to start to happen. Jeffrey Blume Yeah, so for us, it's already starting to happen on a couple of levels. We still do the old approach to this, which is to have a student review meeting, where you invite the student's advisor and all the faculty, and you run through the students and see how they're doing. But we're also, through our Student Affairs office, working on setting up dashboards based on student records, so that they can indicate, hey, this student hasn't turned in homework. In our online program, for example, we want to catch students who consecutively don't turn in their homework so we can engage really quickly. We're building dashboards so we can check mid-semester grades, and it would be great to even be able to roll in some interim performance measures if we can get them; we're slowly putting this together. And that dashboard is looked at as well by people in Student Affairs who know the students, see them, and talk with them about how they're doing in their life: are they having a good time, are they getting a job, are they engaged in the program, do they feel welcome here? All of that information can get into one place. Some of it we can automate, but some of it requires a human touch, and that human touch, I think, is important. But the ability to have almost an electronic health record and electronic academic record, broad enough that you pull everything together and can see how the student is doing, would be great.
And right now we have lots of separate academic records that we use to do that, so it would be fantastic to pull these things together. I will say one thing on the administration point and using Chat GPT: I think as administrators, you have to be careful. It's easy to use it, but I feel like it loses the personal touch. So I have been trying to figure out how to take the help on grammar and style but maintain who I am in my emails, for that connection, because we do a lot of communication through email. And people tend to read an email in whatever tone or mood they're in at the moment, so that connection, I think, is very important. Chat GPT is still very generic, and will probably go on being very generic, but it's important, I think, to connect. Phil Bourne Well, the scary thought is that sooner or later it also detects your emotional state at any given time and responds accordingly, not just answering a question or providing some text, but doing it in the context of your current mood. I'm making this up, but clearly Jeffrey Blume It does that a little bit. It was the guy from the New York Times who convinced Chat GPT to fall in love with him. There were lots of prompts, and some of that emotional feeling eventually got picked up. Phil Bourne I mean, this is not new. They've been trying to use these tools, when someone calls a helpline, to gauge the likelihood that they're going to do some damage to themselves, for example. There are obviously huge dangers with this, but potential value to society, which of course is what we're all about. But sticking with the administrative thing, one of the fears in all of this is that jobs are going to go away.
And as you were talking about some of the administrative aspects, I was just thinking: in some ways it's creating a lot of new jobs, but a lot of different jobs. Again, there are lots of nuances and ethics associated with this, but part of the developments going on are not just in these tools but also in things like sensors that detect the presence of people in certain places. We know this already happens all the time: you go into the cafeteria and buy something, and somewhere, something knows that you've been there. You'll have more and more of that. But how do you use that information effectively to improve, for example, the student experience, so that there are always enough spaces for people to sit? Right now we're very retrospective rather than trying to predict. Trying to predict, for example, what kinds of buildings need to be put up to respond to these kinds of changes, or to be ready for them, creates a whole new industry. Jeffrey Blume I also think, on the administrative side, as you were talking, I was thinking: I do have a number of people I work with who are all very good and will help me write and craft things. But now I can have them help me be strategic about what the message is that I really want to send, how I help people get on board with that message, and how I move the team in that direction. So we spend more time talking about how to help people come together, what exactly we want, and how to be strategic than we do writing the memo. So I do think it's an advance, but you have to be cognizant that you can be more purposeful with your message. Phil Bourne Yeah.
Well, maybe we should think about wrapping up at this point. But I have to say, first of all, it's been a great conversation. It's been fantastic. I think we have a different but exciting future in front of us, and I think universities, and higher ed more generally, have really got to be prepared for this. We didn't really talk about preparing the students coming into higher ed for this, and that's probably a whole other podcast. But it's an exciting time, indeed. So thanks very much. Jeffrey Blume Oh, it's been fantastic. It's fun to talk about these things, and it's a pleasure to be able to just think about them. Monica Manney Thanks for checking out this week's episode. To stay up to date on current episodes, subscribe wherever you listen to podcasts. We'll see you next time.
