Transcripts

Transcripts are generated using a combination of speech recognition software and human transcribers and may contain errors. Please check the corresponding audio before quoting in print.

Joe Supervielle:

Welcome to Voices In Local Government. My name is Joe Supervielle, with us to discuss how local government can understand and use artificial intelligence is Santiago Garces, CIO of Boston, and Hemant Desai, CIO of ICMA, and who also held that same position in various local governments for many years. Welcome.

Hemant Desai:

Thank you, Joe.

Santiago Garces:

Thank you, Joe.

Joe Supervielle:

Also of note, register early for $100 off ICMA's Local Government Reimagined conferences, April 10th through 12th in Boston, and June 5th through 7th in Palm Desert, California. The theme for both conferences is artificial intelligence, so I'm excited to kick off that topic with you two today.

Today we are going to try to avoid hype and hypotheticals. Instead, we're going to focus on the immediate impact of AI in local government, regardless of population or budget size. This will include employee and resident impact, vetting vendors, ethics, policy, how to take first steps, and maybe even a couple of questions about our favorite AI movies if we have time at the end. So first, Santi, we all know about Siri, Alexa, and even ChatGPT, but can you define AI and maybe dismiss the most common myths or misconceptions?

Santiago Garces:

Absolutely. Again, I think when we talk about AI, we are talking about a really large field, and a field that goes back decades. I think that most of the time we're actually thinking about generative AI, and particularly these large language models and some of the multimodal language models that have come up in the past couple of years. So I would broadly describe it as this really complex and amazing type of math that allows us to use data and information that exists in the world to try to be able to...

And for the most part, we've been thinking about having that data be able to predict or determine something. Within the generative world, we use that data to generate new content, so we're able to ask questions and generate text, or generate an image, or generate a video. The part that's amazing is that, based on the data these models have been trained on, we're able to produce content that is actually quite useful, if not always reliable. And that, I think, is what the buzz is about when we think about AI these days. But I don't know, Hemant, if you have further thoughts on this.

Hemant Desai:

No, absolutely. I think you nailed it, Santi, and you said it very nicely that AI has been around for decades, since the early '60s and even before that. But many of us only became aware of AI with the new generative AI, which is essentially a field within AI that gives us easier access, and is maybe even more powerful than the AI we always had in the back end, in the labs. If you've heard of Deep Blue, the IBM system that beat Kasparov in chess, at that time people said, "Wow, that's great," but nobody really knew what that meant until a year, year and a half ago now, when generative AI, with OpenAI's release of ChatGPT, became available to the general public and people became more aware of AI.

So for me, really, AI's use includes everything Santi said: data analysis, where you provide some data and it makes predictions. I see AI as two separate arms. There's traditional AI, which existed long before generative AI, where you provide some data and it does a great job of making predictions. Generative AI goes a little beyond that: it not only makes predictions based on the data you give it, but, as you'll see if you play with ChatGPT, you can give it a high-level opening like "Once upon a time in Neverland..." and it creates a whole story for you, something that never existed before. So that's where I see the value of the new AI tools, in how creative they can be. At the same time, obviously, there are some caveats to that too, which I'm sure we'll talk about later on.

Joe Supervielle:

Is the creativity the difference between glorified search results, or even just those business intelligence dashboards that have been around for years, if not a decade plus? Is that the key difference between those things?

Hemant Desai:

I think so. There are many other differences, but certainly creativity and getting more targeted responses much faster, for me, make adoption easier. What do you think, Santi?

Santiago Garces:

Yeah, I think creativity matters. Particularly, there are certain parameters, certain things below the surface you can set around how random the generation of the response can be, and that's part of what makes it seem novel and creative. One of the things, and this might get a little bit esoteric, but the way that generative AI tools work is they've created these really clever representations of language. What is novel is, for instance, in the past when you used a search engine, if you wanted to find...

Let's say that you're looking for grants on the city's page or the state's page. To find agricultural grants, you had to use the word agriculture or agricultural, something that appears either directly in the text or in the metadata associated with what you're looking for. What is really incredible, and almost magical, about large language models especially, is that they use these things called embeddings, which change the representation of language. So if you type in cow, and it's used in a context similar to how the word agriculture was used, you could actually retrieve information about agriculture. So there's the generative piece and the part about creativity, but there's also something really remarkable, which is this contextual way of representing language that is novel and really quite amazing.
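[Editor's note: the retrieval idea Santi describes here can be sketched in a few lines. The vectors below are toy values invented purely for illustration; real embedding models return high-dimensional vectors learned from data.]

```python
import math

# Toy embedding vectors, invented for illustration only. Real embedding
# models produce vectors with hundreds or thousands of dimensions.
embeddings = {
    "agriculture": [0.9, 0.8, 0.1],
    "cow":         [0.8, 0.7, 0.2],  # close to "agriculture" in meaning
    "parking":     [0.1, 0.2, 0.9],  # unrelated concept
}

def cosine_similarity(a, b):
    """How 'close' two embedding vectors are (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Rank terms by similarity to the query "cow" instead of exact keyword match:
query = embeddings["cow"]
ranked = sorted(embeddings,
                key=lambda k: cosine_similarity(query, embeddings[k]),
                reverse=True)
print(ranked)  # "agriculture" ranks above "parking" for the query "cow"
```

This is why a search for "cow" can surface agricultural grants even though the word "agriculture" never appears in the query: similarity is measured in meaning-space, not by shared keywords.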

Joe Supervielle:

To follow up on that, Santi, getting back to practical uses in local government and not just the big picture, can you talk to us about how AI can be used as a precise tool for a specific project versus this kind of like, "Oh, it just does everything for anyone at any time. Can we just use it for this or this," how can a local government figure out a specific use, apply it to a specific problem, and solve it whether it's internally or for the residents?

Santiago Garces:

Absolutely. I think the first thing is that we're still in the early days of these tools. In some sense, I try to imagine what it was like in the early '90s to start thinking about how government could work better with this new thing called the Internet. So we're trying to figure it out. There are some things that seem really promising and that work well.

So you can use some of these large language models, like ChatGPT, Bard, or Claude, to brainstorm and think about new ideas. In some sense, the randomness itself helps you create and think broadly about policy ideas, programs, or names for a particular program. The language models are very good at processing language, so you could ask one to summarize text, to change the tone in which a text has been written, or to tailor text to a particular audience.

So that's why this technology, for me, is so exciting. It's a fundamental technology, just like spreadsheets. There's no one business case for a spreadsheet, but they're so useful in almost anything where you have to use numbers. Large language models are a little bit like that, but for text and language. So you can use them to generate new text, to edit and revise text, to summarize text, to translate text.
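[Editor's note: the text tasks listed above map naturally onto reusable prompt templates. The wording below is invented for this sketch; any chat-based LLM, free or paid, would accept prompts shaped like these.]

```python
# Illustrative prompt templates for common local-government text tasks.
# The template wording is hypothetical, not from any vendor's documentation.
PROMPTS = {
    "summarize": "Summarize the following text in three bullet points:\n\n{text}",
    "revise":    "Edit the following text for clarity and grammar:\n\n{text}",
    "retone":    "Rewrite the following text in a friendly, plain-language tone:\n\n{text}",
    "translate": "Translate the following text into {language}:\n\n{text}",
}

def build_prompt(task, **kwargs):
    """Fill a template; the result would be sent as the user message to an LLM."""
    return PROMPTS[task].format(**kwargs)

print(build_prompt("translate",
                   language="Spanish",
                   text="Trash pickup is delayed one day this week."))
```

Keeping templates like these in one place lets a small team standardize tone and review prompts before staff paste in any text, which ties into the guideline discussion later in the episode.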

Now, because it's still early, you have to make sure you're tracking and understanding whether it does that in a way that is reliable, in a way that accounts for bias, and in a way that accounts for fact-checking, because it will describe things in a way that looks accurate and real, because it's language, but it might not be. So the use cases are really broad, and a lot of where we are is still a space where we're experimenting, trying to figure out, based on our needs: do the tools do what we need them to do? And then the second stage is: will they do it in a way that is reliable and safe enough that we feel confident we could keep using the tool for that purpose?

Joe Supervielle:

So going back to what is AI and what is not, I would say... Correct me if I'm wrong, but just creating a transcript of this interview, for example, is not really AI. That's been around forever. But if AI can listen to this entire thing and create a legit summary of it, that would be AI. Is that right? And I ask too, thinking about a town hall meeting that might go on for hours and hours. They have the transcript, and they can publish that online, but the average citizen doesn't necessarily want to sort through all that. If the local government can use AI to get a legit summary of it, would that be an example of a specific type of project that could help?

Santiago Garces:

They both are types of AI. We use AI to transform these sound waves; our microphones are picking up air pressure, it's noisy, there's background... It's a really messy signal. We use AI to turn that into the words that were said. But to your point, that's AI that we've had for a while. That's what Siri does, that's what Alexa or Google Assistant does. What is novel is that, in the past, there wasn't really a good, high-quality, efficient way of turning those words into a summary, because semantically there was no way of knowing what you were actually trying to say or what the important points were. What is amazing about this math, and again, it's just math underneath the hood, is that it's actually able to take things that were just words and create something accurate, like a good summary. There are tools and techniques to make it work better, but on the one hand you're using an older version of AI to get that initial transcript, and then you're using the newer version of AI to get more out of it.
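[Editor's note: for a multi-hour town hall, the transcript from the first AI stage is usually far too long for one LLM request, so a common pattern is to split it into overlapping chunks, summarize each, then summarize the summaries. A minimal sketch of the chunking step, with illustrative sizes:]

```python
def chunk_transcript(text, max_words=1000, overlap=50):
    """Split a long meeting transcript into overlapping word-window chunks
    small enough to fit an LLM context window. Each chunk would then be
    summarized, and the partial summaries summarized again ("map-reduce").
    The overlap keeps sentences that straddle a boundary in both chunks."""
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
        start += max_words - overlap
    return chunks

# Example: a 120-word "transcript" split into 50-word windows with 10-word overlap.
demo = chunk_transcript("word " * 120, max_words=50, overlap=10)
print(len(demo))  # 3 chunks
```

The chunk size and overlap here are assumptions for illustration; in practice they would be tuned to the context window of whichever model is used.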

Joe Supervielle:

To build on it, yeah. Okay. The next bigger question was: how can local governments vet vendors pushing AI solutions? Because tech companies out there are going to throw AI on there; that's the buzzword now. They're going to throw it on as some big new solution, which may or may not actually be one. I suppose it doesn't really matter what you call it if it helps you do your work or finish your project better. But whether you're in procurement, or you're the IT director, or whatever job you hold in local government, how can you get rid of the fluff or the junk and ask the right questions to see if these tools can really help?

Hemant Desai:

It's a very good question, Joe. In fact, these tools are so new, but I'm guessing this must already be happening frequently. And I say that because when something happens frequently, a phrase or term gets coined. And believe it or not, there's a term coined for exactly what we just mentioned. It's called AI washing.

So AI washing is exactly what you said where companies are using AI to market a product where AI may have nothing to do with it, right? So I asked one vendor a question when they presented to me about a month ago on a product we are looking at.

They said, "Oh, we have now a new AI-injected algorithm within this tool."

I said, "Great, can you elaborate more about what type of algorithm have you used? What type of platform do you have that makes it become AI-driven or AI empowered?" And believe it or not, Joe, they could not give me the details.

"I'll get back to you," never happened, right?

So what I'm saying is, that's not to say every vendor is like that, but as an audience, and I'm glad you asked this question, be aware when vendors are pushing AI. Make sure you're not being AI-washed, that you're doing your research and asking deeper questions, and maybe getting your technology people on the call so they can ask those questions.

Joe Supervielle:

Yeah, I was just going to say, because you know what to ask for because that's what you do. But if I'm the communications director or in whatever other department, public works, I don't necessarily know. So I think getting someone from the IT staff or whatever it's referred to or called is a good idea. Santi, anything to add to that one?

Santiago Garces:

Yeah, I would actually say there are pieces where you don't even need a technical person: be clear about the value you want to get out of the solution. I think this is going to sound counterintuitive, because Hemant and I are both lovers of technology, but a lot of times people buy technology because they just want the problems to go away, and it never works that way. So knowing what problem you're trying to solve-

Joe Supervielle:

But that's what they're selling us.

Santiago Garces:

I know.

Joe Supervielle:

Sorry to interrupt you, but that's what they're selling us.

Santiago Garces:

And then shame on us if we think it's just magic. Nothing's magic. Technology always works in function of a problem, so you have to know what problem you have, and then make the vendor demonstrate that they're able to solve that problem. AI is not going to miraculously make people know what permit they need. So you have to understand what about AI is going to help with what, to get you what you need.

And here's the part that is maybe a little unpopular: it doesn't matter if it's AI. In fact, sometimes there are tools that don't use AI that will do things better. If you're trying to get quick information about frequently asked questions, maybe a website is better than a chatbot. We have a broad set of tools in technology to address issues, and, as you say, don't assume the technology is better just because it uses AI. If you have clarity around the problem, and clarity around what you think the solution might look like, force the vendor to meet you at that solution. And if they can't get there, then that's not the right vendor to work with.

Joe Supervielle:

Doesn't matter.

Hemant Desai:

Very good point.

Joe Supervielle:

And I kid you not, as I was writing my notes for the interview today, I got one of those emails pushing nonsense AI marketing, sent from an AI bot; a human did not write it, send it, or create it. It arrived as I was writing this question, which was a little creepy. That's, I guess, part of the AI universe too. Sometimes a little creepy, but...

Santi, one more thing. I agree it doesn't necessarily matter whether it's classified as AI if it's going to help, but this is an example where a lot of the listeners out there are not from a place like Boston, with that size, scope, budget, et cetera. So can you elaborate on the smaller and mid-size towns and counties that are looking at technology tools, AI or otherwise? How do they scale? How can they reduce this maybe overwhelming topic of AI, like, "Oh my gosh, I don't even know if we can handle this. I don't know if I know enough as the IT director or the city or town manager." How can those types of governments handle it?

Santiago Garces:

Absolutely. Well, I'll start by sharing a dirty secret: so far we have spent close to $0 on AI in the city of Boston. So you can do it in a city of any size. I think the first thing is trying out the tools. We benefit from the fact that a lot of the tools are easily accessible for free. If you go to OpenAI, you can access ChatGPT. Or if you use Bing, you can get access to ChatGPT, even ChatGPT version 4, the most powerful model, for free. Or you can use Bard for free.

Joe Supervielle:

Do you think it's going to stay that way for a while or they're just trying to get everyone hooked and then start charging later? What is your prediction for that?

Santiago Garces:

It's hard to tell. Look, when you see a little bit of the math of what it costs computationally to run each prompt, it's usually in the range of cents to tenths of cents. So it's expensive for them to run the service for free. But we don't know... Again, not only is the technology new, the business models are new. Everybody claims they have it figured out, but you see companies like Microsoft, Google, and AWS all changing the way they market and incorporate AI. Microsoft is starting to charge a premium for Copilot, which is its AI-enabled productivity suite. So what I would say is: embrace the fact that it is free while there are free options, and use it to experiment, to learn what it is and what it isn't, and to get a level of comfort.

And as long as you're not using sensitive information in the prompts, you'll probably be fine. Don't put anything in the prompts that you wouldn't want accessed by a reporter from the local newspaper. Besides that, you should be fine, and there's a lot of open data, open records, and open material you could work with. There are ways of starting small. You could start with a few subscriptions to the plus version of ChatGPT if you want; I think that will be plenty sufficient for a lot of places. Increasingly, if you're a little more sophisticated and you're working in an IT department that has more resources, a lot of the ways we're going to be buying some of these services is precisely as add-ons on existing services: Copilot for GitHub, or Copilot for Microsoft Office, or getting access to the OpenAI API through Azure.

So it's through our existing cloud vendors and cloud partners, more than likely, that we're going to be consuming some of these resources. But if you're small, embrace the fact that it is free, give it a try, and there are ways of getting a lot of value, at least knowing what it is and what it isn't. I'll tell you, I don't think you can do it without getting your hands a little dirty and actually trying stuff out. And I think it's helpful to do it regardless, and that's our attitude in Boston, because everybody has access to it.

So the vendors are using it. Knowing how a vendor might use it is helpful. Knowing how a potential applicant to a job might use it is helpful. Whether you like it or not, that's the reality. That's the world we're living in: advocate groups, community groups, everyone has these tools available. So I think familiarizing yourself is really helpful for knowing the context in which we live, because the world is definitely a little different than it was a year or two ago.

Joe Supervielle:

Santi, local government managers overwhelmingly report being burned out as civility degrades and social media aggregates and amplifies vitriol. AI has the potential to make this worse; it can lower the cost for bad actors to flood public workers with fabricated content. I think it also could be an offense/defense situation. So how do we protect our teams from this? What role does AI have in protecting local government from this risk?

Santiago Garces:

I think awareness is the first place. Again, being able to understand what the tools are able to do... This is what we just talked about: technology is kind of value neutral, but it depends on how we use it, and it can be quite detrimental. Unfortunately, I would say awareness is the first defense, and it's complicated. These are tough jobs, and it's a complicated world that we live in. There are things to balance, even when it comes to spam detection: people's First Amendment rights, and their ability to report to government that they're unhappy about something, versus our ability to detect and block that from our public discourse. These are tricky things that I think we'll keep wrestling with, but awareness is essential. If you don't know that this is happening and you don't know how it works, there's no chance you'll be able to anticipate and understand it.

Hemant Desai:

I agree absolutely that civility is a very complicated topic, and it certainly requires some deep conversations. But overall, I concur with and echo what Santi just said: it needs some measure of governance, if that's the right word, around those conversations, to make sure people have a true picture of what it can and cannot do, and of its power. Just as social media right now is sometimes used in a way that can disrupt people's perceptions, and not in a good way, I think AI, if not properly marshaled and used, can almost be social media risk on steroids, in my opinion.

Joe Supervielle:

Hemant, what would your opinion be on... When Santi says experiment with it, that could be individual employees, it could be departments, it could be teams, but at a certain point there's got to maybe be some type of policy or process or just some kind of guidelines to, A, be efficient, but B, to make sure no one goes off the wrong way. So whether it's the CIO or the city/town manager, how can they implement that without stifling the creativity or productivity of individuals or small teams? How would you find the balance there?

Hemant Desai:

Yeah, I think that's a very good question again, Joe. Santi was literally one of the leaders in creating a guideline in his environment, and I, in fact, pretty much mimicked for us what Santi did for the city of Boston. But I would say that for any small, midsize, or even large organization in local government that wants to embrace and experiment responsibly: begin with the baby step of creating some awareness, some guideline document. It doesn't have to be policy.

I know policy sometimes means a very different thing in different governments, but a guideline creates a guardrail of what to do, the dos and don'ts. In fact, Microsoft now has, on the Copilot site, a one-page cheat sheet of dos and don'ts: how to use prompts effectively, and what not to put in the prompts. Rightfully so.

And again, Santi said that you don't want to put private information in the prompt. Responsible use, I think, should start with some type of guideline initiative by the leadership within government, whether it's the CIO or some other leadership entity in the local government, that says, "Okay, we encourage you to use it responsibly. There are ways you can use it for free, at zero cost, but be mindful; here are some guidelines of the dos and don'ts." And there are very simple templates available for free from many different sources, including OpenAI's own site. There are nice templates for responsible use of AI too.

Joe Supervielle:

Yeah, myself, I went down the rabbit hole of trying to learn and read about this stuff, and then I started playing around with ChatGPT and even one of the image creators. I won't put an exact time limit on it, but a few hours later, I realized I don't know that I should be doing this during work hours. Don't tell HR. They don't listen to this anyways, but that's just one example. An employee might mean well, or like, "Hey, I want to learn this," but that's not necessarily what they should be doing, deadlines aside. But that's the kind of thing where it's case by case, and maybe as Santi said also earlier, we're still learning, right? There's not necessarily a strict here's-the-rule-on-that. And I imagine early on, it's better just to encourage that type of time, right?

Hemant Desai:

Yeah, I think so. I think giving staff members responsible, creative time to begin experimenting is the best way to not just embrace the new technology we already have access to, but to leverage it for the betterment of running local government. Only if you're given that little bit of space to be creative can you come up with a solution, right? So the balance is important. And I don't have the correct answer; each entity will have to come up with its own framework for what responsible AI usage means in its environment.

 


Guest Information

Santiago Garces – CIO, Boston

Hemant Desai – CIO, ICMA

Episode Notes

In part one of two, the immediate impact of Artificial Intelligence on local government is discussed:

  • Define AI and generative AI and dismiss common myths or misconceptions.
  • How can AI be used as a precise tool for a specific project right now?
  • How can local governments vet vendors?
  • Application for smaller towns and counties.
  • Are generative AI tools going to remain free?
  • AI’s ability to help and/or hurt public discourse and local government employees caught in the middle.


Resources


City of Boston Interim Guidelines for Using Generative AI


ICMA's Local Government Reimagined Conferences:

Explore the Future of AI in Local Government

One topic. Two locations. ICMA is headed to Boston, Massachusetts, and Palm Desert, California, for the Local Government Reimagined Conferences. From cutting-edge applications to strategic insights, discover how AI is reshaping the landscape of local government. Register here.

Government AI Coalition

Join over 140 government agencies in creating standards for responsible AI procurement and governance for public agencies. Learn more.
 

Additional Content

Zencity: Using Generative AI for Community Engagement, from our partner named a 2024 GovTech100 company.

New, Reduced Membership Dues

A new, reduced dues rate for CAOs/ACAOs, along with additional discounts for those in smaller communities, has been implemented. Learn more and be sure to join or renew today!
