![AI image](/sites/default/files/2024-11/AI%20image%20resized.png)
Over the past two years, AI technologies have increasingly been adopted to streamline operations and enhance productivity. One such application is the use of AI to generate automated summaries of meetings and distribute them to participants. Participants are often not even aware that meeting discussions (chat and often voice) are captured; these are then automatically summarized and sent to all participants, often without any approval or oversight. While this can save time, it also poses significant risks, particularly for local government leaders who handle sensitive information.
Some of the major risks include:
- Confidentiality Concerns
  - Automated summaries may inadvertently include sensitive or confidential information that is not appropriate for all participants. For example, participants added to a meeting who were not originally invited may receive information that is not relevant or appropriate for them.
- Contextual Misinterpretation
  - AI may lack a nuanced understanding of context, leading to summaries that misinterpret or misrepresent the discussions.
  - Example: A nuanced debate on policy could be summarized in a way that oversimplifies or distorts the actual conversation.
- Lack of Human Oversight
  - Relying solely on AI-generated summaries removes the critical human oversight needed to ensure accuracy and appropriateness. AI-based summaries may contain errors, particularly when language translation is also involved.
- Legal and Ethical Implications
  - Distributing sensitive information without proper vetting can lead to legal liabilities and ethical breaches.
Recommendations
- Implement Review Processes
  - Ensure that all AI-generated summaries are reviewed and approved by a designated person in your organization before they are distributed.
- Set Clear Guidelines
  - Establish clear guidelines on what types of information should and should not be included in automated summaries. A checklist for reviewers to follow will help; this could be part of your AI usage guidelines.
- Educate and Train Staff
  - Provide training and workshops for staff on the potential risks of and best practices for using AI-generated summaries. Emphasize AI as a complementary tool, with a "human in the loop" for review and approval.
Conclusion
While AI-generated summaries can be a valuable tool, it is crucial for local government leaders to be aware of the potential risks and take proactive steps to mitigate them. By implementing review processes, setting clear guidelines, educating staff, and using AI as a supplement, organizations can harness the benefits of AI while safeguarding sensitive information.
Editor's Note: This article was written with the assistance of Microsoft Copilot, an AI assistant, to generate initial ideas and draft content, and then thoroughly reviewed and edited by the author.
Join ICMA's Technology, Engineering, and Data team for A Practical Guide to GenAI in Local Government.