Governments’ information systems play an important role in the success of our communities, and all trends indicate that our communities will depend on these systems even more in the future. Technology is being deployed everywhere—projects for making cities smarter, predictive analytics to mine data for better decision-making, autonomous vehicles, and even collective intelligence platforms for cocreating solutions with residents. Yet, in recent times, we have witnessed several incidents demonstrating how communities can be brought to their knees by those very same information systems, which are meant to assist, not inhibit.
Four U.S. cities—Pensacola, Florida; St. Lucie, Florida; New Orleans, Louisiana; and Galt, California—were all victims of cyberattacks during December 2019, and these attacks rendered their telephone, email, law enforcement, waste, energy, and payment systems inoperable. Often these attacks demand a ransom, and councils find themselves either paying the attackers or employing external cybersecurity and consulting firms to mitigate the situation and repair the damage. In the case of New Orleans, Deloitte was paid $140,000 to investigate the attack, even though the city had a cybersecurity insurance policy that wound up covering a portion of the final cost. In a separate attack in Lake City, Florida, the council reluctantly paid $460,000 to cyberattackers after the entire council's systems were shut down.1
In addition to IT systems being hacked and suffering irreparable damage, we have also seen cases where information systems deployed in communities produced unintended consequences and/or pushback from stakeholders. In these cases, while the deployment may have been successful from a systems standpoint, the outcomes were highly undesirable.
In Detroit, a $9 million initiative, the “Neighborhood Real-Time Intelligence Program,” implemented facial recognition software and video surveillance cameras at 500 Detroit intersections. This initiative built on the earlier “Project Green Light” initiative, which installed 500 cameras outside businesses capable of recording and reporting real-time video footage to the police. The software boasted the ability to match faces against 50 million driver’s license photographs in the Michigan police database. However, recent research has shown that current facial recognition software misidentifies black faces more often than white faces.2 The technology has generated widespread public criticism as residents feel their privacy is compromised and awareness of its racial biases continues to grow.
The problems of bias and unintended consequences have also been noted in the private sector healthcare system. For example, risk-analysis programs from UnitedHealth Group were found to assign comparable risk scores to white patients and black patients even when the black patients were considerably sicker.3 While risk-analysis algorithms can be useful for managing hospital resources efficiently, this algorithm used healthcare costs, rather than sickness, as its measure of risk. Because historically less money is spent on black patients with the same level of need, equally sick black patients received lower scores. The unintended consequence is that white patients were more likely to be selected for care management, essentially reinforcing a racial bias in health care.
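To make the mechanics concrete, here is a minimal sketch in Python, using entirely synthetic, hypothetical numbers, of how training a risk score on cost instead of sickness builds in this kind of bias:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic illustration: two groups with identical underlying illness,
# but group B historically generates lower healthcare spending for the
# same level of sickness (e.g., due to unequal access to care).
illness = rng.normal(loc=5.0, scale=1.0, size=n)   # true health need
group_b = rng.random(n) < 0.5                      # group membership
cost = illness * np.where(group_b, 0.7, 1.0)       # spending gap

# A "risk score" trained to predict cost will rank group B patients
# as lower risk, even though their illness distribution is identical.
print("mean illness A/B:", illness[~group_b].mean(), illness[group_b].mean())
print("mean cost (risk proxy) A/B:", cost[~group_b].mean(), cost[group_b].mean())
```

By construction, both groups are equally sick, yet any score trained on cost would rank one group as systematically lower risk. The numbers are invented; the failure mode is not.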
Finally, in the Kentucky judicial system, a risk-analysis algorithm was implemented to produce a score predicting the risk that a person would commit another crime or skip court. The intent was that the justice system would decide more fairly whether to hold a defendant in jail before trial. Officials hoped to reduce the number of people in jails, cutting prison expenses and presenting better circumstances to defendants. Unfortunately, the technology did not work as intended.4 Judges in rural counties—who generally had more white defendants—were more likely to grant release without bail than judges in urban counties—who generally had more minority defendants—because the rural judges more frequently overrode the algorithm’s recommendation. Furthermore, it was found that judges in urban areas more often overruled the default recommendation of waiving financial bond when the defendants were black.
A New Reality
Throughout local government, information systems are being designed, developed, and deployed today quite differently from traditional transaction processing systems or even traditional e-government systems. Systems being deployed today incorporate machine learning algorithms—also referred to as artificial intelligence or AI—that learn on the job. They ingest large volumes of data, are trained to recognize latent patterns in the data, and generate recommendations (outputs). Owing to their complexity and the nature of their algorithms, these systems seldom offer the transparency or auditability of traditional information systems.
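For readers who have not seen one of these systems up close, the following minimal Python sketch shows the basic ingest-train-recommend loop. It uses a synthetic dataset as a stand-in for real municipal data, and it also hints at why the result resists casual inspection:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a municipal dataset (e.g., service requests).
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ingest and train: the model "learns on the job" from historical data.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The output is a recommendation (a probability), not an explanation:
# the internal logic of 200 decision trees is not readily auditable.
print(model.predict_proba(X_test[:3]))
```

Everything here is hypothetical data, but the shape is representative: a few lines of code produce confident-looking outputs whose reasoning is buried inside hundreds of learned structures.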
Over the last year, I have spoken to more than three dozen managers who have oversight over communities (e.g., city managers, assistant city managers) and personnel responsible for building IT systems in communities (e.g., software engineers, programmers, data scientists). One focus of my conversations was to understand the knowledge gaps between managers/administrators who have to commission the design, development, acquisition, and implementation of information systems (especially emerging technologies that incorporate machine learning) and those who actually build these systems (e.g., IT professionals, data scientists). I was quite surprised by the knowledge gaps among managers/administrators when it comes to understanding the nature of emerging technologies, especially those that are AI-inspired.
Only a handful of local government managers understand the intricacies of current computational approaches. When asked to describe their level of knowledge of artificial intelligence or machine learning, most remarked it was “novice,” and many simply said they had no knowledge whatsoever. This is quite concerning given that IT solutions incorporating these computational approaches are being designed, developed, and implemented in many communities around the world. Even more concerning, these solutions will be connected to systems that already exist in the current IT ecosystem, raising the possibility of alarming scenarios such as cascading failures across networks.
As if that were not bad enough, my conversations with developers/builders of systems highlighted another fundamental knowledge gap: these personnel, while skilled in the technicalities of how to curate data, construct machine learning algorithms, and build data visualizations, often lack the necessary “public values” context. Put differently, they seldom appreciate what is unique about building systems for the public, with public resources, and what it takes for such systems to account for the nuances, diversity, richness, and complexity of the people they are intended to serve.
Designing systems for the private sector—where one can focus on improving just one or two outcomes and can choose which segments of the marketplace to target—is easier than designing public sector systems that must serve the needs of the entire community. Moreover, public sector datasets are often far messier, more incomplete, and more disconnected than those in the private sector, and this affects the success of the system.
Critical Considerations
Acknowledging, appreciating, and closing the knowledge gaps between government administrators and system builders is key to ensuring that any digital transformation efforts, especially those involving AI or machine learning systems, are developed in a responsible manner.
Toward this end, here are a series of points to ponder during conversations about digital transformation efforts within your communities.
For the Public Manager/Administrator
On data: What data is going to be used for the systems? Is the data free of biases? Is the data representative of the community? How secure are our data sources? What are the community’s expectations regarding privacy, security, and use of data? Do you have the necessary social license to use the data in a manner different from the purpose for which it was originally collected? Who will have access to data during the system-building effort and why? Do you need to anonymize the data prior to sending it, or providing access, to external parties?
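On that last question, here is a minimal Python sketch of one common first step: keyed pseudonymization of direct identifiers before granting an outside vendor access. The field names and key handling shown are hypothetical:

```python
import hashlib
import hmac

# Hypothetical managed secret: stays with the city, never shared with
# the vendor, so the vendor cannot reverse the token back to a resident.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"

def pseudonymize(resident_id: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, resident_id.encode(), hashlib.sha256).hexdigest()

record = {"resident_id": "R-102938", "zip": "70112", "service": "waste"}
shared = {**record, "resident_id": pseudonymize(record["resident_id"])}
print(shared)
```

Note that this alone is not anonymization: quasi-identifiers such as ZIP code, birth date, and service history can still re-identify residents in combination, and need their own treatment before data leaves the organization.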
On analytics and algorithms: What is the collection of algorithms that are being used to generate insights? How are these algorithms being trained to learn patterns from the data? How are the outputs of the algorithms being validated? Have the outcomes been validated on data that is representative of the community? What are the limitations of the algorithm? Is the software code open for inspection? Did the software reuse code from prior efforts? If so, why; if not, why not? Who has access to manipulate and alter the algorithms and overall system code?
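The question about validating on representative data has a concrete counterpart in practice: disaggregating performance metrics by demographic group rather than reporting a single overall number. A minimal Python sketch, with made-up labels and predictions:

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, group):
    """Disaggregate accuracy so gaps across groups become visible."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    return {g: float((y_pred[group == g] == y_true[group == g]).mean())
            for g in np.unique(group)}

# Hypothetical validation slice: true outcomes, model outputs, and a
# demographic attribute from a community-representative sample.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["A", "A", "B", "B", "A", "B", "B", "A"])
print(accuracy_by_group(y_true, y_pred, group))  # {'A': 1.0, 'B': 0.5}
```

A vendor reporting "94% accuracy overall" may be hiding exactly this kind of gap; asking for the disaggregated table is a question any manager can pose without writing a line of code.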
On interpretation and insights: How should one interpret the output of algorithms? What are the confidence levels associated with any outputs? How should personnel interact with system outputs? What happens if a resident disagrees with a judgment made by an algorithm? How should insights gathered from the use of the system be fed back to system designers so that revisions can be made? How should personnel be trained to use algorithmic outcomes to augment their interpretation of an issue?
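As one illustration of pairing confidence levels with human judgment, a simple routing policy (the threshold here is hypothetical) might auto-surface only high-confidence outputs and send everything else to staff review:

```python
def route_decision(probability: float, threshold: float = 0.9):
    """Route low-confidence outputs to a human instead of auto-acting.

    Hypothetical policy: only scores the model is very sure about are
    surfaced as recommendations; everything else gets staff review,
    and every case keeps a record of the score that was shown.
    """
    confidence = max(probability, 1 - probability)
    if confidence >= threshold:
        return ("recommend", probability)
    return ("human_review", probability)

for p in (0.97, 0.55, 0.08):
    print(p, "->", route_decision(p))
```

The point is not the specific threshold but that the interaction between algorithmic output and human discretion is an explicit, reviewable design decision, not an afterthought.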
For the System Designer/Builder
On data: How should we handle data to ensure no violation of public values? What are the primary public values that need to be upheld (e.g., fairness, privacy)? How should conflicts among public values be resolved in terms of access to and use of data? What protocols are in place to protect the community from harm in case of data misuse, breaches, or security violations?
On analytics and algorithms: How can we ensure that the algorithms being designed account for outliers in the dataset? How can we involve residents in the design and testing of algorithms? How adaptable are the algorithms to ensure that they can deal with changing conditions in the internal and external environment of the community? How do we ensure that the system being built is financially viable from a maintenance perspective? How do we ensure that the system is extensible (i.e., can be extended with new functionality)? How do we build mechanisms to routinely audit the performance of the system? Under what conditions should system use be halted, and what is the backup approach to satisfy community needs?
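The last two questions, routine audits and halt conditions, can be encoded as explicit policy rather than left to memory. A minimal Python sketch, with hypothetical thresholds:

```python
def audit(baseline_accuracy: float, recent_accuracy: float,
          tolerance: float = 0.05) -> str:
    """Flag the system when live performance drifts from validation.

    Hypothetical thresholds: a drop beyond `tolerance` triggers review,
    and a drop beyond twice the tolerance halts automated use entirely,
    falling back to the documented manual process.
    """
    drop = baseline_accuracy - recent_accuracy
    if drop > 2 * tolerance:
        return "halt: revert to manual process"
    if drop > tolerance:
        return "review: schedule retraining and stakeholder briefing"
    return "ok"

print(audit(0.91, 0.90))   # ok
print(audit(0.91, 0.84))   # review
print(audit(0.91, 0.78))   # halt
```

Agreeing on these numbers before deployment, with the manager rather than only the builder in the room, is what turns "we audit regularly" from a slogan into a protocol.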
On interpretation and insights: How can we ensure that the outputs of the algorithm are fair, and who interprets their fairness? How can we ensure that there is some level of transparency, traceability, and “model explainability” for the outputs? When presenting outputs as visualizations, have we checked to ensure that we are not inadvertently reinforcing existing cultural and societal biases? How do we collect and analyze feedback on the system as it is deployed? How do we share confidence levels in outputs in a meaningful manner to augment decisions made by humans? How do we share risks and limitations of using the system?
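On “model explainability,” one modest route is to prefer an interpretable model where stakes are high, so that each input's pull on the output can be shown to staff and residents. A minimal Python sketch with hypothetical feature names and synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical feature names for a municipal prediction task; the data
# here is synthetic and stands in for real records.
feature_names = ["prior_requests", "days_open", "neighborhood_density"]
X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)

# A linear model's learned weights can be read directly: the sign and
# size of each coefficient show each feature's direction and strength.
model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>22}: {coef:+.2f}")
```

This is a sketch of one design choice, not the only one; more complex models can be paired with post-hoc explanation tools, but the ability to answer "why did the system say that?" should be specified up front, not retrofitted.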
While not comprehensive, I hope these questions will help the two key stakeholders have meaningful conversations to close the knowledge gap and better cocreate next-generation information systems to make our communities more livable, just, sustainable, and resilient.
Endnotes and Resources
1 https://www.nytimes.com/2019/06/27/us/lake-city-florida-ransom-cyberattack.html
3 https://www.govtech.com/health/NY-Regulators-Probe-for-Racial-Bias-in-Health-Care-Algorithm.html
4 https://www.wired.com/story/algorithms-shouldve-made-courts-more-fair-what-went-wrong/