AI in 2024: Laying the Groundwork

1. Understand Generative AI’s Potential and Pitfalls

The public release of GenAI tools — chatbots such as ChatGPT and image generators such as DALL-E — late in 2022 drew the world’s attention to AI. By January 2023, ChatGPT had 100 million monthly active users, making it the fastest-growing consumer software application in history.

It quickly became clear that GenAI offers great possibilities, but also significant risks.

GenAI can improve government in multiple ways, including:

  • Translating languages and making documents and videos accessible to users with hearing and vision impairments
  • Identifying groups that can benefit from more outreach
  • Summarizing meetings, reports and other documents (a minimal sketch follows this list)
  • Analyzing public feedback
  • Converting legacy software code to modern programming languages and streamlining new code
  • Improving cybersecurity
  • Optimizing resource allocation and energy efficiency
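
Several of these uses are easy to prototype today. As one illustration of the summarization item, here is a minimal sketch using the official OpenAI Python client; the model name, prompt and input file are assumptions for demonstration, not an endorsement of any vendor, and sensitive documents should go only to an approved, non-public deployment.

```python
# Minimal sketch: summarizing a document with a large language model.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative choices.
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    """Return a short, plain-language summary of the given text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; use your agency's approved model
        messages=[
            {"role": "system",
             "content": "Summarize the document in five plain-language bullet points."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("meeting_minutes.txt") as f:  # hypothetical input file
        print(summarize(f.read()))
```

Whatever the model returns, a person should verify the summary against the source document before sharing it, for reasons discussed below.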

The primary risk of using online versions of GenAI tools is that they are public platforms. As the U.S. Senate noted in its December 2023 guidelines, you can’t assume privacy or accuracy.

Nearly as dangerous are GenAI’s “hallucinations”: its well-publicized tendency to put words together in a way that sounds plausible but isn’t accurate. Every AI output must be double-checked against other sources and reviewed by a human.

2. Cultivate AI Expertise on Staff

Federal AI guidance emphasizes the need for AI skills in the workforce — in more than just technical roles. OMB advises agencies to put AI-savvy people in both mission and program offices, including designers, behavioral scientists, contracting officials, managers and attorneys.

It turns out that many of the skills required to work effectively with AI are “soft” skills. The list of AI competencies from the Office of Personnel Management includes “creativity and innovation,” “integrity” and “political savvy.”

A Government Accountability Office study regarding the Department of Defense recommended answering these questions up front:

  • Who is included in the AI workforce?
  • Who should be included in the AI workforce?
  • Which positions require personnel with AI skills?
  • What is the current state of your AI workforce?
  • What are your future requirements?

Ongoing learning will be an essential element of life with AI. Agencies must help their existing employees adapt, both by offering training in AI-related skills and by mapping career pathways for people whose jobs are substantially altered by AI adoption.

3. Put AI to Work for Cybersecurity

One of the most promising arenas for AI is cybersecurity. As the number and variety of cyber threats have grown, it’s become impossible to keep up manually. Even traditional software systems can’t keep pace, according to the IEEE Computer Society.

Agencies can use AI’s strengths in pattern recognition and data analysis to protect systems and data. AI can:

  • Spot anomalies, such as suspicious login attempts, in real time (a simple sketch follows this list)
  • Automate incident response, reducing the time from incursion to remedy
  • Predict likely threats, by both assessing system weaknesses and staying up to date on evolving attack methods
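
As a simplified illustration of the anomaly-spotting bullet, the sketch below scores login events with an unsupervised detector, scikit-learn’s IsolationForest. The three features, the toy training data and the contamination setting are illustrative assumptions; a production system would stream far richer telemetry and validate the model against known incidents.

```python
# Minimal sketch: flagging anomalous logins with an Isolation Forest.
# The features [hour_of_day, failed_attempts, km_from_usual_location]
# and the tiny training sample are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical, presumed-benign logins.
history = np.array([
    [9, 0, 2], [10, 1, 5], [14, 0, 1], [11, 0, 3], [16, 2, 4],
    [8, 0, 2], [13, 1, 6], [15, 0, 1], [9, 0, 4], [17, 1, 3],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(history)

# A 3 a.m. login with seven failed attempts from 8,000 km away should stand out.
new_logins = np.array([[10, 0, 3], [3, 7, 8000]])
for login, label in zip(new_logins, model.predict(new_logins)):
    print(login, "ANOMALY" if label == -1 else "ok")
```

The same pattern extends to the other bullets: score events as they arrive, then route anything flagged to an analyst or an automated response playbook.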

In August 2023, the Biden administration launched its Artificial Intelligence Cyber Challenge. Competitors will use AI to identify and fix vulnerabilities in some of the country’s most essential software infrastructure, such as code that runs the internet, the electric power grid and transportation systems. The Defense Advanced Research Projects Agency is partnering with AI vendors including Anthropic, Google, Microsoft and OpenAI on the challenge, and will announce the winners at the DEF CON hacker convention in 2025.

But don’t forget it’s an arms race. Bad actors can access AI, too.

4. Protect the Public’s Rights

Despite its great potential for good, AI has also shown a real capacity for harm.

Systems based on AI can reproduce societal biases and discrimination, for example in hiring, health care and credit decisions. Even more dangerously, some AI-driven facial recognition software used to identify criminal suspects has misidentified Black individuals, leading to wrongful arrests.

There are also serious privacy issues. Vast amounts of personal and other data fuel AI applications, making it easy to track and collect information about people’s activities.

The key to avoiding these pitfalls is human involvement at all stages.

NIST counsels that people must be involved in all phases of a project, and policies must be in place to guard against bias and privacy violations.

To preserve the public’s rights:

  • Plan your project around program needs, not the technology.
  • Include all stakeholders in your planning.
  • Include diverse perspectives on both your business and technical teams.
  • Practice good data governance.
  • Assess the AI algorithms for potential privacy and bias impacts (a simple disparity check is sketched after this list).
  • Don’t depend on vendors for security, privacy protection or elimination of bias.
  • Evaluate and adjust AI programs on an ongoing basis.
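
To make the bias-assessment step concrete, here is a minimal sketch of one common screening check, comparing a model’s selection rates across groups under the “four-fifths rule.” The column names, toy data and 0.8 threshold are illustrative assumptions; a real audit would examine many more metrics and the underlying data.

```python
# Minimal sketch: comparing a model's approval rates across demographic groups.
# The data, column names and the 80% threshold are illustrative assumptions.
import pandas as pd

# Hypothetical audit log: one row per applicant, with the model's decision.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Disparate impact ratio: lowest group's selection rate over the highest's.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the common four-fifths screening threshold
    print("Potential adverse impact: review the model and its training data.")
```

A ratio below 0.8 does not prove discrimination, but it is a strong signal to investigate before the system affects real decisions.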

The Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence underscores its commitment to leading by example in making AI safe: “The Federal Government should lead the way to global societal, economic, and technological progress. … This leadership is not measured solely by the technological advancements our country makes. Effective leadership also means pioneering those systems and safeguards needed to deploy technology responsibly.”

This article appeared in our guide, “Gearing Up for AI.” To learn more about AI’s transformative impact in government and prospects for 2024, download the guide.

