AWS Public Sector Blog

6 foundational capabilities you need for generative AI

Generative artificial intelligence (AI) is easy to use—and that’s what makes the technology so exciting. But it’s important not to confuse ease of use with effortless deployment. The value of generative AI is inextricably linked to the technology’s reliability and traceability, as described in our earlier blog post, Generative AI: Understand the challenges to realize the opportunities.

To achieve this reliability and traceability in any organization, it helps to focus on AI’s potential one use case at a time while also planning for the bigger picture. As AI strategists for the Amazon Web Services (AWS) Generative AI Innovation Center, we wrote this post to share a tried-and-tested AI framework you can consider.

1. Business considerations: Appoint a cross-functional team

A cross-functional team is best placed to balance AI skills with knowledge of the target business process(es) and considerations around the source data and what happens to it. This will help you continuously spot opportunities for generative AI across your organization, qualify them, and prepare them for development.

2. People: Cultivate a mindset that prioritizes machine learning

You’ll also need to cultivate teams that embrace and harness generative AI effectively and efficiently. The combined focus should be to create a culture of innovation and experimentation and to understand how the technology can transform the way your organization does things. Leaders need to instill an openness to AI and a readiness to embrace change, since this is fundamental to what comes next.

3. Platform: Assemble the right technology

The AI solution you’ll use to transform a particular process may not exist in a ready-to-use state. In other words, you’ll need to do important groundwork before you produce results.

If your use case means you need to train a bespoke machine learning (ML) model, then you’ll need data. This may already exist across internal line-of-business systems, or you may want to draw on information held in publicly available repositories or procured from a third party. All of it will need to be categorized and labeled so that the AI/ML algorithm knows what it is and how to process it.

Through training, the AI system will learn to recognize similar content and infer what it means. This initial organization, categorization, and system training takes time, but it’s essential to ensure the output can be trusted.

If you lack the technical skills to train the model internally with your own team, you may want to work with a specialist organization or make use of existing models that have already been trained, for instance, in translation, facial recognition, or document processing. Generative AI approaches also allow you to start from a pre-trained foundation model (FM) and find the most efficient and effective way to customize it for your current use case.
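As a minimal sketch of what starting from a pre-trained FM can look like, the Python snippet below calls a hosted foundation model through Amazon Bedrock. The model ID, AWS Region, prompt, and request body are illustrative assumptions: each model family on Bedrock defines its own request and response schema, and the model must be enabled in your account.

```python
import json

import boto3

# Minimal sketch: invoke a pre-trained foundation model through Amazon Bedrock.
# The model ID and request schema below are illustrative assumptions; adjust
# them to a model that is enabled in your account and Region.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = "Summarize the attached citizen feedback in three bullet points."

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    body=json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [
                {"role": "user", "content": [{"type": "text", "text": prompt}]}
            ],
        }
    ),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

From a working baseline like this, you can then evaluate which customization approach is most efficient and effective for your use case.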

4. Security: Controlled use of data

Any IT system, including those with ML–based capabilities, will only ever be as good as the data it’s fed. Carefully evaluate any original sources; the integrity, completeness, and currency of the data; how usable it is in its current form; and how to safeguard that data throughout its use. Also, think of where you’ll experiment with the data and how to minimize access to sensitive data, such as personally identifiable information (PII).

Consider:

  1. Where does the desired data exist now?
  2. How can you make that data accessible and discoverable by your team?
  3. How will you control access so that the right people can see the right data, and for only as long as they need to?
  4. Where will your data scientists perform their ML–based experimentation? (Preferably on a secure, cloud-hosted platform.)
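Building on the point above about minimizing access to sensitive data, here is a minimal sketch of one possible control: detecting and masking PII before text reaches an experimentation environment. It assumes access to Amazon Comprehend; the redact_pii helper, confidence threshold, and example text are illustrative.

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

def redact_pii(text: str, min_score: float = 0.8) -> str:
    """Illustrative helper: mask PII spans detected by Amazon Comprehend
    before the text is shared with an experimentation environment."""
    entities = comprehend.detect_pii_entities(Text=text, LanguageCode="en")["Entities"]
    # Replace detected spans from the end of the string so earlier offsets stay valid.
    for entity in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        if entity["Score"] >= min_score:
            text = (
                text[: entity["BeginOffset"]]
                + f"[{entity['Type']}]"
                + text[entity["EndOffset"]:]
            )
    return text

print(redact_pii("Contact Jane Doe at jane.doe@example.com about her claim."))
# e.g. "Contact [NAME] at [EMAIL] about her claim."
```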

5. Governance: Decide how to ensure AI/ML transparency

To help determine who will develop your deep learning algorithms, consider how you’ll ensure that the data being fed into the systems, the connections being made, the analyses performed, and the resulting decisions or actions are all reliable, unbiased, and reproducible.

Suspicions and risk-based concerns about AI–based systems or processes tend to be linked to black-box capabilities, whose sources and calculations are hidden from view. Maintaining strong visibility over all data and deductions is the best way to avoid this, as long as those measures don’t compromise security or data privacy requirements.
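As one illustration of maintaining that visibility, the sketch below records each model decision in an append-only audit log together with the model version, the data source, and a fingerprint of the input rather than the raw data, so decisions can be reviewed and reproduced later without exposing sensitive inputs. Every name in it is hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical sketch: an append-only audit record for each model decision,
# so data sources, model version, and outputs stay traceable and reviewable.
def log_prediction(model_id: str, model_version: str, source_uri: str,
                   features: dict, prediction, log_path: str = "audit_log.jsonl"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "source_uri": source_uri,            # where the input data came from
        "input_hash": hashlib.sha256(        # fingerprint of the input, not raw PII
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_prediction("claims-triage", "1.4.2", "s3://example-bucket/claims/2024-06.parquet",
               {"claim_amount": 1200, "region": "west"}, prediction="manual_review")
```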

6. Operations: Create feedback loops

ML models are likely to degrade over time. They’ll be as powerful as they’re ever going to be on the day when, having performed well enough in training, they launch to production. The data used for inference often drifts away from the characteristics of the data that the system was trained on. This makes it important to monitor reliability and detect when data may be drifting, which triggers the need for retraining.
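A minimal sketch of one common drift check follows: comparing the recent inference-time distribution of a feature against its training baseline with a two-sample Kolmogorov-Smirnov test. The synthetic data and threshold are illustrative; in production you would typically run a check like this on a schedule across many features and feed alerts into your retraining workflow.

```python
import numpy as np
from scipy.stats import ks_2samp

# Minimal drift check: compare a feature's recent inference-time distribution
# with its training baseline. The threshold is illustrative and would normally
# be tuned per feature.
rng = np.random.default_rng(42)
training_baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)  # stand-in for training data
recent_inference = rng.normal(loc=0.4, scale=1.2, size=2_000)    # stand-in for live traffic

statistic, p_value = ks_2samp(training_baseline, recent_inference)

if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}): consider retraining.")
else:
    print("No significant drift detected for this feature.")
```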

It’s important, too, to be able to monitor the AI models over time to ensure they deliver the expected business value.

The AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and Generative AI (CAF-AI) covers these six foundational capabilities in more detail.

These themes are discussed in greater detail in a series of four AWS Institute AI Masterclasses.

Marion Eigner

Marion is an artificial intelligence (AI) strategist for Amazon Web Services (AWS) based in Munich, Germany. She works in the AWS Generative AI Innovation Center.

Neil Mackin

Neil Mackin leads the Amazon Web Services (AWS) team focused on helping EMEA public sector customers to identify and develop business applications using artificial intelligence (AI) and machine learning (ML).