AWS Public Sector Blog

Generative AI in education: Building AI solutions using course lecture content

The education sector has undergone a transformative technological change in the last few years. First, the pandemic drove a rise in e-learning solutions as teachers and students adopted digital platforms for communicating, teaching and learning, and managing academic information. These solutions demonstrated that students all over the world can receive a quality education over the internet. With the ease and reach of digital platforms, teachers and students alike have shifted their mindset toward continuing to use online learning alongside in-person classes.

More recently, innovations in the field of artificial intelligence (AI)—namely, generative AI—are providing new opportunities for educators and education technology companies (EdTechs) to reimagine how this technology can foster student success, increase faculty and staff productivity, and more.

In this blog post, learn how to create multiple generative AI solutions with Amazon Web Services (AWS) using recorded course lectures to support the student experience with automated and personalized classroom tools. These examples include transcribing videos to text; generating lecture summaries, homework, and quiz material; translating study material to regional languages; creating personalized classroom chatbots for problem-solving sessions; and more.

Solution overview: Building generative AI solutions for education using lecture content

Live-streaming video services, such as Amazon Interactive Video Service (Amazon IVS), help teachers share classroom lectures with remote students in real time. With live streaming, remote students can ask questions and interact with teachers much as they would in an in-person environment. Educators can then record and transcribe the streamed lecture content, and generative AI solutions can use these transcriptions for various needs.

This blog post presents high-level solution architectures featuring AI services on AWS, along with pre-trained large language models (LLMs) that you can use with Amazon Bedrock, a fully managed service that makes foundation models (FMs) from Amazon and leading AI startups available through an API, or Amazon SageMaker JumpStart, a machine learning (ML) hub that can help you accelerate your ML journey.

Solution foundation: Lecture streaming, recording, and transcription

Before an AI solution can use course lecture content to support teachers and students, the content must be recorded and transcribed. Using Amazon IVS, teachers can deliver the lecture from a physical classroom and support simultaneous, remote viewing of the lecture.

Amazon IVS is a fully managed live streaming solution: simply stream your content to Amazon IVS, and the service makes low-latency, live video available to any viewer around the globe. Amazon IVS handles the ingestion, transcoding, packaging, and delivery of your live content, using the same technology that powers Twitch.

You can configure Amazon IVS to record live video to an Amazon Simple Storage Service (Amazon S3) bucket. Video streams are saved as video files, which can be sent to Amazon Transcribe to convert speech to text. Amazon Transcribe can turn the lecture audio and video into text in real time during the class or in batches after the class.
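As a rough illustration, the following sketch (Python with boto3) starts a batch transcription job on a recorded lecture stored in Amazon S3. The bucket names, object keys, and job name are hypothetical placeholders for your own resources.

```python
# Minimal sketch: start a batch Amazon Transcribe job for a recorded lecture.
# Bucket names, keys, and the job name are illustrative placeholders.
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="physics-101-lecture-05",  # hypothetical job name
    Media={"MediaFileUri": "s3://lecture-recordings/physics-101/lecture-05.mp4"},
    MediaFormat="mp4",
    LanguageCode="en-US",
    OutputBucketName="lecture-transcriptions",      # hypothetical output bucket
    OutputKey="physics-101/lecture-05.json",
)
```

Once the job completes, the transcription JSON lands in the output bucket, ready for the downstream use cases described next.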

You can then store the lecture transcriptions in Amazon S3 as well, from where they can be used for numerous generative AI use cases with Amazon Bedrock and/or Amazon SageMaker JumpStart.

Additionally, you can store other text-based content, like student homework solutions, in Amazon S3, and use these alongside the lecture transcriptions for building more personalized student experiences.

Figure 1. Architectural diagram of the solution that forms the foundation of this blog post. First, an educator uses Amazon IVS to stream and record content for students to watch remotely. Then, the recorded lecture video can be processed by Amazon Transcribe, which automatically turns speech in the video into text. Amazon Transcribe then uploads the transcription to an Amazon S3 bucket. AI services like Amazon Bedrock and Amazon SageMaker can use the transcription to support various activities, like generating content for communications to learners via SMS, email, or social media posts; generating lecture summaries; creating a personalized chatbot that can answer student questions on course material; generating relevant images; assessing student evaluations; and more. Amazon Translate can use the transcription from Amazon S3 to translate it into regional languages, and Amazon Kendra can create a smart search solution from class content for students to navigate.

Generate lecture summaries and searchable class content indexes

Once the lecture is transcribed, educators can create a solution to generate lecture summaries. Students can quickly read the summaries of the classes and study the important concepts taught throughout the course.

Figure 2 features a high-level architecture for this solution on AWS. An AI solution can summarize the content of the entire class into just a few paragraphs using the wide range of models available for text summarization in Amazon Bedrock or Amazon SageMaker JumpStart. The summarized text can then be converted to speech using Amazon Polly, a service that uses deep learning technologies to synthesize natural-sounding human speech, so students can also listen to summaries of the classes. To localize the content for students in different geographies, the original transcribed and summarized texts can be translated into supported languages by Amazon Translate.
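The sketch below shows one way this could look in Python with boto3, under a few assumptions: the transcription has been saved as plain text in a hypothetical S3 bucket, and the Bedrock model ID, Polly voice, and target language are illustrative choices rather than prescriptions.

```python
# Minimal sketch: summarize a lecture transcription with a text model in
# Amazon Bedrock, then translate and voice the summary.
# Bucket, key, model ID, voice, and languages are illustrative assumptions.
import boto3

s3 = boto3.client("s3")
bedrock = boto3.client("bedrock-runtime")
translate = boto3.client("translate")
polly = boto3.client("polly")

transcript = s3.get_object(
    Bucket="lecture-transcriptions", Key="physics-101/lecture-05.txt"
)["Body"].read().decode("utf-8")

# Ask any Bedrock text model you have access to for a short summary.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    messages=[{
        "role": "user",
        "content": [{"text": f"Summarize this lecture in a few paragraphs:\n\n{transcript}"}],
    }],
)
summary = response["output"]["message"]["content"][0]["text"]

# Translate the summary into a regional language (Spanish here, as an example).
translated = translate.translate_text(
    Text=summary, SourceLanguageCode="en", TargetLanguageCode="es"
)["TranslatedText"]

# Convert the English summary to speech so students can listen to it.
speech = polly.synthesize_speech(Text=summary, OutputFormat="mp3", VoiceId="Joanna")
with open("lecture-05-summary.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
```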

Educators can also compile the recorded classes, transcriptions, and summaries as chapters so students can quickly access the content for specific classes. Using Amazon Kendra, an intelligent search service powered by ML, educators can then create a searchable index of course content so students can search for what they need. Applications built with Amazon Kendra enable students to ask questions in natural language and get highly accurate answers from the material. For example, a student taking a chemistry course could search their class’s content stored in Amazon Kendra with questions like, “What is an acid?” and Amazon Kendra will retrieve the relevant content from across the lecture transcriptions. An integrated large language model (LLM) can then be used to summarize the retrieved content and present it in natural language, as shown in the sketch below.
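A minimal sketch of the search step, assuming a Kendra index of course content already exists (the index ID here is a placeholder):

```python
# Minimal sketch: query an Amazon Kendra index of course content in natural language.
import boto3

kendra = boto3.client("kendra")

result = kendra.query(
    IndexId="0123abcd-0000-0000-0000-000000000000",  # hypothetical index ID
    QueryText="What is an acid?",
)

# Print the top result excerpts retrieved from the lecture transcriptions.
for item in result["ResultItems"]:
    print(item["Type"], "-", item["DocumentExcerpt"]["Text"][:200])
```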

Figure 2. Architectural diagram of generating lecture summaries from transcribed text of lectures. When the transcription is uploaded to an Amazon S3 bucket, Amazon Bedrock or Amazon SageMaker JumpStart can use pre-trained models to summarize the transcribed content. This summarized content can then be sent to Amazon Translate, which can translate these summaries into multiple languages; to Amazon Polly, which can turn these summaries into speech across multiple languages; and to Amazon Kendra, which can create an index featuring intelligent search of all course content.

Generate questions and study materials

Teachers can apply generative AI to transcribed lecture content to quickly generate question and answer pairs in the context of the specific lecture and class. Educators can use these questions to produce homework assignments, flash cards, pop quizzes, and exam preparation material. Text generation models available in Amazon Bedrock and Amazon SageMaker JumpStart can generate this type of material from the transcribed lecture content. These evaluation and study materials can then be indexed and made searchable in Amazon Kendra.
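One possible shape for the generation step is sketched below in Python with boto3; the prompt wording and the Bedrock model ID are illustrative assumptions, not the only way to do this.

```python
# Minimal sketch: prompt a Bedrock text model to draft question-and-answer
# pairs from a lecture transcription. Model ID and prompt are illustrative.
import boto3

bedrock = boto3.client("bedrock-runtime")

def generate_quiz(transcript: str, num_questions: int = 5) -> str:
    prompt = (
        f"Based only on the lecture below, write {num_questions} quiz questions "
        "with answers, suitable for a homework assignment.\n\n"
        f"Lecture:\n{transcript}"
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```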

Figure 3. Architectural diagram of generating questions. The major components are Amazon Bedrock and/or Amazon SageMaker JumpStart.

Create personalized class chatbots to answer questions on course material

Educators can improve the student experience by creating chatbots that can automatically respond to questions from students in the context of the course. Figure 4 features a high-level architecture demonstrating how to design this solution on AWS.

Amazon Lex is a fully managed AI service with advanced natural language models for designing, building, testing, and deploying conversational interfaces (chatbots) in applications. Students can then interact with this chatbot via speech or text.

When a student asks the course chatbot a question related to their class, Amazon Lex retrieves relevant passages from Amazon Kendra, which provide context to a text-generating LLM that composes a response in natural language. Amazon Lex then presents this response back to the student. This process of retrieving contextual information from an external data source to augment the knowledge of LLMs is called Retrieval Augmented Generation (RAG). Amazon Lex invokes an AWS Lambda function that uses the LangChain retriever for the Amazon Kendra index to implement the RAG workflow. LangChain is a framework that allows LLMs to integrate with external sources of information.
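To make the RAG step concrete, here is a minimal Lambda fulfillment sketch in Python. For brevity it calls the Kendra and Bedrock APIs directly instead of the LangChain retriever; the index ID, model ID, and response shape are illustrative assumptions based on the Lex V2 Lambda interface.

```python
# Minimal sketch of the RAG step inside an AWS Lambda function that fulfills
# an Amazon Lex intent: retrieve course passages from Kendra, then ask a
# Bedrock model to answer using only that material.
import boto3

kendra = boto3.client("kendra")
bedrock = boto3.client("bedrock-runtime")

KENDRA_INDEX_ID = "0123abcd-0000-0000-0000-000000000000"  # hypothetical index ID

def lambda_handler(event, context):
    question = event["inputTranscript"]  # the student's utterance from Amazon Lex

    # Retrieve the most relevant passages from the course index.
    passages = kendra.retrieve(IndexId=KENDRA_INDEX_ID, QueryText=question)
    context_text = "\n\n".join(
        item["Content"] for item in passages["ResultItems"][:3]
    )

    # Ask the model to answer strictly from the retrieved course material.
    prompt = (
        "Answer the student's question using only the course material below. "
        "If the material does not cover it, say so.\n\n"
        f"Material:\n{context_text}\n\nQuestion: {question}"
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    answer = response["output"]["message"]["content"][0]["text"]

    # Return the answer to Amazon Lex (Lex V2 close response format).
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {
                "name": event["sessionState"]["intent"]["name"],
                "state": "Fulfilled",
            },
        },
        "messages": [{"contentType": "PlainText", "content": answer}],
    }
```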

Educators can further improve the student learning experience by providing the exact time in an audio or video file that a specific concept was taught in the class. The MediaSearch solution, available on GitHub, makes audio and video content searchable within Amazon Kendra.

Figure 4. Architectural diagram of automatically answering student questions using the questions and answers indexed in Amazon Kendra. The major components are Amazon Bedrock and/or Amazon SageMaker JumpStart, Amazon Kendra, and Amazon Lex.

Automate grading of student exams and more

AI solutions can also help validate student assessments, so teachers can spend less time on routine evaluations and focus more on complex topics and personal guidance for their students. Plus, students can get instant feedback on their answers, which can help support faster understanding and progress in their classes.

Figure 5 illustrates a sample high-level architecture for this solution. To validate assessment responses, the student’s answer and the expected answer, retrieved from a database such as Amazon DynamoDB, are sent as a text prompt to a text-generating LLM. LLMs available in Amazon Bedrock and Amazon SageMaker JumpStart can be asked to validate the student answer against the expected answer. Using this architecture as a basis, a system can be designed to enable students to quickly determine whether their answers are correct, partially correct, or incorrect, with explanations. This architecture can also be extended to allow teachers to automate an initial evaluation of students’ answers in bulk.
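A minimal sketch of the grading prompt, assuming a hypothetical DynamoDB answer-key table keyed by question ID and an illustrative Bedrock model ID:

```python
# Minimal sketch: fetch the expected answer from DynamoDB and ask a Bedrock
# model for a graded verdict. Table name, key schema, and model ID are
# illustrative assumptions.
import boto3

dynamodb = boto3.resource("dynamodb")
bedrock = boto3.client("bedrock-runtime")

def grade_answer(question_id: str, student_answer: str) -> str:
    table = dynamodb.Table("course-answer-key")  # hypothetical table
    expected = table.get_item(Key={"question_id": question_id})["Item"]["answer"]

    prompt = (
        "Decide whether the student's answer is correct, partially correct, or "
        "incorrect compared with the expected answer, and explain briefly.\n\n"
        f"Expected answer: {expected}\n\nStudent answer: {student_answer}"
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```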

Figure 5. Architectural diagram of student answer evaluation. The main components are Amazon Bedrock and/or Amazon SageMaker JumpStart and Amazon Lex.

Generate images to make information more clear and compelling

Using image-generating models, teachers can turn their imagination into captivating pictures and explain concepts with a storytelling approach. For example, diffusion models can be used to generate visual illustrations from lecture transcripts, to clarify concepts or simply to make the content more fun and engaging. Along with the source text, sample images can be used to generate the desired imagery using the Stable Diffusion models from Stability AI, which are readily available via Amazon Bedrock and Amazon SageMaker JumpStart.
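As a rough sketch, the snippet below calls a Stable Diffusion model through Amazon Bedrock. The model ID, prompt, and request shape follow the Stability SDXL schema as documented for Bedrock and should be treated as illustrative assumptions.

```python
# Minimal sketch: generate an illustration from lecture text with a Stable
# Diffusion model in Amazon Bedrock. Model ID and prompt are illustrative.
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

prompt = "A simple classroom illustration of an acid donating a proton to water"

response = bedrock.invoke_model(
    modelId="stability.stable-diffusion-xl-v1",  # illustrative model ID
    body=json.dumps({"text_prompts": [{"text": prompt}], "cfg_scale": 10, "steps": 30}),
)
payload = json.loads(response["body"].read())
image_bytes = base64.b64decode(payload["artifacts"][0]["base64"])

with open("acid-base-illustration.png", "wb") as f:
    f.write(image_bytes)
```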

Figure 6. Architectural diagram of image generation. The major components are Amazon Bedrock and/or Amazon SageMaker JumpStart.

Conclusion

Summarizing classes, creating assessments, answering student questions, and generating text and images can be time consuming for educators. AI solutions powered by AWS can simplify these tasks with new generative AI capabilities, so educators can focus less on tedious work and more on creating enriching student experiences.

For more information on working with generative AI on AWS, refer to Announcing New Tools for Building with Generative AI on AWS.

Sarat Guttikonda

Sarat Guttikonda is a principal solutions architect for the Amazon Web Services (AWS) worldwide public sector. Sarat is an artificial intelligence (AI) and machine learning (ML) enthusiast with a desire to drive innovation and transformation for customers without sacrificing business agility. In his leisure time, he loves building Legos with his son and playing table tennis.

Paul Saxman

Paul Saxman leads global technical programs and initiatives that support education and academic research institutions, worldwide, in the adoption of next-generation computation and storage in the cloud. He joined Amazon Web Services (AWS) with the goal of advancing science and education through the adoption of cloud and AI/ML technologies, following his early career in research and systems development in biomedical and clinical informatics.