Sandra Ahlgrimm is a Senior Cloud Developer Advocate at Microsoft who specializes in Java and AI. She is passionate about helping developers deploy their Java workloads on Azure with ease and efficiency. Sandra and her team, the Java Advocates, work closely with product teams and developers to ensure that Azure services are tested and optimized for developers’ needs. They also drive awareness and provide education to the community on the capabilities of these services.
Sandra’s work focuses on enabling developers to take full advantage of the platform’s capabilities. She is an expert in deploying Java workloads on Azure, whether through App Service, AKS, Azure Spring Apps, Azure Functions, or Azure Container Apps. Her expertise in AI and machine learning helps developers build intelligent applications that scale with ease on Azure.
Sandra is a passionate advocate for the developer community and is always happy to chat about tech-related news and issues. You can connect with her on LinkedIn or Twitter.
AI technologies, and large language models (LLMs) in particular, are emerging everywhere. But how can you use them in your applications?
In this workshop, we will build a chatbot that interacts with GPT-4 and implements the Retrieval Augmented Generation (RAG) pattern. Backed by a vector database, the model will answer natural-language questions with complete, sourced responses drawn from your own documents. To do this, we will create a Quarkus service based on the open-source LangChain4J framework and use ChatBootAI to test our chatbot. Finally, we will deploy everything to the cloud.
After a short introduction to language models (how they work and their limitations) and to prompt engineering, you will:
- Create a knowledge base: local HuggingFace LLMs, embeddings, a vector database, and semantic search
- Use LangChain4J to implement the RAG (Retrieval Augmented Generation) pattern
- Create a Quarkus API to interact with the LLM: OpenAI / AzureOpenAI
- Use ChatBootAI to interact with the Quarkus API
- Improve the quality of responses through prompt engineering
- Containerize the application
- Deploy the containerized application to the cloud
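The retrieval half of the RAG pattern boils down to embedding documents and queries as vectors and ranking documents by similarity to the query. A minimal, self-contained Java sketch of that idea follows; the 3-dimensional vectors are made up for illustration, standing in for what an embedding model and a vector database provide in the workshop:

```java
import java.util.Comparator;
import java.util.List;

// Toy illustration of the retrieval step in RAG: documents and the query
// are represented as embedding vectors, and the closest document (by
// cosine similarity) is returned as context for the LLM.
public class SemanticSearchSketch {

    // A document paired with its (here hand-made) embedding vector.
    record Doc(String text, double[] embedding) {}

    // Cosine similarity: dot product of the vectors divided by the
    // product of their magnitudes.
    static double cosineSimilarity(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // "Semantic search": pick the stored document whose embedding is
    // most similar to the query embedding.
    static Doc mostSimilar(List<Doc> store, double[] queryEmbedding) {
        return store.stream()
                .max(Comparator.comparingDouble(
                        d -> cosineSimilarity(d.embedding(), queryEmbedding)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        // Hypothetical knowledge base with pre-computed (fake) embeddings.
        List<Doc> store = List.of(
                new Doc("Quarkus is a Kubernetes-native Java framework.",
                        new double[]{0.9, 0.1, 0.0}),
                new Doc("Azure Container Apps runs containerized workloads.",
                        new double[]{0.1, 0.9, 0.2}));

        // A query whose (fake) embedding is close to the first document.
        double[] query = {0.8, 0.2, 0.1};
        System.out.println(mostSimilar(store, query).text());
    }
}
```

In the workshop itself, an embedding model computes the vectors and a vector database stores and searches them; the sketch only shows the similarity ranking at the core of semantic search, and the retrieved text would then be injected into the prompt sent to the LLM.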
At the end of the workshop, you will have a clearer understanding of large language models and how they work, as well as ideas for using them in your applications. You will also know how to create a functional knowledge base and chatbot, and how to deploy them in the cloud.