Holly Cummins is a Senior Principal Software Engineer on the Red Hat Quarkus team and a Java Champion. Over her career, Holly has been a full-stack JavaScript developer, a build architect, a client-facing consultant, a JVM performance engineer, and an innovation leader. Holly has led projects to understand climate risks, count fish, help a blind athlete run ultra-marathons in the desert solo, and invent stories (although not all at the same time). She gets worked up about sustainability, technical empathy, extreme programming, the importance of proper testing, and automating all the things. You can find her at http://hollycummins.com, or follow her on socials at @holly_cummins.
Generative AI has taken the world by storm, and it seems like every executive leader out there is telling us “regular” Java devs to “add AI” to our apps. Does that mean we need to drop everything we’ve built and become data scientists instead now?
Fortunately, we can infuse AI models built by actual AI experts into our applications in a fairly straightforward way. We promise it’s not as complicated as you might think! Thanks to the ease of use and superb developer experience of Quarkus and the nice AI integration capabilities that the LangChain4j libraries offer, it becomes trivial to start working with AI and make your stakeholders happy.
In this session, you’ll explore a variety of AI capabilities. We’ll start from the Quarkus Dev UI, where you can try out AI models before writing any code. Then we’ll get into the code and explore LangChain4j features such as prompting, chaining, and preserving state; agents and function calling; enriching your AI model’s knowledge with your own documents using retrieval-augmented generation (RAG); and discovering ways to run (and train) models locally using tools like Ollama or Podman AI Lab. In addition, we’ll take a look at observability and fault tolerance of the AI integration. We might even try some new features, such as MCP.
Come to this session to learn how to build AI-infused applications in Java. This is also an opportunity to provide feedback to the maintainers of these projects and contribute back to the community.
Join us for a guided tour through the possibilities of the LangChain4j framework! Chat with virtually any LLM provider (OpenAI, Gemini, Hugging Face, Azure, AWS, ...)? Generate AI images straight from your Java application with DALL-E and Gemini? Have LLMs return POJOs? Interact with local models on your machine? LangChain4j makes it a piece of cake! We will explain the fundamental building blocks of LLM-powered applications, show you how to chain them together into AI Services, and how to interact with your knowledge base using advanced RAG.
Then we'll take a deeper dive into the Quarkus LangChain4j integration. We'll show how little code is needed when using Quarkus, how live reload makes experimenting with prompts a breeze, and finally we'll look at its native-image generation capabilities, aiming to get your AI-powered app deployment-ready in no time. By the end of this session, you will have all the technical knowledge to get your hands dirty, along with plenty of inspiration for designing the apps of the future.
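To give a flavour of how little code an AI Service needs, here is a minimal sketch using the Quarkus LangChain4j extension. The interface name, prompt text, and method are illustrative; the extension generates the implementation and routes calls to whichever chat model you configure (for example, via `quarkus.langchain4j.openai.api-key`).

```java
package org.acme;

import dev.langchain4j.service.SystemMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// A hypothetical AI Service: Quarkus LangChain4j generates the
// implementation at build time and forwards each call, together
// with the declared prompts, to the configured LLM.
@RegisterAiService
public interface StoryAssistant {

    @SystemMessage("You are a concise storyteller. Answer in one paragraph.")
    String tellStory(String topic);
}
```

You would then simply `@Inject StoryAssistant` into a bean or REST resource and call `tellStory("dragons")`; running it requires a Quarkus application with a configured model provider and credentials.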
Development is about working with computers, right? Well, not quite. Development is all about working with computers (easy), and working with people (hard). Oh, and it’s about physics. Things like the speed of light and thermodynamics influence APIs, because they influence hardware and networking. If, like Holly, you slept through statistics modules in university, it will be a surprise to discover how statistics has changed our development workflows. Finally, we mustn’t forget economics. The end of zero interest rates has changed the employment landscape for many of us. In this wide-ranging talk, Holly will cover why the end of Moore’s law means we might finally need to get to grips with concurrent programming, why Loom is a good idea now when green threads were a bad idea, why AOT is a good idea now when it used to be a bad idea, and how much you should care about business studies, finance, and statistics.
None of us actually likes waste, but many of us tolerate it. This is a shame, because waste is really, really bad.
It makes our software more expensive to develop, and more expensive to run. It contributes to climate change. It means that sometimes, people who’d like to use our software can’t. It slows us down.
In this talk, Holly will present a range of practical waste-reduction techniques, including:
- LightSwitchOps
- Moving computational work to where it hurts least
- Measuring the right thing, instead of measuring the wrong thing (harder than it seems!)
- Performance profiling basics
- Doing less