Recently, I participated in a five-day course on agentic AI hosted on Kaggle and co-sponsored by Google. It was structured as a guided learning experience, not just a competition. Each day layered new concepts, and the program culminated in a capstone project where participants had to apply what they had learned by building something real.
I joined the course for one reason. I wanted to get hands-on with agentic AI at the code level. I had spent plenty of time reading and writing about agents conceptually, but I wanted to understand what actually holds up when you have to design, orchestrate, and run a system under real constraints.
This post is about where that capstone project led me and why it unexpectedly pulled together my interests in AI, STEM education, and learning by doing.
Why the Course Pushed Me to Build
The structure of the course mattered. Five consecutive days of focused work forced momentum. You could not stay theoretical for long. Each lesson pushed toward implementation, and the capstone made it clear that understanding would be measured by what you built, not what you could explain.
As I worked through the material, one theme kept resurfacing. Agentic AI is not primarily a model problem. It is a design problem. Clear goals, clear roles, intentional boundaries, and meaningful evaluation matter more than clever prompts.
That realization felt familiar.
It mirrored what I have been seeing in education.
A Parallel Between Agentic AI and STEM Education
In both agentic systems and classrooms, failure often comes from the same place. Too much abstraction. Too little structure. Or structure that removes curiosity instead of enabling it.
Watching my own children work through STEM assignments over the years, I have seen how often the experience gets flattened into worksheets and disconnected tasks. Not because teachers lack creativity or care, but because good hands-on resources are hard to find, hard to adapt, and time-consuming to build from scratch.
The capstone project gave me a chance to explore that problem space through a different lens.
Choosing a Capstone That Solved a Real Problem
Rather than building an abstract agent demo, I chose to focus my capstone work on something practical. I wanted to see if the ideas from the course could be applied to help STEM teachers create and adapt hands-on lab experiences more easily.
The goal was not to build a product. It was to build something useful.
I approached it the same way I approached the agentic AI lessons. Start small. Define clear roles. Reduce cognitive load. Make the system support humans instead of replacing them.
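To make "clear roles" and "intentional boundaries" concrete, here is a toy sketch, not my capstone and with no real model calls, just hypothetical names. Each "agent" is a plain function with one narrow job, and the orchestrator enforces an explicit step budget so the loop cannot wander:

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    draft: str = ""
    approved: bool = False

def drafter(task: Task) -> Task:
    # Role: produce a draft. Nothing else.
    task.draft = f"Lab outline for: {task.prompt}"
    return task

def reviewer(task: Task) -> Task:
    # Role: evaluate the draft against one simple, explicit criterion.
    task.approved = len(task.draft) > 10
    return task

def orchestrate(task: Task, max_steps: int = 3) -> Task:
    # Intentional boundary: a hard cap on iterations.
    for _ in range(max_steps):
        task = reviewer(drafter(task))
        if task.approved:
            break
    return task

result = orchestrate(Task("build a simple circuit"))
print(result.approved)
```

The point is not the code itself but the shape: each role is small enough to reason about, and the boundaries are written down rather than implied.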
A Small Solution, Built on Purpose
As part of the capstone, I built a small solution to help STEM teachers streamline how they create and adapt hands-on lab activities. It was designed to solve a very practical problem and to be useful immediately, not to be a polished product.
I am intentionally keeping the details high level for now. Part of the value for me has been exploring what is possible without locking myself into a specific implementation too early. It may stay exactly where it is, or it may evolve into something more formal later. Right now, it is a learning tool.
What the Experience Reinforced
The course reinforced something I keep encountering across domains. Whether you are building agentic AI systems or designing learning experiences, success depends less on raw intelligence and more on enablement.
Good systems help people think better.
Good labs help students explore more confidently.
Good structure creates room for curiosity instead of constraining it.
Tools matter, but only when they are designed with the human experience in mind.
Why This Work Feels Connected
This capstone project did not live in isolation. It connected directly to the STEM labs I have been building and to my growing interest in getting more involved in the local educational ecosystem.
The same principle applies everywhere. Learning happens when people are given the right balance of structure and freedom, supported by tools that reduce friction instead of adding it.
The Kaggle course gave me a focused environment to test that idea in code. STEM education gives me a place to test it in practice.
Moving Forward Without Locking In
I am intentionally letting this work remain exploratory. Some of what I built may stay exactly as it is. Some may inform future tools. Some may simply change how I think about teaching, learning, and enablement.
For now, the goal is simple. Build things. Use them. Learn from them.
That mindset is what made the five-day course valuable, and it is what continues to guide how I approach both AI and education.
If you have taken part in hands-on courses, capstone projects, or learning experiences that forced you to build instead of just absorb information, I would love to hear about it.
What helped you learn the most when theory finally had to turn into practice? Share your thoughts in the comments.