The Wright Strategy

My thoughts and contributions to the AI and STEM communities.



I make Science and AI approachable, turning complex ideas into clear understanding that sparks curiosity and inspires action. My passion isn’t just in knowing how things work, but in helping others see that they can explore and understand these ideas too.

Over the past two decades, I’ve worked at the intersection of technology, data, and learning. What I’ve learned is that complexity often isn’t the barrier; accessibility is. Whether it’s experimenting with hands-on science projects or breaking down how artificial intelligence fits into everyday life, I focus on removing that barrier. My goal is to make the intimidating feel approachable, and to spark the kind of curiosity that leads to exploration and confidence.

Today, I channel that energy into teaching, mentoring, and creating content that helps people of all ages engage with science and AI in meaningful ways. Sometimes that means building experiments that make abstract concepts visible. Sometimes it means guiding professionals or communities through the practical realities of AI. Always, it’s about opening doors for learners, leaders, and communities alike.

If you’re interested in exploring how science and AI can be made accessible, practical, and inspiring, let’s connect.

  • Building Better STEM Experiences by Building Things Myself

Recently, I participated in a five-day course on agentic AI hosted on Kaggle and co-sponsored by Google. It was structured as a guided learning experience, not just a competition. Each day layered new concepts, and the program culminated in a capstone project where participants had to apply what they had learned by building something real.

I joined the course for one reason. I wanted to get hands-on with agentic AI at the code level. I had spent plenty of time reading and writing about agents conceptually, but I wanted to understand what actually holds up when you have to design, orchestrate, and run a system under real constraints.

    This post is about where that capstone project led me and why it unexpectedly pulled together my interests in AI, STEM education, and learning by doing.

    Why the Course Pushed Me to Build

    The structure of the course mattered. Five consecutive days of focused work forced momentum. You could not stay theoretical for long. Each lesson pushed toward implementation, and the capstone made it clear that understanding would be measured by what you built, not what you could explain.

    As I worked through the material, one theme kept resurfacing. Agentic AI is not primarily a model problem. It is a design problem. Clear goals, clear roles, intentional boundaries, and meaningful evaluation matter more than clever prompts.

    That realization felt familiar.

    It mirrored what I have been seeing in education.

    A Parallel Between Agentic AI and STEM Education

    In both agentic systems and classrooms, failure often comes from the same place. Too much abstraction. Too little structure. Or structure that removes curiosity instead of enabling it.

Watching my own children work through STEM assignments over the years, I have seen how often the experience gets flattened into worksheets and disconnected tasks. Not because teachers lack creativity or care, but because good hands-on resources are hard to find, hard to adapt, and time-consuming to build from scratch.

    The capstone project gave me a chance to explore that problem space through a different lens.

    Choosing a Capstone That Solved a Real Problem

Rather than building an abstract agent demo, I chose to focus my capstone work on something practical. I wanted to see if the ideas from the course could be applied to help STEM teachers create and adapt hands-on lab experiences more easily.

    The goal was not to build a product. It was to build something useful.

    I approached it the same way I approached the agentic AI lessons. Start small. Define clear roles. Reduce cognitive load. Make the system support the human instead of replacing them.

    A Small Solution, Built on Purpose

As part of the capstone, I built a small solution to help STEM teachers streamline how they create and adapt hands-on lab activities. It was designed to solve a very practical problem and to be useful immediately, not to be a polished product.

    I am intentionally keeping the details high level for now. Part of the value for me has been exploring what is possible without locking myself into a specific implementation too early. It may stay exactly where it is, or it may evolve into something more formal later. Right now, it is a learning tool.

    What the Experience Reinforced

    The course reinforced something I keep encountering across domains. Whether you are building agentic AI systems or designing learning experiences, success depends less on raw intelligence and more on enablement.

    Good systems help people think better.
    Good labs help students explore more confidently.
    Good structure creates room for curiosity instead of constraining it.

    Tools matter, but only when they are designed with the human experience in mind.

    Why This Work Feels Connected

    This capstone project did not live in isolation. It connected directly to the STEM labs I have been building and to my growing interest in getting more involved in the local educational ecosystem.

    The same principle applies everywhere. Learning happens when people are given the right balance of structure and freedom, supported by tools that reduce friction instead of adding it.

    The Kaggle course gave me a focused environment to test that idea in code. STEM education gives me a place to test it in practice.

    Moving Forward Without Locking In

    I am intentionally letting this work remain exploratory. Some of what I built may stay exactly as it is. Some may inform future tools. Some may simply change how I think about teaching, learning, and enablement.

    For now, the goal is simple. Build things. Use them. Learn from them.

That mindset is what made the five-day course valuable, and it is what continues to guide how I approach both AI and education.


If you have taken part in hands-on courses, capstone projects, or learning experiences that forced you to build instead of just absorb information, I would love to hear about it.

    What helped you learn the most when theory finally had to turn into practice? Share your thoughts in the comments.

  • Launching My New STEM Lab Collection: Why I Built Them and What I Hope They Spark

    I have been quietly building a series of hands-on STEM labs designed to make teaching easier and learning more engaging. After seeing both teachers and students struggle with uninspired materials, I decided to make these labs accessible through a new store on Teachers Pay Teachers. It is a simple way to help educators find activities that actually bring concepts to life.

    This post is about why I created these labs, what they aim to improve, and how this work connects to my growing interest in supporting STEM education in my own community.

    Why I Started Creating These Labs

    Like many parents, I have watched my children work through assignments that feel more like busywork than science. They are not harmful, but they do not ignite curiosity. This is not a teacher problem. It is a resource problem. Materials are scattered across the internet, and when teachers are pressed for time, the easiest option becomes random worksheets that are quick to print but do little to engage students.

    My frustration came from recognizing how much opportunity sits right there, waiting to be tapped. STEM is naturally hands-on and full of moments that spark imagination. Yet many activities do not reflect that potential.

    So I started building labs that let students interact, experiment, test ideas, and rethink their approach. I am not reinventing the wheel. I am simply adding a cleaner, more intentional version of something teachers already want. The goal is engagement that fits within the practical limits of a real classroom.

    What Makes These Labs Different?

    I kept the design clear and consistent.

    1. Real engagement instead of passive tasks

    Students learn best when they are active participants. Each lab gives them something meaningful to build, test, or observe.

    2. Practicality for everyday classrooms

    Teachers need simple setups and inexpensive materials. These labs use items that are easy to obtain and instructions that are easy to follow.

    3. Clear alignment to NGSS and TEKS

    Every lab includes explicit standards alignment so teachers immediately know where it fits in their curriculum.

This is not about replacing everything that already exists. Many high-quality STEM resources are out there. My goal is to offer additional options that are easy to find and designed with engagement in mind.

    Why Teachers Pay Teachers?

Teachers Pay Teachers is a marketplace where educators look for reliable, classroom-tested resources. Publishing there increases the chances that these labs reach the people who need them most.

Part of this effort is also personal. I am thinking about getting more involved in my local educational ecosystem. I may volunteer, support programs, or even help directly in classrooms from time to time. Creating these labs now gives me ready-made tools to bring with me when those opportunities arise. It allows me to contribute to my community while helping teachers everywhere.

    What Is Included in the First Release

    The initial bundle contains a full set of classroom ready labs. Each lab includes:

    • A teacher guide
    • A student handout
    • A detailed description
    • A matching illustrated cover image

    Teachers can use the labs individually or combine them into longer units. Everything is modular and easy to adapt.

    This is only the starting point. I plan to continue adding new labs with the same goal of making STEM more accessible and engaging.

    Why This Matters

    Across conversations with teachers and parents, as well as what I have seen in my own family, one conclusion is clear. Small improvements in STEM activities can produce significant gains in student engagement. When students build and test ideas, the entire classroom energy shifts. Curiosity becomes visible. Learning becomes memorable.

    I am not trying to overhaul STEM instruction. I am simply adding supportive tools to the mix. If these labs help even a few teachers create deeper learning moments, then the effort is worth it.

    If you would like to explore the labs, the full bundle is now available on Teachers Pay Teachers. I would love to hear your feedback and your ideas for where this collection should grow next.

    What STEM challenges or concepts should be turned into hands-on labs in the future? Share your thoughts in the comments.

  • The Rise of Agentic AI: Why the Next Wave is About Systems That Act, Not Just Predict

    Most organizations are still trying to wrap their heads around traditional AI adoption. They are experimenting with copilots, exploring governance frameworks, and building strategies to make their data more reliable. And just when they start to feel comfortable, another wave arrives. The conversation shifts from models that generate answers to systems that take action on your behalf. This is the world of agentic AI.

    If generative AI was about giving every employee a smart assistant, agentic AI is about giving every workflow a sense of direction. It is not a new model. It is a new way of organizing work.

    Let’s break down why this matters, what is changing, and what leaders should think about before diving in.

    What Agentic AI Actually Means

    Most AI we use today answers questions. It waits for input. It predicts. It generates text or code. It improves efficiency but it does not change the shape of a process.

Agentic AI goes a step further. It gives AI a set of goals, a set of tools, and the ability to perform multi-step tasks with minimal human intervention. Instead of asking an LLM for a list of candidates, for example, an agent can search, filter, evaluate, draft outreach, personalize messages, schedule follow-ups, and update your CRM. It becomes a small operational unit that works through a sequence on its own.

    This shift matters because it forces organizations to rethink how work gets done. It is not simply an enhancement to a task. It is the beginning of a new automation layer that can cross departmental lines.
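The candidate-outreach example above can be sketched as a simple loop: a planner inspects the current state and picks the next tool until the goal is met. This is a minimal illustration of the pattern, not any specific framework; the planner below is a hard-coded stand-in for an LLM call, and all tool names are hypothetical.

```python
# Minimal agentic loop sketch: a goal, a toolbox, and a planner that
# chooses the next step from state. The planner is a stand-in for an
# LLM call; tool names and logic are hypothetical.

def search_candidates(state):
    state["candidates"] = ["alice", "bob", "carol"]

def filter_candidates(state):
    # Pretend "bob" fails some screening criterion.
    state["candidates"] = [c for c in state["candidates"] if c != "bob"]

def draft_outreach(state):
    state["drafts"] = {c: f"Hi {c}, ..." for c in state["candidates"]}

TOOLS = {
    "search": search_candidates,
    "filter": filter_candidates,
    "draft": draft_outreach,
}

def plan_next_step(state):
    """Stand-in for an LLM planner: decide the next tool from state."""
    if "candidates" not in state:
        return "search"
    if "bob" in state["candidates"]:
        return "filter"
    if "drafts" not in state:
        return "draft"
    return None  # goal reached

def run_agent(max_steps=10):
    state = {}
    for _ in range(max_steps):  # hard step budget as a guardrail
        step = plan_next_step(state)
        if step is None:
            break
        TOOLS[step](state)
    return state

if __name__ == "__main__":
    result = run_agent()
    print(sorted(result["drafts"]))  # → ['alice', 'carol']
```

The step budget and the explicit tool registry are the "intentional boundaries" the course emphasized: the agent can only do what you have wired in, and only for so many iterations.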

    Why This Moment Feels Different

    Agentic AI is not entirely new. We have seen flavors of it in data ETL processes, orchestration engines, and workflow automation tools for years. So why is everyone talking about it now?

    Here are the big drivers.

    1. LLMs can now reason well enough to navigate ambiguity

    Earlier automation tools required clear rules. If the inputs did not match the expectations, the workflow broke. LLMs solve this problem by interpreting unstructured content and making reasonable choices about the next step.

    2. Tool integration has become the norm

    Modern agent frameworks can interact with APIs, databases, internal search tools, and even custom functions. The result is an orchestration layer that feels much closer to how a junior employee operates.

    3. Cost curves and infrastructure allow experimentation

    When inference costs fall and model access becomes easier, experimentation shifts from central innovation teams to individual builders who want to automate their own workflows. This is the moment where ideas compound.

    The Opportunity and the Risk

The opportunity is straightforward. Agentic AI allows companies to automate complex, multi-step, cross-functional processes that were previously too messy for traditional automation. It can reduce operational drag, help teams scale without adding headcount, and create entirely new patterns of productivity.

    The risk is equally important. Without proper guardrails, agentic systems can act too broadly or too confidently. They can produce invisible errors that propagate downstream. They can also amplify gaps in data quality or governance. Leaders who treat agentic AI like a magic extension of a chatbot will run into trouble quickly.

    This is why thoughtful design, strong observability, and clear guardrails must come first. Agentic AI magnifies both strengths and weaknesses inside an organization. The sooner leaders acknowledge this, the smoother the adoption curve will be.

    What Leaders Should Be Thinking About Right Now

    Even if you are not ready to deploy agents in production, there are several questions worth asking.

1. Where do we have repetitive, multi-step work that is rules-light but context-heavy?

    These are areas where traditional automation struggles and where agentic AI shines.

    2. Do our teams have clear processes documented, or are we relying on institutional memory?

    Agents require structure. Even flexible structure. If no one can describe the workflow, automating it becomes guesswork.

    3. How will we observe agent behavior?

This is where FinOps-style thinking becomes valuable. If you cannot see how an agent spends time, resources, and compute, you cannot optimize or control it.

    4. How will we train our workforce to collaborate with autonomous systems?

    Employees need to move from task completion to oversight and exception handling. The shift is cultural as much as technical.

    5. Where should we start in order to learn safely?

Low-risk experimental workflows offer the best launch pad. Early wins provide clarity and confidence.
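On question 3, observability does not have to start sophisticated. Even a per-step ledger of tokens, time, and cost makes agent behavior auditable. A rough sketch, with hypothetical field names and a made-up per-token price:

```python
# Minimal sketch of agent observability: record what each step consumes
# so cost and behavior can be audited later. Field names and the
# per-token price are hypothetical placeholders.

import time
from dataclasses import dataclass, field

PRICE_PER_1K_TOKENS = 0.002  # hypothetical rate, not any vendor's pricing

@dataclass
class AgentLedger:
    entries: list = field(default_factory=list)

    def record(self, step: str, tokens: int, duration_s: float):
        self.entries.append({
            "step": step,
            "tokens": tokens,
            "duration_s": duration_s,
            "cost_usd": tokens / 1000 * PRICE_PER_1K_TOKENS,
            "ts": time.time(),
        })

    def total_cost(self) -> float:
        return sum(e["cost_usd"] for e in self.entries)

    def slowest_step(self) -> str:
        return max(self.entries, key=lambda e: e["duration_s"])["step"]

if __name__ == "__main__":
    ledger = AgentLedger()
    ledger.record("search", tokens=1200, duration_s=0.8)
    ledger.record("draft", tokens=4500, duration_s=2.1)
    print(round(ledger.total_cost(), 4))  # → 0.0114
    print(ledger.slowest_step())          # → draft
```

Once every agent run emits a ledger like this, the FinOps questions (what did this workflow cost, where did the time go) become queries instead of guesses.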

    Why I Find This Space So Exciting

    Agentic AI blends everything I enjoy. Systems thinking. Workflow design. Tool integration. Data quality. Practical innovation. It feels like the natural next step after years spent improving performance, observability, and cost efficiency across the data landscape.

    More importantly, it brings us closer to something organizations have chased for decades. A world where technology does not simply accelerate tasks but actually collaborates in the flow of work. We are early in this journey, but the trajectory is clear.

    This is not about hype. It is about capability. And once the capability exists, business models begin to shift.

    Where I Am Going Next

    In the coming weeks I will share some deeper explorations, including practical design patterns, real world examples, and details on the agentic workflows I have been building. I will also break down how companies can evaluate use cases, reduce risk, and build internal programs that scale safely.

    There is a lot to unpack here, and I want the discussion to stay grounded in what professionals can use today instead of speculation about what might arrive later.

    For now, the key takeaway is simple. The next chapter of AI is about systems that act. The companies that learn to guide that action will discover entirely new categories of efficiency and value.

  • Why the Northern Lights Are Everywhere: Understanding the Aurora and Our Supercharged Solar Cycle

    If you have spent any time scrolling social feeds in the past year, you have probably noticed an unusual trend. Friends in Tennessee, the Carolinas, and even North Texas keep posting pictures of purple and green streaks across the sky. These are not cleverly filtered sunsets. They are the aurora. And if it feels like the Northern Lights have been everywhere recently, that is not your imagination.

    Let’s break down what the aurora really is, why it happens, and why this particular solar cycle has turned North America into a light show far beyond the usual northern latitudes.

    What the Aurora Actually Is

    At its core, the aurora is a planetary physics demo that plays out at a massive scale. Charged particles from the Sun stream toward Earth at high speeds. Once they reach our planet, most are deflected by the magnetic field. A small percentage becomes trapped along the magnetic field lines and spirals down toward the poles.

    When these energetic particles collide with the gases in our upper atmosphere, they transfer energy to atoms of oxygen and nitrogen. The atoms then release that energy as light. Oxygen gives the classic green glow. Nitrogen contributes purples and reds. Put enough of these collisions together and you get curtains, arcs, and shimmering patches that dance across the night sky.

    It is a giant glow stick, only powered by the Sun and stretched across hundreds of miles.
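For a sense of the energies involved, the characteristic green of atomic oxygen is a well-known emission line near 557.7 nm (a specific value added here for illustration, not stated above). The photon energy released in each de-excitation follows directly from the wavelength:

```latex
E_\text{photon} = \frac{hc}{\lambda}
  \approx \frac{1240\ \text{eV}\cdot\text{nm}}{557.7\ \text{nm}}
  \approx 2.2\ \text{eV}
```

A couple of electron-volts per photon is tiny, which is why it takes an enormous number of these collisions, spread across hundreds of miles of atmosphere, to produce a visible glow.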

    The Role of the Solar Wind

    The Sun is not quiet. Even on an average day it sends a steady stream of high-energy particles known as the solar wind. Sometimes the Sun gets rowdier. Large eruptions such as solar flares or coronal mass ejections accelerate far more particles toward Earth in a short period of time.

    When these bursts hit Earth’s magnetic field, the system gets overloaded, and the aurora becomes more intense and more widespread. Instead of staying pinned to the far north, the glow expands to latitudes that rarely see it.

    That is why a normally calm location like North Carolina can suddenly light up like a long-exposure photo from Anchorage.

    The Solar Cycle

    The Sun operates on an eleven-year cycle of magnetic activity. At the low point, solar activity is minimal. At the high point, known as Solar Maximum, the Sun produces more sunspots, more eruptions, and more opportunities for energetic particles to reach Earth.

    Not all solar maxima are equal. Some are modest. Others are powerful. The current cycle, Solar Cycle 25, has surprised almost everyone. Early predictions suggested a fairly average peak. Instead, it has ramped up faster than expected and is showing signs of being one of the strongest cycles in decades.

    A strong solar maximum means more geomagnetic storms. More storms mean more auroral activity. And that means a light show for people far south of the usual viewing zone.

    Why We Are Seeing Auroras So Far South

    Several factors have lined up at once.

    1. A Stronger Solar Maximum
    The Sun is simply more active than expected. It is generating more frequent and more energetic eruptions.

    2. Multiple Eruptions in Close Succession
    When several coronal mass ejections collide with Earth one after another, the magnetic field becomes compressed and stressed. This produces stronger and more expansive auroras.

    3. A Magnetosphere Under Pressure
    When solar plasma clouds hit Earth head-on, the magnetic field lines get pushed and stretched. This causes auroral ovals to widen significantly. People in regions that rarely see auroras suddenly find themselves under the glow.

    4. Better Reporting
    Smartphones and social media let millions of people capture and share the lights instantly. Stories that once lived only in northern communities now go viral in minutes. It creates the sense that auroras are new. They are not new. But they are definitely more frequent right now. My own current phone, a Samsung Galaxy S24 Ultra, has some pretty powerful light amplification and AI features for my pics that have been unavailable on any other phone I’ve owned. That’s how I captured the photo above from the Charlotte, NC area.

    What This Means Going Forward

    We are at the front edge of the peak of Solar Cycle 25. The activity will likely remain elevated for at least another year and possibly longer. That means continued opportunities for auroras at mid-latitudes.

    It also means a higher likelihood of short-lived disruptions to radio communications, GPS accuracy, and even power grid stability. None of this is cause for panic, but it is a reminder that we live next to a star. That star goes through seasons, and we are moving through one of its liveliest.

    For sky watchers, photographers, and anyone who appreciates the rare feeling of seeing the sky come alive, this is a special moment. It is not often that so much of North America gets a front-row seat to one of nature’s most spectacular shows.

    So if you have clear skies, low light pollution, and a compass that points vaguely north, it may be worth stepping outside on the next active night. The Sun is doing its part. The rest is just timing.

    Let me know what you think, and post your own pics in the comments!

  • Are We Headed Toward an AI Bubble?

If you ask ten people whether AI is in a bubble right now, you will probably get twelve opinions. The excitement is real. The valuations are massive. The pace feels faster than anything most of us have seen since the dot-com era. At the same time, the economic impact is undeniable. Entire workflows are being reorganized around automation and augmentation. Leaders are trying to understand whether this is sustainable growth or temporary inflation.

I continue to hear talk predicting a collapse in AI valuations. Some of it makes sharp points, but it also undersells the momentum building inside real organizations. That gap is worth exploring. So let us take a balanced look at the threat of an AI bubble, what could cause one, and what might prevent it.

    The Case for an AI Bubble

    There are some legitimate warning signs that resemble previous periods of irrational enthusiasm.

    1. Valuations Outpacing Revenue

    AI-related companies, especially those tied to foundation models, are trading at multiples that require long-term perfection. Many of these businesses are not yet demonstrating the economics needed to justify their price. If future efficiency gains or customer adoption slow down even a little, those valuations could come back to Earth very quickly.

    2. Surplus Solutions Without Clear Use Cases

    Every day brings a new AI product that looks impressive but fails to answer the simplest question. Does anyone need this? Much like the early internet, the novelty of the technology has encouraged some teams to ship before they understand the value. Without discipline, the market can inflate far beyond real utility.

    3. Cost Structures That Are Difficult To Sustain

    Running AI at scale is expensive. Compute, memory, storage, networking, and talent all contribute to high operational costs. Many startups are subsidizing early users by absorbing losses. If funding tightens, that model becomes fragile. When a sector grows on subsidized consumption, bubbles tend to follow.

    4. Public Narratives Getting Ahead of Practical Reality

    There is a difference between a technology revolution and a hype cycle. AI is absolutely the former, but some public expectations are starting to drift into science fiction territory. When expectations rise faster than capability, corrections tend to appear quickly and aggressively.

    The Case Against an AI Bubble

On the other hand, there are strong arguments that today looks very different from the dot-com era or the crypto boom.

    1. AI Is Impacting Real Systems Today

    This is not theoretical. AI-powered automation is shrinking operational hours across many industries. Coding assistance is increasing developer output. Customer service teams are using AI to triage requests at scale. These gains are measurable and tied directly to productivity.

    2. Infrastructure Is Maturing Faster Than Initial Predictions

    Cloud providers, chip manufacturers, and model labs are iterating at unprecedented speed. Better hardware, optimized runtimes, and more efficient models are lowering costs quarter after quarter. The direction of travel is toward sustainable growth, not runaway expense. That said…

    3. Natural Constraints in Power and Infrastructure Will Slow Overheating

    AI growth depends heavily on compute resources, energy availability, and physical infrastructure. As demand continues to rise, many regions are already approaching the limits of their power grids. Data center construction timelines are measured in years, not quarters. These constraints act as a natural regulator. They slow the pace just enough for organizations to adapt, refine their AI strategies, build internal enablement programs, and avoid reckless spending. Instead of fueling a bubble, infrastructure bottlenecks may end up preventing one by forcing a more measured and sustainable rate of adoption.

    4. Broad Demand Across Sectors

    This is not a fad concentrated in one niche. Healthcare, finance, retail, logistics, manufacturing, and education are all adopting AI. When adoption spreads across the entire economy, the ecosystem becomes harder to destabilize.

    5. Talent and Tools Are Becoming More Accessible

    The impact of democratized tools cannot be overstated. What once required a research team now requires a developer with access to an API and a solid set of prompts. That accessibility drives genuine value creation, which helps anchor the market.

    Why AI Enablement May Determine Whether a Bubble Forms

    There is one factor that does not get enough attention in mainstream commentary. Very few companies are doing AI well. The biggest threat is not the technology itself. It is the lack of organizational readiness.

AI Enablement is the difference between superficial adoption and systematic value. Without it, enthusiasm goes to waste. With it, organizations convert experimentation into repeatable impact.

    What Happens Without AI Enablement

    • Teams run disconnected experiments with no shared learning.
    • Tools are adopted without clear purpose or governance.
    • Models are deployed without understanding cost implications.
    • Budget grows without matching business outcomes.
    • Leaders lose confidence, and investment stalls.

    This is exactly how bubbles pop. Not because the technology fails, but because organizations overinvest without measurable returns.

    What Happens With Proper AI Enablement

    • Teams learn how to select the right use cases.
    • Governance, security, and compliance are built into every stage.
    • Costs are tracked and optimized in real time.
    • Employees understand how to use AI effectively rather than fear it.
    • Leadership sees actual productivity gains instead of slideware.

    In other words, AI Enablement turns AI from an expense into an asset. Scaled enablement programs stabilize the market by creating a foundation of real, durable value.

    If we want to prevent an AI bubble, enterprises cannot skip this step. They must invest in training, governance, internal tooling, documentation, and systematic feedback loops. It is the single best defense against irrational overinvestment.

    So Are We in a Bubble?

    Here is the honest answer. We might be. The conditions are present, but the outcomes are not predetermined. Unlike previous hype cycles, AI has already transformed core business functions. The question is not whether AI is useful. The question is whether organizations can adopt it wisely and sustainably.

    If they can, then the current market is not a bubble. It is an early phase of a long industrial shift. If they cannot, we will see a correction. Possibly a severe one.

    The dividing line between those two futures is the discipline of AI Enablement.

    Final Thought

    The companies that thrive in this decade will be the ones that treat AI as a capability, not a novelty. They will build internal programs that empower their teams, manage their costs, and align adoption with strategy. They will move deliberately, test responsibly, and learn continuously.

    A bubble is not inevitable. It is avoidable. It depends on how seriously we take the work of enabling people, not just deploying models.

    Let me know what you think in the comments.

  • Building an AI Enablement Program That Works

    Not long ago, “digital transformation” was the buzzword of the decade. Then came cloud migration. Today, the new frontier is AI Enablement.

    But while the promise of AI is enormous, so is the confusion. Many organizations have pockets of experimentation, isolated proofs of concept, or a handful of power users tinkering with chatbots. Few have a systematic way to turn these sparks into something that scales.

    That’s what an AI Enablement program is for.

    It’s the bridge between curiosity and capability. The structure that helps employees learn, apply, and measure how AI improves their work. Done right, it becomes the engine that keeps your company learning faster than the technology changes.

    So how do you actually build one?

    Step 1: Start With Purpose, Not Tools

    Every great enablement program begins with a clear “why.”

    If the goal is simply to “use AI,” you’ll end up chasing demos and headlines. Instead, identify the specific outcomes you want to accelerate.
    Examples:

    • Reducing the time to create customer proposals
    • Speeding up data analysis for decision-making
    • Improving support response quality and consistency

    Then, write these as success statements that are easy to measure.
    For example: “Reduce the average time to produce a first draft of a customer proposal from 3 days to 1 day using AI assistance.”

    Once those goals are clear, the tools almost pick themselves. The clarity of purpose protects you from the trap of shiny technology.

    Step 2: Build a Cross-Functional Core Team

    AI Enablement is not an IT project. It’s a cross-functional initiative that sits between technical capability and cultural adoption.

    The core team should include four essential perspectives:

    • Technology – A representative from data or IT who understands architecture, security, and integration.
    • Business – A domain expert who knows what “good” looks like in each use case.
    • Training/Change Management – Someone skilled in adult learning, communications, and internal adoption.
    • Executive Sponsor – A leader who can make fast decisions and remove obstacles.

    This group forms the nucleus of your AI Enablement Council. Their mission is to identify use cases, oversee pilots, and ensure that success stories are captured and shared.

    Step 3: Start With Low-Risk, High-Visibility Pilots

    Early momentum matters more than perfect architecture.

    Choose projects that:

    • Involve non-sensitive data
    • Deliver results within 30 to 60 days
    • Touch multiple departments or workflows
    • Have measurable outcomes tied to time or quality

    Good first pilots might include automating customer FAQs, summarizing meeting transcripts, or generating first drafts of internal reports.

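To make the screening concrete, the four criteria above could be expressed as a simple checklist function. This is an illustrative sketch, not a prescribed tool; the function name and the 60-day threshold are my own framing drawn from the list above.

```python
# Hypothetical screening helper for pilot candidates, based on the four
# criteria listed above. Names and thresholds are illustrative.

def pilot_score(uses_sensitive_data, days_to_results, departments_touched,
                has_measurable_outcome):
    """Return (eligible, score). A pilot is eligible only if it avoids
    sensitive data and can show results within roughly 60 days."""
    eligible = (not uses_sensitive_data) and days_to_results <= 60
    score = 0
    if not uses_sensitive_data:
        score += 1
    if days_to_results <= 60:
        score += 1
    if departments_touched >= 2:      # touches multiple workflows
        score += 1
    if has_measurable_outcome:        # tied to time or quality
        score += 1
    return eligible, score

# Example: summarizing meeting transcripts — non-sensitive, fast, cross-team.
print(pilot_score(False, 45, 3, True))  # → (True, 4)
```

Even a rough rubric like this forces the conversation away from "which tool is coolest" and toward "which pilot earns credibility fastest."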

    Each success builds credibility and curiosity. Each failure, if handled transparently, builds learning.

    Step 4: Create an Internal Learning Loop

    AI enablement is less about technical training and more about guided exploration. The best programs make it easy for employees to learn, share, and apply.

    Here are practical elements to include:

    • Internal AI Portal – A simple SharePoint or intranet hub with examples, policies, and quick-start guides.
    • Office Hours – Weekly open sessions where employees can bring questions or share how they’re using AI.
    • Role-Based Playbooks – Short guides on “how to use AI in sales,” “AI for analysts,” or “AI for HR.”
    • Story Repository – A shared library of wins, use cases, and lessons learned.

    The goal is to normalize exploration while maintaining guardrails for security and accuracy.

    Step 5: Align Policy With Possibility

    Many companies unintentionally slow adoption through vague or restrictive policies. Instead of starting with “what employees can’t do,” write policies that enable safe experimentation.

    That means defining:

    • What data can be shared with external AI systems
    • Which tools are approved and monitored
    • How outputs should be reviewed or validated before use

    Think of policy as an accelerator, not a constraint. The best policies tell people what they can do safely, so they move forward with confidence.

    Step 6: Measure and Market Internally

    You can’t manage what you don’t measure, and you can’t sustain what no one sees.

    Create a simple success dashboard that tracks:

    • Number of active AI use cases
    • Hours saved or tasks accelerated
    • Qualitative feedback from users
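As a sketch of the data shape such a dashboard might track: a real one would live in a BI tool or spreadsheet, and every field name here is illustrative rather than standard.

```python
# Minimal sketch of the success dashboard described above.
# All names are illustrative assumptions, not a prescribed schema.

from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    hours_saved_per_month: float
    feedback: list = field(default_factory=list)  # qualitative notes

@dataclass
class Dashboard:
    use_cases: list = field(default_factory=list)

    def add(self, use_case):
        self.use_cases.append(use_case)

    def summary(self):
        # The three metrics from the list above, rolled up in one place.
        return {
            "active_use_cases": len(self.use_cases),
            "hours_saved_per_month": sum(u.hours_saved_per_month
                                         for u in self.use_cases),
            "feedback_items": sum(len(u.feedback) for u in self.use_cases),
        }

board = Dashboard()
board.add(UseCase("Proposal drafting", 40.0,
                  ["First drafts in hours, not days"]))
board.add(UseCase("Meeting summaries", 12.5))
print(board.summary())
```

The point is less the code than the discipline: if a use case can't be expressed as a row with a name, a time saving, and some user feedback, it probably isn't ready for the dashboard yet.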

    Then, market those wins internally. A quarterly AI Showcase or short video highlights reel can reinforce that this is a company-wide movement, not a side project.

    People adopt what they see others doing. Visibility is the fuel of momentum.

    Step 7: Evolve From Program to Culture

    At some point, AI enablement stops being a project and starts being part of how you operate.

    That’s the real goal. You know you’re there when:

    • Employees bring AI ideas to meetings without being prompted
    • Managers discuss automation and augmentation in the same sentence
    • Data and creativity are no longer in separate departments

    At that stage, your organization is learning faster than any individual can, and that’s what keeps you competitive as the technology keeps changing.

    Closing Thoughts

    AI enablement isn’t about installing a new platform or holding a few workshops. It’s about creating the conditions where smart people can safely experiment, share what works, and multiply their impact.

    If you can build a structure that encourages learning, celebrates small wins, and measures value, you’ll have something far more sustainable than a one-time AI initiative. You’ll have a system that keeps adapting just like the technology itself.

    Let me know what you think in the comments!

  • If Everyone Feels Unprepared, Maybe the System Is the Problem: A Blueprint for Employer-Led Readiness

    The Conversation Is Changing, And It Should

    I recently read an article in U.S. News & World Report that asked an alarming question: “Why Do Most U.S. Workers Feel Unprepared for Today’s Workforce?” It made a clear and valuable point. Workforce readiness is no longer something people achieve once and carry with them for the rest of their careers. It is an ongoing exercise. A moving target. A continuous skill reset.

    I agree with that premise.

    But there is an obvious next step in the discussion that often gets skipped.

    If readiness is no longer something workers arrive with, then it cannot remain something employers simply expect. The burden has shifted. Not to schools. Not to government. Not to individuals trying to self-educate in their spare time.

    It now belongs to the organizations that depend on a capable workforce to survive.

    In other words, if the future requires people who can keep learning at the speed of change, then companies must become places where learning is not an afterthought but an operating norm.

    The Real Shift: From Talent Acquisition to Talent Velocity

    For decades, the default answer to skill shortages has been some version of “hire people who already know it.” That strategy worked when change moved slowly and expertise aged gracefully.

    Today, the half-life of skills is shrinking. The shelf life of job roles is shorter. Entire categories of work evolve faster than traditional education can update its curriculum.

    So the competitive edge is no longer about who you hire. It is about how fast your people can grow.

    That is talent velocity, and it is not something you acquire. It is something you build.

    What Companies Must Stop Assuming

    1. Readiness is a pre-employment condition.

    It used to be. It no longer is.

    2. Learning is a personal responsibility.

    In reality, organizational learning is a business strategy.

    3. “Training” is the same as capability development.

    A single workshop does not produce a future-ready workforce.

    4. People resist change.

    Most people do not resist learning. They resist unclear expectations, unsupported growth, and environments that treat curiosity as a distraction.

    What Companies Need to Build Now

    1. Skills-based role design

    Stop defining jobs by titles and tenure. Start defining them by the skills that drive outcomes. This makes hiring clearer, internal mobility easier, and upskilling measurable.

    2. Internal learning infrastructure

    A slide deck and a lunch-and-learn calendar are not a learning system. Companies need modular, on-demand, role-relevant learning paths that fit inside the flow of work, not outside it.

    3. Upskilling as a built-in path, not a side option

    If learning is optional, only the most motivated will pursue it. When it is expected and rewarded, it becomes cultural.

    4. Incentives that reinforce growth

    If performance reviews reward output only, no one will invest time in becoming more capable. When growth is tied to advancement, people learn because it matters.

    5. Internal credentials and progression signals

    Instead of waiting for outside institutions to certify talent, companies can create their own progression ladders, levels, badges, or skill checkpoints. A Level 4 Analyst title that reflects verified capability is more meaningful internally than a degree collected 10 years ago.

    Learning Is Not a Perk; It Is Infrastructure

    Too many organizations still treat learning as a benefit they offer instead of a capability they depend on. The modern learning culture is closer to a system architecture than a training calendar.

    It needs:

    • dedicated time
    • leadership buy-in
    • accessible resources
    • clarity about what skills matter
    • visible opportunities to apply what is learned

    When learning becomes integrated into the operating rhythm, people stop asking, “When am I supposed to do this?” and start asking, “What should I master next?”

    That is the mindset shift.

    What Employees Should Expect Going Forward

    If companies need adaptable talent, then workers should also expect adaptable employers. A future-oriented company should:

    • Make skill pathways transparent.
    • Reserve time to learn, not just expect it off the clock.
    • Give managers the tools to coach, not gatekeep.
    • Allow people to move across teams when they outgrow their role.
    • Treat curiosity as an asset, not a distraction.

    The smartest career move may no longer be the highest salary. It might be the company that treats learning as part of the job description.

    Leadership’s New Responsibility

    Executives do not need another slide about “the changing future of work.” They need to accept a new reality: Readiness is no longer imported. It is developed.

    If an organization does not build internal learning capability, it will always be playing catch-up. The companies that win long term will be the ones that stop searching for “ready-made talent” and start engineering environments where readiness is renewable.

    That is not an HR function. It is a CEO-level decision.

    The New Definition of Ready

    If everyone feels unprepared, maybe it is not a talent shortage. Maybe it is a design flaw.

    Workers are not asking for certainty. They are asking for a way forward.
    Employers who build that way forward will not just fill roles. They will create capability.

    The future belongs to the organizations that treat learning as an ongoing system rather than a one-time event.

    Readiness is no longer what people bring with them.
    It is what companies enable over time.

    Let me know what you think in the comments.

  • AI Enablement: The Next Evolution of Technical Evangelism

    Why This Role Exists Now

    Every few years, the technology landscape produces a new kind of translator. In the 2000s, we needed Technical Evangelists to help developers and business leaders understand what new platforms could do. Later, Sales Enablement teams emerged to turn complex capabilities into practical conversations that drove adoption.

    Now, with artificial intelligence reshaping how organizations think about productivity, innovation, and risk, a new translator is emerging: the AI Enablement leader.

    The role isn’t about deploying models or spinning up new tools. It’s about helping people inside an organization learn how to use AI effectively, responsibly, and at scale. AI Enablement is where strategy meets execution and where innovation becomes everyday behavior.

    When Technology Outpaces Adoption

    The need for AI Enablement comes from a familiar pattern: technology evolves faster than organizations can absorb it.

    Right now, many leaders feel the tension between potential and practicality. They’ve invested in AI pilots, experimented with copilots and chatbots, and maybe even built a few internal tools. But most admit adoption is patchy and inconsistent. Teams lack training, confidence, or clear guidelines. Governance is unclear. ROI is uncertain.

    That’s where AI Enablement comes in. It turns scattered experimentation into a repeatable, scalable capability.

    From Evangelism to Enablement

    If you’ve ever worked in technical evangelism, this story sounds familiar. The evangelist’s job was to bridge the gap between engineering and adoption; to tell the story of new technology, translate its benefits, and help teams get value from it.

    AI Enablement picks up the same torch but applies it inside the organization. Where evangelists once explained APIs and SDKs to developers, AI enablers now help employees integrate large language models, copilots, and automation tools into their workflows.

    The job isn’t just about teaching prompts. It’s about building confidence and culture around AI. It’s evangelism turned inward.

    The Three Pillars of AI Enablement

    From studying early adopters, three consistent themes are emerging. Think of them as the pillars of AI Enablement:

    1. Capability: Equipping teams with the right tools, integrations, and data access. That includes everything from sanctioned AI platforms to safe experimentation environments.
    2. Confidence: Helping people build trust in AI through training, office hours, and clear usage guidelines. Employees can’t adopt what they don’t understand.
    3. Culture: Promoting responsible use, recognition, and shared learning. The best AI programs don’t just run on models; they run on curiosity and collaboration.

    When these three pillars align, organizations move past the novelty stage and into measurable outcomes. Productivity increases, employees feel empowered, and leadership gains visibility into what’s working.

    Lessons from Sales Enablement

    AI Enablement also borrows heavily from the playbook of Sales Enablement, another discipline designed to close the gap between strategy and daily execution.

    Sales enablement professionals learned that adoption doesn’t happen by memo. It happens through repetition, feedback, and context. The same is true with AI.

    For example, an AI Enablement leader might:

    • Develop short, role-specific learning modules on how AI can help marketing, operations, or finance.
    • Partner with compliance to define guardrails and ensure responsible use.
    • Create success stories and internal showcases that make adoption visible and rewarding.

    In both sales and AI, enablement isn’t a one-time event. It’s a sustained program that makes new behaviors stick.

    Where Tech Leaders Should Focus

    For enterprise leaders, the next challenge isn’t building more AI; it’s building the conditions where AI thrives.

    That means asking:

    • Who in our organization owns AI Enablement?
    • Do our teams know when and how to use these tools?
    • Are we tracking adoption and outcomes, not just deployments?

    Organizations that treat AI Enablement as a first-class function will outperform those that see it as optional training. The leaders who prioritize enablement today are creating the foundation for long-term competitive advantage tomorrow.

    The Coming Shift

    Just as companies eventually realized they needed DevOps to align development and operations, they’ll soon recognize that AI Enablement is essential to align technology and human adoption.

    This isn’t a temporary phase. It’s the new connective tissue between innovation and execution. AI may generate insights and automate tasks, but people still create the context, and context needs enablement.

    For those of us who have spent careers translating technology for others, this is familiar ground. The language has changed, but the mission hasn’t: help people make sense of the new and turn it into progress.

    Closing Thought

    AI Enablement is where technical evangelism, sales enablement, and organizational learning converge. It’s a discipline built on empathy, clarity, and results.

    If you’ve ever been the person who helps others bridge the gap between potential and practice, congratulations! You’re already part of the next wave. The world just gave your skill set a new name.

  • Why Founders Need a Technical Evangelist Sooner Than They Think

    You can build the most brilliant piece of technology on the planet, but if no one understands what problem it solves, it might as well be running in a cave. Every founder eventually learns that building is only half the job. The other half is translating that work into understanding.

    Early-stage startups often assume they need engineers first and evangelists later. The reality is that by the time you start building, someone should already be telling your story in a way that sticks. That person is the technical evangelist, and they can make or break your company’s first impression.

    The Bridge Between the Builders and the Believers

    When most people hear “technical evangelist,” they picture someone on a stage giving flashy demos or recording YouTube videos about new features. Those things can be part of the job, but the real role is deeper and more strategic.

    A good evangelist is the translator between engineering and the market. They are fluent in both dialects. They understand what your product actually does, why it matters, and how to explain it without losing the plot. They build trust through clarity.

    The goal is not to hype your product. It is to make it understandable and credible to the people you need most: early adopters, investors, and partners. Without that bridge, your startup’s message can easily become a mix of technical jargon and vague promises. That confusion slows momentum at the exact time you need it most.

    A Familiar Scene

    It’s a common scenario. A team spends six months building a brilliant proof of concept. They finally schedule investor meetings, only to find themselves trying to explain how it works instead of why it matters. They talk about throughput, containerization, or model efficiency, and the investors nod politely. What they are really thinking is, “So what?”

    A technical evangelist would have prevented that moment. They would have helped the team connect the dots between the technology and the impact. They would have shaped the story around outcomes, not just architecture.

    That gap between what is built and what is believed is where startups often stall.

    The Three Stages Where Evangelists Add Value

    Let’s look at where a technical evangelist fits in the startup lifecycle.

    1. Idea and Validation

    At this stage, the evangelist pressure-tests the pitch. They ask the hard questions about clarity and relevance before investors or customers do. If you cannot explain your product in plain English, it is not ready for prime time.

    A strong evangelist helps founders translate complex technical insight into simple, believable value. They also build credibility early by making sure the product’s story holds up under scrutiny.

    2. Early Build and Feedback Loop

    As you start to build, the evangelist becomes your storytelling engine. They document early wins, write developer blog posts, and turn customer feedback into meaningful insights. This creates visibility and helps attract the right audience.

    Evangelists are also feedback translators. They hear what customers are saying, strip away emotion or confusion, and bring actionable takeaways back to engineering. That loop prevents you from drifting too far from what the market actually needs.

    3. Growth and Scale

    When it is time to scale, the evangelist helps keep your story consistent across all channels. The product website, conference talks, investor updates, and user documentation all sound like they belong to the same company. That consistency builds trust and accelerates adoption.

    The Evangelist’s Toolkit

    If you want to understand what makes a great technical evangelist, think of them as a hybrid between engineer, storyteller, and ambassador.

    • Communication – Explain technical ideas clearly to non-technical audiences without watering them down
    • Storytelling – Turn features into relatable benefits and connect them to real-world problems
    • Data fluency – Use numbers to make a story credible rather than overwhelming
    • Community building – Create and engage user communities that grow your reach organically
    • Feedback loop – Bring what the market says back into product design in a structured, usable form

    An effective evangelist is not there to make noise. They make understanding happen faster.

    When Founders Should Bring One In

    Most founders wait too long. They think they can handle it themselves until the message starts breaking down. Common signs include:

    • Customers say, “I’m not sure what you do.”
    • Marketing and engineering describe the product differently.
    • Investor decks sound either too technical or too vague.
    • The founder is the only person who can explain the company clearly.

    At that point, it is already costing you time and credibility. A technical evangelist brought in early can save months of miscommunication and keep everyone aligned on the “why.”

    Even if you cannot afford a full-time hire, you can bring in a mentor or advisor to play that role part-time. The key is to start the translation process early and refine it often.

    The Efficiency Angle

    Evangelism is not a luxury or a marketing extra. It is a form of operational efficiency. A good evangelist shortens the distance between what is being built and what is being understood.

    That means faster investor confidence, quicker customer onboarding, and fewer cycles wasted explaining the same thing in ten different ways. When everyone shares the same mental model of your product, progress compounds.

    In my own work in software, I have seen that moment when clarity clicks. People start to lean forward. Customers start describing your product back to you accurately. I especially love when they start discussing amongst themselves: “We could have fixed that issue last month with this!” That is the moment your narrative and your reality line up. It is powerful, and it is measurable in both time and traction.

    The Takeaway for Founders and Mentors

    If you are a founder, you are already your company’s first evangelist. The question is how long you can wear that hat before it slows you down. If you are a mentor, help founders recognize that explaining the product is not the same as selling it. It is about creating understanding.

    The best evangelists are not hired to generate buzz. They are hired to generate belief.

    So if you think it is too early to tell your story, you are probably already late. The sooner you start translating your technology into understanding, the sooner others can believe in it as much as you do.

  • Why Do All These Comets Have the Word “ATLAS” in Their Name?

    Spoiler: It’s not a coincidence. It’s a set of telescopes keeping an eye out for trouble.

    The Curious Case of the “ATLAS” Comets

    If you have ever seen headlines like “Comet ATLAS Brightens in the Evening Sky” or “Astronomers Discover Interstellar Comet 3I/ATLAS”, you might have wondered why so many comets seem to belong to something called “ATLAS.”

    It turns out there is no mythological titan involved. The name comes from a network of telescopes called the Asteroid Terrestrial-impact Last Alert System, or ATLAS for short.

    ATLAS is designed to scan the entire sky each night and spot anything that moves in ways that might be dangerous, interesting, or both. When one of its telescopes discovers a new object, that discovery is recorded in the official name. So when you see a comet like C/2020 M3 (ATLAS), the “ATLAS” part simply means it was first spotted by that system.

    A Last-Minute Warning System for Space Rocks

    ATLAS was created for a specific mission: to find asteroids that are heading toward Earth with only days or weeks of warning. Most large asteroids are already being tracked years in advance by other surveys, but smaller ones can sneak up quickly. Many are invisible until they are close to Earth.

    Rather than competing with deeper, slower surveys, ATLAS fills the gap. It trades detail for speed and coverage. Its job is to make sure that if a smaller asteroid is on an inbound path, we still have some notice before it arrives.

    ATLAS is part of NASA’s Planetary Defense program. In addition to spotting potential impactors, it often finds supernovae, variable stars, and comets that happen to flare up in its nightly watch.

    The System Behind the Name

    ATLAS is not one telescope sitting on a mountain. It is a global network of small, wide-field robotic observatories. The first two were built in Hawaiʻi on Haleakalā and Mauna Loa. Two more later came online in South Africa and Chile, giving the system nearly full-sky coverage. A fifth telescope in Spain was added in 2025 to fill remaining gaps.

    Each telescope uses a mirror only half a meter across, but it can see an enormous portion of the sky. Its field of view is about seven degrees wide, which is roughly fourteen times the width of the full moon. The telescopes are designed to take quick, wide snapshots of the sky rather than long, deep exposures.
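The moon comparison is easy to verify. Here is a quick back-of-the-envelope check, assuming the commonly cited half-degree angular diameter of the full moon; the area figure is my own extrapolation, not a number from the article.

```python
# Sanity check of the field-of-view comparison above.

ATLAS_FOV_DEG = 7.0   # stated width of one ATLAS camera's field of view
MOON_DEG = 0.5        # approximate angular diameter of the full moon

width_ratio = ATLAS_FOV_DEG / MOON_DEG   # full moons across the field's width
area_ratio = width_ratio ** 2            # rough full-moon areas per field

print(width_ratio, area_ratio)  # → 14.0 196.0
```

Fourteen moon-widths across means each snapshot covers on the order of two hundred moon-sized patches of sky at once.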

    This approach makes perfect sense. Asteroids move. You do not need extreme detail; you need to cover the sky quickly and catch changes from one image to the next.

    How ATLAS Spots New Objects

    Each night, ATLAS follows a predictable rhythm.

    1. It scans a section of the sky four times per night, with about fifteen minutes between exposures.
    2. It looks for changes in position or brightness.
    3. Its software compares images, identifies which points of light have moved, and calculates a preliminary orbit.
    4. If the moving object does not match a known asteroid, an alert goes out to the Minor Planet Center for confirmation and follow-up.
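The image-comparison step can be illustrated with a toy sketch. This is not ATLAS’s actual pipeline (which differences full images across four exposures and fits real orbits); it only shows the core idea: sources that stay put are stars, and a source that shifts between exposures is a candidate mover.

```python
# Toy moving-object detection between two exposures of the same field.
# Positions are (x, y) pixel coordinates; all values are made up.

def match(p, catalog, tol=2.0):
    """True if some catalog position lies within tol pixels of p."""
    return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol ** 2
               for q in catalog)

def find_movers(exposure1, exposure2, tol=2.0):
    static = [p for p in exposure1 if match(p, exposure2, tol)]
    gone = [p for p in exposure1 if not match(p, exposure2, tol)]
    new = [p for p in exposure2 if not match(p, exposure1, tol)]
    # Crudely pair each vanished detection with each new one: a "tracklet"
    # that a real pipeline would test against a plausible orbit.
    tracklets = [(a, b) for a in gone for b in new]
    return static, tracklets

# Two stars plus one asteroid that drifts ~6.7 pixels between exposures.
frame1 = [(100.0, 100.0), (250.0, 40.0), (180.0, 300.0)]
frame2 = [(100.1, 99.9), (250.0, 40.1), (186.0, 303.0)]

static, tracklets = find_movers(frame1, frame2)
print(static)     # → [(100.0, 100.0), (250.0, 40.0)]
print(tracklets)  # → [((180.0, 300.0), (186.0, 303.0))]
```

The real system does this across four exposures per field, which is what lets it reject noise and estimate a preliminary orbit before alerting the Minor Planet Center.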

    Most of what it finds are harmless main-belt asteroids. But every so often, something new appears. That could be a previously unseen near-Earth asteroid, a long-period comet, or even, as in 2025, an interstellar object.

    That is why comet names often include “ATLAS.” Each one began as a faint, moving dot in one of these automated sky sweeps.

    How Much Warning Does It Really Give?

    The warning time depends on how big and how bright the object is.

    • A 100-meter asteroid might be visible for several weeks before a close pass.
    • A 10- to 20-meter object, roughly the class of the Chelyabinsk meteor, might only be detected a few days before arrival.
    • For very small, fast objects, ATLAS might spot them only a day ahead.

    That may not sound like much, but it can be enough time to issue alerts or move people away from an expected impact zone. The entire system is built around one trade-off: it gives up faint sensitivity in exchange for rapid, repeated coverage of the whole sky. In other words, it patrols every street in the cosmic neighborhood instead of staring deeply into one alley.

    Why “ATLAS” Keeps Appearing in Comet Names

    Because ATLAS runs every clear night, it ends up discovering far more than asteroids. Its wide-field cameras also catch supernovae, variable stars, and of course, comets.

    Whenever a new comet is officially confirmed, it is named for whoever discovered it. That is why you see names like C/2019 Y4 (ATLAS) or C/2020 M3 (ATLAS). The “ATLAS” tag simply means that one of the ATLAS telescopes was the first to spot it.

    The discovery of Comet 3I/ATLAS in 2025 was especially exciting because it turned out to be only the third interstellar object ever detected, meaning it came from outside our solar system. Not bad for a project originally built as an asteroid alarm.

    Strengths, Challenges, and Trade-Offs

    ATLAS works so well because it accepts its limitations.

    Strengths

    • Global coverage that allows nearly the entire sky to be scanned every 24 hours
    • High repetition rate, which makes it excellent at catching moving objects
    • Low cost compared to large observatories
    • Automatic data processing and near real-time alerts
    • Complements deeper surveys rather than replacing them

    Challenges

    • It cannot detect very faint, distant objects early on
    • It has difficulty spotting objects coming from near the Sun’s direction
    • Weather or technical problems can interrupt coverage
    • Occasionally false positives occur from satellites or camera noise
    • For very small asteroids, even detection might come only hours before arrival

    Despite these limits, ATLAS fills a crucial niche that no other system covers as efficiently.

    The Unsung Hero of Planetary Defense

    ATLAS does not take pretty pictures. Its data are functional, not artistic. But night after night, its telescopes quietly sweep the heavens, watching for change.

    Since 2015, it has discovered hundreds of near-Earth objects, dozens of comets, and even a few surprises from interstellar space. Each discovery makes our catalog of small bodies a little more complete and our warning system a little stronger.

    It is not glamorous work, but it might be some of the most practical science being done anywhere on Earth.

    Why It Matters

    The odds of a large asteroid impact are low, but smaller impacts happen on human timescales. Having even a few days of warning can mean the difference between surprise and preparation.

    That is the power of ATLAS. It gives us eyes on the sky that never tire and never blink. It helps scientists track what is coming and alerts humanity before danger arrives.

    So next time you hear about a new Comet ATLAS glowing in the night sky, you will know what that name really means. It is not a mythic god holding up the heavens. It is a tireless set of telescopes quietly scanning the stars, giving us a little more time to look up before something unexpected looks back.