What the Wright Brothers Can Teach Us About AI’s Future
AI Disclaimer: I love exploring new technology, and that includes using AI to help with research and editing! My digital “team” includes tools like Google Gemini, NotebookLM, Microsoft Copilot, Perplexity.ai, Claude.ai, and others as needed. They help me gather insights and polish content, so you get the best, most up-to-date information possible.
Part of: AI Learning Series
Quick Links: Resources for Learning AI | Keep up with AI | List of AI Tools
Subscribe to JorgeTechBits newsletter
A shorter version of this essay appears on my Substack newsletter.
From Kitty Hawk to Silicon Valley
The departure board at Atlanta’s Hartsfield-Jackson flickers with destinations: Seoul, Amsterdam, São Paulo. I’m wedged into a terminal restaurant booth, laptop open, working on an application while my flight boards in forty minutes. On my screen, a local AI model helps me debug a particularly stubborn function, suggesting optimizations I hadn’t considered. Outside the window, a Delta 777 pushes back from the gate while a fuel truck navigates beneath its wing. An American regional jet taxis past. The control tower rises in the distance, orchestrating an invisible ballet of metal and kerosene.

I take a bite of an overpriced airport sandwich and watch an Airbus A320 lift off, its landing gear folding into the belly like a bird tucking its feet. My mind wanders to Kitty Hawk, December 17, 1903. Orville Wright, prone on the lower wing of a fragile wooden contraption, manages to stay airborne for twelve seconds and cover 120 feet. Wilbur runs alongside, steadying the wing. A handful of witnesses stand in the cold North Carolina wind, watching something that most of the world still believed was impossible.
Could they have imagined this? Not just the scale, but the entire ecosystem: the terminals stretching for miles, the sophisticated radar systems, the international treaties governing airspace, the TSA lines, the airline loyalty programs, the fact that millions of people would someday complain about the inconvenience of traveling hundreds of miles per hour through the sky while eating peanuts?
The AI on my laptop finishes processing, returning results that would have taken me hours to generate manually. And suddenly the parallel crystallizes. We’re in the Kitty Hawk moment of artificial intelligence. We have something that works, something that’s clearly transformative, but we have absolutely no idea what it will become.
The Barnstorming Era
In the years following the Wright brothers’ first flight, aviation entered what historians call the “barnstorming” era. Pilots traveled from town to town, performing stunts at county fairs and offering rides for a few dollars, operating out of farmers’ fields and using barns as makeshift hangars and stages (the name was borrowed from the itinerant theater troupes who had performed in barns before them). There were no rules, no licenses, no air traffic control. Anyone with enough mechanical aptitude and recklessness could build or buy an aircraft and take to the skies.
The parallels to today’s AI landscape are striking. We’re in a period of wild experimentation. Open-source models proliferate. Developers spin up applications with minimal oversight. Some creations are brilliant; others are digital barnstorming accidents waiting to happen. A teenager can fine-tune a language model in their bedroom. A startup can deploy an AI system that affects millions of users before regulators even understand what questions to ask.
Like those early aviators, today’s AI developers operate in a regulatory vacuum. The Wright brothers didn’t need a pilot’s license, a flight plan, or liability insurance. The Air Commerce Act, which established basic federal oversight of aviation, didn’t arrive until 1926, twenty-three years after that first flight. AI developers face a similarly unsettled landscape: the EU AI Act is emerging, and the US has a patchwork of executive orders and proposed legislation, but nothing approaches the comprehensive regulatory framework that governs aviation. We’re making it up as we go along.
Infrastructure Built on Sand
When I boarded my first flight this morning, I didn’t think twice about the infrastructure that made it possible. The standardized jet bridges, the universal baggage systems, the coordinated weather monitoring, the internationally recognized transponder codes, the meticulously maintained runways engineered to precise specifications. This infrastructure took decades to develop, required international cooperation, and was often built in response to disasters that made the need for standards tragically clear.
Early aviation had none of this. Pilots landed in farmers’ fields. Navigation meant following railroad tracks and hoping you recognized landmarks. Weather forecasting was primitive. Maintenance standards were whatever the individual operator decided they should be. It was dangerous, inefficient, and severely limited aviation’s utility.
AI today operates with comparable infrastructure deficits. We lack standardized benchmarks that meaningfully measure model capabilities. We have no universal protocols for AI safety testing. Data provenance is often opaque. Model governance frameworks are nascent at best. We’re still figuring out basic questions: How do you verify an AI system is behaving as intended? What constitutes adequate testing? Who’s liable when an AI makes a consequential error?
I’m running my AI model locally, which gives me control but also means I’m responsible for everything. It’s like being one of those early pilots who had to understand every aspect of their aircraft because there was no mechanic certification, no parts suppliers, no maintenance schedules. Liberating and terrifying in equal measure.
The Fear Factor
A plane roars overhead, and I don’t flinch. Nobody in the terminal does. We’ve collectively internalized that flying is safe, that those aluminum tubes hurtling through the air at 500 miles per hour are actually statistically safer than the drive to the airport. This acceptance took generations to build.
Early aviation faced visceral public fear. The idea of heavier-than-air flight violated common sense. When it proved possible, many still considered it a reckless novelty, suitable only for daredevils and fools. Insurance companies refused to cover pilots. Parents forbade their children from even watching air shows, fearing proximity to such dangerous foolishness. Newspapers regularly published editorials about the aviation menace.
The public discourse around AI today echoes these fears with remarkable fidelity. Artificial intelligence will take all our jobs. AI will become superintelligent and enslave humanity. AI will destroy truth itself through deepfakes and misinformation. AI will concentrate power in the hands of a few tech companies. Some of these fears are legitimate concerns requiring serious attention. Others are the modern equivalent of believing that traveling faster than 30 miles per hour would cause human bodies to disintegrate.
What’s particularly interesting is how the specific fears evolved in aviation. Early concerns focused on the physical danger, crashes and casualties. As aviation matured, anxieties shifted to noise pollution, then to environmental impact, then to privacy concerns about aerial surveillance. Each era had its particular fear, often legitimate, but rarely the existential threat that would end aviation itself.
AI fears are following a similar progression. We’ve moved from “can computers really think?” to “will AI destroy all jobs?” to increasingly nuanced concerns about bias, privacy, and autonomous weapons. The conversation is maturing, even if we haven’t solved the underlying tensions.
The Incumbent’s Dilemma
Outside my window, a cargo plane lands, likely carrying packages ordered online yesterday. The shipping and logistics industry was utterly transformed by aviation. But they didn’t welcome it.
When airmail began in 1918, the railroad companies and shipping magnates viewed it as an expensive gimmick that would never threaten their dominance. Rail had the infrastructure, the established routes, the proven business model. Why would anyone pay premium prices to ship things through the air when trains worked perfectly well?
This same pattern repeats throughout aviation history. Railroads lobbied against airline subsidies. Ocean liner companies insisted that intercontinental air travel would never be practical for passengers. Even within aviation, established carriers fought upstarts, and legacy hub-and-spoke models resisted point-to-point efficiency.
Today’s AI incumbents face similar resistance. Traditional software companies insist their established approaches are more reliable than AI systems. Knowledge workers argue that AI lacks the nuance and judgment that humans provide. Industries from healthcare to law defend their gatekeeping mechanisms against AI encroachment. Some of this resistance is self-serving, but some reflects genuine concerns about quality, safety, and accountability.
History suggests the incumbents rarely win these battles, but they do shape how the transformation unfolds. Aviation didn’t eliminate trains or ships; it forced them to find their appropriate niches. AI likely won’t eliminate human expertise, but it will force us to redefine what human expertise means and where it’s most valuable.
The Laggards and the Leapers
My boarding announcement sounds. I pack up my laptop, leaving the AI model running a final optimization. In twenty minutes, I’ll be at 35,000 feet, traveling at speeds that would have seemed like pure fantasy to the Wright brothers.
But here’s what’s interesting about aviation’s adoption curve: it wasn’t smooth. Some countries and industries leaped ahead while others lagged decades behind. The United States led in commercial aviation infrastructure, while the Soviet Union pushed ahead with large passenger helicopters. Island nations adopted aviation rapidly out of necessity, while countries with extensive rail networks were slower to invest. Military applications drove development during wartime, then stagnated during peace.
AI adoption is following similar patterns. Estonia has built government services around AI while other nations debate basic principles. Some industries, like customer service and content creation, are being transformed rapidly. Others, like healthcare and law, are adopting AI cautiously, constrained by regulation, liability concerns, and professional resistance. The laggards aren’t necessarily wrong. Sometimes moving fast means breaking things that shouldn’t be broken.
The companies and countries that will thrive aren’t necessarily the first movers but those who figure out the right pace and approach for their specific context. Pan Am was an aviation pioneer that no longer exists. Southwest Airlines, founded decades later with a completely different model, became one of the most successful carriers in history.
Imagining the Jet Age
So where does this take us? If we’re in AI’s Kitty Hawk moment, what does the jet age look like?
Aviation’s mature form wasn’t simply bigger, faster planes. It was an entire reconfiguration of human geography and economics. Cities that had been peripheral became crucial because they had airports. Industries restructured around the assumption of rapid long-distance travel. Tourism became democratized. Fresh salmon could be eaten in landlocked cities. Families separated by continents could reunite for holidays. The world contracted in ways that would have been inconceivable to Orville and Wilbur Wright.
Mature AI will similarly reshape reality in ways we can barely glimpse. It won’t just be better chatbots or more efficient code completion. It will be ambient intelligence woven into the fabric of daily life, probably invisible and unremarkable to those who grow up with it. Education might become genuinely personalized, adapting in real-time to each student’s needs. Scientific research could accelerate exponentially as AI systems identify patterns humans miss. The creative process might become collaborative in new ways, with AI serving as a thinking partner rather than a tool.
But this transformation will also require infrastructure we can’t yet imagine. Just as aviation required air traffic control, international treaties, safety protocols, and professional certification, mature AI will demand its own governance frameworks. We’ll need AI safety standards, algorithmic audit requirements, data provenance verification, and probably entirely new professional categories. The AI ethicist might become as common as the pilot. The model auditor might be as essential as the aircraft mechanic.
The economic transformation will be profound. Aviation didn’t just create airlines; it created airport construction companies, aircraft manufacturers, tourism industries, air freight logistics, and countless other sectors. AI will similarly generate entirely new categories of work, many of which don’t currently exist. For every job displaced, new forms of value creation will emerge, though the transition will be painful for those caught on the wrong side of the shift.
Returning to Earth
I’m in my seat now, laptop stowed, waiting for pushback. The flight attendant goes through safety procedures that are the result of decades of accidents, investigations, and incremental improvements. Every detail of this flight, from the pilots’ training to the maintenance schedule to the air traffic control protocols, exists because someone learned a hard lesson about what happens when systems fail.
AI will have its own hard lessons. We’ll experience our AI equivalents of early aviation disasters, the crashes that force us to develop standards we should have had from the beginning. We’ll have our Hindenburg moments, spectacular failures that reshape public perception. We’ll have our near-misses that reveal vulnerabilities we didn’t know existed. This is how complex technologies mature—not through perfect foresight but through iterative learning, hopefully before the catastrophic failures rather than after.
But we’ll also have our own version of the first transatlantic flights, the breaking of the sound barrier, the wide-body jets that made air travel routine. We’ll have moments where AI capabilities leap forward in ways that reshape what seems possible. We’re already having some of those moments.
The Wright brothers’ twelve-second flight didn’t immediately change the world. For years afterward, many people still doubted that practical, sustained flight was possible. But the trajectory was set. Once you prove something can be done, the question shifts from “if” to “how soon” and “how well.”
The AI model on my laptop represents our twelve seconds at Kitty Hawk. It works. It’s useful. It’s far from perfect, but it’s enough to prove the concept. Everything else is just a matter of time, infrastructure, and learning from our mistakes.
The plane pushes back from the gate. I look out at the airport complex, this cathedral of aviation infrastructure that emerged from a fragile wooden glider on a North Carolina beach. Then I close my eyes and try to imagine the equivalent AI infrastructure my grandchildren will navigate as thoughtlessly as I navigate this airport.
I suspect I’m vastly underestimating what’s coming. The Wright brothers certainly did.
UPDATE:
I’m now sitting inside the plane, cruising smoothly at 32,000 feet, a cup of coffee balanced beside my laptop. The clouds drift silently below as I continue coding, researching, and communicating, seamlessly connected, without pause or interruption. It strikes me that what was once the stuff of pure imagination is now my everyday reality. The very marvel I was admiring from the terminal window has become the quiet backdrop of my workday.
Somewhere between earth and sky, I realize I need to stop wondering when the future will arrive — because I’m living in it right now. The Wright brothers dreamed of flight; I’m flying while building with artificial intelligence in real time. Our innovations don’t just transport us anymore — they keep us connected while we move. And that’s the real wonder of progress.
What about you — where do you find yourself living the future today?
