Welcome to the Napster Era of AI
What Happens When Technology Outpaces Regulation
I can't believe this is the timeline we're living in.
There's a pattern in modern technology—a window that opens when something powerful becomes accessible before anyone figures out how to regulate it. Napster had that moment. Two teenagers built a simple piece of software, and suddenly millions of people could download nearly any song ever recorded, for free, in minutes. The music industry didn't see it coming. By the time they figured out how to respond, 80 million people were already using it.
Early social media had a similar window—before the algorithms learned to squeeze value out of the full spectrum of human emotion and experience. For a few years, regular people could build audiences, launch businesses, create careers out of nothing but a camera and an idea. The platforms rewarded early adopters because they needed the content. Then they changed the rules, reduced organic reach, and started charging for what used to be free. We saw a mini frontier open again when TikTok really took off around 2020—another window where early adopters were rewarded, another gold rush before the rules caught up and the platform became saturated with bigger brands.

We got comfortable. The platforms settled into their grooves. Facebook became the place your parents posted. Instagram became a shopping mall. Twitter became whatever Twitter became. The wild frontier feeling faded, replaced by something more predictable: feeds optimized to keep us scrolling, ads targeted with unsettling precision, the same creators recycling the same content. It felt like the internet had been mapped, parceled out, owned. You could still build something, but the paths were well worn and the gatekeepers were firmly in place.
Then the floor fell out.
In the span of about two years, AI models went from research curiosities to tools anyone could use. ChatGPT launched and hit 100 million users in two months, becoming the fastest-growing consumer application in history. The Wild West never really ended—the internet was never tamed—but platforms made it feel settled. We forgot it was wild. AI ripped up the pavement and revealed the frontier was there all along. Not because the technology is magic, but because the models became available to the general public as something regular people could actually work with.

Suddenly there's open ground again. Someone who never dreamed they could learn to code can open Claude Code and build an MVP. One-person teams can use persona frameworks to scale their output to that of an entire team. White space opens up on the board to sell AI education, certification, and ethics and security training to AI-eager leadership teams. And so on, into every industry and sector across the globe. There are a lot of people in the US who understand what these tools are capable of and see a frontier to conquer.
But the thing about conquering is that it comes at a steep price. There is no free lunch. One major cost is the growing demand for land, water, and minerals needed to build the computing capacity for all of this technology. Another is being paid by artists, writers, and IP creators everywhere, whose work trained these models without consent or compensation. The AI frontier seems to be available because it's being taken from others. (Sound familiar? It should.)

So…is there any justice? Maybe. The lawsuits are starting to pile up. The New York Times is suing OpenAI. Over 50 copyright cases are pending in federal courts right now. The first major settlement has already landed—Anthropic agreed to a $1.5 billion class action settlement with authors in September 2025, the first acknowledgment at scale that this training-on-stolen-work model has real costs.
The rules are still being written in courtrooms while the systems we depend on—our employers, our industries, our institutions—push us to adopt the technology anyway, throwing us into the fray and asking us to be complicit in something we don't fully understand or feel good about endorsing.
Meanwhile, the federal government is actively dismantling state-level AI regulation, calling it "onerous" and threatening to withhold funding from states that try to set guardrails. A DOJ task force now exists specifically to sue states that regulate AI. The technology is already disruptive on its own. The government pouring fuel on the fire by preventing any guardrails is going to screw over a lot of people who can't adapt their careers to shifting market conditions and demands for certain skills.
I don't know how long this phase of AI adoption will last. We'll have to wait and see whether OpenAI, Anthropic, or any other company will have to retroactively pay for the copyrighted material used to train their models. Given what's happening with regulation here in the States, I'm not sure we will reach a place of responsible usage anytime soon, if ever.
But I do know the rules changed. And I'm paying attention.
I've Seen This Before
I’ve been smack dab in the middle of major tech turning points before. I just didn't realize it at the time.
In 2014, I worked for a social media startup called MeetEdgar. We sold software that reposted content—built on the premise that almost no one sees your posts, so why not post them more than once? Eventually the question became, "Why not use machine learning to generate posts?" Yes—we were using AI before most people knew what that meant. I had no idea how big it would turn out to be at the time. But I was inside a company that helped shape what social media became—the endless churn, the optimization, the content-as-commodity logic that now feels suffocating.
I watched the window open. I watched people who were no smarter than me start businesses during that early adoption phase, ride the wave, and never give up until it worked. I had ideas too. Good ones, probably. I let them stay ideas, to focus on clients and agency work.

What did it cost me? The low-friction, low-hanging-fruit opportunities that emerge with technological breakthroughs. The chance to take bigger risks when I had fewer responsibilities. The realization—too late—that consistency beats accuracy. That showing up matters more than being right in many cases.
This time, I'm watching carefully. Not because I have it figured out—I don't. But because I'd rather be wrong while trying than right in hindsight.
I refuse to watch another technological shift happen to me instead of with me.
So here's what I'm actually doing.
I'm building something in the middle. Not die-hard AI evangelism, not the reflexive rejection we see in online social spaces. Something that tries to hold all the truths at once.
A lot of my friends hate AI. When I press them, they'll admit it has its place—in science, in medicine, in problems too big for human brains alone. But on social media, the discourse is flattened: AI is theft, AI is slop, AI is the end of creativity. I understand the anger. A lot of it is earned. But I watch people rage against AI without knowing what they're actually raging against—or without being equally enraged by the internet itself, which has been extracting from us for decades. If we can agree that AI art is bad but using AI to cure cancer is good, then can we agree the ethics of using these tools exist on a spectrum? Let's get specific about what we do and don't like, and let's use the power of our collective attention to force these companies to build products that improve society instead of hastening its downfall.

Here's the thing: AI fails at the same things humans fail at. Not having enough context. Not knowing what it doesn't know. Moving too quickly without thinking critically. It's encoded with our errors. But the hype is real and there is a gap between people who understand how real this is and people who don't. AI solved a 50-year-old biology problem that won a Nobel Prize. It's predicting floods for 2 billion people across 150 countries. It's finding brain lesions that radiologists miss—"like finding one character on five pages of solid black text," one researcher said. This isn't hypothetical. It's happening now. And most people are still arguing about whether AI can write a decent email.
I don't want to be the person who defends AI. I don't want to be the person who dismisses it either. I want to be the person who looks at it clearly—what it can do, what it can't, who benefits, who gets crushed—and helps other people see it clearly too.
That's what I'm building. A consulting practice. A newsletter. A body of work that says: the technology is real, the stakes are high, and you deserve someone who will tell you the truth instead of selling you a solution.
I'm upskilling. I'm taking care of my young family. I'm building something that lets me support them doing work that matters. And I'm not waiting to see if I'm next on the chopping block.
I'm moving like I might be.
The Napster era didn't last long.
The industry adapted, the lawsuits came, the streaming consolidation happened. But for a chaotic moment it was a free-for-all, and regular people had access to something powerful.
We're in that moment with AI right now. I don't know how long it lasts. I don't know what comes after. I don't know if we're headed toward regulation, consolidation, or permanent chaos.
If you're also trying to navigate this—if you're also wondering what it means for your work, your sense of purpose, your future—you're not alone in that.
This won't be a space that's blindly pro-AI adoption, or pro-big-business. What I can offer is a nuanced take that tries to capture the moment as it is and advice on how this technology can be used to make the world better.
Let's see what we can figure out.