The Room Where AI Happens
A weekend at the Curve conference, an AI insider gathering in Berkeley, revealed how people think about AGI timelines, China, creativity, and writing
There’s a song in Hamilton that has captivated me more than any other: “The Room Where It Happens.” Every smart and ambitious person wants to be in that metaphorical room: to witness the decisions that change the world, to understand the alchemy of how they’re made, the arguments that happen before the verdict.
Last weekend, I was in that room for AI.
The Curve conference, held at Lighthaven in Berkeley, was arguably the most concentrated gathering of AI insiders I’ve encountered. Ben Buchanan, former White House Special Advisor on AI, sits across from you at lunch. Yoshua Bengio, Turing laureate, lingers after the talks. Joseph Gordon-Levitt wanders the grounds like any other attendee.1
That’s the room, I told myself. This is where it happens.
I came to this conference as a writer with specific questions. I wanted to understand how AI insiders think about and deal with AI’s trajectory. More specifically, I wanted to understand how they engage with China’s AI progress. By the end of the weekend, I had answers, but the original questions had sprouted more complex branches than I anticipated. This piece is written for readers outside the room: people without technical backgrounds, without AI insider status, who aren’t already “where it happens.” If parts of it feel too entry-level, you may not be the intended audience.
Here are some of my reflections on the rooms:
The timeline debate—and why the question of “when” might matter less than the shape of transformation itself
The China conversation split across separate rooms—where geopolitics and technical reality never quite met
Three writers I revere, three incompatible views on AI and creativity—and why I find myself nodding to all three

Room One: The Timeline Devotees
The most crowded session was a two-hour debate between what I’ll call the “AI 2027” camp and the “AI as normal technology” camp. The discussion room seats maybe forty people; twice that many crammed in. The air grew thick. Someone near me shifted weight from foot to foot. I could feel the collective attention, taut as a wire. The technical crux: will AI capabilities “go vertical” once research and development is fully automated—what some call the “humans-in-the-cloud” moment?
The AI 2027 worldview runs on recursion. Once you have AI researchers that learn as flexibly as humans, you spawn thousands of copies running at 100× human speed. The world changes in months, not decades. Daniel Kokotajlo, the fast-takeoff proponent, put the probability at 70% or higher within ten years, conditional on modest gains in online learning or a new architecture breakthrough.
The normal-technology camp believes friction governs everything. Automation will be piecemeal. Every time AI masters one slice of research, humans shift to the remaining slices. Real-world bottlenecks—experiments, regulation, data-center build-out, clinical trials, human taste—keep progress on a multi-decade arc, maybe 2-10× faster than today, but nothing vertical. Sayash Kapoor—the continuous-progress advocate—kept returning to the same refrain: “decades, not years.”
The governing metaphor appeared in my notes later, courtesy of Claude: two AI researchers walk across the Berkeley campus at dusk. One sees the lab lights glowing brighter each month and says, “Any day now the bulb will explode into a sun.” The other sees the same lights and says, “Bulbs get brighter gradually; the wiring, the city grid, the utility board—those are the real story.”
Both sides deployed formidable technical specificity. Online-learning sample-efficiency curves. Serial compute depth. Hardware-based governance via remote GPU attestation. This type of discussion, slicing AI’s potential into ever more granular technical predictions, is perhaps the most worshipped format at Curve. The bar for following was high. I understood maybe every third technical term. What I understood completely: this was the most-attended session, and people were riveted.
In one post-conference reflection, Zvi Mowshowitz mentioned being in a four-person conversation where someone suggested changing the subject to AI timelines. Zvi eventually left.
Helen Toner gave a different kind of talk, one I found helpful because it quietly undermined the entire premise. She spoke about AI jaggedness: the phenomenon where models perform PhD-level work on one task, then catastrophically fail at problems a five-year-old solves. Her provocation cut clean: “We treat this as temporary. What if we assume AI will continue to be worse than humans at some things, even as it becomes superhuman at others?” What if this isn’t a bug but a feature? Instead of debating when the “humans-in-the-cloud” moment arrives, maybe we should ask: will what goes “in the cloud” even resemble that uniformly capable, human-level intelligence?
Jaggedness cuts against both camps. Recursion assumes uniform capability growth: once AI is smart enough to improve itself, it improves everything about itself. Friction assumes gradual, broad-spectrum advancement. Jaggedness suggests a third possibility: radical capability in narrow domains, permanent and unpredictable gaps elsewhere.2
Room Two: The China Conversation That Didn’t Happen
I came to Curve with specific questions about China. I set out to observe, both intellectually and anthropologically, how people talked about it in this particular room. By the end, the word “China” surfaced less than I expected, even though I deliberately sought out private conversations on the topic. Clearly, China wasn’t the priority compared to other high-stakes debates.
But the more revealing pattern wasn’t how much China was discussed. It was which conversations about China were happening where.
Nathan Lambert, an ML researcher who writes Interconnects, has run the most comprehensive tracking of Chinese open-source AI projects since the DeepSeek moment in December 2024. His talk on open-source AI should have been essential viewing for anyone serious about US-China AI competition, because virtually all competitive Chinese AI models are open-source: Qwen, DeepSeek, Kimi, GLM. This is the technical frontier where Chinese labs are actually competing: the architecture choices they’re making, the efficiency gains they’re achieving under constraint.3
The room was maybe half-full.
Meanwhile, panels on geopolitics and chip export controls drew larger crowds. These are legitimate, important topics, and I’m not dismissing them. But here’s what puzzled me: if the question is “how quickly is China catching up,” then answering it requires understanding what Chinese AI labs are actually building and deploying, not just what export controls are preventing. It requires understanding how they iterate on open-source architectures under compute constraints that Silicon Valley labs don’t face, and how they diffuse capabilities while American labs chase the “ASI moment” with astonishing amounts of capital.

I don’t think this was deliberate avoidance. Curve hosted more than sixty sessions across two days; no one can attend everything. But the pattern feels diagnostic of something deeper: a mismatch in how technical researchers and policy-focused attendees frame the China AI question. For many in the policy crowd, “China” enters the conversation primarily through geopolitics: an adversary whose frontier AI capability must be contained. Export controls. Chip restrictions. National security. It’s a legitimate frame.
But for people like Nathan who track what’s actually happening in Chinese AI development, a different analogy becomes crucial: is today’s Chinese AI like Chinese EVs from a decade ago? Back then, those vehicles were cheaper, less sophisticated, unsexy compared to Tesla, but they were deployed at massive scale and improving fast enough to eventually pass an inflection point. Now BYD outsells Tesla globally.
From what I’ve observed, this is exactly what China’s AI landscape looks like right now: the most-used models in the open-source community, with “not-the-best-but-good-enough-and-free” quality. And China emphasizes practical diffusion over frontier capabilities. As Joe Tsai, the chairman of Alibaba Group, put it on a recent episode of the All-In podcast, the Chinese government is focused on adoption rates. They’re “all in, embracing AI... in five years, the government wants to see 90% penetration of AI agents in society.”
Chinese AI isn’t chasing the “AI 2027” moment. As the national “AI+ initiative” shows, it’s targeting the vast, untouched terrain where most people and businesses actually operate.4 The geopolitical conversation and the technical-development conversation operate under such different mental models that they arrive at entirely different conclusions about what “winning” even means. And at Curve, those two conversations were literally happening in separate rooms.
Room Three: The Writing Room
The biggest gift at Curve was meeting Ted Chiang. I spent a shameful amount of time standing near him, absorbing his presence, listening to the questions people asked and how he answered.
Chiang came for a panel on AI and creativity with Joseph Gordon-Levitt, another creative figure who pays close attention to AI’s development. In short: both worry about AI, deeply.5
Gordon-Levitt framed it clearly: “AI will win at anything quantifiable—box office, Rotten Tomatoes scores, likes, churn. Those metrics will fall. But creativity is also the realm of the unmeasurable: the thing you make because you must, even if no one sees it.” Chiang’s position is starker. AI, he argues, is harmful to human creativity.
He’s written about this in The New Yorker, provoking heated online debate. At Curve he elaborated along these lines: AI takes away the astonishment, the awe, of creating something that has never been created before. He invoked Annie Dillard’s line about writing:
“It is hard to explain because you have never read it on any page; there you begin. You were made and set here to give voice to this, your own astonishment.”

AI writing, Chiang believes, can never produce that astonishment because it involves unintentional plagiarism—recombination without the struggle of original thought.
Art, he said, is the concentration of human intention. The threat generative AI poses isn’t aesthetic (yes, the outputs often look impressive); it’s that production at the push of a button erodes the intentionality that makes us value art in the first place. Chiang is involved in a lawsuit against Anthropic; the company recently agreed to pay book authors a collective $1.5 billion after his books, among 500,000 others from shadow libraries, were used to train Claude. To be clear, Chiang isn’t anti-AI. He thinks AI could be helpful for “nuisance” tasks but corrosive for art, which requires what AI fundamentally cannot provide: intention expressed through a series of deliberate choices.
As an aspiring writer, I think constantly about AI and writing as craft. Two other writers have shaped my view of technology as profoundly as Chiang: Kevin Kelly and Ken Liu. I’m drawn to the mystery threaded through their language and work, as if they’re authors writing from a higher dimension, transmitting secrets through their prose. Yet Kelly and Liu hold almost opposite positions from Chiang on AI and creativity. Known as a tech prophet, Kevin Kelly believes in the autonomy and directionality of technology as a living system, a view that shapes his stance on AI. He recently wrote that he would be “paying AI to read my books,” expressing his stance on the Anthropic lawsuit. He writes:
“If AIs become the arbiters of truth, and if what they trained on matters, then I want my ideas and creative work to be paramount in what they see. I would very much like my books to be the textbooks for AI. What author would not?”
Meanwhile, Ken Liu offers a thoughtful historical analogy. In an essay, he compares today’s AI creative capabilities to the Lumière brothers’ 1890s films—static perspective, primitive machinery, mimicking how humans watch theater. His point, as I understand it: AI’s creative evolution will mirror the journey from Lumière to Christopher Nolan. What started as a fixed, objective viewpoint—workers leaving a factory—will transform into a storytelling universe that manipulates the viewer’s subjectivity, time, perspective, scale. Think about how many narrative elements Inception controlled simultaneously. That’s the direction AI creativity might move.

As someone who loves all three writers, I find myself nodding to all three positions.
I want my writing used to train AI (Kelly’s position). I take writing seriously enough that I never use AI to generate text, only to polish drafts (acknowledging Chiang’s concern about intention). And I believe AI might eventually unleash radically different forms and essences of “art” (Liu’s evolutionary view). Imagine, for instance, a novel with one hundred characters, each with their own Ulysses-style stream of consciousness: time-bending, emotionally saturated, fully realized subjective perspectives. As a reader, you could jump from character No. 1 to character No. 35 to character No. 72, navigating a narrative topology no human author could sustain alone. That kind of structural complexity might require AI assistance to achieve, even as the intentionality (which character perspectives to develop, how they intersect, what the reader should feel) remains irreducibly human.
Chiang and Gordon-Levitt’s conversation represented something that should happen more often in AI insider spaces: artists who take their craft seriously engaging directly with the people building the technology, not to celebrate or condemn wholesale, but to articulate precisely what’s at stake.
End Note
As a writer fascinated by frontier technology, I found the conference both complex and endearing. A persistent problem with mainstream AI coverage is its tone of aloofness, its exoticization of what gets dismissed as the “secret, lucrative, rationalist AI cult” at Lighthaven. That is not what I saw.
Lighthaven is wrapped in open grass, dotted with rose bushes. Each building is classic Berkeley: wooden construction, some with Victorian-era furniture and dark wood-paneled walls inside. The format varies by room: loose discussions in common spaces, formal lectures in the main hall, intimate conversations in attic nooks, casual exchanges on balconies overlooking the garden. High-trust norms mean you can leave your bag on the floor and it won’t disappear. There are quiet zones for introverts and a parent room for diaper changes. The wooden chests contain specimens and curiosities. The bookshelves hold James Scott’s The Art of Not Being Governed, Max Tegmark’s Our Mathematical Universe, and other books from what you might call the AI canon. At night the courtyard transforms: people gather around the fire pit, conversations about AI and the future continuing under the stars.

From an outside perspective, it might seem strange—even absurd—to have two rooms next to each other, one hosting a talk about “AI Will Eventually Kill Us All,” the other about “AI Will Change the World for Good.” How can both be true? How can the same people move between these rooms without cognitive whiplash?
But I find the contradiction interesting (and exciting to write about). The deep, transformative implications of AI mean we need to train ourselves to think in multiple registers at once: the technical and the humanistic, the immediate and the long-term, the recursive and the jagged. If you remember, the Hamilton song ends with Burr singing, “No one really knows how the game is played, the art of the trade, how the sausage gets made.”6
(Big thanks to Clara Collier, Karson Elmgren, and Nathan Lambert for the feedback, and Steve Newman for getting me into the conference.)
The Curve 2025 was co-hosted by the Golden Gate Institute for AI and Manifund.
1. I had no idea who Joseph Gordon-Levitt was, and this is classic The Curve and I love it.
2. Allan Dafoe also gave an excellent presentation on jaggedness and developed a theoretical framework for the concept.
3. Nathan Lambert shared his slides, “Open Models in 2025,” here: LINK. He also wrote a piece reflecting on the Curve conference.
4. Matt Sheehan wrote a piece dissecting China’s recent “AI+” plan.
5. The panel was moderated by Anna Wiener, author of Uncanny Valley, a memoir of her time in the tech industry.
6. I also found Bharat Chandar’s talk on labor market impacts very helpful. He presented his paper “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence,” which shows the clearest evidence yet of AI-induced displacement: a 13-20% employment drop among 22-25-year-olds in AI-exposed occupations (software, customer service, auditing) after late 2022. Mid-career and older workers in the same occupations showed flat or rising employment.