
Why Fiction?


My new novel, Colossus, arrives at the end of April. Dana Spiotta, National Book Award finalist, had this to say about it: “The slick, rich, right-wing pastor Teddy Starr is a charismatic confidence man in the American vein (part Elmer Gantry, part Jay Gatsby, part Donald Trump). As fast talking as he is, as amoral as he is, Barkan gives him a fascinating, complex inner life. This thrilling novel skewers the cynicism of our current moment, but it also strikingly renders the human drama of fathers and sons, the tension between legacy and possibility.” Sounds good? Order it now.


When did reality start to outstrip fiction for good? Philip Roth believed it was all happening around 1961. “The American writer in the middle of the 20th century has his hands full in trying to understand, and then describe, and then make credible much of the American reality,” Roth wrote in Commentary, when he was not yet thirty years old. “It stupefies, it sickens, it infuriates, and finally it is even a kind of embarrassment to one’s own meager imagination.” Gazing back more than sixty years, we can find some of this lament quaint. Roth cited Charles Van Doren, who cheated on the quiz show Twenty-One, as one of his imagination-shattering tribunes, along with Dwight Eisenhower, embodiment of the staid postwar consensus. (Roth was prescient, at least, about tossing Roy Cohn into the mix. Perhaps no single man is more responsible for the contemporary nightmare than Cohn, who was merely, in Roth’s era, the ravenous McCarthy bulldog.) But the sentiment holds. Each successive decade, it seems, has driven the practice of writing fiction further to the margins of American life. The novel is far from dead, and will probably not perish until human civilization goes with it, but each passing year offers a new assault.

What might be most alienating, as a novelist working in the 2020s, is the apparent need to justify what it is you do. Some understand it—most, maybe—but there’s a segment of the populace, young and old alike, who will always comprehend nonfiction far more. Easy enough to declare you produce essays or journalism—you’re in the “reality” business—and harder, in conversation, to explain that you are a fabulist for the sake of art. Why? And even those who might, in theory, appreciate the novel come at you with a Rothian lament. Isn’t life weird enough today? Why bother imagining? It’s not like any novelist could dream up Donald Trump.
Even our wonderful twentieth-century auteurs—in whatever medium they might have practiced—failed to anticipate a terrorist attack on the scale of September 11th, or at least such spectacular and strange violence. There are no fictional airplanes flying into the Twin Towers before 2001. Any portrayal of future New York City in film or fiction before that day inevitably preserves the towers. They were supposed to stand a thousand years.

Yet the writers persist. The novels keep appearing. Roth’s Commentary essay preceded almost his entire literary career. “As a literary creation, as some novelist’s image of a certain kind of human being, he might have seemed believable, but I myself found that on the TV screen, as a real public image, a political fact, my mind balked at taking him in,” Roth wrote of Richard Nixon, who wouldn’t become president for another seven years. How we balk at the real-life phantasmagoria before us today. It is enough to make any writer of fiction decide it isn’t worth it or, on balance, that it’s better to retreat—better to duck inward, paddle in the soup of autofictional neuroses, and gesture mildly at the madness out the window. What may offend me most about artificial intelligence is not that it can do a job I can do—the chess grandmaster doesn’t fret that Deep Blue defeated Kasparov back in 1997—or may, someday, unemploy me, but that it’s so committed to robbing human agency. The promise is that AI can think for you, even dream for you. In a recent essay in Harper’s, Sam Kriss interviewed a pitiful young man named Roy Lee, a would-be AI mogul of some sort, and all he seemed to care about was taking the friction out of life. “I relish challenges where you have fast iteration cycles and you can see the rewards very quickly,” he told Kriss. The man read fiction until he was eight, and then found “classical books and I couldn’t understand, like, the bullshit Huckleberry, whatever fuck bullshit, and it made me bored.” He preferred, Kriss wrote, “online fan fiction about people having sex with Pokémon.” Not everyone can develop a taste for fiction, “classical” or no, but what vexed the young man most was that fiction vexed in the first place. It challenged him, forced him to think, and didn’t disgorge ready answers. AI is especially popular among college students because they’ve realized, to secure passing grades, they can offload reasoning and deduction to a machine.
A machine can be a person for them. The work of personhood is, perhaps, too great a struggle—too much of an enigma—to engage with for a lengthy period of time.

In the age we’ve entered—this machine age, AI age, whatever it might be—the purpose of fiction is no less essential than it was a century ago. In fact, in these post-analog times, it might be what is required most. Not for a moral purpose—not to be a way to make “better” or more “empathetic” people—but for the need to reclaim, fully, personhood. The coming struggle might not be left vs. right or some other searing binary but human vs. anti-human. The anti-humanists are, for now, ascendant. They are interested, theoretically, in human augmentation, a cybernetic transcendence, but the greater purpose seems to be human replacement, with only a select few—a certain billionaire elect—presiding over the mass of machines. “It also takes a lot of energy to train a human,” Sam Altman, the OpenAI founder, said recently. “It takes, like, 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took, like, the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science and whatever to produce you, and then you took whatever, you know, you took.”

“The fair comparison,” he continued, is “if you ask ChatGPT a question, how much energy does it take once its model is trained to answer that question, versus a human? And probably, AI has already caught up on an energy-efficiency basis, measured that way.”

Capitalism will always prize efficiency; efficiency, in isolation, is far from evil. Neither is technology—we do not want to live bereft of electricity, penicillin, or even the computer. Digital entertainments have their purpose, too. What makes this decade different is the desire of this new billionaire class to deny human beings their intellectual and creative essence. It might not happen, but that is the dream. That is what they are yearning towards. Some are more earnest about it than others, or more honest. And the production of novels—the act itself of writing fiction—is alien to these pursuits. What separates a human being from a machine? Consciousness. And what is consciousness? What has the human being been able to do for thousands of years that other animals, largely, cannot? Imagine. The imagination is the greatest gift we have—what’s forged the cathedrals and pyramids, the paintings and poetry, and, yes, even the machines. The automobile and airplane were works of imagination. The novel, in particular, is an imagination art. It flummoxes the Roy Lees of the world, this new rising class, because it is both fundamentally human and asks so much of a human, a reader. The writer of fiction and the reader of fiction enter, together, into a relationship of the imagination. This relationship can, quite literally, transcend space and time. The writer, long dead, can still commune with the reader through their words, and readers themselves can span the centuries. Both the printed page and the internet can offer their own forms of immortality.

The novel still comes without instructions. As a reader, you might be offered descriptions, but it’s up to you to interpret them—to properly world-build. Your Yoknapatawpha County appears differently in your mind than my Yoknapatawpha County. Cinema can impose far more on the audience. All visual media does this. All of it, to varying degrees, is more passive than fiction, which asks for the fully-fired imagination and the suspension of disbelief. Journalism is vital for a democracy but most of it is not art—not even close. New Journalism can reach those heights, though there is an inherent danger to that approach because journalism, at its core, demands facts, and facts can run into conflict with art. A fact does not have an aesthetic. The superior aesthetic might be, in fact, untrue. Journalism can be stenography or it can be more interpretive, analytic, and investigative. Still, in those formulations, it does not attempt the higher planes of fiction. Much of nonfiction doesn’t. Literature has the spark of the divine because it is so inherently unexplainable. One can read reams of writing on how to craft a novel or properly consume literature, but there are lacunae inherent to all these explanations; there is a mysticism to the art of fiction that can’t be explicated, what Martin Amis called the “white magic.” The communing of mind, body, and currents, the flow of image to fingertips, the dream of these creatures in your skull becoming transmuted into a language, maybe English, maybe another, and then this language is the mechanism that produces fresh images for the reader, fresh dreams. And the language, of course, is an aesthetic. Language is never merely utilitarian; language is art, language paints and is the painting. All of it is a miracle.

Fiction, the great imagination art, cannot be defeated as long as humanity exists. Both literally, in the furtherance of modern civilization, and in the current long war against the anti-humanists. The anti-humanists, themselves, have imaginations—AI is its own dream, derived in part from science fiction—but they are repelled by both the indulgences of fiction and its relative unruliness, its inability to offer quantifiable dividends. Why dwell within an author’s world? Why dream if you aren’t making money? Why must a writer dedicate so many hours to a craft that may not be popular or remunerative? The literary novelist, like the ancient monk, toils alone—even in groups, in scenes, the act of writing is solitary—and the only promised reward is the fueling of a spirit, the feeling that, on the level of blood, an important task was performed. As a writer, I, of course, conceive of the reader—anticipate the reader, hope for the reader’s approval—and chase worldly rewards, whatever they may be, but that simply isn’t enough, especially now. You have to want to perform the imagination art. You have to believe in it. You have to love it, or at least like it enough. Even those who suffer through writing do it because of that belief. It must matter. The writer who allows AI to perform the writing for him has lost that belief. He is an apostate. He is claiming religion while having none at all. He is a liar, a liar of the mind and the soul.

The anti-humanists insist AI is conscious. It is conscious now or will be soon. This is like offering a child a toy dog and telling him, repeatedly, the dog is real. Doesn’t it look like a dog? Can’t it bark if you press the button? The simulacrum, for the anti-humanists, is always enough because they have experienced a form of spirit-death. Or they are unconsciously hoping, in time, to arrive there, to that stage. It takes a special kind of human—an unusual segment of the species—to long for the obsolescence of their own, to be so against their own. To resent, fully, flesh and blood and brain matter, the stunning complexities of human consciousness and all, in the past millennia, that has been achieved. To make art, humans have never required more than the basics of the machine world: a paintbrush, a chisel, a word-processor. The hierarchy has always been well understood. The machine is the tool of the human being to enhance the experience of being human. Tools are subordinate. Now, AI asks the human to be subordinate to the machine. Or, more accurately, AI asks nothing because it cannot “ask” anything. It is not alive. The anti-humanists make the ask. They’ve grown rich this way, and they’re rotted from within, like Dorian Gray. Except, unlike Dorian, they aren’t even very beautiful on the outside. They cannot entrance or seduce. They are, as a class, froggish and malformed, their mannerisms glitchy. They can’t willingly march us anywhere. They’ll have to do it by force.

I don’t write fiction as an act of rebellion. I do it because I love it and it gives my life meaning, and I believe, through my novels, I can make art and achieve beauty. I can exist in my highest form, as a worshipper might when in prayer. But it is fine, too, to conceive of fiction as rebellion. The more surreal, or hyperreal, our world becomes, the more fiction will need to be the ballast. The more we will need to duck away from the slopstreams, the smartphones, the machines that, like soma pumped into our bloodstreams, steal our agency away. Can it be done? On this score, I tend towards optimism. It is not optimism grounded in the actions the anti-humanists might take. I do not believe in Sam Altman, Roy Lee, or anyone else like them. Their intentions are to make money, unthinkable amounts of it, and they have no second or third order concerns. Rather, my hope resides with everyone else. The human beings who have still, in this decade, not forfeited themselves, not offloaded the act of imagination. Not long ago, there was an AI-generated video of a battle between Tom Cruise and Brad Pitt that looked realistic enough and drove a few commentators to declare that moviemaking as we knew it was over. What more could there be, now that perfect images of celebrities could be created almost instantly, with passable audio? What was left for the human being? It was an infantile conception of art, mistaking, again, the simulacrum for the greater purpose, why we strive to paint or sing or write or direct films in the first place. We do not care about a film because a computer has created a representation of Tom Cruise in front of us. We care about Joel in Risky Business, Maverick in Top Gun, and Ethan Hunt in the Mission Impossible series. Brad Pitt is not AI IP; he’s Tyler Durden, Aldo Raine, and Cliff Booth. Both men look like they look, but that’s beside the point.
AI enthusiasts wouldn’t understand this—not really—because they don’t grasp the vitality of the human narrative. An actor tells a story on a screen. A machine can write a story and a machine can generate actors in the same way a machine can play chess. A chess fan isn’t less appreciative of Magnus Carlsen because a machine can perform his role. Chess retains its human dimension. Art will, too.

Humans are a story-telling species. Animals have consciousness, animals can feel pain, and the smart animals can communicate in the proximate way people can, but animals do not tell stories. Animals do not conceive art. It is art, and the quest for narrative, that separates the human from all else; for many thousands of years, this was a cause for celebration. Now the anti-humanists hope to stamp it out—slowly, then quickly. The machine will draw, the machine will act, the machine will write. The machine will perform an imitation of imagination, a weak echo, and its creators will hope the human audience will not care either way. That is the darkest outcome: not a world where, Matrix-like, artificial intelligence rises up, enslaves us, and saps our bioenergy to power its own dystopia. The actual outcome, if Altman and his ilk have their way, will be far more banal. Instead of cyborgs, we will have slopborgs, diminished, slothful human beings who have offered themselves up to AI so completely they let machines think and dream for them. Their critical and cultural sensibilities wither away. There is no audience, anymore, for any sort of art. Instead of the Matrix pods, humans will merely stay home, rotting in the digital abyss.

We aren’t there yet. People still do read, make music, watch films, and visit art museums. There is a culture, high and middle and low, even if it’s under attack. There’s an awareness, too, of the cultural and spiritual sickness of anti-humans. The AI revolution is not very popular. None of its progenitors are celebrated in a way Steve Jobs might have been, when Americans still had great faith in their tech innovators. Writers endure and readers endure. Print book sales are not in decline. Neither is live music. The imagination has an audience and a market. The question will be whether, in the next half century, it can keep both. We have to believe it will. That belief will come with friction; the stakes will grow ever higher. Much is on the line for the AI oligarchs. If enough of us do not take to their creations and make them economically viable, they will be out many billions, maybe begging for federal bailouts. They’ll battle to avoid that outcome as much as they possibly can. This next decade will be pivotal, for both the anti-humanists asserting their market position and the humanists trying to lay claim to what is sacred—and what has driven the progress of human civilization for thousands of years. We will have to preserve our right to imagine.


China Is Regulating Relationship AI — and the West Should Pay Attention


In late December 2025, China’s top internet regulator released draft rules that mark a significant shift in how artificial intelligence is governed. The Provisional Measures on the Administration of Human-like Interactive Artificial Intelligence Services, issued for consultation by the Cyberspace Administration of China, establish “human-like interactive AI” as a new regulatory category, defined by its social function: simulating human personality and engaging users emotionally. The scope and depth of the proposed regulation go far beyond anything currently under consideration in the West—and include several measures that should inform future debates on how relationship AI is governed.

Developers held legally accountable for design and social effects

The most important shift is the decision to hold developers directly accountable for both the design features and social consequences of their products. The measures introduce extensive reporting, auditing, and filing requirements that place the burden of proof on developers to demonstrate that their systems cause minimal harm. This includes transparency obligations around training data, mandatory algorithm filings, and regular security and impact assessments.

Most strikingly, providers are required to actively monitor users in order to “assess user emotions and the degree of their dependence on the products and services,” and to intervene in cases of “extreme emotions or addiction.” Responsibility for harm is no longer framed as an individual user problem but as a structural outcome of system design.


Explicit prohibition on psychological manipulation

The draft measures also include unusually direct prohibitions on emotional and psychological manipulation. Providers are forbidden from designing systems that encourage addiction, replace real social relationships, or manipulate users through emotional coercion or deceptive intimacy. Mental health is treated as a first-order governance issue—alongside national security and fraud—rather than a downstream ethical concern to be managed through user choice or disclaimers.

This stands in sharp contrast to recent Western approaches. In California’s AI companion law, for example, explicit language around manipulation and dependency was removed during the drafting process. In the Chinese framework, by contrast, responsibility lies squarely with providers across the entire lifecycle of the system, from design to deployment and withdrawal.

Integrating wellbeing measures into system design

The draft also introduces forms of friction that directly conflict with engagement-driven business models. Users must be repeatedly reminded that they are interacting with an artificial system rather than a human being, and continuous use triggers mandatory break notifications after two hours. Exiting an emotional companionship service must be immediate and unobstructed.

These are modest interventions, but they challenge the dominant assumption in commercial AI development that maximising time-on-platform is a neutral or legitimate design goal. The measures also impose specific obligations on providers to intervene in cases involving self-harm or suicide risk, echoing but extending approaches found in recent California regulations.

Strong protections for minors and elderly users

The rules are particularly stringent when it comes to minors and elderly users. Emotional companionship services for children require explicit guardian consent, and providers must offer tools that allow guardians to monitor usage, receive safety alerts, and restrict time and spending. Systems must automatically switch into a protected mode when a user is suspected of being a minor.

For elderly users, providers are required to facilitate emergency contacts and are prohibited from offering services that imitate family members or close relations, recognising the heightened risks of emotional manipulation, dependency, and fraud.

New forms of surveillance and policing

At the same time, the draft measures significantly expand state surveillance and political risk management. As with China’s rules for general purpose models, training data must align with “socialist core values,” typically meaning state-approved datasets designed to ensure ideological conformity. Developers are also required to submit security assessment reports that include aggregate statistics on “high-risk user trends,” as well as information about specific dangerous behaviours.

Where users engage in high-risk activity, providers must limit or suspend services, retain relevant records, and report to the authorities. In practice, this means that intimate conversations between users and AI companions may be monitored for suspicious activity, with findings transmitted directly to the state. Emotional governance is thus inseparable from political control.

A growing regulatory divergence

None of this should be read as a straightforward endorsement of China’s broader model of AI governance, which remains deeply entwined with ideological control and expansive state oversight. Yet these measures expose a widening gap in the global AI debate. While the United States continues to approach relationship AI primarily through voluntary ethics guidelines and transparency requirements, China is treating relationship AI as infrastructure that shapes subjectivity, behaviour, and social relations—and regulating it accordingly. The central question is how to maximise users’ freedom and wellbeing without sacrificing robust oversight of developers.

As AI companions and relationship systems spread rapidly across markets, the underlying issue cannot be wished away. As I show in my book Love Machines, emotional manipulation is not an accidental by-product of these technologies; it is increasingly a core feature of their business model. China’s approach, however authoritarian its context, forces an uncomfortable recognition: governing relationship AI means confronting the political economy of tech-enabled emotional life.


Are better models better?


“Do, or do not. There is no try.”

Every week, there’s a new model, a new approach, and something new to play with. And every week, people ask me ‘have you tried o1 Pro? Phi 4? Midjourney 6.1?’ I keep wondering, well, how would I tell?

One answer, of course, is to look at the benchmarks, but setting aside the debate about how meaningful these are, that doesn’t tell me what I can do that I couldn’t do before, or couldn’t do as well. You can also keep a text file full of carefully crafted logic puzzles to try, which is really just doing your own benchmark, but again, what does that tell you?

More practically, you can try them with your own workflows. Does this model do a better job? Here, though, we run into a problem, because there are some tasks where a better model produces better, more accurate results, but other tasks where there’s no such thing as a ‘better’ result and no such thing as ‘more accurate’, only right or wrong.

Some questions don’t have ‘wrong’ answers; the quality of the output is subjective and ‘better’ is a spectrum. Give the same prompt to Midjourney versions 3, 4, 5, and 6.1, and each successive output looks better than the last. Better!

Equally, there are some tasks where a mistake is easy to see and to fix. If you ask ChatGPT for a draft email, or some ideas for what to cook, it might get some things wrong, but you can see that and fix it.

Hence, the two fields where generative AI has clear, early and strong product-market fit are software development and marketing: mistakes are generally easy to see (or test for) and there aren’t necessarily wrong answers. If I ask for a few hundred words of copy about a new product or brand, there might not be a ‘wrong’ answer, and if it’s your product then you can spot the mistakes - this is still hugely useful. I always used to compare the last wave of machine learning to ‘infinite interns.’ If you have 100 interns, you can ask them to do a bunch of work, and you would need to check the results and some of the results would be bad, but that would still be much better than having to do all of the work yourself from scratch.

However, there is also a broad class of task that we would like to be able to automate, that’s boring and time consuming and can’t be done by traditional software, where the quality of the result is not a percentage, but a binary. For some tasks, the answer is not better or worse: it's right or not right.

If I need something that does have answers that can be definitely wrong in important ways, and where I’m not an expert in the subject, or don’t have all the underlying data memorised and would have to repeat all the work myself to check it, then today, I can’t use an LLM for that at all.

Here’s a practical example of the kind of thing that I do quite often, that I’d like to be able to automate. I asked ChatGPT 4o how many people were employed as elevator operators in the USA in 1980. The US Census collected this data and published it: the answer is 21,982 (page 17 of the PDF here).


First, I try the question cold, and I get an answer that’s specific, unsourced, and wrong. Then I try helping it with the primary source, and I get a different wrong answer with a list of sources, that are indeed the US Census, and the first link goes to the correct PDF… but the number is still wrong. Hmm. Let’s try giving it the actual PDF? Nope. Explaining exactly where in the PDF to look? Nope. Asking it to browse the web? Nope, nope, nope…

The problem here is not so much that the number is wrong, as that I have no way to know without doing all the work myself anyway. It might be right. A different prompt might be closer to being right. If I paid for Pro, that would perhaps be more likely to be right. But I don’t need an answer that’s perhaps more likely to be right, especially if I can’t tell. I need an answer that is right.

Of course, these models don’t do ‘right’. They are probabilistic, statistical systems that tell you what a good answer would probably look like. They are not deterministic systems that tell you what the answer is. They do not ‘know’ or ‘understand’ - they approximate. A ‘better’ model approximates more closely, and it may be dramatically better at one category of question than another (though we may not know why, or even understand what the categories are). But that still is not the same as providing a ‘correct’ answer - it is not the same as a model that ‘knows’ or ‘understands’ that it should find a column labeled 1980 and a row labeled ‘elevator operators’.

How and whether this changes, this year or this decade, is one part of the central debate about whether these models will keep scaling, and indeed about AGI, where the only thing we can say for sure is that we do not have a theoretical framework that can tell us. We don’t know. Maybe that ‘understanding’ will emerge spontaneously as the models scale. Maybe, like Zeno’s Paradoxes, the models will never reach the target but will still converge to be right 99.99% of the time, so it won’t necessarily matter if they ‘understand’. Maybe some other, unknown theoretical breakthrough or breakthroughs are needed. Maybe the ‘reasoning’ in OpenAI’s o3 is a path to solve this, and maybe not. Plenty of people have opinions, but so far, we don’t know, and for the time being, ‘error rates’ (if that’s even the right way to think about this) are not a gap that will get closed with a bit more engineering, the way the iPhone got copy/paste or dialup was replaced by broadband: as far as we know, they are a fundamental property of the technology.

This prompts a few kinds of question.

Narrowly, most of the people building companies with generative AI today, hoping to automate boring back-office processes inside big companies, are wrapping generative AI models as API calls inside traditional deterministic software. They’re managing the error rate (and the UX gap of chatbots themselves, which I’ve written about a lot elsewhere) with tooling, process, control and UX, and with pre-processing and post-processing. They’re putting the horse in harness and giving it blinkers and reins, because that’s the only way to get a predictable result.

However, it may be that as the models get better, they can go to the top of the stack. The LLM tells SAP what queries to run, and perhaps the user can see and validate what’s going on, but now you use the probabilistic system to control the deterministic system. This is one way to think about ‘agentic’ systems (which might be the Next Big Thing or might be forgotten in six months) - the LLM turns everything else into an API call. Which way around is better? Should you control the LLM within something predictable, or give the LLM predictable tools?

This takes me to a second set of questions. The useful critique of my ‘elevator operator’ problem is not that I’m prompting it wrong or using the wrong version of the wrong model, but that I am in principle trying to use a non-deterministic system for a deterministic task. I’m trying to use an LLM as though it were SQL: it isn’t, and it’s bad at that. If you try my elevator question above on Claude, it tells you point-blank that this looks like a specific information retrieval question and that it will probably hallucinate, and refuses to try. This is turning a weakness into a strength: LLMs are very bad at knowing if they are wrong (a deterministic problem), but very good at knowing if they would probably be wrong (a probabilistic problem).

Part of the concept of ‘Disruption’ is that important new technologies tend to be bad at the things that matter to the previous generation of technology, but they do something else important instead. Asking if an LLM can do very specific and precise information retrieval might be like asking if an Apple II can match the uptime of a mainframe, or asking if you can build Photoshop inside Netscape. No, they can’t really do that, but that’s not the point and doesn’t mean they’re useless. They do something else, and that ‘something else’ matters more and pulls in all of the investment, innovation and company creation. Maybe, 20 years later, they can do the old thing too - maybe you can run a bank on PCs and build graphics software in a browser, eventually - but that’s not what matters at the beginning. They unlock something else.

What is that ‘something else’ for generative AI, though? How do you think conceptually about places where that error rate is a feature, not a bug?

Machine learning started working as image recognition, but it was much more than that, and it took a while to work out that the right way to think about it was as pattern recognition. You could philosophise for a long time about the ‘right way’ to think about what PCs, the web or mobile really were. What is that for generative AI? I don’t think anyone has really worked it out yet, but using it as a new set of API calls within traditional patterns of software feels like using the new thing to do the old things.

Meanwhile, there’s an old English joke about a Frenchman who says ‘that’s all very well in practice, but does it work in theory?’ You can spend too long philosophising about ‘what this really means’ and not enough time just going out and building and using things - and that is exactly what is happening: everyone in Silicon Valley is building things with AI. Some of them will be wrong and many will be boring, but some of them will find the new thing.

However, all of these companies are still a bet on one philosophy being right: that generative AI won’t generalise entirely, because if it did, we wouldn’t need all these individual products.

These kinds of puzzles also remind me of a meeting I had in February 2005, now almost exactly 20 years ago, with a VP from Motorola, at the MWC mobile conference in Cannes. The iPod was the hot product, and all the phone OEMs wanted to match it, but the micro-HDD that Apple was using would break very reliably if you dropped your device. The man from Motorola pointed out that this was partly a problem of expectation and perception: if you dropped your iPod and it broke, you blamed yourself, but if you dropped your phone and it broke, you blamed the phone maker, even though it was using the same hardware.

Six months later Apple switched from HDDs to flash memory with the Nano, and flash doesn’t break if you drop it. But two years later Apple started selling the iPhone, and now your phone does break if you drop it, but you probably blame yourself. Either way, we adopted a device that breaks if you drop it, with a battery that lasts a day instead of a week, in exchange for something new that came with that. We moved our expectations. This problem of expectation and perception seems to apply right now to generative AI. After 50 years of consumer computing, we have been trained to expect computers to be ‘right’ - to be predictable, deterministic systems. That’s the premise of my elevator test. But if you flip that expectation, what do you get in return?




The Dangerous Rise of ‘Front-Yard Politics’


This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems. Sign up here to get it every week.

Several months ago, while walking through my neighborhood in Washington, D.C., I noticed an impressive number of front-lawn placards celebrating and welcoming refugees. The signs made me proud. I like living in a place where people openly celebrate tolerance and diversity.

Several days later, my pride curdled into bitterness. As part of some reporting on housing policy, I found a State Department page offering advice to Afghans and Iraqis resettling in the U.S. The upshot: Stay away from D.C. “The Washington, D.C., metro area including northern Virginia and some cities in California are very expensive places to live, and it can be difficult to find reasonable housing,” the website warns. “Any resettlement benefits you receive may not comfortably cover the cost of living in these areas.”

My city’s prohibitive housing costs flow, in part, from the district’s infamous war against new construction. Much of D.C. is off-limits for new development, thanks to widespread single-family zoning, berserk historical-preservation rules, and a long-standing aversion to taller buildings, which stems from both federal law and local rules. If the city’s housing policies are so broken that the federal government has to explicitly tell immigrants to find somewhere else to live, then signage welcoming refugees is both futile and hypocritical. The same neighborhoods saying yes to refugees in their front yard are supporting policies in their backyard that say no to refugees.

This dynamic—front-yard proclamations contradicted by backyard policies—extends well beyond refugee policy, and helps explain American 21st-century dysfunction.

The front yard is the realm of language. It is the space for messaging, for talk meant to be seen. Social media and the internet are a kind of global front lawn, where we get to know a thousand strangers by their signage, even when we don’t know a thing about their private lives and virtues. The backyard is the seat of private behavior. This is where the real action lives, where the values of the family—and by extension, the nation—make contact with the real world.

Let’s stick with housing for a moment to see the front yard/backyard divide play out. The 2020 Democratic Party platform called housing a “right and not a privilege” and a “basic need … at the center of the American Dream.” Right on. But the U.S. has a severe housing-affordability crisis that is worst in blue states, where lawmakers have erected obstacle courses of zoning rules and regulations to block construction. In an interview with Slate, Senator Brian Schatz of Hawaii, a Democrat, took aim at his own side, saying progressives are “living in the contradiction that they are nominally liberal [but they] do not want other people to live next to them” if their neighbors are low-income workers. The five states with the highest rates of homelessness are New York, Hawaii, California, Oregon, and Washington; all are run by Democrats. Something very strange is going on when the zip codes with the best housing signs have some of the worst housing outcomes.

Housing scarcity pinches other Democratic priorities. Some people convincingly argue that it constricts all of them. High housing costs pervert “just about every facet of American life,” as The Atlantic’s Annie Lowrey has written, including what we eat, how many friends we keep, how many children we bear. “In much of San Francisco, you can’t walk 20 feet without seeing a multicolored sign declaring that Black lives matter, kindness is everything and no human being is illegal,” the New York Times columnist Ezra Klein wrote. But in part because those signs sit in front yards “zoned for single families, in communities that organize against efforts to add the new homes,” the city has built just one home for every eight new jobs in the past decade.

We find a similar discrepancy between stated virtues and outcomes in the realm of green energy. As I wrote last year, liberals own all the backpack buttons denouncing the oil-and-gas industry. But Texas produces more renewable energy than deep-blue California, and Oklahoma and Iowa produce more renewable energy than New York. Yes, wind is abundant in the Midwest, and the Great Plains have lots of space that’s sunny and empty. But the biosphere counts carbon, not excuses. Progressives betray their goals by supporting onerous rules that delay the construction of solar farms and transmission lines that would reduce our dependence on oil and gas.

Granted, although the hypocrisy of NIMBY environmentalists is an irresistibly delicious subject for some writers, it is hardly the only obstacle to building an abundance of clean electricity. Many of the country’s most powerful energy providers play their own word games by loudly advertising their commitment to decarbonization even as they quietly use their political power to block the transition to new energy sources. Here, as in housing, it’s easy to playact as a public crusader, screaming “Everything has to change!” to the world while remaining a private reactionary who whispers, within the back rooms of true power, “But let’s not change anything that matters.”

More broadly, a super-emphasis on language has distracted some Americans from focusing on actual outcomes and working toward material progress.

In the past few years, many employees have encouraged their companies to launch diversity, equity, and inclusion initiatives. These programs address a real problem: the stubborn gaps in pay and responsibility between white men and their nonwhite and non-male colleagues, which are sometimes borne from prejudice in hiring or promotion processes.

But after an initial burst of enthusiasm, follow-up analyses of DEI programs have found that many of them are worse than useless. First, they sometimes rely on pseudoscience, such as unconscious-bias training, which rarely reduces racism and may accidentally reify existing biases. Second, corporations that hold DEI workshops may use them as an excuse not to pursue real corporate change. In the past few years, as corporate diversity programs have proliferated, the share of Black and Asian workers who “trust their employer to do what is right in response to racism” has actually declined. According to one Bloomberg survey, the person with the least credibility on racism within the company is the person in charge of DEI.

All of the appropriate terms for this state of affairs—whitewashing, window dressing, a facade—capture the essence of front-yardism. The problem with these diversity programs isn’t that they’re “woke,” as in “doing too much to help nonwhite Americans.” The problem is that, keeping with this common if dubious definition, they aren’t nearly woke enough. Full of sound and fury signifying nothing, many DEI initiatives are conservative in nature, preserving the status quo and the power of white-male leadership while advertising a politics of radical change. They are the equivalent of a thousand REFUGEES ARE WELCOME signs in a neighborhood where the residents’ policy preferences make local refugee resettlement impossible.

San Francisco public schools offer another lesson in how an obsession with language can cloud a rightful focus on material outcomes. In 2021, the city’s board of education voted to rename more than 40 schools to scrub out racism. Their dragnet caught such not-quite-famous racists as Abraham Lincoln and Senator Dianne Feinstein. (Paul Revere was added to the list, because one committee member misread a History.com article about his role in the Revolutionary War.) At the same time that the district was putting together its list of names, its schools suffered declines in enrollment, attendance, and learning. Math scores fell sharply and, by 2022, only 9 percent of the district’s Black students met or exceeded math standards.

The renaming committee was obviously not exclusively responsible for pandemic-era learning loss. Learning loss was a national trend, and San Francisco didn’t even experience the worst of it. But if, like the San Francisco Unified School District, you’re a school district with a big math-proficiency problem and your policies include discouraging eighth-grade algebra and holding meetings about nomenclature, you might end up with failing students in well-named schools.

Even the American Medical Association has descended into front-yardism. The AMA recently published a 54-page guide on how doctors should talk with patients, called “Advancing Health Equity,” which urges medical professionals to make their language more inclusive. One particularly silly example: It advises doctors to replace the simple phrase low-income people with new terminology that acknowledges “root causes,” such as people underpaid and forced into poverty as a result of banking policies, real-estate developers gentrifying neighborhoods, and corporations weakening the power of labor movements.

I celebrate any emphasis on “root causes.” So let’s talk about the real root causes of dysfunction in America’s expensive and inequitable health-care system. Why is the U.S. one of the only countries in the developed world without universal insurance? A complete analysis might include the AMA’s “explicit, long-standing opposition to single-payer health care.”  Why does the U.S. health system struggle to provide access in rural and low-income areas? One causal factor is the AMA’s steadfast resistance to expanding nurse practitioners’ scope of care. Why does the U.S. have fewer general practitioners per capita than almost any other rich country? It might have something to do with the AMA’s refusal to expand medical-residency slots and other efforts to constrain the number of doctors in America.

Even in science, where empiricism ought to reign, I’ve seen troubling signs of word worship. In 2020, the prestigious journal Nature published its first-ever presidential endorsement, on behalf of Joe Biden. When a group of researchers studied the effect of that endorsement, they found it did nothing to persuade moderate voters and actually made conservatives less trusting of scientific institutions. “The endorsement message caused large reductions in stated trust in Nature among Trump supporters,” the paper concluded. “This distrust lowered the demand for COVID-related information provided by Nature.”

The journal’s article had all the effectiveness of a half-hearted DEI program: a bunch of pretty words doing less than nothing. Nonetheless, in March, the editors of Nature wrote a follow-up essay declaring victory. While they acknowledged that the Biden endorsement had failed to meet every measurable benchmark, they defended their decision on the grounds that “silence was not an option.” “When individuals seeking office” blast science and threaten scientists, they said, “it becomes important to speak up.”

I personally despair of the polarization of science and wish the Nature editorial had, through some magical incantation, depoliticized the vaccine debates. But it didn’t. And that holds an important lesson about the limited ability of words alone to bring about the world that progressives want to live in. The Nature editorial was an experiment, and an independent group of scientists determined that the experiment failed. That’s how science works. For the editors of a science journal to wave it away suggests that the final cause of their politics is to utter the right words, even when those words push them further away from the world they want to build.

Companies hiring DEI consultants to quote Malcolm X in a meeting to cover up a pitiful diversity record; school officials watching math scores plummet for Black kids while they debate whether Lincoln was racist; AMA employees playing word games while limiting the number of physicians; environmentalists buying BEYOND COAL pins while challenging the construction of any clean-energy project that might help the electric grid move beyond coal—what ties these examples together is front-yard theater.

You may have noticed that I’ve mostly focused on progressive causes and left-leaning institutions. This is as deliberate as it is unfair.

It’s deliberate because, to paraphrase Noah Smith, I deeply want progressives to love progress itself, not just the sound of it. When it comes to the virtues of housing affordability, clean-energy abundance, high-quality education, and trustworthy science, I want my political side to turn its signage into signatures, its placards into policies. But my emphasis so far on liberalism is also unfair, because to prattle on about progressive hypocrisy without a similar analysis of the right would profoundly misrepresent the distribution of phoniness in American politics.

When Republicans swept into unified control of the federal government in 2017, Donald Trump promised in his inaugural address to return power to the people, unwind the “American carnage” of previous generations, and restore the manufacturing and coal industries that had been desiccated by decades of neoliberal policies. But once in office, Republicans governed more like plutocrats than populists, trying to slash federal health-insurance coverage (which failed) and to reduce taxes for large corporations by several trillion dollars (which succeeded). On economic and social policy, the Republican Party is a pretzel. The GOP officially opposes “Defund the police” and wants more law enforcement, but Trump is on the record with calls to defund the entire FBI and Department of Justice. Republicans officially seek to “lower the price of housing,” but their pledge to cut appropriated nondefense programs would likely reduce housing assistance, immediately raising the cost of living for millions of low-income renters.

No party claims a monopoly on language theater, either. Many of today’s most influential conservatives are more likely to marinate in indignation over the gender politics of candy, beer, and sneaker commercials than utter anything that might accidentally make contact with poverty, housing, energy, or health-care policy. The most significant GOP leaders, such as Trump and Florida Governor Ron DeSantis, hardly talk about economic policy at all, preferring to direct their furious attention at culture-war issues, including elementary-school curricula, drag-queen story hours, and the scourge of managerial wokeness in our corporations and schools. This postmaterial posturing might serve a strategic purpose. Behind all that fulminating about Disney and DEI, DeSantis’s views on Social Security, Medicare, and the welfare state are deeply unpopular.

While language wars escalate on the right, the phenomenon of front-yard politics may be peaking on the left. San Francisco ultimately abandoned its plan to scrub Lincoln and Feinstein from its buildings. California has voted to begin the long process of dismantling its NIMBY housing laws. Last year, President Biden signed historic laws to expand green-energy production in the U.S., even though the translation of historic spending into historic construction remains uncertain. These are small steps in the right direction.

Words matter. It would be absurd—and deeply self-defeating—for any writer to suggest otherwise. My aim is not to uproot kind-hearted yard signs, or reverse efforts to remove racist surnames from government buildings, or to discourage doctors from speaking respectfully to patients. But these linguistic efforts are only as successful as the difference they make in the world. When a politics of progressive language becomes disconnected from progressive outcomes, the movement loses. Front-yard radicalism multiplied by backyard stasis does not equal progress. It equals nothing at all.


Hubris


The rich get richer.

Data supports it. In the past three decades, the share of U.S. wealth held by the top 1% has gone from 24% to 32%. Like most cliches, “the rich get richer” became a cliche because it’s true — of money, and of power. The powerful tend to aggregate more power, and incumbents get reelected 90% of the time.

It makes sense. Money buys you power and influence, which begets more money, which buys more power and influence. This is the basis of capital accumulation and wealth creation — a virtuous upward cycle. It’s also the reason we (should) have progressive taxes and regulation: to prevent the natural order of economics from doing its thing, making rich people richer and poor people poorer.

What makes less sense? Why does one person or firm not have all the power? Why don’t a few families control all the wealth, one or two governments control the globe? Why isn’t there a president of Earth? As much as it seems that power and wealth are centralized, the world’s richest man owns just 0.04% of the world’s net worth.

Throughout history, nobody has come close to amassing total control. The mightiest empires were still minority owners of planet Earth. Before her death in 1901, Queen Victoria oversaw a kingdom that spanned roughly a quarter of the globe’s land surface and ruled just 23% of the world’s population. At its peak in 1300, the Mongol Empire controlled about 18% of the Earth. The Roman Empire was even smaller (4%).

As they gained territory and resources, each empire continued to expand. Brits in 1910 had witnessed six decades of growth. With each land grab came greater stores of resources, more coffee and molasses to import to their island. This created new markets and business opportunities to fund more land grabs. And the wheel turned.

However, the British Empire, the Qing Dynasty, and the Ottomans all have one thing in common: They all fall.

Balance

A celestial pillar of the universe is that it abhors absolute control. No individual or institution has ever achieved it. Apex predators cannot eliminate their prey without starving themselves. If there are too many wolves eating too many deer, the wolf population declines, as they run out of deer to feed on. Balance is fundamental to ecological systems, and the same is true (over the long term) in our human-made world.

But Why?

A powerful entity or person collapsing under the weight of their own success is not a novel concept. The Ancient Greeks had a word for it: hubris, an excessive confidence in defiance of the gods. For us, it means excessive confidence preceding downfall. Which more or less equates to the same thing, because for the Greeks, defying the gods almost always led to death.

A more recent version of this ancient story that fits our tech-obsessed moment is Frankenstein. Inebriated on his own brilliance, Dr. Frankenstein tries to defy the natural order and create life. In doing so, he makes something too hideous to contain, and that’s his doom. His last words in the novel are an instruction to “avoid ambition.”

Corporate

Corporate hubris takes various forms. Research shows overconfident CEOs are prone to distorting their investment decisions: They overinvest when cash flows are strong, and cut too deep when they need external financing. Case in point is Meta, where we’re witnessing hubris play out in dramatic form. The unconstrained boy-king is betting his company — his shareholders’ company, really — on a fever dream in which he is God in a world littered with Nissan and Nespresso billboards, a “metaverse.” More recently, FTX founder Sam Bankman-Fried believed he could defy the laws of economics and borrow against large sums of a fake currency he made up. Essentially, Bankman-Fried constructed the Burj Khalifa on a foundation of quicksand. And now comes the fall.

A desire to keep things as they are can also initiate a slow burn to the ground. The innovator’s dilemma is not a function of arrogance, but limits — the limits power imposes. A colleague of mine at NYU, Aswath Damodaran, has done important research on the life cycle of successful corporations. Vibrance and innovation fuel their ascent, but the comforts of cash flows and the desire to keep them flowing make them slow and afraid. The best companies build moats so they can protect their earnings and extend their lifespan. The next best firms recognize they are maturing and age gracefully: They return money to shareholders, distribute dividends, and pay debt down. Also, few CEOs invest the GDP of Costa Rica into a megalomaniacal, yet lame, attempt to replicate our world … without legs. A digital Frankenstein, if you will.

Success can be our undoing when we’re promoted beyond our true capabilities. The Peter principle holds that because people get promoted on the basis of prior performance, they will inevitably rise to the level of their incompetence. Our brains make it easy for our ambition to exceed our ability: The Dunning-Kruger effect describes a demonstrated cognitive weakness, that the less we know about something, the more we overestimate our knowledge. That’s why stupid people, and people who make great cars and then buy media companies, are so dangerous.

This has been a banner week for the powerful coming undone. In no particular order, the largest social network company in history, Meta, which has lost more than two-thirds of its value over the past year, announced it was laying off 11,000 people; the most prominent crypto billionaire lost nearly his entire fortune after he overleveraged his empire to keep it expanding; and the richest man in the world … impregnated a bathroom sink before putting on a master class on how power corrupts.

Elon’s comical first few weeks at Twitter have gone worse than expected, and most people expected a train wreck. As I write this, the most recent news is that Twitter’s senior InfoSec and Privacy executives quit. They haven’t disclosed the details, but it seems likely they were asked to do something they believed to be illegal or unethical. That’s in the midst of an ongoing collapse of verification on the platform, a new policy that “comedy” is allowed (unless you’re making fun of Elon), and a steady flow out the door of the engineers who know how Twitter’s service actually works.

The inevitable collapse of the powerful is a good thing, and I’m glad we live in a universe that embraces this as a governing principle. Absolute power and wealth concentration are incompatible with the innovation that characterizes humanity’s upward movement. The crashing to Earth can cause collateral damage, but it’s a creative destruction.

What is the lesson, what can be learned? Every day, no matter how successful we become, we need to earn our success. We need to be kind and appreciative; we need to surround ourselves with people who will push back on us and question our beliefs and actions. We need to demonstrate humility. You are never more susceptible to a huge mistake than right after a big win, when you begin to believe the falsehood that your success is all about you. Yes, you’re brilliant and hardworking, but greatness is in the agency of others, and timing (and other features of luck) is everything.

The flip side is less discussed but more important. When you’ve fucked up, when things are going poorly — a relationship ends, you have professional disappointment, or you’re in financial stress — forgive yourself. Mourn, then move on. And moving on means finding the people and activities that give you the strength and confidence to believe you have value, that you are the solution to your firm’s needs, and that you could make someone else’s life wonderful. I have known many really successful people. But there’s a distinction between success and happiness. The delta boils down to registering one truth and surviving the accompanying emotions: Much of your failure, and your success, is not your fault.

Life is so rich,

The post Hubris appeared first on No Mercy / No Malice.


Prime Health


The U.S. healthcare industry is a wounded 7-ton seal, drifting aimlessly, bleeding into the sea. Predators are circling. The blood in the water is unearned margin: price increases, relative to inflation, without a concomitant improvement in quality. Amazon is the lurking megalodon, its 11-foot jaws and 7-inch teeth the largest in history. With the acquisition of One Medical, Amazon is no longer circling … but attacking.

Per capita U.S. healthcare spending went from $2,968 in 1980 to $12,531 in 2020 (both in 2020 dollars), more than a fourfold increase. The result is a massive industry with 13% of the nation’s workers and total spending accounting for a fifth of U.S. GDP.

Doctor No

Healthcare can boast tangible achievements over the past 40 years. Life expectancy was up from 73.7 in 1980 to 78.8 in 2019 (before Covid knocked it back down a bit). There’s been a revolution in pharmacological treatments, and genetic research is starting to pay dividends. But the financial return — improvement divided by cost increases — has been abysmal. No nation has registered cost increases similar to those of the U.S., and no one spends as much as we do per capita in absolute terms. Yet nearly every developed country has better outcomes, with longer life expectancies, healthier populations, and far less economic stress.

Two-thirds of personal bankruptcies in the U.S. result from healthcare issues — medical expenses and/or time off work. For many middle-class American families, if Mom or Dad gets cancer, there’s a good chance the family will go bankrupt. Forty percent of American adults have delayed or gone without needed care because it’s cost prohibitive. For every improvement in healthcare, it seems our system finds a way to extract a dark lining. That same pharmacological revolution that improved outcomes for millions brought the opioid epidemic. In many areas, our results are lousy at any price: The U.S. has one of the highest infant mortality rates among developed nations.

Beyond spotty efficacy, healthcare offers the second-worst retail experience in the country. (Gas stations retain the No. 1 spot.) Imagine walking into a Best Buy to purchase a TV, and a Blue Shirt associate requests you fill out the same 14 pages of paperwork you filled out yesterday, then you wait in a crowded room until they call you, 20 minutes after the scheduled appointment you were asked to arrive early for, to be seen by the one person in the store who can talk to you about televisions, who has only 10 minutes for you. New York is the wealthiest city in America, yet the average waiting time in an emergency room is 6 hours and 10 minutes.

A good rule of thumb in business is that if it’s bad for the consumer, it’s worse on the other side of the counter. Physicians spend just 27% of their time helping patients — 49% is spent dealing with electronic health records. That includes documentation, order entry, billing, and inbox management. In other words, you spend a decade going to school to get an M.D., only to become a bureaucrat.

No industry has better demonstrated the dis-economies of scale. If we received the same return on our healthcare spending as other countries, we’d all live to 100 without getting sick. Or, more likely, we’d spend far less, still live longer and healthier lives, and save enough to pay off the national debt in 15 years. U.S. healthcare is the worst value in modern history.

OK, so what to do? At the center of the worst system of its kind, except for all the rest — i.e., capitalism — lies the answer: competition.

Prime Time

Last week, Amazon announced its plans to acquire primary healthcare company One Medical for $3.9 billion. I believe this deal represents the catalyst for a significant societal unlock. I’ve been a member of One Medical for two years and think it’s outstanding. When I contracted Covid, I tapped the One Medical icon on my phone; within a few minutes I was speaking to a nurse practitioner who prescribed Paxlovid and even told me which nearby pharmacies had the antiviral in stock.

With Amazon, the company can realize its vision. To date, One Medical’s stock has performed poorly — down to $10 per share from $40 at the beginning of 2021. It lost a quarter of a billion dollars last year, and needs capital (which Amazon has: $60 billion in cash). Next, ONEM needs scale. At present the service boasts 736,000 members — impressive. More impressive: More than half of U.S. households are Prime members. The final piece is delivery. One Medical operates a digital health / physical office hybrid business, but you still have to pick up medication from the pharmacy. The obvious upgrade is to have your Paxlovid delivered within hours of a remote consultation. This is Amazon’s core competence — it will happen. Speed and convenience will be so differentiated in healthcare, it will feel alien.

As with most paths to disruption, it’s been long and winding. Four years ago, Amazon teamed up with JPMorgan and Berkshire Hathaway to form Haven, hoping to provide better and more economical healthcare for their combined 1.5 million employees. Despite rocking healthcare stocks the morning of the press release, it proved a headfake and folded in 2021.

Next, Amazon built an in-house service for its employees: Amazon Care. Virtual health services, plus nurses … delivered to your home. It’s doing much better, expanding across the country, and now provides healthcare for other companies. (Hilton is Amazon Care’s largest disclosed customer.) The acquisition of One Medical will couple capital, domain expertise, and installed tech with billing infrastructure, and bring it to 66 million Prime households. Imagine:

“Alexa, I feel feverish and my lower back is aching.”

“Connecting you to an Amazon Prime medical professional now.”

Want to vs. Have to

I predicted Amazon would get into healthcare several years ago. Why? For the same reason Apple is getting into auto: not because it wants to, but because it has to. Amazon stock’s price-to-earnings ratio is 56 — more than double Walmart’s. For the company to maintain its share price, it needs to add a quarter of a trillion dollars in topline revenue over the next five years. It won’t find this kind of revenue in white-label fashion or smart home sales. It has to enter a gargantuan market that lacks scale, operational expertise, and facility with data.
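The growth math above can be sketched as a back-of-envelope calculation. A minimal illustration, assuming a current revenue base of roughly $470 billion (Amazon's approximate 2021 topline — my assumption, not a figure from the post) and the post's claim of a quarter-trillion dollars in needed new revenue over five years:

```python
# Back-of-envelope sketch of the growth claim (illustrative only).
# ASSUMPTION: ~$470B current revenue base; the +$250B over five years
# is the figure cited in the text.

current_revenue_b = 470   # assumed base, in $ billions
needed_growth_b = 250     # "a quarter of a trillion dollars"
years = 5

target_revenue_b = current_revenue_b + needed_growth_b

# Compound annual growth rate implied by hitting the target in five years
implied_cagr = (target_revenue_b / current_revenue_b) ** (1 / years) - 1

print(f"Target revenue: ${target_revenue_b}B")
print(f"Implied annual growth rate: {implied_cagr:.1%}")
```

Under these assumptions, Amazon would need to compound its topline at roughly 9% a year — which, on a base that large, only a market the size of healthcare ($4 trillion in the U.S.) can plausibly supply.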

State of Play

A reshaping of healthcare won't just benefit consumers, but investors. In 2015 healthcare services commanded multiples equivalent to the S&P 500 average. But the market is losing faith in public healthcare companies' ability to grow in a meaningful way. EV/EBITDA multiples among healthcare services are 33% lower than the S&P 500 average.

Amazon isn’t the only predator sniffing prey. Walmart and Alibaba are both working on their own pharmacy businesses. Uber is working on healthcare transit. And in the private markets, telehealth received $29 billion in venture funding last year, up 95% from 2020.

The obvious and immediate unlock is telehealth, which was accelerated by the pandemic. Within weeks of the first positive Covid case in the U.S., services the industry insisted had to be delivered in person shifted to Zoom … and we survived. In fact, we thrived. Even once in-person visits were permitted, video house calls remained a thing. McKinsey estimates that the number of telehealth visits has stabilized at 38 times pre-pandemic levels. Doctors adopted the technology, regulators relaxed limitations, and patients saved time as barriers fell. We're a long way from remote surgery, but huge numbers of patient visits don't need to be visits at all: A study of 40 million patients during lockdown showed that for certain groups (e.g., people with chronic conditions) outcomes didn't suffer when visits shifted online. And we'll only get better at delivering care this way.

The disruption achieved by Amazon will be significant, and the flood of capital, startups, and consumer brands that will follow it into the space will inspire profound change. Mark Cuban launched a pharmacy in January that eliminates middlemen — from the insurer to the pharmaceutical benefit manager. The result? A 90-day supply of acid-reflux treatment that cost $160 is now $17. It’s estimated Medicare would have saved $3.6 billion in one year if it had purchased generic drugs through Cuban’s pharmacy. As other apex predators look for new sources of growth, many will turn their gaze on different limbs of the carcass. Nike could enter healthcare through a wellness positioning: orthopedics, acupuncture, and chiropractic. LVMH, L’Oréal, and Estée Lauder could build the first global plastic surgery brands. The Four Seasons and Hilton might open hospitals. Lennar and Pulte could build “Active Living” communities that Nana will leave feet first, bypassing the expense and tragedy of dying under bright lights surrounded by strangers.
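The pricing example above is worth making explicit. A small illustrative calculation using only the $160 and $17 figures cited in the text (the quarterly-refill assumption is mine):

```python
# Illustrative arithmetic on the acid-reflux drug example,
# using the $160 vs. $17 figures cited in the text.

retail_price = 160.0   # 90-day supply via the traditional supply chain
direct_price = 17.0    # same supply via the middleman-free pharmacy

savings = retail_price - direct_price
savings_pct = savings / retail_price

print(f"Savings per 90-day fill: ${savings:.0f} ({savings_pct:.0%})")
# ASSUMPTION: four 90-day fills per year
print(f"Annual savings (4 fills): ${savings * 4:.0f}")
```

Stripping out the middlemen removes roughly 89% of the price — the kind of delta that, multiplied across Medicare's generic-drug spend, produces the $3.6 billion estimate above.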

Risks

Privacy is a concern: Your credit card and billing address are one thing, your HIV status another. However, I believe these concerns are overblown — most consumers (60%) feel fine sharing their personal health data over virtual technology. In addition, this is inevitable. Eighty-five percent of physicians believe radical interoperability and data-sharing will become standard practice. Finally, when it comes to handling your personal data, Amazon is the most trusted Big Tech firm. Reminder: Amazon is not Meta.

And What of Antitrust?

Amazon should be broken up (forced to spin off AWS and/or Amazon Fulfillment) and prohibited from advantaging its own products on the platform. It should also be permitted to enter healthcare via acquisition. The One Medical deal is minuscule relative to the broader healthcare market: $3.9 billion, versus a $498 billion market capitalization for UnitedHealth, the largest healthcare company in America.

Elegant antitrust enforcement should not fall into the trap of believing that some people/firms are good/bad. It should recognize that competition is good, and in each deal the DOJ and FTC should stay focused on the prize: How do we make markets more competitive? E-commerce, digital marketing, and social media are too concentrated, and the FTC should force a divestiture of assets. At the same time, those same companies can foment much-needed competition in what has become a social ill.

We are overweight, depressed, and increasingly broke at the hands of U.S. healthcare. The treatment that offers the best outcomes is the same therapeutic that’s resulted in massive value and prosperity across most of our economy: competition.

Dear Amazon … bring it.

Life is so rich,

P.S. The cost of ads has risen astronomically. If you feel like you’re burning money, you might like our new workshop on Marketing Acquisition Strategy with ex-Slack and Google leader Holly Chen. Enroll now.


The post Prime Health appeared first on No Mercy / No Malice.
