FinTech Dystopia: A summer beach read about Silicon Valley ruining things

An Introduction

There’s a lot to worry about these days. Near the top of my list, though, is Silicon Valley’s slow-motion takeover of our financial system. You may have a very different ranking of worries, but Silicon Valley’s tentacles are most likely wrapping – if not fully wrapped – around the things that are near the top of your list. Those tentacles are only going to get harder to dislodge as Silicon Valley provides financial services to more and more Americans, with all the data, money, and power that come with that kind of business.

The Silicon Valley elite have been trying to “disrupt” finance for decades, and there’s increasing evidence that they’re succeeding. There’s also every reason to believe that if Silicon Valley does succeed, it will cock things up worse than Wall Street ever did. We can’t passively assume that the solutions Silicon Valley comes up with will make the world a better place – at least when it comes to finance, many of its solutions are deserving of the most withering skepticism and some of them should be outright banned. But that doesn’t mean we should remain content with the status quo. If we can resist the lure of shiny apps promising easy fixes, there are things we can do to make financial services work better for the American people.

There’s a lot to unpack there, so where do I start? Well, I guess Washington, DC, on December 14, 2022 is as good a place and time as any…

Setting the scene

That morning, I was sitting in a green room in the United States Senate, making small talk with teen-heartthrob-turned-writer Ben McKenzie as Shark Tank’s Kevin O’Leary paced nearby (if twenty-something me had known that she’d one day get to hang out with Ben McKenzie, she’d have been very excited). I know that all sounds weird enough to be a hallucination, but December 14, 2021 had been a strange day for me too, and unusual December 14ths were becoming par for the course. On December 14, 2021, I had zoomed into the United States Senate from the basement of my childhood home in Australia. I was visiting Down Under after years of Covid-forced separation, but I didn’t want to turn down an invitation from the Senate Banking Committee to testify on the risks associated with a type of crypto known as “stablecoins.” Crypto was riding high in 2021, and amid all the techno-hype, I wanted to make sure that the Senate heard about crypto’s real risks, limitations, and harms.

The morning of December 14, 2022 found me getting ready to testify before the Senate Banking Committee in person in Washington, DC, and my geographic location wasn’t the only thing that had changed. It had not been a good year for the crypto industry, to put it mildly, with some reports estimating that $2 trillion in value had been erased from the crypto markets. McKenzie, O’Leary, Jennifer Schulp, and I had all been invited to discuss the implosion of the FTX crypto exchange…and the revelations that a hedge fund controlled by FTX CEO Sam Bankman-Fried had borrowed billions of dollars’ worth of customer assets from the exchange and bet (poorly) with them. It may be hard to remember now in 2025, as we sit in the middle of yet another crypto boom, but in December of 2022, people were more than ready to talk about crypto’s downsides. (Well, most people were. True to form, Shark Tank’s Kevin O’Leary spent a lot of his time hawking crypto investments.)

I had tangled with Sam Bankman-Fried once, in May of 2022, when he was still Washington DC’s crypto darling. A financial regulatory agency called the Commodity Futures Trading Commission had convened a roundtable to discuss a new kind of business model that FTX had proposed, and Sam Bankman-Fried was there to explain it. I was one of only two public interest representatives at the roundtable of about 40 people. This was honestly a little intimidating, and kind of made it hard to get a word in edgewise. When I finally did get the microphone, though, I listed problem after problem with FTX’s proposal while Sam Bankman-Fried live-tweeted “a lot going on here—simultaneously debating crypto, algorithms, computers, retail, 24/7, etc.…really unclear what her point is.” I walked away as unimpressed with him as he was with me – Sam Bankman-Fried seemed to be out of his depth when forced to step out of his math bubble and grapple with the messiness of how the financial system actually works. But I had no idea that he was perpetrating a multi-billion-dollar fraud – and that he’d eventually be serving a 25-year prison sentence.

By now, many books have been written about the grisly details of FTX, SBF (as Bankman-Fried is often called), and the rest of 2022’s “crypto winter.” These books tell the story of an industry practically quilted out of red flags, which combusted in a spectacular series of frauds and failures once the easy money of 2020 and 2021 ebbed away. But every time news broke about a fraud or a failed company, the rest of the crypto industry performed a curious rhetorical trick. The remaining players circled their wagons and said “they deserved it, they were bad crypto. Not like us – we’re good crypto.”

Personally, I’m more interested in the “good” crypto, which claims to solve every problem in our existing financial system but actually replicates and exacerbates the very worst of traditional finance. More generally, I’m concerned with Silicon Valley-style finance businesses that overpromise, underdeliver, and hurt people along the way – but somehow manage to get a toehold by manipulating their surrounding legal environment to subsidize their underwhelming tech. The grift here is more mundane and subtle than the crypto frauds exposed in 2022, and a lot more so than the Trump family crypto emolument enterprise that grabbed headlines in 2025. But the kind of grift I’m talking about is still corrosive even if it isn’t making headlines.

To state the bleeding obvious, a lot of things in our society need to be fixed. But our experience with crypto – and with many of the other business models we’ll explore in this book – makes it clear that many over-hyped technological solutions are at best a crutch and a distraction, and at worst downright harmful. Actually improving people’s financial wellbeing, for example, will require us to pursue real, slow, piecemeal, democratic solutions. The same is true for many of the other pressing problems that our society faces: in areas ranging from education to healthcare to climate change, our assumption that “technology will fix it” (particularly that “AI will fix it”) is getting in the way of so many needed reforms. And so while this book focuses on crypto and the broader assortment of consumer-focused financial technologies known as “fintech,” these examples illustrate the dangers of a much bigger phenomenon: Silicon Valley-style techno-solutionism.

Someone once gave me the advice that I should write about what makes me mad. For many of the 10+ years I’ve been researching crypto and fintech, though, I didn’t quite have the word for what was gnawing at me. I knew that I was exasperated by the many businesses that promised to solve all our problems, and then didn’t deliver. I knew that I was angry that those same businesses were getting away with inflicting a lot of harm on consumers, because regulators were too afraid of getting in the way of innovation to rein them in. I knew that I was frustrated that the word “innovation” itself seemed to be sprinkled with pixie dust, treated as something that was always good and something we should never get in the way of. And I felt like I was taking crazy pills (depending on your generation, you’ll recognize this line either from the movie Zoolander or from the gif) when it seemed that so few people could see that tech businesses, and the venture capital industry that helps fund them, have incentives to do some not-very-good things in order to profit.

And then I came across the idea of “techno-solutionism.” In his 2013 book To Save Everything, Click Here, Evgeny Morozov explains that while humans have always had a tendency to look for easy fixes to complicated problems, the solutions offered by modern technology can be particularly hard to resist. The internet helps scale up solutions so that it seems like they really can solve all our problems, and the seeming wizardry of new technology discourages us mere mortals from asking whether the tech industry’s promises are too good to be true.

Many in Silicon Valley proudly champion this techno-solutionism – although they usually call it something else, like techno-optimism. Most (in)famously, venture capitalist billionaire Marc Andreessen wrote a “Techno-Optimist Manifesto” that states “We believe that there is no material problem – whether created by nature or by technology – that cannot be solved with more technology” (the sentence immediately before this one reads “We believe this is why our descendents [sic] will live in the stars” – I wonder if they will finally have spell-check when they get to the stars…). Morozov views this kind of perspective as dangerous techno-solutionism, though – and so do I.

The death of domain expertise

Techno-solutionism can warp our worldview: if we think technology can solve all our problems, then the only problems that we’ll end up solving are the ones that lend themselves easily to tech fixes. In other words, we’ll end up flattening complex structural and political problems into things that computer code can address, and ignore all the messy elements it can’t. We’ll also delegate problem-solving away from our elected representatives and to the tech elites. While I fully appreciate that people are not always all that confident in our elected representatives, putting the Silicon Valley elite in charge of solving our problems is a decidedly anti-democratic – and pretty scary – prospect. We’ll dig into this in much more detail later in the book; for now, the computer scientist/journalist/professor Meredith Broussard offers us an excellent summary of the values we’re buying into when we accept their techno-solutions:

Ayn Randian meritocracy; technolibertarian political values; celebrating free speech to the extent of denying that online harassment is a problem; the notion that computers are more “objective” or “unbiased” because they distill questions and answers down to mathematical evaluation; and an unwavering faith that if the world just used more computers, and used them properly, social problems would disappear and we’d create a digitally enabled utopia.

While technological solutions might seem at first blush to be clean and mechanical, free of human vices and foibles, technology is more than just the components of a machine or a line of software code. Technology is inextricably intertwined with the people who develop and deploy it, and so the incentives and beliefs of the Silicon Valley elites will impact us through the technological tools they fund, develop, and deploy. To state what should be obvious but often isn’t, their incentives are typically to grow and profit no matter what, and significant harms can be inflicted on society as a result. Yet tech-based businesses tend to benefit from a veneer of neutrality: we sometimes hear slogans like “software commits no crimes” (which have a whiff of the NRA’s “guns don’t kill people” rhetoric about them).

The venture capital industry, which decides which startups to fund and is therefore very influential in determining which tech solutions make it to market, is very prone to fads. Just in the last decade we have seen hype cycles around blockchain technology, and then AI (and for the real die-hards, there’s “AI on the blockchain”). These tech trends make techno-solutionism even more damaging, because the cart is put before the horse as the industry asks “how can X hot technology solve the problem?” instead of starting with the problem at hand and figuring out the best way to solve it. As this book will explore, fintech is rife with attempts to force the square pegs of blockchain and AI technology into round holes.

Even though they’re empty, some techno-solutionist promises have proved very effective in distracting elected officials and other public policymakers from pursuing real solutions. Perhaps the slow plodding changes of real reform – which require tough compromises and can take generations – seem so unappealing that policymakers are willing to suspend their disbelief in the face of a seemingly shiny silver bullet tech solution. Perhaps, in some cases, it is as simple as “money talks”: the Silicon Valley elite lobby heavily at all levels of government. But it’s probably also true that a lot of our policymakers really do accept the hype about these solutions because they simply don’t feel qualified to push back.

In America today, technological innovation and technological expertise are revered and people are often demeaned for being behind the curve on technology – even if they are experts in other things. But Silicon Valley gets a free pass that allows it to remain ignorant of the domains in which it sticks its techno-solutions. After all, that’s how Elizabeth Holmes was able to get away with fraud at her healthcare startup Theranos for so long. As scientist Derek Lowe put it:

I think that working primarily in hardware and software can give a person an exaggerated and distorted view of reality and our ability to shape it…It must feel like being able to do magic, these acts of creation, and it's a natural error to assume that this is how the rest of the world works, too. But it doesn't…Theranos had not been in existence long enough, with enough people and enough facilities, to come anywhere close to what was needed to have accomplished anything of the kind. But if you're used to software-style innovation, you might not realize that. You just need a few folks stuffed in a room with a bunch of workstations, that's all. They'll stay up all night, flailing those keyboards; they'll get it done. That's how innovation happens, right?

Unfortunately for Theranos (and for the people it misdiagnosed with diabetes and HIV), domain expertise and understanding the context in which a technological solution will be deployed are critical for figuring out whether that solution will, in fact, deliver. People should not forget what they know, or think it is somehow less important than technological expertise. Think about the thing you’re an expert in, what you’ve learned from your own lived experience. You developed that expertise by studying something or doing something for a long time. Don’t throw it out the window just because someone selling something tells you that their tool can do it better.

And let’s be real – no single person from Silicon Valley is fully up on all the technologies that are out there, either. In a highly informative and entertaining blog post titled “I Will Fucking Piledrive You If You Mention AI Again,” one quasi-anonymous data scientist notes that tech experts do not have “the ability to trivially switch fields the moment the gold rush is over, due to the sad fact that we actually need to study things and build experience.” The post goes on to say that the technology hype men don’t have that problem because “the core competency of smiling and promising people things that you can't actually deliver is highly transferable.” So if the person who was pitching you blockchain solutions is now pitching you AI solutions, and is trying to make you feel small because you have questions about how these solutions work and whether you actually need them, then I hope you’ll stop and wonder (with apologies to Taylor Swift) if “they’re the problem, it’s them.”

Most of this book’s examples of techno-solutionism come from the world of finance. Crypto and other fintech startups received a significant chunk of the venture capital shelled out in Silicon Valley over the last decade, and because finance is so highly regulated, these examples expose how much “disruptive innovation” can come from skirting the law rather than technological superiority. Also, it just so happens that finance is the domain I know best. But techno-solutionism is everywhere, and now that you know the term, I’ll bet you’ll be able to identify your own favorite example from what you know.

One of my goals with this book is to empower you to ask questions and express concerns about what technological innovation is and isn’t enabling in our world. Many of us (myself included) have censored ourselves at times, not wanting to look foolish by questioning whether tech businesses can actually deliver on their hype. We tend to fear being labeled a “Luddite,” but after learning a little bit about the actual Luddites, you might decide that being a Luddite is not such a bad thing after all.

My path to Luddite enlightenment

The Luddite rebellions were staged in England in 1811–12, as craftsmen were losing their livelihoods to the new machine inventions of the Industrial Revolution. Before the Industrial Revolution, these craftsmen had “long traditions of autonomy and status,” and they understood that if their work could be replaced by machines, they would lose their occupations – or at the very least, be forced to become factory workers and lose a significant portion of their income and independence. That is indeed what happened, to tragic social effect, as recorded by the brilliant novelist Charlotte Brontë (not-so-fun fact: Brontë’s father witnessed the fatalities of at least one Yorkshire Luddite rebellion first-hand). In her book Shirley, Brontë writes:

As to the sufferers, whose sole inheritance was labor, and who had lost that inheritance – who could not get work, and consequently could not get wages, and consequently could not get bread – they were left to suffer on, perhaps inevitably left; it would not do to stop the progress of invention, to damage science by discouraging its improvements; the war could not be terminated, efficient relief could not be raised; there was no help then, so the unemployed underwent their destiny.

But the Luddites did not undergo that destiny willingly; nor did they go straight to violence. As Brian Merchant writes in his book Blood in the Machine: The Origins of the Rebellion Against Big Tech, “before the Luddites rose up, weavers and croppers and cloth workers tried for over a decade to get Parliament to pay attention to their plight, and they were ignored, accused of agitating illegally, and disparaged en masse.” Only then, having exhausted all other avenues available for exerting influence on the policies of the day, did the Luddites start smashing the machines.

Today, the term “Luddite” is usually used to ridicule people for being “anti-tech,” but you can love technology in general and still worry about the impact of a particular type of technology. You can still stop to ask whether some kinds of technological innovation should be paused or have their impact softened even as you support other kinds of technological progress. After all, was it really so silly for the original Luddites to demand a reckoning with the social consequences of the Industrial Revolution? Might the world have been better off if those in charge had offered some accommodations that addressed the disruptions the craftsmen faced? As the philosopher John Ralston Saul puts it:

The debate should not have been over whether there should be technological progress or not. It was more accurately a question of progress in what conditions: what progress, when, in what circumstances? Market extremists would argue that what happened was inevitable and eventually brought great prosperity. Their view ignores the social disorder, followed by suffering, followed by serious social disorder that this approach towards change brought on. Communism was the direct result.

By now, we’ve been conditioned to hear the word “disruptive” as a positive, as a way of taking down the old (by implication, worse) by replacing it with something new (by implication, better). But we shouldn’t forget that disruptive innovation can bring about disruption to livelihoods, to regulations that protect the public, to our natural environment, and much more. This disruption will inevitably benefit some players, but it won’t benefit everyone. There will be winners and losers – and the disruptions suffered by the losers are all the harder to stomach when the technological innovation doesn’t even deliver on its promises.

In 2025, we’re not smashing machines (at least, no one’s taken a hammer to a data center yet as far as I know). Today’s Luddites usually use milder methods of public participation to demand a reckoning with technological innovation and its impacts on society. When technological innovation does indeed bring benefits to society, today’s Luddites can draw attention to the negative disruptions that accompany those benefits and call for measures to soften them.

Sometimes, though, technological innovation will be nothing more than the cutting edge of a long tradition of “rent-seeking,” where the developers create wealth for themselves without generating any corresponding social benefit. Manipulating the surrounding legal environment for rent-seeking purposes is a big part of many tech business models, and the Luddites among us are the ones who are ready and willing to call this out. And Luddites don’t just benefit society by focusing attention on technology’s negative disruptions; in some circumstances, their insights can actually serve to make the technology itself better. Luddites can supply domain expertise on how technology will be used by actual humans, which will help it to perform better in good times and bad. In short, we need Luddites in this world.

It took me a while to become one. When I first started researching the cryptocurrency Bitcoin in 2015, I focused on its financial fragilities – why it wouldn’t work as money, its defects as a Ponzi-like investment. But in the first paper I published on Bitcoin, I fell for the tech hype and wrote about the “truly innovative” blockchain technology associated with Bitcoin, arguing that it should be applied by financial institutions to make their payments processing more efficient. Whoops. I already had the financial chops to call out Bitcoin, but hadn’t yet honed my tech skepticism. As sci-fi author and tech commentator Cory Doctorow once colorfully tweeted, “The Venn intersection of "people who code" and "people who understand finance" is so small it's a *sphincter*”.

It took me a few years to get to a place where I could understand Bitcoin’s technological AND financial flaws. Over those years, I learned a lot from independent technologists about blockchain technology and its overwhelming limitations. Although no one cared that I didn’t have a computer science degree when I was parroting hype about “revolutionary” blockchain technology, as I have become more knowledgeable about fintech technologies, people from the crypto industry have increasingly suggested that I should stay in my (law) lane. And “suggested” is probably too tame a word – you should have seen my old Twitter feed. I have chosen to politely disregard these not-so-polite suggestions, though, because you can’t figure out how to regulate crypto if you only listen to the industry’s rosy depictions of what their technology will do. An outsider’s perspective is a critically important part of these policy debates: sometimes it is only with a little distance and a lot of context (and, dare I say it, no profit motive) that we can see things for what they truly are.

And while we’re on the topic of profit motive, in what I can only assume is an act of projection, some crypto bros cannot fathom that I might actually want to call out crypto BS just because I think it’s the right thing to do. I am regularly accused online of profiting from my criticism – of being on the payroll of the big banks, or Elizabeth Warren, or some shady conspiratorial figure, take your pick. But the truth is, I am a law professor and I would be paid the same no matter what I chose to research. My academic independence is a privilege, and one that I try to exercise in the public interest – like the Lorax who speaks for the trees because the trees have no tongues, I speak for the people who don’t want their financial system crashed but don’t have the time to learn about what might crash it. Even if you disagree with the arguments I make in this book, I hope you’ll accept that I come by them honestly. After all, as journalist Zeke Faux noted in his account of FTX and Sam Bankman-Fried, “there is no profit in being skeptical.”

Technology isn’t magic

Despite the fact that skepticism isn’t profitable, the good news is that more and more people are asking, “just because we can do something with technology, does that mean we should?” This is an important question, but there’s an even more fundamental question we need to ask first, and that is “can this technology actually do what we’re told it will?” That’s the question I think is often missing from debates about technology in our society, and one that this book will tackle. Sometimes, the reality is that a particular technology is incapable of doing what its developers and backers promise it can – but even technological “solutions” that can’t deliver can still be harmful. It’s certainly scary to think of many of the tech industry’s most outlandish and dystopian visions coming true, but we shouldn’t ignore the present harms associated with some of the mediocrities that Silicon Valley churns out, or the harms associated with the development process along the way.

In case you’re worried that this book will be an anti-technology screed, though, let me put you at ease. It is obviously true that many people around the world have benefitted enormously from many kinds of technological innovation. The development and commercialization of new technologies involves a lot of uncertainty, and the optimists in our world drive a lot of our technological progress because they are the ones willing to take a leap of faith in the face of that uncertainty. Part of the optimists’ job is to spin up exciting stories of potential to attract investors who will fund experimentation (and in some contexts, to convince the authorities to allow that experimentation). At a certain point, though, these stories need to be met with a reality check. My goal here is not to push back against technology itself, but to push back on the disproportionate and damaging optimism that animates techno-solutionism.

Unfortunately, our collective yin and yang of skepticism and optimism are badly out of whack these days. The pendulum seems to have swung so far in favor of unsubstantiated optimism about technological innovation that we fail to exercise healthy skepticism when confronted with Silicon Valley’s unrealistic or uninspiring promises – even as the passage of years provides ample evidence of how unrealistic and/or uninspiring those promises always were. We seem to have forgotten that technology isn’t magic, and that when real-world constraints can’t be wished away, the technology juice may simply not be worth the squeeze.

This collective delusion creates an environment in which it’s relatively easy for those pushing lackluster tech-based businesses to keep them alive – with subsidized funding and legal dispensations – long after they should have been taken off life support. In the meantime, we tolerate the harms perpetrated by these businesses and are distracted from pursuing real solutions to the problems that their just-over-the-horizon technological solutions will purportedly address. But maybe – just maybe – if we highlight examples of technology’s failed promises and show that they are not isolated incidents but part of a rich tapestry of overpromising and underdelivering, we can reignite our collective skepticism.

A huge proportion of our society already has a pretty low opinion of crypto, for example, but most people have chosen to just ignore it rather than to learn enough about its technological and financial infirmities to validate their skepticism. We all have limited bandwidth, and in many ways, I respect ignoring crypto. It’s a pretty solid life choice. But the problem is that while most people were busy living their lives, the crypto industry spent vastly more than any other industry ($245 million) on the 2024 election cycle in order to secure special legal treatment – legal treatment designed to allow crypto to become more integrated with the rest of the financial system while avoiding the rules that the rest of the financial industry has to play by. And if crypto helps blow up the traditional financial system as a result, then even people who choose not to touch crypto will be impacted.

People who lived through the 2008 financial crisis and its aftermath don’t want to go through that again. Ironically, though, some of the people who are into crypto arrived there because they were so disgusted with the traditional financial industry after 2008 (others just want to make a quick buck, but I’ll come back to that in a few chapters’ time). It’s probably no coincidence that the fintech industry – with its goal of disrupting traditional ways of providing financial services – really rose to prominence after 2008. Given the global misery that the traditional financial industry unleashed at that time, it’s easy to understand people’s desire for a technological magic wand that could wipe the slate clean and start over. I mean, who wouldn’t want to replace the worst players in the financial industry with neutral, well-behaved technology?

But technology sadly isn’t neutral, and there’s no guarantee that the people who develop it and use it will be well-behaved. The same economic forces that unleashed the 2008 financial crisis on us will also drive the way fintech is developed and used – only with fintech, there is even less regulation to restrain it, in part because the people in charge don’t want to hold back technology’s seemingly magical potential. Science-fiction writer Arthur C. Clarke once wrote that “any sufficiently advanced technology is indistinguishable from magic,” but there’s a big difference between technology being magic, and technology being indistinguishable from magic. It is true that people often don’t understand how technology works and are wowed as a result, but unlike magic, technology doesn’t exist separate and apart from real-world incentives and constraints. It’s important that we don’t allow our frustrations with the existing financial system to blind us to the flaws in a mirror-image fintech-based system that replicates and exacerbates everything we didn’t like about finance in the first place.

So how do we shatter the mystique of technological solutions, so that we can debate their pros and cons in the same way that we debate the pros and cons of everything else, and make room on the table for other, non-technological options? At the individual level, we can sometimes protect ourselves by simply opting not to use the techno-solutions in question. But for more collective harms, there are no easy answers: affected communities and their allies need to organize and advocate for change, and doing so can be intimidating in the face of what seems like technological wizardry. Still, as Lao Tzu said, a journey of a thousand miles begins with a single step, and the first step is to start talking about technology differently. On issues ranging from tobacco to climate change, flipping the narrative has proved a precondition to making any policy changes, and we won’t be able to rein in Silicon Valley’s harms if the stories we keep telling about technology are couched in terms of reverence, awe, and magic.

Techno-solutions should instead be met with skepticism. At its most basic level, that skepticism should recognize that the developers of such solutions are first and foremost selling something, not trying to make the world a better place. We should therefore put the burden on them to convince us that their technology is not bad: not bad in the evil, harmful sense, and also not bad in the sense of just plain sucking. Our skepticism should also be informed by domain expertise: we should be at least as contemptuous of tech developers who don’t understand how their solutions will be deployed as the tech elites are of us for not being tech experts.

On paper, this kind of approach doesn’t sound so hard; in many ways, it just seems like common sense. But in the real world, changing the narrative around technology is going to be an uphill battle. Techno-solutionism is encouraged by our brains, extolled by our media, and fed by our politics; at the end of the book, I will talk about dismantling these and other kinds of supports for techno-solutionist narratives. In a chicken-and-egg-style dilemma, though, none of the necessary steps will seem worth it unless we already understand the dangers of techno-solutionism.

In many ways, the prognosis doesn’t look good: Silicon Valley is certainly ascendant right now and if we continue uninterrupted on our current trajectory, harmful laws that entrench Silicon Valley’s power will continue to gather support because of (sometimes willful) misunderstandings of technology’s limitations and a desire to promote technological innovation no matter what. But there is also reason to hope. In Blood in the Machine, Brian Merchant identifies the vanguard of a neo-Luddite movement that is increasingly demanding a reckoning with Silicon Valley’s social costs. This book encourages you to join these neo-Luddites by exploring and voicing your own skepticism about technology’s mediocrities and hollow promises.

In future chapters, you’ll start to notice a very repetitive pattern. First, develop a business model that centers a particular technology. Tell some stories about how that technology will solve a legitimate problem (preferably using the words “democratize” and “disrupt”). Bend or break some laws with that business model, and profit from not complying with the law. Get away with bending or breaking the law, and with harming people along the way, because lawmakers and regulators are too timid to stop “innovation.” Get big enough that you can convince lawmakers and regulators to change the law so that you never have to comply with it and those who are harmed have no recourse – because you haven’t actually solved the problem, and your business model isn’t good enough to survive if you have to follow the same rules as everyone else. Bonus points if the law is changed in a way that guarantees you a monopoly or oligopoly position. Lather, rinse, repeat.

I want to leave you with a word of caution before we start looking at these tech business models in detail. It is very easy to fall into the trap of accepting industry hype at face value, and then criticizing that hype rather than reality – as professor of science, technology, and society Lee Vinsel puts it, “it’s as if [criti-hypers] take press releases from startups and cover them with hellscapes.” But criticizing hype rather than reality can unwittingly amplify that hype and distract us from technology’s real but more mundane harms. That doesn’t mean that we shouldn’t be proactive and forward-looking: we absolutely should think about the downsides of realistic trajectories of technological development and use. But the word “realistic” is important here.

Criti-hype is a product of techno-solutionism because it uncritically assumes that technology will eventually do exactly what its boosters say it will, and then criticizes that. It glosses over present-day harms associated with the developing technology, and discounts the possibility that the technology is simply not capable of living up to the hype – not just that it’s “early days,” but that the technology will never be able to deliver. Sometimes, the limitations of a particular technology can be hard to figure out in the moment, but sometimes they’re actually pretty clear if you care to look. In a few chapters, I will look at why the blockchain technology that gave rise to the crypto industry is simply incapable of delivering on many of its boosters’ promises, but the phenomenon of criti-hype is probably most evident right now when it comes to AI (which we’ll also look at later in the book).

In 2023, many AI industry personnel, academic experts, and other public figures signed on to a statement that read, in its entirety,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

In some ways, it’s hard to argue with that kind of statement – I don’t want the human race to be exterminated by the Terminator either. But the problem with this statement is that it implicitly endorses the view of AI as powerful enough to destroy humanity. Right now, we are very, very far away from computers displaying any real kind of general intelligence, let alone intelligence that is superior to human intelligence. The technology in use today doesn’t seek to establish causality, or engage in formal reasoning; it can’t reflect on or engage with its own existence in a world populated by others. Instead, it combs data sets for patterns, and then uses those patterns to formulate decision-making rules for the future. Acclaimed science fiction writer Ted Chiang (who wrote the short story that inspired Arrival, one of my all-time favorite movies) has suggested that it would be more accurate to describe what we currently call “AI” as “applied statistics.”
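To make Chiang’s point a little more concrete, here’s a minimal sketch of my own – a toy example invented purely for illustration, not code from any actual AI lab – of what “combing data sets for patterns and turning them into decision rules” can look like. It counts which word tends to follow which in a scrap of text, and then “predicts” the next word by picking the statistically most common follower:

```python
# A toy illustration (invented for this aside) of "applied statistics":
# comb a data set for patterns, then use those patterns as decision rules.
from collections import Counter, defaultdict

training_text = (
    "technology will fix it technology will disrupt it "
    "technology will democratize finance"
).split()

# Step 1: find the patterns, i.e. count which word follows which, and how often.
followers = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    followers[current_word][next_word] += 1

# Step 2: turn the patterns into a rule for the future: always guess the most
# frequently observed follower (no reasoning, no causality, just counts).
def predict_next(word: str) -> str:
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "<never seen this word>"

print(predict_next("technology"))  # prints "will", the pattern in the data
print(predict_next("blockchain"))  # prints "<never seen this word>"
```

Real systems are vastly bigger and more sophisticated than this, of course, but the basic move – statistics in, decision rules out – is the same, which is exactly why “applied statistics” is such an apt description.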

A statement that says “mitigating the risk of extinction from applied statistics should be a global priority” obviously doesn’t pack quite the same punch… and if we’re not distracted by the danger of extermination at the hands of our artificially intelligent computer overlords, then maybe we can devote more attention to addressing AI’s more pressing but more mundane harms (things like discrimination, misinformation, privacy violations, and significant energy and water costs). In short, the harms and limitations of any technology-based business model need to be considered together so that our critiques focus on real harms – not just the dark side of the dreams the industry is selling – and aren’t derailed by excitement over unrealistic industry promises. Because, as we’ll explore in Chapter 1, unrealistic promises are Silicon Valley’s stock-in-trade.
