Capitalism - A pseudo-aligned super AI?

"In economic terms, inputs and labor “want” to be more productive, to find higher-valued uses. But in terms of actual agency that’s no more true than saying that giraffes “want” to have long necks. The active agent in biological evolution is natural selection; the active agent in economics is entrepreneurship. Entrepreneurs relentlessly seeking profits search for resources that can be moved, transformed, or combined in ways that can be sold at a profit."

- Michael Munger

One reason I'm skeptical about AI doom is that we're already living among massively powerful "non-aligned" systems.

Two examples:

  • evolution, i.e. the system by which genetic material makes copies of itself
  • capitalism, the system of markets and profit and loss, i.e. a ruthlessly efficient engine for transporting resources (and labor) from lower to higher values.

Both, like the super AGI that doomers fear:

  • are impossible to stop and extremely difficult to reliably control.
  • have had planet wide effects.
  • have produced physical breakthroughs in nanotechnology (e.g. eukaryotic cells, spider silk, DuPont, Monsanto).
  • are further advancing (A)I.

Importantly, these systems are doing this in the real world, as opposed to worries about AI, which have so far been purely hypothetical.

Yes, it's unlikely either capitalism or evolution is conscious, but consciousness is overrated. A sufficiently powerful paperclip maximizer, AGI doomers point out, could wipe out humanity, conscious or not.

What about alignment?

It's hard to argue evolution is aligned with humanity (or any species) in the long term, given that 99%+ of all species that have ever lived are now extinct.

Capitalism is trickier; I'd describe it as pseudo-aligned. When Michael Munger says above that resources move from "lower to higher values," he's talking about what humans value, i.e. in the marketplace. And humans value existing.

That said, it's not perfect, and there are some interesting sub-phenomena. For one, some people desire destruction/crime/fraud etc., and the market accommodates that. But, as David Friedman points out, resources are usually more valuable to their original owners than to thieves, which means there are markets in security too.

More interesting is the idea that humans don't always value what's good for them. And capitalism is good at accommodating that too. For example, the race to capture attention, which is wrecking the mental health of teenagers via dopamine hits and variable reward schedules in order to sell ads. Or selling outrage and partisanship (Fox News, CNN), also in order to sell ads.

Why this hurts the doomer argument

Powerful, unaligned forces != unprecedented

Powerful, superhuman forces beyond our control are clearly not unprecedented; we're living among several. This doesn't mean a powerful AI couldn't be dangerous, of course, but sometimes I'll see someone say, "An AI will be 1000x smarter and faster-thinking than the smartest human. It doesn't matter what its goals are. If they're not aligned with ours we're toast." I think the fact that we're already living among an incomprehensible resource-moving engine that's more powerful than the smartest human shows that's not true.

Foom is less likely

Robin Hanson defines "foom" as the "idea a single super-smart un-controlled AGI with very powerful general abilities [suddenly] appears and is able to decisively overwhelm all other powers on Earth". If AGI doomers are overlooking the power of capitalism (i.e. by narrowly focusing on how AGI will compare to human brains and abilities), then the probability of foom has to be lower than they think.

Is this the most important century?

Holden Karnofsky argues that AI will make this humanity's "most important century so far."

I think it's harder to make this argument once you appreciate how powerful capitalism is and how quickly it took off (coincidentally, also within a roughly 100-year period, between 1750 and 1850, during the Industrial Revolution).

[Plot from Luke Muehlhauser via Kelsey Piper and Vox.]

Battle of the superforces - unaligned AGI vs capitalism?

Another interesting angle: because capitalism is "pseudo-aligned", any potential malevolent super AI that came about would have to go through capitalism to hurt humans.

The hypothetical future AI-vs-human battle isn't like chess, where Stockfish can annihilate any human. Instead it's AI vs. a super-powerful resource engine working roughly on behalf of humans.

(Counterpoint: admittedly, if we do get a dangerous AI, it'll be because this very engine built it.)

(Counter-counterpoint: it's also the only reason we're here; there's no way Earth could support 8 billion humans without functioning markets and the technology they incentivized.)

Conclusion

People who are extremely worried about AI — like Eliezer Yudkowsky or Zvi Mowshowitz — are generally very intelligent individuals.

This is good, but I think it also gives them a tendency to overrate conscious intelligence. As a result they imagine something 1,000x as smart as themselves being literally the most powerful thing in the universe. But the point here is that capitalism, evolution, and other emergent phenomena are very powerful (if not consciously "smart") in ways quite different from just cranking an IQ dial up.

Capitalism is a prion; it turns basically everything it touches into capitalism. Some people see that as a conspiracy or plot to take over the world; the prion only sees other proteins that would be more effective at reproducing capitalism if they were themselves capitalism.

- Patrick McKenzie

And, in a follow-up to that tweet that I didn't see until after writing this essay:

Probably not new observation: fanfic about AIs inscrutable to humans with seemingly implausible ability to reshape the world around their preferences, with worries about them leading to heaven or hell, is made in epicenters of capitalism because of course it is.

- Patrick McKenzie