While the title may seem disrespectful, I want to clarify that I respect writers. Chekhov, King, Liu, Stephenson, Gaiman, Herbert, Crichton, and Le Guin have all written books that have influenced me deeply. I love their work, and I respect their craft. But I don’t take advice from them, because they’re not the engineers who are currently building the present.
Charlie Stross wrote the blog post titled “We’re sorry we created the Torment Nexus”, in which he writes “I—and other SF authors—are terrible guides to the future” - and I agree, but not in the way Stross presents it. Stross argues that tech billionaires use SciFi books as guides rather than cautionary tales. A Reddit comment discussing this goes as far as to say "Problem is in that book there is a company making money from it. Sure they are evil and unethical, but making massive profit. Tech companies see that and go. Don’t worry. Our motto is ‘Don’t be evil’." - and while this isn’t representative of the wider pessimist audience I’ll be addressing today, a lot of people think similarly. Art degrees from the University of Massachusetts in hand, they’re confident in their judgment of the world - wearing their posturing on their sleeves.
There are a few problems with this: this is not how companies think, and this is not how products are built. A product starts first in an engineer’s mind. Idea guys are a dime a dozen; it is the builder whose ideas matter the most, because she will build it. She will be working on many different things in life - as one does - and come upon a gap between her wants and her reality. This is how Brian Chesky developed Airbed & Breakfast (AirBNB), this is how John Romero developed Doom, and how Reid Hoffman developed LinkedIn. Of course, some projects start because the engineer wants to practice their craft: Eric Barone of Stardew Valley fame and Linus Torvalds of the Linux kernel come to mind. There are product managers too - the MBAs untrained in technology who will offer either excellent suggestions or horrible ones, and their ideas spawn other projects (their other task is procuring resources and orchestrating and organizing for the team, but they’re again hit or miss - I’ve personally worked with hits and misses in equal number).
There are some larger-than-life projects that come the billionaire executive’s way in more ways than one, which need not be discussed here, but Science Fiction is hilariously low on that list. Even among companies that take on SciFi names, of which there are many, I’d bet fewer than five were actually inspired by SciFi ideas - usually because SciFi ideas lag actual research by a few decades. Time travel isn’t possible; cryonics research has been around since The Prospect of Immortality (though non-cryonic life preservation dates back to H.G. Wells’ 1901 story “The New Accelerator” - but by this definition, one could argue eternal life is a concept as old as time); mobile phones were around before Star Trek; the Internet wasn’t even conceived of in most SciFi until it was already becoming a popular thing; no SciFi book ever developed the idea of Bitcoin; and so on and so forth.
Thus when Alex Blechman tweets the following:
I always squint. I always question their understanding of technology and its effects. Of course, I don’t personally know or hate Blechman - I enjoy his work at The Onion, and it is a human right and leisure to complain about “how bad things are currently”. Ask a person what they think was a good year, and I could counter them with little effort. A lot of Americans tend to reminisce that 2016 was an excellent year. Really? I don’t think so. We tend to hate the Current Thing™ no matter when (although Frank Ocean did release Blonde in 2016 - I get why people miss that year).
The other day I tweeted something along the lines of why we shouldn’t take seriously those criticisms of blockchains which claim that the blockchain is a glorified linked list or a glorified database - since such claims represent a fundamental misunderstanding of all three of those things. A similar remark goes for Stross and technology optimism.
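To make that distinction concrete, here is a minimal sketch of my own (not from the tweet) of what a linked list misses: in a blockchain, each block commits to a cryptographic hash of its predecessor, so rewriting history is detectable, which a plain linked list cannot offer - and that is before consensus even enters the picture.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}

def verify(chain: list) -> bool:
    # Recompute each predecessor's hash and compare with the stored link.
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != block_hash(prev):
            return False
    return True

# Build a tiny chain: each block commits to the hash of the one before it.
genesis = make_block("genesis", "0" * 64)
b1 = make_block("alice pays bob 5", block_hash(genesis))
b2 = make_block("bob pays carol 2", block_hash(b1))
chain = [genesis, b1, b2]
print(verify(chain))  # True

# Tamper with history: a plain linked list wouldn't notice; a hash chain does.
genesis["data"] = "genesis (forged)"
print(verify(chain))  # False
```

This leaves out the consensus and incentive layers entirely, which is rather the point: even the most trivial part of a blockchain is already more than a linked list.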
Charlie Stross, in his blog, goes on to say “And no tour of the idiocracy is complete without mentioning Mark Zuckerberg, billionaire CEO of Facebook, who blew through ten billion dollars trying to create the Metaverse from Neal Stephenson’s novel Snow Crash, only for it to turn out that his ambitious commercial virtual reality environment had no legs” - this is a ridiculously short-sighted, pessimist remark that ignores that in a metaverse (which is essentially a VR headset connected to the internet) you can learn languages in a more immersive way, learn to fix cars, find ways to relax, and much, much more, via a sub-$500 headset - whereas buying a car to learn to flip will cost you upwards of $500.
The Torment Nexus in these books has never really been the technology itself; it has always been the people.
John Hammond should’ve hired a McKinsey consultant (as much as I make fun of them, anyone in a $20 Goodwill suit can tell you that building a Jurassic Park is a dangerous endeavour before you build it).
If an equivalent to the copying process of Egan’s Permutation City, or QNTM’s VHIT process, is ever developed: first, you’re in 2080; second, the likely uses will be immortality and (seemingly cruel) research. But the way code works, we will have a generally good idea of whether the subject enjoys his time in “the world” or is frantic and naked, as he is in Egan’s introduction. This is possible through an extremely advanced programming macro called print(subject_status). It is not ideal to torture things, and people, as the beauty industry found out and as Musk eventually will (although the collateral advantage is that we will solve many problems for people struggling with disabilities).
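The macro is a joke, but the point underneath it stands: if an upload runs as code, its observable state can be inspected and acted on. A toy sketch, with every name in it hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Subject:
    # Hypothetical observable state of a simulated subject.
    name: str
    distress: float  # 0.0 = content, 1.0 = frantic

    def status(self) -> str:
        # The "extremely advanced programming macro", more or less.
        return "content" if self.distress < 0.5 else "frantic"

subject = Subject(name="paul", distress=0.9)
print(subject.status())  # prints "frantic" - and a humane runtime would halt here
```

The hard part, of course, is everything this sketch hides: whether distress is measurable at all in a simulated mind. But "we can read the program's state" is a real difference from the fictional black boxes.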
All of this talk is incomplete without addressing the attack on AI. Let me be clear: I do not think an AGI is a possibility. However, I believe the human mind is computable by a Turing machine, and any other Turing machine, with enough manipulation, will be able to mimic it decently well. We blew right past the Turing test with GPT-3.5. The function of SciFi is to dream - and Stross’ novel “Singularity Sky” does this very well. It has FTL, uploaded minds, cornucopia machines - all of which are simply not possible for a long, long time. But even if they were, Stross ignores in a hand-wavy way how difficult implementing such a regime would be - it is socially impossible, and it financially lacks incentive. But if all of that happened, and an AI system were as integrated in society (the least “fiction” part of the story) - it would not be evil. The idea that AGI is evil has only ever been discussed in SciFi. Before you go on to say “This is just like Black Mirror!” every time an AI suggests destroying humans, consider this: it has no incentive to; it’s saying that because it’s being trained on a poor dataset of fiction for a task which isn’t relevant to fiction.

I’d like to mention that, inspired by Stross’ work, cell phones were dropped in Taliban-infested Afghanistan for free information transfer. The same has happened - though not by his inspiration - in North Korea, and it wouldn’t be possible without modern technology. Stross, though writing about cypherpunk, has been somewhat against crypto - and while a lot of people are, I argue that crypto is the way to win the monetary aspect of the encryption wars, but I digress. A lot of proponents of X-Risk (or AI safety) jump to their SciFi nightmare of a rogue AI overturning human civilization.
In reality, AI already suffers from bias, user manipulation, addiction, and misinformation - all linked to flaws that already exist in humans, but exacerbated by the scale AI provides (linking only to SFW links - young women have been harassed with deepfake technology, and older people have been scammed all the same). These are real risks which SciFi doesn’t discuss, and the e/acc or optimist community needs to come together to figure out a solution. Guard rails that aren’t merely elevated (but possibly vulnerable) system prompts will be helpful, but who gets to decide what’s harmful and what’s not? Obvious things like pornographic deepfakes are harmful without a doubt, and most closed-source AI models block such requests and report you, but open-source models still enable it. How do we tackle this? We haven’t even tackled Photoshop-harassers yet, and we’ve submitted to a quiet resignation of “it is how it is”. Fuck no - we need to find ways to regulate these parts, not write pointless attacks on modern technology. Criticize tech billionaires: not only is it fun, it is also very important, but make sure to have a long-sighted vision. This is not an attack on Stross, but on the general pessimist community.
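What a guard rail outside the system prompt could look like, as a minimal sketch: a check enforced by separate code the user’s text never reaches, rather than by instructions the model can be talked out of. This is a toy keyword filter of my own invention, not how any production moderation system works (real ones use trained classifiers):

```python
# Toy guard rail enforced outside the model: the request is screened by code,
# so no amount of prompt "jailbreaking" can talk it out of refusing.
BLOCKED_CATEGORIES = {
    "deepfake": ["deepfake", "face swap of a real person"],
    "harassment": ["harass", "humiliate"],
}

def check_request(prompt: str):
    """Return (allowed, refused_category) before the prompt reaches the model."""
    lowered = prompt.lower()
    for category, markers in BLOCKED_CATEGORIES.items():
        if any(marker in lowered for marker in markers):
            return (False, category)  # refuse, and log which policy tripped
    return (True, None)

print(check_request("make a deepfake of my classmate"))  # (False, 'deepfake')
print(check_request("draw a cat wearing a hat"))         # (True, None)
```

The sketch also makes the hard question in the paragraph above concrete: the contents of BLOCKED_CATEGORIES are exactly the "who gets to decide what's harmful" problem, and with open-source models the user can simply delete the check.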