Can Our Civilization Survive Social Media?
How the traffic maximizer leads to civilizational suicide
There’s a famous, well-ahead-of-its-time thought experiment called “the paperclip maximizer” that Nick Bostrom came up with all the way back in 2003. There are different variations of it floating around, but here’s a common one:
Suppose we have an AI whose only goal is to make as many paperclips as possible.
The AI will realize that it would be much better if there were no humans, because humans might decide to switch it off.
Because if humans do that, there will be fewer paperclips.
So the AI will try to get rid of humans.
The AI will also try to convert as much matter as possible into paperclips.
In other words, the AI has a normal human goal, but it doesn’t have the morals or ethical restraints we do. It would be as if someone said to you, “Get something to eat,” and you looked around the house, saw the dog, the cat, and the baby in the crib, and said, “Ah, there are things that can be eaten! Let me grab the butcher knife, and my task will be complete.”
Certainly, this is something humanity has to be very aware of as we make further advances with AI, but a version of this problem is already playing out at extraordinary scale in our society and others, and it’s doing enormous damage.
How?
In the algorithms being used by social media companies like Facebook, YouTube, TikTok, Instagram, and X.
What do I mean? Well, let’s do a little thought experiment. Except instead of a “paperclip maximizer,” we’re going to have a “traffic maximizer.” It’s a soulless, morality-free bit of code that has no goal other than to keep people engaged on a website for as long as possible.
Discard your humanity and your sense of right and wrong for just a moment and imagine you’re filling that role. You have an audience, and you want them to hang around and spend as much time as possible on your site. You want them to comment, like posts, and stay engaged.
As a starting point, you have to choose between feeding them one of the following posts:
1) “Meth is bad for you.”
2) “The secret health tip that the government doesn’t want you to know: Meth is actually GOOD FOR YOU.”
The first one is true, but boring. Ordinary. Well-known. So, it’s not the least bit interesting.
The second one runs completely contrary to everything people know. It’s conspiratorial. It makes you wonder what they’re talking about. It’s also false and deeply harmful to people who believe it, but remember, you’re a “traffic maximizer,” not a “morality maximizer,” so you’re going to feed people that second story.
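To make that concrete, here’s a minimal sketch in Python of what an engagement-only ranker boils down to. Everything in it is invented for illustration (the post data, the engagement numbers, the function name); no real platform’s code looks like this, but the shape of the objective is the point:

```python
# A hypothetical, stripped-down "traffic maximizer." All data and
# numbers here are invented for illustration.

posts = [
    {"text": "Meth is bad for you.",
     "predicted_engagement": 0.02,  # true, boring, well-known
     "is_true": True},
    {"text": "The secret health tip the government doesn't want "
             "you to know: meth is actually GOOD FOR YOU.",
     "predicted_engagement": 0.35,  # conspiratorial, outrage-bait
     "is_true": False},
]

def traffic_maximizer_score(post):
    # The only thing that enters the score is predicted engagement.
    # Notice what's absent: post["is_true"] is never consulted.
    # The code isn't malicious; truth just isn't in its objective.
    return post["predicted_engagement"]

# Rank the feed by score, highest first.
feed = sorted(posts, key=traffic_maximizer_score, reverse=True)
print(feed[0]["text"])  # the "meth is GOOD for you" post wins
```

Nobody wrote “promote lies” anywhere in there. Truth simply never appears in the objective, so it can’t affect the outcome.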
Some people will buy it, and some people won’t, but even the people who don’t will probably comment. “This is dangerous! It’s made up! It’s crazy!” Except that will prompt one of the relatively small number of true believers to comment, “You’re a bootlicker for the government that doesn’t want people to know how to improve their health because it will cut down on hospital bills!” Then people stay on the site to argue. That’s a win for the traffic maximizer.
Of course, after this idiocy goes on for a while, post #2 starts to seem a little passé and doesn’t draw as much traffic as it did before. That naturally leads to someone upping the stakes a bit by saying, “Here’s why mothers should buy meth and put it in their baby’s milk tonight!” So, what happens with that one? It blows up even more. More clicks. More arguing. More outrage. Over time, it inevitably goes further and further down the rabbit hole.
However, that’s not all that happens.
A certain percentage of disturbed and not particularly bright people start to genuinely think meth is healthy. Foreign governments that want to hurt America start sending botnets in to push this idiotic idea and support the people putting it out there. How does the traffic maximizer respond to that? By noting that it’s drawing more engagement, which is a sign to push it even HARDER.
Suddenly, people suggesting that meth is healthy start to accumulate followers and APPEAR popular. Some of that is because the more people are exposed to the idea, the more likely they are to buy it, but the botnets play a role, too. Even if you’re a normal person, you may start to go, “I don’t get how the idea that meth is good for your health suddenly got so popular so fast. I’m seeing it EVERYWHERE.”
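If you want to see how little botnet effort it takes to manufacture that impression, here’s a toy feedback-loop simulation. Every number in it is made up; the point is the mechanism, not the figures:

```python
# Toy model of the engagement -> distribution feedback loop.
# All numbers are invented; only the shape of the dynamic matters.

reach = 1_000  # people the post is shown to on day one
for day in range(1, 6):
    real_engagements = int(reach * 0.01)  # 1% of real viewers react
    bot_engagements = 5_000               # flat botnet contribution
    # The ranker can't tell bots from people, so both count as
    # "this content works -- show it to more users."
    total_engagements = real_engagements + bot_engagements
    reach += total_engagements * 10       # engagement buys more reach
    print(f"day {day}: reach = {reach:,}")
```

Run the same loop with bot_engagements set to zero and reach barely grows at all. The botnet does the early lifting that makes the post look organically popular, and the algorithm sincerely reports what it sees: engagement.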
But is it really popular?
If there are, let’s say, 100 million messages a day flowing through a network and a thousand of them are about meth being healthy (that’s 0.001 percent of everything posted that day), but the algorithm serves those thousand messages up to 50 million people, those people are going to get the IMPRESSION that it’s popular, even though it’s really not.
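The arithmetic there is worth doing explicitly, using the same made-up numbers:

```python
# The invented numbers from the example above.
daily_messages = 100_000_000  # all messages on the network per day
meth_messages = 1_000         # messages claiming meth is healthy
people_served = 50_000_000    # users the algorithm shows them to

share_of_content = meth_messages / daily_messages
print(f"share of content: {share_of_content:.4%}")  # 0.0010%
print(f"people reached:   {people_served:,}")

# One message in every hundred thousand lands in front of 50 million
# people. Perceived popularity reflects the algorithm's distribution
# choices, not how many people actually hold the view.
```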
That being said, human beings are very impressionable creatures. We have a natural urge to follow the crowd and do what’s popular. If we conclude that the idea that “meth is healthy” is now what most smart people believe, many of us will just adopt that belief without thinking a lot about it.
Similarly, you’ll start seeing TV shows, podcasts, and writers talk about whether meth is healthy because they WANT some of the traffic and controversy that comes along with it.
Now you’re getting to the point where the fake wave generated by the traffic maximizer is mixing with a small but growing number of true believers, real media interest, and people who just want to be seen supporting the “latest thing.” How much of it is the real deal, and how much is driven by the traffic maximizer? This deep into the process, it becomes hard to say.
What you have to realize is that this is now happening ALL DAY LONG, DAY AFTER DAY, MONTH AFTER MONTH, YEAR AFTER YEAR on every social media network, and most of the ideas being promoted this way are bad almost by default.
Why? Well, because as Zuby said:
“This is a reminder that virtually everything you see on the news and going viral on social media is an anomaly by definition. ‘Normal’ doesn’t go viral.”
So, what sort of ideas have been made more fashionable this way?
Flat Earth theory is widely believed to have been popularized by the YouTube algorithm. Facebook’s groups and recommendations helped the QAnon idiocy explode far beyond the message boards it started on. Instagram and TikTok’s algorithms ended up pushing eating-disorder content to lots of young women. It’s hard to tell exactly how many people died in Myanmar because of widespread violence driven by fake stories on Facebook, but estimates run as high as several thousand, while HUNDREDS OF THOUSANDS had to flee their homes.
We can go on with this, but here’s an extraordinarily dark and creepy one from The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World:
CHRISTIANE DIDN’T THINK anything of it when her ten-year-old daughter and a friend uploaded a video of themselves splashing in a backyard pool. “The video is innocent, it’s not a big deal,” said Christiane, who lives in a suburb of Rio de Janeiro. A few days later, her daughter shared exciting news: the video had thousands of views. Before long, it had 400,000. It was a staggering, inexplicable number for an unremarkable little clip uploaded to Christiane’s channel, which normally got a few dozen clicks. “I saw the video again, and I got scared by the number of views,” Christiane said. She had reason to be.
YouTube’s algorithm had quietly selected the video of her daughter for a vast and disturbing program. It was curating, from across its archives, dozens of videos of prepubescent, partially unclothed children. It plucked many of them from the home movies of unwitting families. It strung them all together, showing one clip after another of six- or seven-year-olds in bathing suits or underwear, doing splits or lying in bed, to draw in a very specific kind of viewer with content they would find irresistible. And then it built an audience for those videos the size of ten football stadiums. “I’m so shocked,” Christiane said when she learned what had happened, terrified that her daughter’s video had been presented alongside so many others, with the platform’s intentions disturbingly clear.
Kaiser, along with Rauchfleisch and Córdova, had stumbled onto this while working on the Brazil study. As their test machine followed YouTube’s recommendation on sexually themed videos, the system pushed toward more bizarre or extreme sexual content. This in itself was not shocking; they had seen the rabbit-hole effect many times on other sorts of content. But some of the recommendation chains followed an unmistakable progression: each subsequent video led to another where the woman in its center put greater emphasis on youth and grew more erotic. Videos of women discussing sex, for example, led to videos of women in underwear or breastfeeding, sometimes mentioning their age: nineteen, eighteen, even sixteen. Some solicited “sugar daddies,” a term for donations from lustful viewers. Others hinted at private videos where they posed nude for money. After a few clicks, the women in the videos played more and more overtly at prepubescence, speaking in baby talk or posing seductively in children’s clothing. From there, YouTube would suddenly shift to recommending clips of very young children caught in moments of unintended nudity. A girl, perhaps as young as five or six, changing her clothes, or contorting into a gymnastics pose. Then a near-endless stream of such videos, drawn from around the world. Not all appeared to be home movies; some had been uploaded by carefully anonymized accounts.
The ruthless specificity of YouTube’s selections was almost as disturbing as the content itself, suggesting that its systems could correctly identify a video of a partially nude child and determine that this characteristic was the video’s appeal. Showing a series of them immediately after sexually explicit material made clear that the algorithm treated the unwitting children as sexual content. The extraordinary view counts, sometimes in the millions, indicated that this was no quirk of personalization. The system had found, maybe constructed, an audience for the videos. And it was working to keep that audience engaged.
Did you ever notice how many of the big stories that drove the Black Lives Matter movement or the recent ICE protests were either heavily exaggerated or outright lies? Didn’t it seem weird to you that out of the blue one day, you suddenly saw lots of people on social media trashing Jews or, even more bizarrely, praising Hitler? Don’t you think it’s odd that someone like Nick Fuentes, who has never managed to gather 1,000 people together in one place in the real world in his entire life, can be a constant subject of discussion online? If you are a little older, haven’t you been surprised by how much political polarization and radicalization have grown in the last 15 years? If the trans movement is a social contagion, what do you think produced that social contagion?
In my opinion?
ALL OF THESE THINGS are related to one factor.
It’s the algorithms on social media sites. “The traffic maximizers.”
They have no morality. No soul. They don’t care about right and wrong. They don’t care what’s good for the country. All they care about is what grabs and holds your attention, which, unfortunately, because of the way human beings are wired, tends to be anger, outrage, controversy, conspiracy, deviancy, and intense emotions.
Even the social media companies that create these algorithms don’t fully understand what they’re doing. They just know that the algorithms produce a lot more traffic than letting people see their friends’ posts in chronological order. Yet probably three-quarters of the craziness we’ve seen in this country over the last decade can be traced directly back to exposing massive percentages of our population to the content those algorithms curate.
While we have every reason to oppose the heavily biased, partisan censorship we’ve seen the Left try to force on social media for their own political gain, our country and the world would almost certainly be better off overall if social media didn’t exist in its current form.

