Twitter's bot problem is getting weird with ChatGPT
ChatGPT is so affordable that abusive actors can use OpenAI's technology to spew junk content in every direction. And like a port scanner probing for a responsive service, these actors will find ways to monetize social systems and scam vulnerable people. The layoffs (archived) and consistent focus on rewarding stockholders (archived) are eroding the systems in place for safety (archived). Content on the web, videos on YouTube, music on Spotify: platform-distributed content of every kind is being poisoned by bad actors wielding generative technology.
Weird Tweets
Monday evening, I found out that someone's bot was posting my content with factually incorrect summaries.
Ed25519 was published in 2011, not 2015. Rather, Curve448 was published in 2015. Your model is terrible at reading comprehension.
That is not what I said or suggested at all. The early designs of service accounts did not clearly identify the key tied to the JWT. Their documentation lacked the kid field in the JWT header. The real APIs today do include a kid field. I reported their documentation as inaccurate.
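As background for that exchange: a JWT's first base64url segment is its JOSE header, and the kid (key ID) field there is what tells a verifier which public key to look up before checking the signature. A minimal Python sketch of reading that field; the token below is fabricated for illustration and nothing here verifies a signature:

```python
import base64
import json

def jwt_header(token: str) -> dict:
    """Decode the JOSE header (first segment) of a JWT without verifying it."""
    header_b64 = token.split(".")[0]
    # Base64url padding is commonly stripped from JWTs; restore it before decoding.
    padded = header_b64 + "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# A made-up token whose header carries a key identifier (payload/signature are dummies).
example = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "typ": "JWT", "kid": "key-2023-01"}).encode()
).rstrip(b"=").decode() + ".e30.signature"

print(jwt_header(example)["kid"])  # prints "key-2023-01"
```

Without a kid in the header, a verifier is left guessing which of the issuer's published keys to try, which is exactly the ambiguity the tweet above complains about.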
Ed25519 is not new anymore and was not made for cryptocurrencies. In fact, it has terrible problems in consensus protocols; Ed25519 should not be used in cryptocurrencies. Consider Ristretto255 or Decaf448.
Whatever this truncated nonsense is...
Others too
Turns out, they're doing it to others in cryptography too.
Filippo, you'd raise a brow at this, right? Where in your article do you mention man-in-the-middle?
Soatok absolutely would not promote PGP. In fact, he states:
Next to the weird tweets
And oddly... each and every one includes one of these "If you're smart, click this short link" lures.
Each profile's tweets are arbitrarily diverse in the same way, and I suspect that diversity is used to mask the phishing tweets. That is my take, though another explanation is also plausible.
@cendyne @soatok Assume this is Twitter, right? It's to farm engagement, like all the bots that now reply to posts from popular accounts. I don't fully understand how it works from an algorithmic standpoint, nor do I know exactly how successful it is in doing so, with regards to what is gained out of it.
I just know it's to farm the (new) engagement system on Twitter for some sort of gain.
Aside to my referenced articles
It is also a thing to see bots react to bots.
These are scams, and Twitter no longer prioritizes content safety or halting scam proliferation.
Elsewhere on Spotify
This problem is not limited to Twitter, or even to written media. It affects music and videos too.
Even in 2019, Spotify wrestled with fake artist streams (archived). These platforms funnel minuscule amounts of money to creators per stream. Abusive actors have created, and continue to create, fake artists with generic, tasteless tracks to divert revenue away from honest musicians.
If this generative tech isn't used to defraud vulnerable people, it is used to defraud the platforms.
There are even theories that these spammy leeches benefit the platforms, which lets a platform look the other way by deprioritizing fraud prevention. After all, engagement numbers are up for shareholders and the platform's bottom line is even higher. At least, that's one popular theory going around.
Spotify is addressing the AI-generated music problem (archived). Or at least, they say they are. Obvious examples, like those shared by Adam Faze above, remain on Spotify.
Popular videos on YouTube and TikTok
We're entering a weird stage where algorithms promote meaningless content. Before, it was created by humans to get ad dollars. Open TikTok or YouTube Shorts without an account and you'll be recommended strange videos: visually stimulating, even hypnotic footage coupled with something unrelated, either by splitting the video in half or by narrating an average, mundane story over it. Below is a short sample. I will not waste your time with the full thing.
The above video is one I was randomly given. Halfway through I realized the meat cooking and the audio were unrelated. In fact, the narration is just someone reading a Reddit post aloud. The uploader of this video did not create anything novel; they cobbled together unrelated content that each carried some form of relatable stimulation. This is "sludge." This is a person abusing a platform for ad revenue with sludge content. It is also the most forgettable content I have ever seen, yet the content algorithms prize most for engagement.
Sludge content is proliferating, either to abuse a platform or to hide scams from it. ChatGPT and other generative technologies accelerate sludge production in an even more dangerous direction: misinformation, misrepresentation, and threats that will break down how we as people communicate with one another.
Lastly, elsewhere on the web
I got a surprise hit the other week from this strange website called "britneyblog." Somehow, this author found me and linked to my most cited article: A Deep dive into Ed25519 Signatures.
I will not directly link it, though if you're curious about 142 pages of sludge SEO spam, see www.britneyblog.online yourself.
Truly, I am baffled as to how this person benefits. Is this a ChatGPT-powered bait and switch (archived)? Will "Britney" sell the website to someone once it has enough SEO clout from junk content?
Are they chewing through Fiverr contracts to enhance arbitrary products online?
Who would pay for a review of a plugin for ten-year-old, vulnerable email software? Or is this some SEO-poisoning honeypot up for sale?
Or is it all sludge and I have yet to find the scam hidden inside?
Why is Sludge so effective?
Hank Green shares an interesting take on why generated images do not get called out until someone announces that an item is generated.
In his short TikTok essay, Hank describes a few reasons this nonsense passes human bullsh*t filters.
In short, this content does not flag enough subconscious vibes and people do not look closely because:
- Most people do not care about the Pope, or the subject in question. They likely do not know much about the subject.
- This content does not challenge the worldviews of the viewers. There is little cause to investigate it.
- Or, it confirms a preexisting bias.
Each of these reasons applies to sludge as well. Sludge content is not something most people care about. It does not challenge the content consumer's views. And it is a passable impostor next to the views, grounded in reality or fiction, that people hold.
If you were not experienced in cryptography, or did not care about it, would you have spotted the factual errors in those tweets above about Filippo, Soatok, and myself?
Conclusion on Sludge
Sludge is now produced by people with generative technology and published through disposable bot accounts on the platforms where we express ourselves. Not only on platforms provided by big tech, but also on websites surfaced through search engines, themselves provided by big tech. These abusive actors waste our finite human time (often measured as engagement) with sludge. Sludge adds platform-positive noise that hides scams and abuse of real people.
Sludge is spam. Sludge is platform-positive spam.
More abusive actors will p*ss in the pool of information society. Guard your time and your attention; you have only so much of your finite life to trudge through sludge.