General Discussion
We Tried to Detect Bots in 500 Comments. We Found a More Interesting Problem.
https://www.musubilabs.ai/post/ai-comment-detection-what-we-found

Thoughtful.
I'll just quote this observation.
We're seeing this trend in every online space with user generated content - contributions became cheap, and the cost is moving downstream onto the people trying to read, review, and maintain.
Many platforms have built ranking systems that reward engagement. A bot comment that says "Spot on, couldn't agree more" counts as engagement and boosts the post. The poster benefits from visibility and has little reason to report it. The platform benefits because it looks like activity. In the short term, the reader is mainly the one who loses, but as the integrity and quality of the platform start to fail, everyone does.
The model that incentivizes platforms to produce empty engagement at scale isn't sustainable. A feed full of plausible-sounding noise trains readers to skim, mute, or leave. The scarce resource online is no longer content; it is attention and trust. Platforms succeed by building trust through consistent judgement, curation, and protecting attention.
Mods!
Attilatheblond
(9,161 posts)New motto of the United States?
erronis
(24,383 posts)Perhaps less so for regimes with tight internet security/firewalls.
Flo Mingo
(516 posts)Ads on social media with super generic top comments like: I tried this and it was great. Or Just got mine and I love it.
usonian
(26,336 posts)AZJonnie
(3,952 posts)Advertisers expect to pay rates based on actual eyeballs/real people with money to spend. As non-human, BS bot posts on a platform rise as a % of total, it will lead to the advertiser's conversion rates on these ads falling lower and lower. This will lead to them be less willing to pay what the website is asking to advertise on them.
On today's internet, there's so much end-to-end tracking of the whole advertising/engagement/conversion process that the advertisers have means by which to determine whether it's likely a particular provider has an inordinate amount of bot-posts simply because their ads there are underperforming (not leading to conversions). So the people who run the sites DO have an incentive to cut back on the BS level because they're not going to get away with misrepresenting their 'traffic' (which bots contribute to) forever.
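The detection logic described above can be sketched in a few lines. This is an invented illustration, not any ad network's actual method: all site names and numbers are made up, and the 25%-of-baseline threshold is an arbitrary assumption.

```python
# Hypothetical sketch: an advertiser flags placements whose conversion
# rates suggest inflated (bot-heavy) click traffic. All data is invented.

def conversion_rate(clicks: int, conversions: int) -> float:
    """Fraction of ad clicks that led to a purchase or signup."""
    return conversions / clicks if clicks else 0.0

# (site, clicks, conversions) -- made-up campaign data
placements = [
    ("site-a", 10_000, 220),   # ~2.2% of clicks convert
    ("site-b", 12_000, 250),   # ~2.1%
    ("site-c", 50_000, 90),    # ~0.18%: lots of "clicks", few buyers
]

rates = {site: conversion_rate(c, v) for site, c, v in placements}
baseline = sum(rates.values()) / len(rates)

# Flag placements converting at well under the campaign average
# (threshold chosen arbitrarily for the example):
suspect = [s for s, r in rates.items() if r < 0.25 * baseline]
print(suspect)  # site-c stands out
```

A real pipeline would control for audience and creative differences before blaming bots, but the underperformance signal is the same idea.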
CrispyQ
(41,068 posts)Is there a real human at the end, somewhere? How many accounts do they manage? Do they have to type each response in manually, or is there a program to do it?
Or, are these accounts run by a computer or AI, & how exactly does that work?
usonian
(26,336 posts)Here, we are talking about social bots.
Short version (from Wikipedia):
An Internet bot (also called a web robot or robot), or simply bot, is a software application that runs automated tasks (scripts) on the Internet, usually with the intent to imitate human activity, such as messaging, on a large scale.
Many just go out and grab content for AI (large language models).
And there are auction bots (which is why I avoid auctions like the plague), and trading bots, and attack bots (that clobber websites), and so on and so on.
More here
Lots more.
https://en.wikipedia.org/wiki/Internet_bot
It's estimated that half of all internet traffic at this point is bots: software agents, not humans (though humans initially program those agents).
Some call the internet "dead" already on account of them.
https://builtin.com/articles/the-dead-internet-theory
erronis
(24,383 posts)Not in the operation of the bot. It is usually totally automated by computer programs. These programs can interact with thousands of businesses (web sites such as DU) at the same time. The typing of information is done at computer speed - 100s of characters almost instantly.
Sometimes a human will augment the bot to supply information needed for the interaction.
The programs are initially written by human beings (and more recently by AI programming) but once they are unleashed they pretty much have full control.
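The "thousands of sites at the same time" point is easy to demonstrate: async I/O lets one program interleave many slow network waits. The sketch below (entirely invented; the "sites" are simulated sleeps rather than real HTTP requests) shows a single thread holding a thousand concurrent conversations.

```python
# Assumed illustration, not code from any actual bot: one process,
# one thread, 1000 concurrent simulated "site interactions".
import asyncio
import random

async def interact(site: str) -> str:
    # A real bot would issue an HTTP request and submit a form here;
    # we simulate the network round-trip with a short sleep.
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return f"{site}: posted"

async def main(n: int = 1000) -> list[str]:
    # gather() runs all n coroutines concurrently on one event loop.
    return await asyncio.gather(*(interact(f"site-{i}") for i in range(n)))

results = asyncio.run(main())
print(len(results))  # 1000
```

Because the waits overlap, the whole batch finishes in roughly the time of the single slowest round-trip, not the sum of all of them.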
usonian
(26,336 posts)So I bitched to central services that it should be the opposite.
They should be rejecting FAST password entry, because an attacker (bot) often just repeatedly tries to log in to accounts using the most common "don't make me think" passwords (1) and slowing them down raises the cost of the attack versus the benefits.
Like these:
(1) Here are the top 10:
123456
admin
12345678
123456789
12345
password
1234567890
1234567
123123
111111
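The throttling idea above can be sketched concretely. This is an assumed design for illustration, not any site's actual code: after each failed login, the delay before the next allowed attempt doubles, which costs a human who mistypes once almost nothing but makes cycling through a password list like the one above expensive.

```python
# Sketch (invented) of per-account login throttling with exponential backoff.
import time

class LoginThrottle:
    def __init__(self, base_delay: float = 1.0, cap: float = 60.0):
        self.base_delay = base_delay
        self.cap = cap
        self.failures: dict[str, int] = {}       # account -> failed attempts
        self.locked_until: dict[str, float] = {} # account -> earliest next try

    def attempt_allowed(self, account: str, now: float) -> bool:
        return now >= self.locked_until.get(account, 0.0)

    def record_failure(self, account: str, now: float) -> float:
        n = self.failures.get(account, 0) + 1
        self.failures[account] = n
        # 1s, 2s, 4s, 8s, ... capped so a victim isn't locked out forever.
        delay = min(self.base_delay * 2 ** (n - 1), self.cap)
        self.locked_until[account] = now + delay
        return delay

    def record_success(self, account: str) -> None:
        self.failures.pop(account, None)
        self.locked_until.pop(account, None)

t = LoginThrottle()
now = time.time()
print(t.record_failure("alice", now))   # 1.0 -- first wrong password
print(t.record_failure("alice", now))   # 2.0
print(t.attempt_allowed("alice", now))  # False until the delay elapses
```

Real systems also throttle by source IP and cap total attempts, but the asymmetry is the point: the attacker pays the delay on every guess, the legitimate user almost never does.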
erronis
(24,383 posts)between interactions. And would try not to overload the host with my queries. Nowadays, I think all of this is passe.
Of course, a lot of form filling on the web is now done by password managers and browser-remembered fields, as well as by copy-and-paste. No delays in the inter-character typing there.
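The "polite bot" habit described above amounts to a few lines of code. This is an invented sketch of the practice, not anyone's actual crawler: pause for a randomized, human-scale interval between requests so the host is never hit at machine speed.

```python
# Assumed illustration of courteous inter-request delays.
import random
import time

def polite_delay(min_s: float = 2.0, max_s: float = 8.0) -> float:
    """Sleep for a random human-like interval; return how long we slept."""
    pause = random.uniform(min_s, max_s)
    time.sleep(pause)
    return pause

# Hypothetical crawl loop using it:
# for page in pages_to_fetch:
#     fetch(page)        # fetch() is assumed, not defined here
#     polite_delay()
```

Randomizing the interval (rather than sleeping a fixed amount) also avoids the perfectly regular timing signature that gives automated traffic away.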
usonian
(26,336 posts)Which, of course, totally discourages long passwords, such as good practice demands and which password manager apps facilitate.
The internet is a soccer match where own goals predominate.
That part never changes.
flvegan
(66,465 posts)Information age of hysteria
It's calling out to idiot America
usonian
(26,336 posts) George Orwell
1984, Part 2, Chapter 9: Winston reading from Emmanuel Goldstein's book, The Theory and Practice of Oligarchical Collectivism.
------
The book within the book, "The Theory and Practice of Oligarchical Collectivism" by the fictional Emmanuel Goldstein, is available from the Internet Archive (PDF):
https://dn721904.ca.archive.org/0/items/the-theory-and-practice-of-oligarchical-collectivism_202408/THE%20THEORY%20AND%20PRACTICE%20OF%20OLIGARCHICAL%20COLLECTIVISM.pdf
muriel_volestrangler
(106,499 posts)I hope the OP article was human-written, but it does seem that their personal solution will be AI-based, whether or not it really helps.
blubunyip
(297 posts)"holistic, AI-enabled moderation" ....
hmmm
erronis
(24,383 posts)GreatGazoo
(4,668 posts)User growth has been flat for a long time. CW is that their karma system is both gamed by AI and hostile to real users who can't compete with "karma farming" and fake posts that leverage LLMs and machine learning to generate high engagement fictions on subs like AITAH and AIO.
Melon
(1,602 posts)It such garbage. I have walked away from all of my accounts. Its just all made up bs. Real users get banned frivolously for not having group think while bots post the same questions repeatedly to generate engagement. Reddit is horrible.