General Discussion
We Tried to Detect Bots in 500 Comments. We Found a More Interesting Problem.
https://www.musubilabs.ai/post/ai-comment-detection-what-we-found
Thoughtful.
I'll just quote this observation.
My first realization that things were about to get really weird was last year, when a Reddit user blogged about reading a Reddit post lamenting the death of authentic online interaction only to discover it was written by a bot selling AI-generated books through an affiliate link. Hundreds of commenters engaged with it sincerely. The author couldn't tell which of them were real either.
We're seeing this trend in every online space with user-generated content: contributions became cheap, and the cost is moving downstream onto the people trying to read, review, and maintain.
Many platforms have built ranking systems that reward engagement. A bot comment that says "Spot on, couldn't agree more" counts as engagement and boosts the post. The poster benefits from the visibility and has little reason to report it. The platform benefits because it looks like activity. In the short term the reader is mainly the one who loses, but as the integrity and quality of the platform start to fail, everyone does.
The model that incentivizes platforms to produce empty engagement at scale isn't sustainable. A feed full of plausible-sounding noise trains readers to skim, mute, or leave. The scarce resource online is no longer content; it's attention. Platforms succeed by building trust through consistent judgment, curation, and protection of readers' attention.
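The ranking incentive described above can be sketched in a few lines of Python. This is a hypothetical scoring function for illustration, not any platform's actual code: a ranker that counts raw activity can't tell a substantive reply from an empty bot comment, so the bot-flooded post wins.

```python
def engagement_score(post):
    """Rank purely by activity volume: every comment counts the same,
    whether it's a thoughtful reply or a bot's "Spot on, couldn't agree more"."""
    return post["comments"] + 2 * post["shares"]

# Hypothetical posts: one with a few real replies, one flooded by bots.
thoughtful = {"comments": 3, "shares": 1}    # three substantive human replies
bot_boosted = {"comments": 40, "shares": 0}  # forty empty bot replies

print(engagement_score(thoughtful))   # 5
print(engagement_score(bot_boosted))  # 40 -- the bot-flooded post ranks higher
```

The point of the sketch is that nothing in the scoring function can see comment quality, so anyone who can generate cheap comments can buy visibility.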
Mods!
21 replies
usonian
Wednesday
OP
The ones likely to put their foot down on something like this will be advertisers.
AZJonnie
Wednesday
#3
In my university sysadmin job, I noticed that some systems timed you out quickly if you dawdled while typing a password.
usonian
Wednesday
#16
I did electronic and optical engineering to start. Built my own S-100 systems back in the day.
usonian
Wednesday
#20
Sounds a bit like my history. I'll DM you so I won't clutter these airwaves.
erronis
Wednesday
#21
The role of bots and AI has set off a big debate around Reddit's stock and earnings.
GreatGazoo
Wednesday
#10