#ai


RE: https://tldr.nettime.org/@w0bb1t/115905221426023536

The website invites users to manipulate tones in real time, simulating climate-induced language change and tonal erosion. Through this interaction, users generate “glitched” speech that remains comprehensible to human listeners yet becomes unparseable to speech recognition systems. This deliberate mismatch exposes the limits of machine listening and proposes tonal instability as a form of digital resistance.
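The core trick described above — perturbing the pitch contour enough to break machine listening while staying audible to humans — can be sketched in a few lines of numpy. This is a hypothetical illustration of the general idea, not the site's actual code; the function name and parameters are my own.

```python
import numpy as np

def tone_jitter(signal, sr, chunk_ms=80, max_shift=0.15, seed=0):
    """Warp the pitch of short chunks of `signal` by random factors.

    Illustrative sketch: each roughly syllable-length chunk is resampled
    by a random factor in [1 - max_shift, 1 + max_shift], eroding the
    tone contour while keeping the signal audible.
    """
    rng = np.random.default_rng(seed)
    chunk = max(1, int(sr * chunk_ms / 1000))
    out = []
    for start in range(0, len(signal), chunk):
        seg = signal[start:start + chunk]
        factor = rng.uniform(1 - max_shift, 1 + max_shift)
        # Resampling a chunk stretches/compresses it in time,
        # which shifts its perceived pitch.
        n_out = max(1, int(len(seg) / factor))
        x_old = np.linspace(0.0, 1.0, len(seg))
        x_new = np.linspace(0.0, 1.0, n_out)
        out.append(np.interp(x_new, x_old, seg))
    return np.concatenate(out)

# Example: jitter a 1-second 220 Hz tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
glitched = tone_jitter(tone, sr)
```

A real system would do this on live microphone input; the point is only that small, random, local pitch warps are cheap to apply and hard for a model trained on clean speech to undo.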

Resistance never disappears—it only continuously transforms:

Under authoritarian surveillance: encoded folk songs
Under algorithmic surveillance: glitched tones

Hey guys, gals, people and pets, Satya Nadella is getting tired of people calling AI slop output AI slop. If everyone could stop, that'd be great, as it is upsetting Microslop's shareholders.

Don't start calling Microsoft Microslop or use the tag, as it might burst the AI bubble, and then Microslop might have to take SlopPilot out of Windows. As you know, that would be terrible.

https://www.windowscentral.com/artificial-intelligence/microslop-trends-on-social-media-backlash-to-microsofts-on-going-ai-obsession-continues

If you want a rigorous analysis of why statistical #AI models collapse when continuously trained on their own data without external supervision and constraints, read this amazing paper from last year.

If you want a visual intuition of what model collapse looks like, watch this video.

When AI stares at its own reflection for too long, and its inference is rooted purely in statistics rather than reasoning, collapse becomes statistically inevitable.
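The "staring at its own reflection" effect can be demonstrated with a toy experiment: fit a Gaussian to samples drawn from the previous generation's fit, with no fresh real data, and watch the learned variance shrink toward zero. This is a minimal sketch of the phenomenon, with parameters chosen for illustration, not taken from the paper.

```python
import numpy as np

# Toy model collapse: each generation is trained only on samples from
# the previous generation's model. Estimation noise compounds, and the
# learned distribution's spread collapses.
rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0           # generation 0: the "real" data distribution
n, generations = 20, 1500      # small samples exaggerate the effect
stds = []
for _ in range(generations):
    samples = rng.normal(mu, sigma, size=n)    # "train" on own output
    mu, sigma = samples.mean(), samples.std()  # refit, discarding real data
    stds.append(sigma)

print(f"std after 1 generation: {stds[0]:.3f}")
print(f"std after {generations} generations: {stds[-1]:.3g}")
```

Each refit is an unbiased-looking step, but the compounding of sampling error acts like a random walk with an absorbing state at zero variance — the tails vanish first, then the diversity, which is exactly the pattern the paper describes.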

Keep this in mind whenever you hear someone talking about “AI models learning from their own outputs” without addressing the statistical parrot issue.

If I were on Spotify, I'd cancel my subscription/account immediately.

King Gizzard pulled their music from Spotify, so the platform has replaced it with knockoffs of their music. As if paying artists almost nothing for streaming wasn't evil enough, they're now using LLMs trained on stolen content to pay artists nothing at all.

https://futurism.com/future-society/king-gizzard-spotify-ai-knockoff

AI impersonation is far more common than people think. Today, Data Workers' Inquiry publishes a powerful essay by Michael Geoffrey Asia, a data worker from Nairobi, Kenya, who reveals an intimate and unsettling form of hidden AI data work: impersonating an AI sexual partner.
The article offers a rare look at the psychological and economic realities behind one of the fastest-growing sectors: AI-assisted intimacy.

https://data-workers.org/michael/

Before the automobile industry invented the catalytic converter, the costs of reducing air pollution seemed astronomical, enough to bankrupt the entire industry. After they invented the catalytic converter, the costs were manageable. And they only invented it because they were faced with the threat of being shut down.

Industries creating harms often claim that controls and regulations are impossible, would bankrupt them, etc., trying to make their existence into a zero-sum game (for some people to have the benefit of our industry, other people must suffer). AI companies claim they must steal copyrighted works because they could not exist otherwise; or be allowed to use as much electricity as they demand in spite of the costs. But it's B.S., and we should stop accepting this rhetoric. Forced to innovate to reduce harms, these industries have innovated, and made themselves even more profitable than they were when they were dragging …