#ai


The rhetoric that limiting or banning AI/generative AI/LLM/diffusion model use is "ableist" or "gatekeeping" is the latest desperate attempt to find an angle through which to force this technology into our lives against our collective will. We need to reject this narrative. Common as it is, it simply doesn't scan. It reads to me as an attempt to co-opt the language of social justice to shame people into accepting an unjust and largely failing technology that they are rightfully rejecting.

Think it through. If you don't accept the use of climate-destroying, electricity-and-fresh-water-sapping, job-destroying, economy-thrashing--and yet mediocre or poorly performing!--technology created by multi-trillion-dollar sociopathic entities, then you are preventing people with less privilege than you have from living their best lives. You are preventing them from learning how to code. You are preventing them from obtaining coveted jobs in the tech sector. You are preventing them from having access to information. …

Mel Andrews on the connections between a naive belief in scientific objectivity (facts and data are "real" and "correct" and "neutral") and eugenics:

Francis Galton, pioneering figure of the eugenics movement, believed that good research practice should consist in “gathering as many facts as possible without any theory or general principle that might prejudice a neutral and objective view of these facts” (Jackson et al., 2005). Karl Pearson, statistician and fellow purveyor of eugenicist methods, approached research with a similar ethos: “theorizing about the material basis of heredity or the precise physiological or causal significance of observational results, Pearson argues, will do nothing but damage the progress of the science” (Pence, 2011). In collaborative work with Pearson, Weldon emphasised the superiority of data-driven methods which were capable of delivering truths about nature “without introducing any theory” (Weldon, 1895).

From The Immortal Science of ML: Machine Learning the Theory-Free …

The present perspective outlines how epistemically baseless and ethically pernicious paradigms are recycled back into the scientific literature via machine learning (ML) and explores connections between these two dimensions of failure. We hold up the renewed emergence of physiognomic methods, facilitated by ML, as a case study in the harmful repercussions of ML-laundered junk science. A summary and analysis of several such studies is delivered, with attention to the means by which unsound research lends itself to social harms. We explore some of the many factors contributing to poor practice in applied ML. In conclusion, we offer resources for research best practices to developers and practitioners.

From The reanimation of pseudoscience in machine learning and its ethical repercussions here: https://www.cell.com/patterns/fulltext/S2666-3899(24)00160-0. It's open access.

In other words, ML--which includes generative AI--is smuggling long-disgraced pseudoscientific ideas back into "respectable" science and rejuvenating the harms such ideas cause.

@DataKnightmare on

Once the Commission treats as "scientific research" any activity that the self-styled researcher labels as such - including activity for commercial purposes - and that can serve as an exception to the GDPR, the GDPR becomes a dead letter.

https://www.agendadigitale.eu/sicurezza/privacy/europa-giu-le-mani-del-gdpr-ecco-i-rischi-che-corriamo/

One example: this surveillance project in Trento, which aimed to use () to identify the criminal behaviors of the dangerous Trento residents passing by on the street, so that those behaviors could be prevented, was a scientific research project, funded by the European Union.

https://www.robertocaso.it/2024/02/05/verso-una-citta-smart-e-dispotica-no-del-garante-privacy-ai-progetti-marvel-e-protector-del-comune-di-trento/

If the … had been in force, could it have been sanctioned and blocked by the Italian Garante?

The Verge article about CoreWeave by Elizabeth Lopatto is amazing.

Let’s start with some very recent history. CoreWeave is a data center company that pivoted in 2022 from crypto. (In 2021, CoreWeave made its money by… mining Ethereum.) Essentially, CoreWeave is a landlord for compute: companies pay for the use of its server racks for AI projects.

...

CoreWeave chief executive officer Michael Intrator, a former hedge fund manager,

...

“They have to continue to borrow to pay interest on the last loan.”

So,
- CoreWeave sits at the center of the AI bubble;
- it used to be a crypto company and also gets its (electric) power from a Bitcoin mining company that makes no money and has CoreWeave as its only customer;
- it's positioned itself as a rentier;
- its interest payments on previous loans exceed its revenue by a significant …
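The dynamic in that quote - borrowing just to pay interest on the last loan - compounds on itself. A toy model makes it concrete (all figures below are invented for illustration; they are not CoreWeave's actual financials):

```python
# Hypothetical illustration: when annual interest exceeds the revenue
# available to service debt, the shortfall must itself be borrowed,
# so the principal (and next year's interest bill) grows every year.

def debt_spiral(principal, rate, revenue, years):
    """Return the principal at the end of each year, borrowing to
    cover any interest that revenue cannot pay."""
    history = []
    for _ in range(years):
        interest = principal * rate
        shortfall = max(0.0, interest - revenue)
        principal += shortfall  # a new loan just to service the old one
        history.append(round(principal, 1))
    return history

# e.g. $8bn of debt at 10% interest, with only $0.5bn/yr to service it
# (figures in $ millions):
print(debt_spiral(8_000.0, 0.10, 500.0, 5))
# → [8300.0, 8630.0, 8993.0, 9392.3, 9831.5]
```

The debt never shrinks: each year's shortfall is added to the principal, so the interest bill grows even if revenue stays flat. That is the structural reason "borrowing to pay interest on the last loan" is a spiral rather than a bridge.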

🗞️🇪🇺 "Europe is set to streamline its AI and privacy laws in a move critics say will appease Big Tech and U.S. President Donald Trump. 127 civil organisations called the proposals 'the biggest rollback of digital fundamental rights in EU history'."

👉 https://www.reuters.com/sustainability/boards-policy-regulation/eu-ease-ai-privacy-rules-critics-warn-caving-big-tech-trump-2025-11-19/

Groundbreaking discovery from Italian researchers at University: they've uncovered how genetic and epigenetic mechanisms, driven by protein and modification, fuel a rare pediatric leukemia (T-cell ALL).

This insight could revolutionize targeted therapies, improving outcomes for the youngest patients.

Precision medicine is the future,
stop using AI for bullsh*t,
use it for this.

Mozilla announces more AI in Firefox, still failing to understand that at this point its market share is made up of tech-savvy people who do not want that.

Mozilla, please. Stop chasing the fads. Get your own values.

People use Firefox to access an open web, to use a sturdy browser, and to avoid being tracked. Build in that direction and that direction only.

Nobody from your current user base will recommend Firefox with AI to their friends.

https://blog.mozilla.org/en/firefox/ai-window/