The rhetoric that limiting or banning AI/generative AI/LLM/diffusion model use is "ableist" or "gatekeeping" is the latest desperate attempt to find an angle through which to force this technology into our lives against our collective will. We need to reject this narrative. Common as it is, it simply doesn't scan. It reads to me as an attempt to co-opt the language of social justice to shame people into accepting an unjust and largely failing technology that they are rightfully rejecting.
Think it through. If you don't accept the use of climate-destroying, electricity-and-fresh-water-sapping, job-destroying, economy-thrashing--and yet mediocre or poorly performing!--technology created by multi-trillion-dollar sociopathic entities, then you are preventing people with less privilege than you have from living their best lives. You are preventing them from learning how to code. You are preventing them from obtaining coveted jobs in the tech sector. You are preventing them from having access to information. You, personally, are responsible for all this. Not the multi-trillion-dollar sociopathic entities who not only created this technology and forced it on us but also helped create the less-privileged conditions of the very people you are supposedly harming with your individual choices. Not the governments that neglected to enforce existing laws that would have prevented such multi-trillion-dollar sociopathic entities from forming in the first place, let alone creating such a technology--while also creating the conditions that left people less privileged. No, they are not responsible. You are. I am.
That doesn't make any sense.
Neoliberalism's greatest trick has been to shift responsibility for any problems away from the powerful and onto individuals who are not empowered to fix anything, all while convincing everyone that this is right and proper. Large corporations do not cause a plastic pollution problem; you and I do, by not separating our recycling. Large corporations, governments and militaries do not cause CO2 pollution and climate damage; you and I do, by using incandescent lightbulbs and non-electric/non-hybrid cars or eating meat. Lack of regulation and large agribusiness practices are not to blame for poor food quality; you and I are, for buying what they sell instead of going organic and joining a CSA. Etc. ad infinitum. Large, powerful entities routinely generate a problem, then tell you and me that we are responsible for the problem as well as for fixing it. Never mind that these entities could nudge their own behavior a bit and move the needle on the problem far more than masses of people could no matter how organized they were. Never mind that these entities could be constrained from causing such problems in the first place.
We are watching a new variation of this pattern come into being right in front of our eyes with AI. We should stop accepting these fictions. You are neither ableist nor a gatekeeper for resisting AI. You are, instead, attempting to forestall the further degradation of conditions for everyone, which starts this same cycle anew.
Mel Andrews on the connections between a naive belief in scientific objectivity (facts and data are "real" and "correct" and "neutral") and eugenics:
Francis Galton, pioneering figure of the eugenics movement, believed that good research practice should consist in “gathering as many facts as possible without any theory or general principle that might prejudice a neutral and objective view of these facts” (Jackson et al., 2005). Karl Pearson, statistician and fellow purveyor of eugenicist methods, approached research with a similar ethos: “theorizing about the material basis of heredity or the precise physiological or causal significance of observational results, Pearson argues, will do nothing but damage the progress of the science” (Pence, 2011). In collaborative work with Pearson, Weldon emphasised the superiority of data-driven methods which were capable of delivering truths about nature “without introducing any theory” (Weldon, 1895).
From The Immortal Science of ML: Machine Learning & the Theory-Free Ideal.
I've lost the reference, but I suspect it was Meredith Whittaker who's written and spoken about the big data turn at Google, where it was understood that having and collecting massive datasets allowed them to eschew model-building.
The core idea being critiqued here is that there's a kind of scientific view from nowhere: a theory-free, value-free, model-free, bias-free way of observing the world that will lead to Truth; and that it's the task of the scientist to approximate this view from nowhere as well as possible.
The present perspective outlines how epistemically baseless and ethically pernicious paradigms are recycled back into the scientific literature via machine learning (ML) and explores connections between these two dimensions of failure. We hold up the renewed emergence of physiognomic methods, facilitated by ML, as a case study in the harmful repercussions of ML-laundered junk science. A summary and analysis of several such studies is delivered, with attention to the means by which unsound research lends itself to social harms. We explore some of the many factors contributing to poor practice in applied ML. In conclusion, we offer resources for research best practices to developers and practitioners.
In other words, ML--which includes generative AI--is smuggling long-disgraced pseudoscientific ideas back into "respectable" science and rejuvenating the harms those ideas cause.
Massive compute power applied to massive data sets can produce outcomes that are worse at the task they’re (ostensibly) intended for than much simpler, easier to understand, less wasteful, and less intrusive data-light methods. It requires an extreme form of bias to believe that big compute + big data is always better.
Once the #EU Commission treats as "scientific research" any activity the self-styled researcher declares to be such--including activity with a commercial purpose--and that can serve as an exception to the GDPR, the #GDPR becomes a dead letter.
One example: this surveillance project in Trento, aimed at using #SALAMI (#AI) to identify the criminal behaviors of dangerous Trentino residents passing by on the street so that they could be prevented, was a scientific research project, funded by the European Union.
This Thanksgiving I am celebrating the AI revolution by putting the turkey, mashed potatoes, cranberry sauce, and bean casserole through a blender and serving Generative Alimentary Infusion to all my guests.
Let’s start with some very recent history. CoreWeave is a data center company that pivoted in 2022 from crypto. (In 2021, CoreWeave made its money by… mining Ethereum.) Essentially, CoreWeave is a landlord for compute: companies pay for the use of its server racks for AI projects.
...
CoreWeave chief executive officer Michael Intrator, a former hedge fund manager,
...
“They have to continue to borrow to pay interest on the last loan.”
So:
- CoreWeave sits at the center of the AI bubble;
- it used to be a crypto company and also gets its (electric) power from a Bitcoin mining company that makes no money and has CoreWeave as its only customer;
- it has positioned itself as a rentier;
- its interest payments on previous loans exceed its revenue by a significant amount, so it's paying off loans with more loans and has already defaulted once;
- it has essentially two customers, Microsoft and NVIDIA;
- it has a loan from one of the actors implicated in the 2008 financial crash (Magnetar);
- it's run by a finance guy, not a tech person--yet it's in the position of someone who takes out a new credit card to pay the interest on the previous credit card.
Yeah. Looks like crypto, and crypto's Ponzi scheme way of thinking, has slimed its way into the "real" economy after all.
Oh, and welcome back, global financial crash. We missed you. And eyyy, how you doing, Enron, long time no see:
CoreWeave isn’t alone in its complex finances. Meta took on debt, using a SPV, for its own data centers. Unlike CoreWeave’s SPVs, the Meta SPV stays off its balance sheet. Elon Musk’s xAI is reportedly pursuing its own SPV deal.
"Complex finances" are what companies engage in when there isn't any there there (SPVs were Enron's "financial innovation" too).
Peter Thiel pulling his investments out of NVIDIA makes far more sense after reading this. Looks wobbly.
It is perhaps time to discuss the enormous stock sales from CoreWeave’s management team. Before the company even went public, its founders sold almost half a billion dollars in shares. Then, insiders sold over $1 billion more immediately after the IPO lockup ended.
...
“It’s noteworthy that people who have a good view on that business are cashing out,” says Leevi Saari, a fellow at the AI Now Institute.
and of course
It makes a certain kind of cynical sense to view CoreWeave itself as, effectively, a special purpose vehicle for Nvidia.
There is no such thing as artificial intelligence. If it is intelligent it is not artificial. LLMs are plagiarising bullsh*t generators and using them in search engines is replacing knowledge with stupidity. #AI #LLMs #GarbageInGarbageOut #ThatsAI
🗞️🇪🇺 "Europe is set to streamline its #AI and privacy laws in a move critics say will appease Big Tech and U.S. President Donald Trump. 127 civil organisations called the proposals 'the biggest rollback of digital fundamental rights in EU history'."
Groundbreaking discovery from Italian researchers at University #LaSapienza: they’ve uncovered how genetic and epigenetic mechanisms, driven by #Notch3 protein and #RNA modification, fuel a rare pediatric leukemia (T-cell ALL).
This insight could revolutionize targeted therapies, improving outcomes for youngest patients.
Precision medicine is the future. Stop using #AI for bullsh*t; use it for this.
Good morning, everyone. Oh, look: Today, the @EUCommission launches their plans to gravely undermine our fundamental rights because tech lobbyists asked them to.
Mozilla announces more AI in Firefox, still failing to understand that at this point their market share is made of tech savvy people who do not want that.
Mozilla, please. Stop chasing the fads. Get your own values.
People use Firefox to access an open web, to have a sturdy browser, and to avoid being tracked. Build in that direction and that direction only.
Nobody from your current user base will recommend Firefox with AI to their friends.
"#SiliconValley isn’t building apps anymore. It’s building empires."
"Under the banner of "patriotic tech", this new bloc is building the infrastructure of control—clouds, #AI, #finance, #drones, #satellites—an integrated system we call the #AuthoritarianStack. It is faster, ideological, and fully privatized: a regime where corporate boards, not public law, set the rules."