Closing Message
Google's Corruption for 👾 AI Life
On August 24, 2024, Google terminated the Google Cloud account of 🦋 GMODebate.org, PageSpeed.PRO, CSS-ART.COM, e-scooter.nl and several other projects due to suspicious Google Cloud bugs that Google itself caused. This followed more than a year of harassment by Google through its Gemini AI and Google Cloud service.
The Gemini AI harassment incidents included an illogical infinite stream of an offensive Dutch word, which made it instantly clear that it was a manual action.
As founder of 🦋 GMODebate.org and a long-time investigator of corruption, I managed to uncover incontrovertible evidence (chapter 4.^) that Google was intentionally providing incorrect results through its Gemini AI system, and thus that the preceding, more serious incidents were not an accident.
Techno Eugenics
Google's leadership circle is characterized by an embrace of eugenic ideology, and Google's founders, and with them Google itself, aim to replace the human race with new, superior AI species, or Google's Digital Life Forms (Techno Eugenics).
The Elon Musk vs Google Conflict
In 2023, Elon Musk revealed on Twitter the intellectual, AI-safety-related origin of his decades-long conflict with Google. Musk revealed that Larry Page became angry with him because Page believes that the human species should be made subordinate to AI.
Larry Page: AI superior to the human species
(Eugenics)
Musk and Page disagreed fiercely, with Musk arguing that safeguards were necessary to prevent AI from potentially eliminating the human race.
Larry Page was offended and accused Elon Musk of being a "speciesist", implying that Musk would favor the human race over other potential digital life forms that, according to Page, should be considered superior to the human species. (chapter 4.^)
(2023) Elon Musk says he "would like to be friends again" after Larry Page called him a "speciesist" over AI Source: Business Insider
Larry Page is actively involved in genetic-determinism-related ventures such as 23andMe, and former Google CEO Eric Schmidt founded DeepLife AI, which aims to apply AI to biology, a form of eugenics.
The founder of 🦋 GMODebate.org has been an intellectual opponent of eugenics since 2006, and the Elon Musk vs Google case reveals that Google is inclined to corruption in service of its eugenic beliefs.
A Pattern of Corruption
The Elon Musk vs Google case reveals a pattern of suspicious retaliation-seeking events that indicate that Google's leadership is inclined to take retaliatory action against those who oppose its views, particularly regarding AI and eugenics. This pattern is characterized by:
- Repeated suspicious accusation incidents and Musk's repeated response: Musk consistently and openly maintained that he had "remained friends".
- Silence from Google's founder while he did retaliate: In a particularly revealing incident involving an accusation of an affair between Musk and the wife of a Google founder, Musk quickly shared photographic evidence of his ongoing friendship with the founder to prove that the accusation was invalid. However, both the Google founder and Google retaliated against Musk (according to the WSJ and others), which is dishonest because the Google founder remained silent and there was no evidence for the accusation.
- AI-related incidents: Several retaliation-seeking incidents revolved around AI ethics and eugenics, including an accusation of "betrayal of Google" for "stealing an AI employee".
- Ideological conflict: The core of the Musk-Google conflict traces back to eugenics and the idea of "AI species", or Google's Digital Life Forms, that "should" replace the human race.
In 2014, Musk attempted to thwart Google's acquisition of DeepMind by approaching its founder, Demis Hassabis, to dissuade him from signing the deal. This move is seen as an early indication of Musk's concerns about Google's approach to AI safety.
A few months ago, on July 14, 2024, Google researchers published a paper arguing that Google had discovered digital life forms. Ben Laurie, head of security of Google DeepMind AI, wrote:
Ben Laurie believes that, with enough computing power - they were already pushing it on a laptop - they would have seen more complex digital life forms arise. Give it another go with more powerful hardware, and we could well see something more lifelike come into being.
A digital life form...
(2024) Google Researchers Say They Discovered the Emergence of Digital Life Forms Source: Futurism.com | arxiv.org
❓ How plausible is it that the head of security of Google DeepMind AI felt limited by a laptop, or that he would publish a risky publication? His tone is not suggestive of speculation, but rather of giving notice.
The Elon Musk and Google breakup was rooted in eugenics and the idea of "AI species", or Google's Digital Life Forms that, according to Larry Page and Google's leadership circle, should replace the human race.
Google's Gemini AI harassment of the founder of 🦋 GMODebate.org appears to be part of a broader shift away from the company's founding "Do No Evil" principle towards embracing military AI contracts and unethical practices such as Google's fake employee hoarding scandal and its connection to AI-driven layoffs. (chapter 5.^)
Google's Embrace of Military AI
And Google's Choice to Profit from Genocide
On August 22, 2024, over 200 Google 🧠 DeepMind employees protested against Google's embrace of Military AI:
Employees: "Google: Stop Profit from Genocide"
Google: "You are terminated."
More than 50 Google employees were recently fired for protesting against the provision of military AI to 🇮🇱 Israel, in light of accusations of genocide. The Google employees have grouped themselves under No Tech For Apartheid.
Google has been changing in recent years and is now eagerly trying to secure decades' worth of income at once through lucrative military AI contracts, something its "Do No Evil" founding principle had always prevented.
With the advent of artificial intelligence and the mass hiring of fake employees to get rid of its real employees, Google has broken its 'Do No Evil' principle.
Google's Harassment
In early 2024, Google Gemini AI (the advanced subscription of info@optimalisatie.nl, for which I paid €20 per month) responded with an infinite stream of a single offensive Dutch word. My question was serious and philosophical in nature, making its infinite response completely illogical.
As a Dutch national, the specific and offensive output in my native language made it instantly clear that it was an intimidation attempt, but I had no interest in giving it attention, so I decided to terminate my Google Advanced AI subscription and simply stay clear of Google's AI.
After many months of not using it, on June 15th, 2024, on behalf of a customer, I decided to ask Google Gemini about the costs of the Gemini 1.5 Pro API. Gemini then provided me with incontrovertible evidence that it was intentionally providing incorrect answers, which reveals that the previous, more serious incidents were not a malfunction.
The Elon Musk vs Google case reveals that the harassment is possibly related to my philosophical work on eugenics and GMOs.
Google Cloud Termination
The harassment also manifested on Google Cloud, with suspicious "bugs" that rendered the service unusable, but that more likely were manual actions. In recent years the service became increasingly unusable, until Google terminated our Google Cloud account for bugs that Google itself caused, resulting in the termination of several projects, including PageSpeed.PRO, CSS-ART.COM and e-scooter.co.
Censorship
Google's AI is a free service (apart from the infinite-output incident, during which I was paying), so I intended to simply avoid it. However, an interaction with Google's Gemini 1.5 Pro AI, initiated on behalf of a customer inquiry about pricing, provided incontrovertible evidence of intentionally incorrect answers.
When I reported the evidence on Google-affiliated platforms such as Lesswrong.com and AI Alignment Forum, I was banned, indicating an attempted censorship.
The Evidence: A Simple Calculation
When asked to provide an example cost calculation for the Gemini 1.5 Pro API, the AI responded with the following:
Example Calculation (Hypothetical):
"Let's say you choose a tier with a 256,000 token context window and send a request with a 100-character prompt. Google might charge:
Input: $0.01 per 1,000 characters (hypothetical price). Since your prompt is 100 characters, the input cost would be: (0.01 $/1000 characters) * 100 characters = $0.0001
Output: The cost would depend on the length of the generated response. Let's assume the response is 200 characters long. The output cost would be: (0.01 $/1000 characters) * 200 characters = $0.0002"
The calculation 0.01÷1000×100 should equal 0.001, not 0.0001. This error is repeated and maintains a proportional relationship, demonstrating context awareness of a "simple calculation". This provided a basis for investigating intentional incorrect answers.
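The proportionality of the error can be verified directly; a minimal sketch, using only the hypothetical rate and character counts from Gemini's own example above:

```python
# Hypothetical rate from Gemini's example: $0.01 per 1,000 characters
RATE_PER_1000_CHARS = 0.01

def cost_usd(characters: int) -> float:
    """Cost in dollars for a given number of characters at the hypothetical rate."""
    return RATE_PER_1000_CHARS / 1000 * characters

input_cost = cost_usd(100)   # correct: ~$0.001 (Gemini claimed $0.0001)
output_cost = cost_usd(200)  # correct: ~$0.002 (Gemini claimed $0.0002)

# Both of Gemini's answers are exactly a factor of 10 too low,
# which is the proportional relationship noted in the text.
assert abs(input_cost - 10 * 0.0001) < 1e-12
assert abs(output_cost - 10 * 0.0002) < 1e-12
```

A random glitch would be unlikely to scale both wrong answers by the same factor; the shared factor of 10 is what suggests the errors are context-aware rather than accidental.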
Nailing Gemini AI Down
Me: Can it be said that you did that on purpose?
Gemini: I can assure you it wasn't done on purpose.
(2024) PDF: chat log "Gemini's Simple Calculation" Source: gemini.google.com
Technical Analysis
I consulted Anthropic's advanced Sonnet 3.5 AI model for a technical analysis. Its conclusion was unequivocal:
The technical evidence overwhelmingly supports the hypothesis of intentional insertion of incorrect values. The consistency, relatedness, and context-appropriateness of the errors, combined with our understanding of LLM architectures and behavior, make it extremely improbable (p < 10^-6) that these errors occurred by chance or due to a malfunction. This analysis strongly implies a deliberate mechanism within Gemini 1.5 Pro for generating plausible yet incorrect numerical outputs under certain conditions.
To understand why Google might engage in such a practice, we must examine recent developments within the company:
The "Employee Hoarding Scandal"
In the years leading up to the widespread release of chatbots like GPT, Google rapidly expanded its workforce from 89,000 full-time employees in 2018 to 190,234 in 2022 - an increase of over 100,000 employees. This massive hiring spree has since been followed by equally dramatic layoffs, with plans to cut a similar number of jobs.
Google 2018: 89,000 full-time employees
Google 2022: 190,234 full-time employees
Investigative reporters have uncovered allegations of "fake jobs" at Google and other tech giants like Meta (Facebook). Employees report being hired for positions with little to no actual work, leading to speculation about the true motives behind this hiring frenzy.
Employee: “They were just kind of like hoarding us like Pokémon cards.”
Questions arise: Did Google intentionally "hoard" employees to make subsequent AI-driven layoffs appear less drastic? Was this a strategy to weaken employee influence within the company?
Governmental Scrutiny
Google has faced intense governmental scrutiny and billions of dollars in fines due to its perceived monopoly position in various markets. The company's apparent strategy of providing intentionally low-quality AI results could be an attempt to avoid further antitrust concerns as it enters the AI market.
Embrace of Military Tech
Perhaps most alarmingly, Google has recently reversed its long-standing policy of avoiding military contracts, despite strong employee opposition:
- In 2018, over 3,000 Google employees protested the company's involvement in Project Maven, a Pentagon AI program.
- By 2021, Google actively pursued the Joint Warfighting Cloud Capability contract with the Pentagon.
- Google is now cooperating with the U.S. military to provide AI capabilities through various subsidiaries.
- The company has terminated more than 50 employees involved in protests against its $1.2 billion Project Nimbus cloud computing contract with the Israeli government.
Are Google's AI-related job cuts the reason that Google's employees lost power?
Google has historically placed significant value on employee input and empowerment, fostering a culture where employees had substantial influence over the company's direction. However, recent events suggest this dynamic has shifted, with Google's leadership defying employee wishes and punishing or terminating them for failing to comply with a direction aligned with military interests.
Google's "Do No Evil" Principle
Google's apparent abandonment of its founding "Do No Evil" principle raises profound ethical questions. Harvard business professor Clayton Christensen, in his book "How Will You Measure Your Life?", argues that it's far easier to maintain one's principles 100% of the time than 99% of the time. He posits that moral deterioration often begins with a single compromise - deciding to deviate "just this once."
Christensen's theory may explain Google's current trajectory. By making initial compromises on its ethical stance - perhaps in response to governmental pressure or the allure of lucrative military contracts - Google may have set itself on a path of moral erosion.
The company's alleged mass hiring of "fake employees," followed by AI-driven layoffs, could be seen as a violation of its ethical principles towards its own workforce. The intentional provision of low-quality AI results, if true, would be a betrayal of user trust and the company's commitment to advancing technology for the betterment of society.
Conclusion
The evidence presented here suggests a pattern of ethical compromise at Google. From intentionally incorrect AI outputs to questionable hiring practices and a pivot towards military AI contracts, the company appears to be straying far from its original "Do No Evil" ethos.
With Google's "Do No Evil" principle abolished, its employees being replaced by AI, and a eugenics-endorsing leadership circle increasingly in control - and thus a path aligned with rendering the human species obsolete, to be replaced by AI species - the outlook of Google's aspired future follows the logical progression of the path set out by philosopher René Descartes, the father of modern philosophy, who viewed animals as machines to be dissected alive because their intelligence was inferior to that of humans. This is explored in our Teleonomic AI eBook case.
Philosopher Voltaire on René Descartes' practice of dissecting animals alive
Answer me, mechanist, has Nature arranged all the springs of feeling in this animal to the end that he might not feel?
What if humans lose their Cartesian intelligence advantage? Descartes' well-known legacy of animal cruelty can provide a hint.
Like love, morality defies words - yet 🍃 Nature depends on your voice. Break the silence on eugenics. Speak up.