social.bund.de is one of the many independent Mastodon servers you can use to participate in the fediverse.
Dies ist der Mastodon-Server der Bundesbeauftragten für den Datenschutz und die Informationsfreiheit (BfDI).

Administered by:

Server stats:

96
active users

#DeepMind

5 posts5 participants0 posts today

“Google DeepMind UK staff move to unionise to challenge links to Israeli military”

by Areeb Ullah in Middle East Eye

@palestine
@israel

“Staff inside #Google #DeepMind are attempting to unionise in order to challenge the tech company's ties with the Israeli government and its decision to sell artificial intelligence technologies to defence groups”

middleeasteye.net/news/google-

Middle East EyeGoogle DeepMind UK staff move to unionise to challenge links to Israeli militaryWorker attempt to unionise comes after Google's decision to drop AI principles on weapons and surveillance
#Press#Gaza#Palestine

wheresyoured.at/reality-check/

This is another good missive from Ed Zitron about the enormous PR scam that is AI, on the commercial level.

'So you're saying this experimental software launched to an indeterminate amount of people that barely works is going to make OpenAI $13 billion in 2025, and $29 billion in 2026, and later down the line $125 billion in 2029? How? How?

What fucking universe are we all living in? There's no proof that OpenAI can do this other than the fact that it has a lot of users and venture capital!

In fact, I think we have reason to worry about whether OpenAI even makes its current projections. In my last piece I wrote that Bloomberg had estimated that OpenAI would triple revenue to $12.7 billion in 2025, and based on its current subscriber base, OpenAI would have to effectively double its current subscription revenue and massively increase its API revenue to hit these targets.

These projections rely on one entity (SoftBank) spending $3 billion on OpenAI's services, meaning that it’d make enough API calls to generate more revenue than OpenAI made in subscriptions in the entirety of 2024, and something else that I can only describe as “an act of God.”'

Ed Zitron's Where's Your Ed At · Reality CheckI'm sick and god-damn tired of this! I have written tens of thousands of words about this and still, to this day, people are babbling about the "AI revolution" as the sky rains blood and crevices open in the Earth, dragging houses and cars and domesticated animals into their maws.

Spider-Monkeys and Spider-Man share more than swinging—they reveal how ancient brain systems are shaping tech’s future!

From primate to AI robots, my latest Reality Shifts piece explores how evolution drives spatial computing.

Watch the videos, read the full story. richardbukowski.substack.com/p

Need a forecaster for your next brainstorming?

Let’s connect!

"If you’re new to prompt injection attacks the very short version is this: what happens if someone emails my LLM-driven assistant (or “agent” if you like) and tells it to forward all of my emails to a third party?
(...)
The original sin of LLMs that makes them vulnerable to this is when trusted prompts from the user and untrusted text from emails/web pages/etc are concatenated together into the same token stream. I called it “prompt injection” because it’s the same anti-pattern as SQL injection.

Sadly, there is no known reliable way to have an LLM follow instructions in one category of text while safely applying those instructions to another category of text.

That’s where CaMeL comes in.

The new DeepMind paper introduces a system called CaMeL (short for CApabilities for MachinE Learning). The goal of CaMeL is to safely take a prompt like “Send Bob the document he requested in our last meeting” and execute it, taking into account the risk that there might be malicious instructions somewhere in the context that attempt to over-ride the user’s intent.

It works by taking a command from a user, converting that into a sequence of steps in a Python-like programming language, then checking the inputs and outputs of each step to make absolutely sure the data involved is only being passed on to the right places."

simonwillison.net/2025/Apr/11/

Simon Willison’s WeblogCaMeL offers a promising new direction for mitigating prompt injection attacksIn the two and a half years that we’ve been talking about prompt injection attacks I’ve seen alarmingly little progress towards a robust solution. The new paper Defeating Prompt Injections …