social.bund.de is one of the many independent Mastodon servers you can use to participate in the fediverse.
This is the Mastodon server of the Federal Commissioner for Data Protection and Freedom of Information (BfDI).

#responsibleai

3 posts · 3 participants · 1 post today

🤖 Fascinating development in autonomous AI: Manus AI is setting new benchmarks in task automation, outperforming GPT-4 in the GAIA tests. But here's the real question: how do we balance unprecedented productivity gains with responsible AI deployment?

While it shows an impressive 86.5% accuracy on basic tasks, how much human oversight do complex decisions still require?

What safeguards would you want to see in autonomous AI systems?

In an interview (in German/dubbed) for ARD’s Weltspiegel (23.03.2025), Prof. Aimee van Wynsberghe highlights how AI systems, while transformative, consume vast amounts of energy and resources like water. This raises vital questions about sustainability, ethics, and environmental impact. It’s crucial to address these issues in research and public debates.

📺 Watch here: ardmediathek.de/video/weltspie

"We, the undersigned researchers, affirm the scientific consensus that artificial intelligence (AI) can exacerbate bias and discrimination in society, and that governments need to enact appropriate guardrails and governance in order to identify and mitigate these harms. [1]

Over the past decade, thousands of scientific studies have shown how biased AI systems can violate civil and human rights, even if their users and creators are well-intentioned. [2] When AI systems perpetuate discrimination, their errors make our societies less just and fair. Researchers have observed this same pattern across many fields, including computer science, the social sciences, law, and the humanities. Yet while scientists agree on the common problem of bias in AI, the solutions to this problem are an area of ongoing research, innovation, and policy.

These facts have been a basis for bipartisan and global policymaking for nearly a decade. [3] We urge policymakers to continue to develop public policy that is rooted in and builds on this scientific consensus, rather than discarding the bipartisan and global progress made thus far."

aibiasconsensus.org/

Scientific Consensus on AI Bias

After all these recent episodes, I don't know how anyone can have the nerve to say out loud that the Trump administration and the Republican Party value freedom of expression and oppose any form of censorship. Bunch of hypocrites! United States of America: The New Land of SELF-CENSORSHIP.

"The National Institute of Standards and Technology (NIST) has issued new instructions to scientists that partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” in the skills it expects of members and introduces a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”

The information comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases are hugely important because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deep fakes. It also adds emphasis on putting America first, asking one working group to develop testing tools “to expand America’s global AI position.”"

wired.com/story/ai-safety-inst

WIRED · Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models · By Will Knight
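To make concrete the kind of technical work the earlier agreement encouraged, here is a minimal, hypothetical sketch of one common check for discriminatory model behavior: the demographic parity gap, i.e. the difference in positive-prediction rates between demographic groups. Everything below (the data, group names, and the ~0.1 flagging heuristic) is illustrative and not taken from the article or the NIST agreement.

```python
# Hypothetical sketch: flag a classifier whose positive-prediction rate
# differs noticeably between demographic groups (demographic parity gap).
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative loan-approval outputs for two synthetic groups.
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(rates)               # {'A': 0.6, 'B': 0.4}
print(f"gap = {gap:.2f}")  # gap = 0.20 -- a common heuristic flags gaps above ~0.1
```

In real audits a single number like this is only a starting point; practitioners typically combine several metrics (equalized odds, calibration) before concluding a model is biased.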

Our 3rd keynoter will be Enrico Motta from the Knowledge Media Institute of the UK’s Open University. He pioneered hybrid computational architectures that integrate #ML with knowledge representation, applied in domains such as news and scholarly analytics and cognitive robotics (a minimal sketch of this hybrid pattern follows after this post).

Apply here: 2025.semanticwebschool.org/
Deadline: March 15, 2025

#knowledgegraph #AI #llm #generativeAI #responsibleAI @lysander07 @fiz_karlsruhe #semanticweb #lod #neurosymbolicai #robotics
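As a rough illustration of the hybrid pattern mentioned above (a sketch of the general idea only, not of Prof. Motta’s actual systems), the snippet below has a stand-in ML classifier propose a label while a tiny symbolic knowledge base accepts or vetoes it against known constraints. All names, triples, and scores are hypothetical.

```python
# Minimal neuro-symbolic sketch: a (stubbed) learned model proposes,
# a symbolic knowledge base disposes. Everything here is illustrative.

# Toy knowledge base of triples encoding which attributes an entity may have.
KNOWLEDGE_BASE = {
    ("robot", "can_be", "autonomous"),
    ("robot", "is_a", "machine"),
    ("article", "is_a", "document"),
}

def consistent_with_kb(entity_type, attribute):
    """Symbolic check: does the KB contain (entity_type, 'can_be', attribute)?"""
    return (entity_type, "can_be", attribute) in KNOWLEDGE_BASE

def classify(text):
    """Stand-in for a neural classifier returning (label, attribute, score)."""
    return ("robot", "autonomous", 0.91)  # pretend model output

label, attribute, score = classify("The rover navigates without human help.")
if consistent_with_kb(label, attribute):
    print(f"accepted: {label}/{attribute} (score {score})")
else:
    print(f"rejected by KB: {label}/{attribute} (score {score})")
```

The payoff of this pattern is that the symbolic layer provides auditable, human-readable reasons for rejecting a prediction, which purely end-to-end models lack.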

Our second keynoter will be Natasha Noy from Google Research. You might know Natasha from her work on Google Dataset Search datasetsearch.research.google. or from her time at Stanford, where she contributed to the Protégé ontology editor and the famous Ontology Development 101 guide.

Apply here: 2025.semanticwebschool.org/
Deadline: March 15, 2025

#knowledgegraph #AI #llm #generativeAI #responsibleAI @albertmeronyo @lysander07 #summerschool @fiz_karlsruhe @fizise @AxelPolleres #semanticweb #lod