
Critics question tech-heavy lineup of new Homeland Security AI safety board


[Image caption: A modified photo of a 1956 scientist carefully bottling]

On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term "AI," which can apply to a broad spectrum of computer technology, it's unclear if this group will even be able to agree on what exactly they're safeguarding us from.

President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.

The fundamental assumption posed by the board's existence, and reflected in Biden's AI executive order from October, is that AI is an inherently risky technology and that Americans and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum where AI leaders can share information on AI security risks with the DHS.

It's worth noting that the ill-defined nature of the term "artificial intelligence" does the new board no favors regarding scope and focus. AI can mean many different things: It can power a chatbot, fly an airplane, control the ghosts in Pac-Man, regulate the temperature of a nuclear reactor, or play a great game of chess. It can be all these things and more, and since many of those applications of AI work very differently, there's no guarantee any two people on the board will be thinking about the same type of AI.

This confusion is reflected in the quotes the DHS press release provides from new board members, some of whom are already talking about different types of AI. While OpenAI, Microsoft, and Anthropic are monetizing generative AI systems like ChatGPT based on large language models (LLMs), Ed Bastian, the CEO of Delta Air Lines, refers to entirely different classes of machine learning when he says, "By driving innovative tools like crew resourcing and turbulence prediction, AI is already making significant contributions to the reliability of our nation's air travel system."

So, defining the scope of what AI precisely means, and which applications of AI are new or dangerous, may be one of the key challenges for the new board.

A roundtable of Big Tech CEOs draws criticism

For the inaugural meeting of the AI Safety and Security Board, the DHS selected a tech industry-heavy group, populated with the CEOs of four major AI vendors (Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic), CEO Jensen Huang of top AI chipmaker Nvidia, and representatives from other major tech companies like IBM, Adobe, Amazon, Cisco, and AMD. There are also reps from big aerospace and aviation: Northrop Grumman and Delta Air Lines.

Upon reading the announcement, some critics took issue with the board's composition. On LinkedIn, Timnit Gebru, founder of The Distributed AI Research Institute (DAIR), specifically criticized OpenAI's presence on the board and wrote, "I've now seen the full list and it's hilarious. Foxes guarding the hen house is an understatement."
