The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many computer scientists and public figures, including Alan Turing, [a] the most-cited computer scientist Geoffrey Hinton, [121] Elon Musk, [12] OpenAI CEO Sam Altman, [13] [122] Bill Gates, and Stephen Hawking ...
Trustworthy AI is also a work programme of the International Telecommunication Union, an agency of the United Nations, initiated under its AI for Good programme. [2] Its origin lies with the ITU-WHO Focus Group on Artificial Intelligence for Health, where the strong need for privacy, combined with the need for analytics, created a demand for a standard in these technologies.
The order emphasizes the need to invest in AI applications, boost public trust in AI, reduce barriers to AI usage, and keep American AI technology competitive in a global market. It nods to privacy concerns but offers no further detail on enforcement. Advancing American AI technology appears to be the focus and priority.
Specifically, an AI model trained on 10^26 floating-point operations must now be reported to the U.S. government and could soon trigger even stricter requirements in California.
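To give a sense of scale for that 10^26-FLOP reporting threshold, here is a minimal sketch that estimates a training run's total compute using the common ~6 × parameters × tokens approximation for dense transformer training. The approximation, the model size, and the token count are all illustrative assumptions, not the regulation's own accounting method.

```python
# Rough check of whether a hypothetical training run crosses the 1e26-FLOP
# reporting threshold, using the common ~6 * N * D estimate for dense
# transformer training compute (an approximation, not an official method).

THRESHOLD_FLOPS = 1e26  # reporting threshold cited above


def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training FLOPs as 6 * N * D (forward + backward passes)."""
    return 6.0 * n_parameters * n_tokens


# Hypothetical example: a 500B-parameter model trained on 30T tokens.
flops = estimated_training_flops(5e11, 3e13)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Crosses reporting threshold" if flops >= THRESHOLD_FLOPS
      else "Below reporting threshold")
```

Under these assumed numbers the run lands at roughly 9 × 10^25 FLOPs, just below the threshold, which illustrates how the rule targets only the very largest frontier-scale training runs.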
The dangers of AI algorithms can manifest as algorithmic bias and dangerous feedback loops, and they can extend to all sectors of daily life, from the economy to social interactions, to ...
The United States is spearheading the first United Nations resolution on artificial intelligence, aimed at ensuring the new technology is “safe, secure and trustworthy” and that all countries ...
As an alternative, some legal scholars argue that soft law approaches to AI regulation are promising because soft laws can be adapted more flexibly to meet the needs of emerging and evolving AI technology and nascent applications. [25] [26] However, soft law approaches often lack substantial enforcement potential. [25] [27]
The letter highlights both the positive and negative effects of artificial intelligence. [7] According to Bloomberg Business, Professor Max Tegmark of MIT circulated the letter in order to find common ground between signatories who consider superintelligent AI a significant existential risk and signatories, such as Professor Oren Etzioni, who believe the AI field was being "impugned" by a one ...