Post by account_disabled on Sept 12, 2023 3:25:44 GMT -5
Ten days after Altman's hearing, Smith also called for AI regulation. Smith, a lawyer by training, joined Microsoft in 1993, led the company's defense in the antitrust litigation brought by the U.S. Department of Justice, and was promoted to president and chief legal officer in 2015.
Smith knows the U.S. federal government well. When Altman was drafting a proposal to submit to Congress, he sought Smith's help, and Smith knows Altman well enough to have written in a Washington Post editorial that "Altman's policy wisdom is helping others in the technology industry."
On May 25, Smith released Microsoft's official recommendations for AI regulation. It is not surprising that his view that "AI systems controlling critical infrastructure need effective safety brakes" closely echoes Altman's, which emphasizes an apocalyptic future rather than present-day harms.
Of course, that is the minimum recommendation. Beyond it, Smith's proposals are full of lofty legal language that only lawyers could love, the kind of phrasing that appears to do something important while committing to nothing. Examples include "Develop a broad legal and regulatory framework based on the technology architecture for AI" and "Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology."
Smith did mention several important challenges, including deepfakes used for fake news and disinformation, AI-generated fake videos, "foreign cyber influence operations already being conducted by the Russian government, China, and Iran," the use of AI "to deceive others," and the "alteration of legitimate content."
However, he proposed no regulations for these issues, and he failed to mention many other risks that AI could pose.
Will Europe ride to the rescue?
In general, the United States resists regulation, especially technology regulation, and lobbying by giants such as Microsoft, OpenAI, and Google will continue. Europe, which tends to be tougher on big tech companies than American regulators are, is better positioned to recognize the risks of AI.
Things are already changing. The European Union recently passed a draft law regulating AI. It is still at an early stage, and the final bill is not likely to be finalized until the end of this year, but the draft's content is very sharp. It would require AI developers to submit summaries of all copyrighted material used to train their systems, call for safeguards against the generation of illegal content, restrict facial recognition, and prohibit companies from building AI databases by scraping biometric data from social media.
According to the New York Times, the more important point is this: the EU legislation takes a "risk-based" approach, focusing on the applications most likely to harm humans. That includes AI systems used to operate critical infrastructure such as water and energy, systems used in the legal system, and systems that determine access to public services or government benefits. Before deploying such AI technologies, tech companies would have to conduct risk assessments similar to the drug approval process.
It would be quite difficult, if not impossible, for AI companies to build separate systems to satisfy European and American law. Once the EU's AI law is finalized, therefore, AI developers around the world may end up following it.
Altman has met with European leaders, including Ursula von der Leyen, president of the European Commission, the European Union's executive branch, to press for easing the regulations, but so far without result.
Microsoft and other tech companies may think they can control political leaders in the United States, but they may face bigger challenges in Europe.