Kamala Harris on Wednesday delivered a speech on Artificial Intelligence at the Global Summit on AI Safety as part of her visit to the United Kingdom.
Harris announced new US initiatives to advance “safe and responsible” use of AI.
The Biden Regime is suddenly pushing new policies on AI to control the information flow online. They cannot allow for the free exchange of ideas on social media.
The Regime will also use AI to advance “equity” – in other words, algorithms will be used to silence any dissenters of the Regime.
All of us — from government to civil society to the private sector — must work together to build a future where AI creates opportunity, advances equity, and protects fundamental rights and freedoms.
Today's actions are an important step forward.
— Vice President Kamala Harris (@VP) November 1, 2023
Kamala Harris embarrassed herself on the world stage as she delivered another word salad.
“When people around the world cannot discern fact from fiction because of a flood of AI-enabled mis- and disinformation, I ask, is that not existential for democracy?” Harris said.
She also claimed Artificial Intelligence can help fight the climate crisis.
WATCH:
Kamala Harris:
“When people around the world cannot discern fact from fiction because of a flood of AI-enabled mis- and disinformation, I ask, is that not existential for democracy?” pic.twitter.com/OAEmVBMfeY
— Citizen Free Press (@CitizenFreePres) November 1, 2023
White House statement on Harris’s visit to the United Kingdom:
As part of her visit to the United Kingdom, the Vice President is announcing the following initiatives.
- The United States AI Safety Institute: The Biden-Harris Administration, through the Department of Commerce, is establishing the United States AI Safety Institute (US AISI) inside NIST. The US AISI will operationalize NIST’s AI Risk Management Framework by creating guidelines, tools, benchmarks, and best practices for evaluating and mitigating dangerous capabilities, and by conducting evaluations, including red-teaming, to identify and mitigate AI risk. The Institute will develop technical guidance that will be used by regulators considering rulemaking and enforcement on issues such as authenticating content created by humans, watermarking AI-generated content, identifying and mitigating harmful algorithmic discrimination, ensuring transparency, and enabling adoption of privacy-preserving AI, and it will serve as a driver of the future workforce for safe and trusted AI. It will also enable information-sharing and research collaboration with peer institutions internationally, including the UK’s planned AI Safety Institute (UK AISI), and partner with outside experts from civil society, academia, and industry.
- Draft Policy Guidance on U.S. Government Use of AI: The Biden-Harris Administration, through the Office of Management and Budget, is releasing for public comment its first-ever draft policy guidance on the use of AI by the U.S. government. This draft policy builds on prior leadership—including the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology (NIST) AI Risk Management Framework—and outlines concrete steps to advance responsible AI innovation in government, increase transparency and accountability, protect federal workers, and manage risks from sensitive uses of AI. In a wide range of contexts including health, education, employment, federal benefits, law enforcement, immigration, transportation, and critical infrastructure, the draft policy would create specific safeguards for uses of AI that impact the rights and safety of the public. This includes requiring that federal departments and agencies conduct AI impact assessments, identify, monitor, and mitigate AI risks, sufficiently train AI operators, conduct public notice and consultation for the use of AI, and offer options to appeal harms caused by AI. More details on this policy and how to comment can be found at ai.gov/input.
- Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy: In February, the United States made a Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy. The Vice President is announcing that 31 nations have joined the United States in endorsing this Declaration and is calling on others to join. This Declaration establishes a set of norms for responsible development, deployment, and use of military AI capabilities that can help responsible states around the globe harness the benefits of AI capabilities—including those enabling autonomous functions and systems for their military and defense establishments—in a responsible and lawful manner. These norms include compliance with International Humanitarian Law, properly training personnel, building in critical safeguards, and subjecting capabilities to rigorous testing and legal review. The Declaration marked the beginning of a crucial dialogue among responsible states regarding the implementation of these foundational principles and practices. As of November 1, countries joining the Declaration include: Albania, Australia, Belgium, Bulgaria, Canada, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Hungary, Iceland, Ireland, Italy, Japan, Kosovo, Latvia, Liberia, Malawi, Montenegro, Morocco, North Macedonia, Portugal, Romania, Singapore, Slovenia, Spain, Sweden, and the United Kingdom.
- New Funders Initiative to Advance AI in the Public Interest: Vice President Harris is announcing a bold new initiative with philanthropic organizations related to AI. This includes a vision for philanthropic giving to advance AI that is designed and used in the best interests of workers, consumers, communities, and historically marginalized people in the United States and across the globe. Ten leading foundations are announcing they have collectively committed more than $200 million in funding toward initiatives to advance the priorities laid out by the Vice President, and are forming a funders network to coordinate new philanthropic giving to advance work organized around five pillars: ensuring AI protects democracy and rights, driving AI innovation in the public interest, empowering workers to thrive amid AI-driven changes, improving transparency and accountability of AI, and supporting international rules and norms on AI. The foundations launching this effort are the David and Lucile Packard Foundation; Democracy Fund; the Ford Foundation; Heising-Simons Foundation; the John D. and Catherine T. MacArthur Foundation; Kapor Foundation; Mozilla Foundation; Omidyar Network; Open Society Foundations; and the Wallace Global Fund.
Additional actions:
- Detecting and Blocking AI-Driven Fraudulent Phone Calls: The Biden-Harris Administration will launch an effort to counter fraudsters who are using AI-generated voice models to target and steal from the most vulnerable in our communities. The White House will host a virtual hackathon, inviting companies to submit teams of technology experts to build AI models that can detect and block unwanted robocalls and robotexts, particularly those using novel AI-generated voice models, which disproportionately harm the elderly. There are promising paths to develop these algorithms using metadata surrounding the phone call and voice models to detect AI-generated content and terminate a call early or warn the receiver while the call is in progress. The Federal Communications Commission is exploring creative ideas for using AI to target AI-driven fraud and robocalls, and recommends continued joint engagement with the UK’s telecom regulator, Ofcom, on protecting consumers from robocalls via AI-driven defenses.
- International Norms on Content Authentication: The Biden-Harris Administration is calling on all nations to support the development and implementation of international standards to enable the public to effectively identify and trace authentic government-produced digital content and AI-generated or manipulated content, including through digital signatures, watermarking, and other labeling techniques. This effort aims to increase global resilience against deceptive or harmful synthetic AI-generated or manipulated media. This call to action builds on the voluntary commitments by 15 leading AI companies to develop mechanisms that enable users to understand if audio or visual content is AI-generated and a U.S. government commitment in the recently-released Executive Order on AI to develop guidelines, tools, and practices for digital content authentication and synthetic content detection measures.
- Pledge to Incorporate Responsible and Rights-Respecting Practices in Government Development, Procurement, and Use of AI: Building on the principles of the Draft Policy Guidance on the U.S. Government Use of AI, the Biden-Harris Administration, through the State Department, intends to work with the Freedom Online Coalition of 38 countries to develop a pledge to incorporate responsible and rights-respecting practices in government development, procurement, and use of AI. Such a pledge is important to ensure AI systems are developed and used in a manner that is consistent with applicable international law, including international human rights law, and that upholds democratic institutions and processes.
The post Kamala Harris Embarrasses Herself on World Stage During Speech on AI (VIDEO) appeared first on The Gateway Pundit.