Industry Collaboration: The emergence of the US AI Safety Institute Consortium (AISIC) marks a significant shift towards collaboration among tech giants in ensuring the safe development and deployment of artificial intelligence. This united effort recognises a shared responsibility for mitigating the potential risks associated with AI technologies.
Leadership from Tech Titans: With industry giants such as Google, Microsoft, and Amazon leading the charge, the consortium gains substantial influence and expertise. Their involvement underscores the importance of AI safety and signals a commitment to addressing ethical concerns and promoting responsible AI practices.
Impact on AI Governance: Establishing this consortium could have far-reaching implications for AI governance and regulation. By proactively engaging in collaborative efforts focused on safety and ethics, the tech industry aims to shape future policies and standards, potentially influencing the direction of AI development on a global scale.
In a move to address the ethical and safety concerns surrounding artificial intelligence (AI) development, a consortium comprising some of the most influential names in the tech industry has emerged in the United States (US). With a shared commitment to advancing AI responsibly, the consortium brings together renowned companies and experts to pioneer solutions that prioritise the well-being of humanity.
On February 8, 2024, the US Department of Commerce announced the creation of the AI Safety Institute Consortium (AISIC), along with a list of participants from the tech industry. US Secretary of Commerce Gina Raimondo stated that the consortium aims to unite AI users globally and foster an environment that creates safe and trustworthy AI. Raimondo said,
“The US government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence… President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem.”
Key Players in the Consortium
At the helm of this initiative are industry giants such as Google, Visa, Microsoft, Amazon, and Facebook, collectively representing a significant portion of the global AI landscape. Joining these corporate behemoths are leading academic institutions, policy think tanks, and prominent figures in AI research and ethics. The primary objective of the AISIC is to establish a framework for responsible AI development and deployment. This framework encompasses many considerations: transparency, accountability, fairness, privacy, and security. By setting industry standards and best practices, the consortium aims to mitigate the potential risks associated with AI technologies while maximising their societal benefits.
One of the pillars of the consortium’s mission is fostering collaboration and knowledge sharing among its members. By pooling expertise from diverse backgrounds, the consortium seeks to tackle complex challenges more effectively and accelerate progress in AI safety. Through regular meetings, workshops, and research initiatives, members will have opportunities to exchange ideas, identify emerging risks, and develop innovative solutions.

The consortium is also committed to promoting public awareness and engagement on AI safety issues. Recognising the importance of building trust and understanding among stakeholders, it plans to launch educational campaigns, host public forums, and engage with policymakers to ensure that AI development aligns with societal values and priorities.
Collaborative Efforts to Advance AI Safety Through Cooperation
One of the core principles guiding the consortium’s work is the notion of human-centric AI. By embedding ethical considerations into the design and implementation of AI technologies, the consortium aims to mitigate potential harms and foster trust in AI systems among users and stakeholders. The US AI Safety Institute (USAISI) itself grew out of President Joe Biden’s October 2023 executive order on AI safety. Raimondo suggested that the executive order will ensure that the safe and responsible development and deployment of AI remains a top priority for the US.
The White House deputy chief of staff, Bruce Reed, emphasised that keeping up with AI means “we have to move fast and make sure everyone – from the government to the private sector to academia – is rowing in the same direction.” The consortium also considers future challenges and opportunities in the AI landscape. This forward-thinking approach includes exploring the ethical implications of emerging technologies, such as autonomous vehicles, healthcare AI, and AI-driven decision-making systems.
To support its mission, the consortium invests in research and development initiatives that advance the state of the art in AI safety. This includes funding academic research projects, supporting interdisciplinary collaborations, and sponsoring technical challenges and competitions focused on AI ethics and safety. One of the flagship projects of the consortium is the development of an AI safety certification program.
Designed to assess and validate the safety and ethical standards of AI systems, this certification will assure consumers, businesses, and policymakers that AI technologies have been rigorously evaluated and meet established criteria for responsible development and deployment. Furthermore, the consortium actively engages with international partners and stakeholders to promote global cooperation on AI safety issues. Recognising that AI knows no borders, the consortium is committed to sharing insights, collaborating on standards, and coordinating regulatory efforts to ensure that AI technologies are developed and deployed responsibly worldwide.