Tech giants cannot be trusted to ensure AI safety, AI executive warns

At the first UN Security Council meeting on the threats AI poses to global peace, an executive from an artificial intelligence company warned that the tech giants leading the commercialization of AI cannot be trusted to guarantee the safety of systems that are still not fully understood and are prone to "chaotic or unpredictable behavior."

 

Jack Clark, co-founder of Anthropic, called for global collaboration to prevent the misuse of AI. Citing the technology's inherent risks and uncertainties, he stressed that collective effort is needed to ensure its responsible and ethical deployment.

 

Clark, whose company emphasizes safety and caution in training its AI chatbot, said the focus should be on developing methods to test the capabilities, potential misuses, and safety vulnerabilities of these systems. After leaving OpenAI, creator of the well-known ChatGPT chatbot, he co-founded Anthropic, which offers a competing product named Claude.

 

He traced AI's remarkable progress over the past decade to 2023, when new AI systems reached significant milestones: surpassing military pilots in air combat simulations, stabilizing plasma in nuclear fusion reactors, designing components for next-generation semiconductors, and inspecting goods on production lines. That breadth of applications shows AI's expanding capabilities across many fields.

 

While acknowledging AI's significant benefits, Clark raised concerns about the risks tied to AI's growing understanding of biology. In particular, a system capable of helping produce biological weapons could emerge, which highlights the dual-use nature of AI technologies and the importance of responsible development and use.

 

Clark also warned of "potential threats to international peace, security, and global stability" stemming from two features of AI systems: their potential for misuse, should the technology fall into the wrong hands or be used irresponsibly, and their inherent unpredictability, made more precarious by the fact that development rests with a small number of actors. Both factors, he argued, underscore the need for robust governance, collaboration, and ethical safeguards in how AI is built and deployed.


Clark noted that tech companies around the world command the essential resources, including advanced computers, vast data pools, and significant capital, needed to develop AI systems, and are therefore likely to remain at the forefront in shaping the technology's trajectory.

 

In a video briefing to the UN Security Council, Clark expressed optimism that global efforts can yield results, noting that jurisdictions including the European Union, China, and the United States have placed significant emphasis on safety testing and evaluation in their AI proposals. This shared focus on safety, he said, offers hope for effective collaboration and for comprehensive frameworks to address AI's potential risks.

 

Clark pointed out that there are currently no standards or established best practices for testing frontier AI systems for discrimination, misuse, or safety, which makes it difficult for governments to craft effective policy. The resulting information asymmetry favors the private sector and complicates regulation. Closing that gap, he argued, will require collaboration between the public and private sectors on comprehensive frameworks that promote responsible AI development and deployment.

 

Clark argued that sensible AI regulation must start with the ability to evaluate an AI system for a specific capability or flaw, and he cautioned against failed approaches that rely on broad policy ideas without effective measurement and evaluation. Only regulations grounded in practical assessment, he said, can address potential risks and shortcomings effectively.

 

A robust and reliable evaluation process, he said, lets governments hold companies accountable and lets companies earn the trust needed to deploy their AI systems worldwide. Without rigorous evaluation, he warned, regulatory capture could compromise global security and hand control of the future to a narrow set of private-sector actors.

 

Several AI executives, including OpenAI CEO Sam Altman, have advocated regulation. Skeptics counter that it could disproportionately benefit established players such as OpenAI, Google, and Microsoft, since smaller competitors may be priced out by the substantial cost of bringing their large language models into compliance.

UN Secretary-General Antonio Guterres said the United Nations is the ideal platform for establishing global standards that maximize the benefits of artificial intelligence while effectively addressing its risks.

 

In his address to the council, the Secretary-General warned of the ramifications of generative AI for international peace and security, citing the danger of its exploitation by terrorists, criminals, and governments. Its misuse, he said, could cause alarming levels of casualties, extensive destruction, widespread trauma, and profound psychological harm on an unprecedented scale.

 

To facilitate international cooperation, Guterres announced his intention to establish a high-level Advisory Board for Artificial Intelligence to explore options for global AI governance. The board is expected to deliver its recommendations by the end of the year, a move seen as an initial step toward broader international oversight of the technology.

 

Guterres also voiced support for a new United Nations body dedicated to global AI governance, citing the International Atomic Energy Agency, the International Civil Aviation Organization, and the Intergovernmental Panel on Climate Change as possible models. The proposed body would establish a comprehensive framework for AI development and governance in the service of global peace and security.

 

Professor Zeng Yi, the director of the Chinese Academy of Sciences Brain-inspired Cognitive Intelligence Lab and co-director of the China-UK Research Center for AI Ethics and Governance, echoed the sentiment that the United Nations should play a central role in AI governance. He recommended that the Security Council consider the establishment of a working group to address both immediate and long-term challenges related to AI's impact on international peace and security.

 

In his video briefing, Professor Zeng argued that although recent generative AI systems may appear intelligent, they lack genuine understanding and are not truly intelligent. He cautioned against having AI imitate or replace humans and stressed that humans must retain control, particularly over weapon systems.

 

Foreign Secretary James Cleverly, who chaired the meeting during the UK's council presidency, announced that the United Kingdom will host the first major global summit on AI safety this autumn. No country will be untouched by AI, he said, which makes it essential to involve a broad coalition of international actors from every sector. The summit's primary objective will be to examine the risks of AI collectively and determine how coordinated action can mitigate them.

