
General Artificial Intelligence: Google DeepMind Proposes a Framework for Categorizing the Capabilities and Behavior of GAI

General Artificial Intelligence (GAI), more commonly known as Artificial General Intelligence (AGI), refers to an advanced form of AI that could understand, learn, and perform any intellectual task a human can. There is, however, no consensus definition of GAI, and experts in the field hold different, sometimes conflicting, visions. In an article published on arXiv, researchers from DeepMind propose a framework for categorizing the levels of performance, generality, and autonomy of GAI models and their precursors.

In contrast to specialized artificial intelligence designed for specific tasks, GAI would be capable of adapting and excelling across a broad range of domains, matching or even surpassing human intelligence. A research objective at companies such as OpenAI, DeepMind, and Meta, it is presented as an opportunity for humanity, but it also raises concerns about potential risks to society, including loss of control.

For DeepMind researchers, the fundamental goal was to create a robust and shared conceptual framework around GAI to foster transparency, collaboration, and accountability in research and development.

The framework defines and measures the capabilities and behavior of AI systems along two axes: performance and generality. Systems are categorized from "Level 0, no GAI" to "Level 5, superhuman," with each level associated with a set of measures and benchmarks, the risks it introduces, and the resulting changes in the human-AI interaction paradigm.
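As an illustrative sketch, the two-axis rating (performance level crossed with generality) can be written as a small data structure. The intermediate level names and percentile descriptions below follow DeepMind's paper (the article quotes only the endpoints, "no GAI" and "superhuman"), and the example system placements at the end are illustrative assumptions, not official ratings:

```python
from dataclasses import dataclass
from enum import IntEnum

class PerformanceLevel(IntEnum):
    # Level names follow DeepMind's taxonomy; the article quotes only
    # the endpoints ("Level 0, no GAI" and "Level 5, superhuman").
    NO_AI = 0       # not an AI system at all
    EMERGING = 1    # equal to or somewhat better than an unskilled human
    COMPETENT = 2   # at least 50th percentile of skilled adults
    EXPERT = 3      # at least 90th percentile of skilled adults
    VIRTUOSO = 4    # at least 99th percentile of skilled adults
    SUPERHUMAN = 5  # outperforms all humans

@dataclass(frozen=True)
class SystemRating:
    """Performance and generality are rated separately, per the framework."""
    performance: PerformanceLevel
    general: bool  # True = general-purpose, False = narrow/specialized

# Hypothetical placements on the grid (illustrative, not from the article):
chess_engine = SystemRating(PerformanceLevel.SUPERHUMAN, general=False)
frontier_llm = SystemRating(PerformanceLevel.EMERGING, general=True)
```

Rating the two axes independently is what lets the framework say a chess engine is superhuman yet narrow, while a general-purpose model may rank lower on performance.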

Existing Definitions of GAI

To develop their framework, the authors analyze existing definitions of GAI (and of associated concepts) through nine case studies.

Case Study 1: The Turing Test. A well-known attempt to operationalize a concept similar to GAI, Turing's "imitation game" asks whether machines can think by having a human judge decide whether a text was produced by another human or by a machine. The test is criticized, however, for rewarding the ease of deceiving people rather than truly measuring machine intelligence. Given that modern large language models pass some versions of the Turing Test, the researchers find this criterion insufficient for operationalizing GAI and propose defining GAI in terms of capabilities rather than processes.

Case Study 2: Strong AI, i.e., systems possessing consciousness. In philosopher John Searle's formulation, Strong AI holds that a correctly programmed computer could genuinely possess consciousness. While Strong AI may be one path to GAI, there is no scientific consensus on how to determine whether a machine possesses such attributes, which makes this definition impractical to operationalize.

Case Study 3: Analogies with the human brain. The term "General Artificial Intelligence" dates back to a 1997 article on military technologies. This initial definition centers on processes modeled on the human brain, although modern neural network architectures do not strictly require such processes.

Case Study 4: Human-level performance on cognitive tasks. In 2001, GAI was described as a machine capable of performing the cognitive tasks that humans typically perform. However, ambiguities remain over which tasks, and which humans, this definition targets.

Case Study 5: The ability to learn tasks. Shanahan defines GAI as artificial intelligence capable of learning a wide range of tasks, a framing the authors value for including metacognitive tasks (such as learning itself) among the requirements for GAI.

Case Study 6: Economically valuable work. GAIs are envisioned as highly autonomous systems surpassing humans at most economically valuable tasks. This definition focuses on performance regardless of underlying mechanisms, but it does not capture every criterion that may be part of "general intelligence."

Case Study 7: Flexible and general intelligence. Marcus describes GAI as any flexible and general intelligence comparable to human intelligence, and proposes concrete tasks to operationalize this definition.

Case Study 8: Capable Artificial Intelligence (CAI). This concept emphasizes the ability to perform complex, multi-step tasks in the real world, with a specifically economic focus.

Case Study 9: State-of-the-art large language models as generalists. On this view, advanced LLMs already constitute GAI because they can process and engage with a wide range of topics and tasks from zero-shot or few-shot examples. This definition captures generality but overlooks performance and reliability.

The 6 Principles Necessary for Defining GAI

These nine analyses of existing definitions of GAI (or of concepts associated with GAI) allowed the researchers to identify the properties that contribute to a clear and operational definition. For them, a useful ontology for GAI should, among other things:

1. Focus on capabilities rather than mechanisms;
2. Evaluate generality and performance separately;
3. Focus on cognitive and metacognitive tasks (rather than physical tasks);
4. Concentrate on potential rather than implementation;
5. Focus on ecological validity for benchmark tasks;
6. Define stages along the path to GAI rather than focusing on the endpoint.

They then discuss challenging requirements for future benchmarks that would quantify the behavior and capabilities of GAI models relative to these levels, and how these GAI levels interact with deployment considerations such as autonomy and risk. Finally, they emphasize the importance of carefully choosing human-AI interaction paradigms for the responsible and safe deployment of highly capable AI systems.
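One ingredient of such benchmarks, scoring a system against the performance levels, can be sketched as a simple mapping. A minimal sketch, assuming (per DeepMind's taxonomy) that performance is measured as the percentile of skilled adults a system outperforms; the function name is hypothetical and the thresholds should be checked against the paper:

```python
def performance_level(percentile: float) -> int:
    """Map a benchmark score -- the percentile of skilled adults the
    system outperforms -- to a performance level from 1 to 5.
    Level 0 ("no GAI") is reserved for systems that are not AI at all,
    so it is not reachable from a score. Thresholds follow DeepMind's
    taxonomy: >=50th percentile is "competent", >=90th "expert",
    >=99th "virtuoso", and 100th "superhuman"."""
    if not 0.0 <= percentile <= 100.0:
        raise ValueError("percentile must be in [0, 100]")
    for cutoff, level in [(100.0, 5), (99.0, 4), (90.0, 3), (50.0, 2)]:
        if percentile >= cutoff:
            return level
    return 1  # "emerging": comparable to or somewhat above an unskilled human
```

A real benchmark would of course aggregate many ecologically valid tasks before producing a single percentile; this sketch only illustrates the final thresholding step.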
