    Tests that AIs Often Fail and Humans Ace Could Pave the Way for Artificial General Intelligence

By Admin | July 19, 2025


    There are many ways to test the intelligence of an artificial intelligence—conversational fluidity, reading comprehension or mind-bendingly difficult physics. But some of the tests that are most likely to stump AIs are ones that humans find relatively easy, even entertaining. Though AIs increasingly excel at tasks that require high levels of human expertise, this does not mean that they are close to attaining artificial general intelligence, or AGI. AGI requires that an AI can take a very small amount of information and use it to generalize and adapt to highly novel situations. This ability, which is the basis for human learning, remains challenging for AIs.

One test designed to evaluate an AI’s ability to generalize is the Abstraction and Reasoning Corpus, or ARC: a collection of tiny, colored-grid puzzles that ask a solver to deduce a hidden rule and then apply it to a new grid. Developed by AI researcher François Chollet in 2019, it became the basis of the ARC Prize Foundation, a nonprofit program that administers the test—now an industry benchmark used to evaluate all major AI models. The organization also develops new tests and has been routinely using two of them (ARC-AGI-1 and its more challenging successor, ARC-AGI-2). This week the foundation is launching ARC-AGI-3, which is specifically designed for testing AI agents—and is based on making them play video games.
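To make the puzzle format concrete, here is a minimal sketch of an ARC-style task in Python. The grids, the dictionary layout and the "mirror each row" rule are invented for illustration rather than taken from the actual corpus; real tasks are distributed in a similar train/test structure, with the hidden rule left for the solver to infer from the example pairs.

```python
# Minimal sketch of an ARC-style task: each grid is a small 2-D array of
# color indices. The "train" pairs demonstrate a hidden rule; the solver
# must infer it and apply it to the "test" input. This toy task and its
# rule ("mirror each row") are invented for illustration only.

task = {
    "train": [
        {"input": [[1, 0], [2, 3]], "output": [[0, 1], [3, 2]]},
        {"input": [[5, 5, 0], [0, 4, 4]], "output": [[0, 5, 5], [4, 4, 0]]},
    ],
    "test": [
        {"input": [[7, 0, 0], [0, 0, 7]]},
    ],
}

def mirror_horizontally(grid):
    """The hidden rule for this toy task: reverse each row."""
    return [list(reversed(row)) for row in grid]

# A solver is judged only on whether its predicted output grid matches exactly.
for pair in task["train"]:
    assert mirror_horizontally(pair["input"]) == pair["output"]

prediction = mirror_horizontally(task["test"][0]["input"])
print(prediction)  # [[0, 0, 7], [7, 0, 0]]
```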

    Scientific American spoke to ARC Prize Foundation president, AI researcher and entrepreneur Greg Kamradt to understand how these tests evaluate AIs, what they tell us about the potential for AGI and why they are often challenging for deep-learning models even though many humans tend to find them relatively easy. Links to try the tests are at the end of the article.




    [An edited transcript of the interview follows.]

    What definition of intelligence is measured by ARC-AGI-1?

Our definition of intelligence is your ability to learn new things. We already know that AI can win at chess. We know they can win at Go. But those models cannot generalize to new domains; they can’t go and learn English. So what François Chollet made was a benchmark called ARC-AGI—it teaches you a mini skill in the question, and then it asks you to demonstrate that mini skill. We’re basically teaching something and asking you to repeat the skill that you just learned. So the test measures a model’s ability to learn within a narrow domain. But our claim is that it does not measure AGI because it’s still in a scoped domain [in which learning applies to only a limited area]. It measures that an AI can generalize, but we do not claim this is AGI.

    How are you defining AGI here?

    There are two ways I look at it. The first is more tech-forward, which is ‘Can an artificial system match the learning efficiency of a human?’ Now what I mean by that is after humans are born, they learn a lot outside their training data. In fact, they don’t really have training data, other than a few evolutionary priors. So we learn how to speak English, we learn how to drive a car, and we learn how to ride a bike—all these things outside our training data. That’s called generalization. When you can do things outside of what you’ve been trained on now, we define that as intelligence. Now, an alternative definition of AGI that we use is when we can no longer come up with problems that humans can do and AI cannot—that’s when we have AGI. That’s an observational definition. The flip side is also true, which is as long as the ARC Prize or humanity in general can still find problems that humans can do but AI cannot, then we do not have AGI. One of the key factors about François Chollet’s benchmark… is that we test humans on them, and the average human can do these tasks and these problems, but AI still has a really hard time with it. The reason that’s so interesting is that some advanced AIs, such as Grok, can pass any graduate-level exam or do all these crazy things, but that’s spiky intelligence. It still doesn’t have the generalization power of a human. And that’s what this benchmark shows.

    How do your benchmarks differ from those used by other organizations?

One of the things that differentiates us is that we require that our benchmark be solvable by humans. That’s in opposition to other benchmarks, where they do “Ph.D.-plus-plus” problems. I don’t need to be told that AI is smarter than me—I already know that OpenAI’s o3 can do a lot of things better than me, but it doesn’t have a human’s power to generalize. That’s what we measure on, so we need to test humans. We actually tested 400 people on ARC-AGI-2. We got them in a room, we gave them computers, we did demographic screening, and then gave them the test. The average person scored 66 percent on ARC-AGI-2. Collectively, though, the aggregated responses of five to 10 people will contain the correct answers to all the questions on ARC-AGI-2.
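A quick back-of-envelope calculation shows why pooling a handful of test takers can plausibly cover the full set, under the strong (and admittedly unrealistic) assumption that solvers are independent and every task is equally hard; none of these numbers come from the ARC Prize study itself.

```python
# Rough, illustrative arithmetic only: assumes independent solvers and
# uniformly difficult tasks, which is not how the real ARC-AGI-2 data behaves.
p_solve = 0.66                      # average per-person solve rate reported above
for k in (5, 10):
    p_unsolved_by_all = (1 - p_solve) ** k
    print(f"{k} people: ~{p_unsolved_by_all:.3%} chance a given task is missed by everyone")
```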

    What makes this test hard for AI and relatively easy for humans?

    There are two things. Humans are incredibly sample-efficient with their learning, meaning they can look at a problem and with maybe one or two examples, they can pick up the mini skill or transformation and they can go and do it. The algorithm that’s running in a human’s head is orders of magnitude better and more efficient than what we’re seeing with AI right now.

    What is the difference between ARC-AGI-1 and ARC-AGI-2?

So ARC-AGI-1, François Chollet made that himself. It was about 1,000 tasks. That was in 2019. He basically did the minimum viable version in order to measure generalization, and it held for five years because deep learning couldn’t touch it at all. It wasn’t even getting close. Then the reasoning models that OpenAI released in 2024 started making progress on it, which showed a step-level change in what AI could do. Then, when we went to ARC-AGI-2, we went a little bit further down the rabbit hole in regard to what humans can do and AI cannot. It requires a little bit more planning for each task. So instead of a task being solvable within five seconds, it may take a human a minute or two. There are more complicated rules, and the grids are larger, so you have to be more precise with your answer, but it’s the same concept, more or less…. We are now launching a developer preview for ARC-AGI-3, and that’s completely departing from this format. The new format will actually be interactive. So think of it more as an agent benchmark.

    How will ARC-AGI-3 test agents differently compared with previous tests?

    If you think about everyday life, it’s rare that we have a stateless decision. When I say stateless, I mean just a question and an answer. Right now all benchmarks are more or less stateless benchmarks. If you ask a language model a question, it gives you a single answer. There’s a lot that you cannot test with a stateless benchmark. You cannot test planning. You cannot test exploration. You cannot test intuiting about your environment or the goals that come with that. So we’re making 100 novel video games that we will use to test humans to make sure that humans can do them because that’s the basis for our benchmark. And then we’re going to drop AIs into these video games and see if they can understand this environment that they’ve never seen beforehand. To date, with our internal testing, we haven’t had a single AI be able to beat even one level of one of the games.
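As a rough illustration of that distinction, the Python sketch below contrasts a stateless question-and-answer evaluation with a stateful, interactive one. Every class and method here is an invented stand-in; it is not the ARC Prize API or any real benchmark harness.

```python
# Toy contrast between "stateless" and "interactive" evaluation.
# All names below are hypothetical stand-ins, not a real benchmark API.

class StatelessModel:
    def answer(self, question: str) -> str:
        # One question in, one answer out; nothing carries over between calls.
        return "42"

class ToyEnvironment:
    """A stateful environment: success depends on a whole sequence of actions."""
    def __init__(self, target: int = 3):
        self.position = 0
        self.target = target

    def reset(self) -> int:
        self.position = 0
        return self.position                      # initial observation

    def step(self, action: int):
        self.position += action                   # state persists between calls
        done = self.position == self.target
        return self.position, done                # new observation, finished?

# Stateless benchmark: grade isolated answers, one at a time.
model = StatelessModel()
print(model.answer("What is 6 * 7?") == "42")     # True

# Interactive benchmark: the agent must observe, explore and plan over many steps.
env = ToyEnvironment()
observation = env.reset()
done = False
for _ in range(10):
    observation, done = env.step(+1)              # a trivial hand-coded "policy"
    if done:
        break
print("reached goal:", done)
```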

    Can you describe the video games here?

    Each “environment,” or video game, is a two-dimensional, pixel-based puzzle. These games are structured as distinct levels, each designed to teach a specific mini skill to the player (human or AI). To successfully complete a level, the player must demonstrate mastery of that skill by executing planned sequences of actions.
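A minimal sketch of that level structure, with invented names and rules (nothing here is taken from the actual ARC-AGI-3 games): each level only counts as complete when a planned sequence of allowed actions reaches its goal.

```python
# Hypothetical sketch of a level-based game: each level teaches one "mini skill"
# and is completed only by executing a planned sequence of allowed actions.
# The levels, actions and rules below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Level:
    name: str
    start: int
    goal: int
    allowed_actions: tuple          # the mini skill this level teaches

    def play(self, plan) -> bool:
        """Return True if the planned action sequence reaches the goal."""
        position = self.start
        for action in plan:
            if action not in self.allowed_actions:
                return False        # the player hasn't mastered this level's skill
            position += action
        return position == self.goal

game = [
    Level("learn to move right", start=0, goal=3, allowed_actions=(+1,)),
    Level("learn to move both ways", start=0, goal=-2, allowed_actions=(+1, -1)),
]

# A player (human or AI agent) must master each level's skill in turn.
plans = [[+1, +1, +1], [-1, -1]]
for level, plan in zip(game, plans):
    print(level.name, "->", "completed" if level.play(plan) else "failed")
```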

    How is using video games to test for AGI different from the ways that video games have previously been used to test AI systems?

    Video games have long been used as benchmarks in AI research, with Atari games being a popular example. But traditional video game benchmarks face several limitations. Popular games have extensive training data publicly available, lack standardized performance evaluation metrics and permit brute-force methods involving billions of simulations. Additionally, the developers building AI agents typically have prior knowledge of these games—unintentionally embedding their own insights into the solutions.

    Try ARC-AGI-1, ARC-AGI-2 and ARC-AGI-3.


