Beverly Hills Examiner

    Science

    AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype

By Admin · August 12, 2023

    Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.

    Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.

Nevertheless, in May the nonprofit Center for AI Safety released a statement—co-signed by hundreds of industry leaders, including OpenAI’s CEO Sam Altman—warning of “the risk of extinction from AI,” which it asserted was akin to nuclear war and pandemics. Altman had previously alluded to such a risk in a congressional hearing, suggesting that generative AI tools could go “quite wrong.” And in July executives from AI companies met with President Joe Biden and made several toothless voluntary commitments to curtail “the most significant sources of AI risks,” hinting at existential threats over real ones. Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”

The broader public and regulatory agencies must not fall for this science-fiction maneuver. Rather, we should look to scholars and activists who practice peer review and who have pushed back on AI hype in order to understand its detrimental effects here and now.

The term “AI” is ambiguous, which makes clear discussion more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.

    With OpenAI’s release of ChatGPT (and Microsoft’s incorporation of the tool into its Bing search) late last year, text synthesis machines have emerged as the most prominent AI systems. Large language models such as ChatGPT extrude remarkably fluent and coherent-seeming text but have no understanding of what the text means, let alone the ability to reason. (To suggest so is to impute comprehension where there is none, something done purely on faith by AI boosters.) These systems are instead the equivalent of enormous Magic 8 Balls that we can play with by framing the prompts we send them as questions such that we can make sense of their output as answers.

    Unfortunately, that output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem. Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.

    Nevertheless, the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few.

In addition to not really helping those in need, deployment of this technology actually hurts workers. First, the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created them in the first place.

    Second, the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.

    Finally, employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.

    AI-related policy must be science-driven and built on relevant research, but too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity (the property that a test measures what it purports to measure).

    Some recent remarkable examples include a 155-page preprint paper entitled “Sparks of Artificial General Intelligence: Early Experiments with GPT-4” from Microsoft Research—which purports to find “intelligence” in the output of GPT-4, one of OpenAI’s text synthesis machines—and OpenAI’s own technical reports on GPT-4—which claim, among other things, that OpenAI systems have the ability to solve new problems that are not found in their training data.

    No one can test these claims, however, because OpenAI refuses to provide access to, or even a description of, those data. Meanwhile “AI doomers,” who try to focus the world’s attention on the fantasy of all-powerful machines possibly going rogue and destroying all of humanity, cite this junk rather than research on the actual harms companies are perpetrating in the real world in the name of creating AI.

    We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.

    This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.





