Beverly Hills Examiner

    Technology

    AI training data has a price tag that only Big Tech can afford

June 1, 2024


    Data is at the heart of today’s advanced AI systems, but it’s costing more and more — making it out of reach for all but the wealthiest tech companies.

    Last year, James Betker, a researcher at OpenAI, penned a post on his personal blog about the nature of generative AI models and the datasets on which they’re trained. In it, Betker claimed that training data — not a model’s design, architecture or any other characteristic — was the key to increasingly sophisticated, capable AI systems.

    “Trained on the same data set for long enough, pretty much every model converges to the same point,” Betker wrote.

    Is Betker right? Is training data the biggest determiner of what a model can do, whether that's answering a question, drawing human hands or generating a realistic cityscape?

    It’s certainly plausible.

    Statistical machines

    Generative AI systems are basically probabilistic models — a huge pile of statistics. Based on vast numbers of examples, they guess which data makes the most “sense” to place where (e.g., the word “go” before “to the market” in the sentence “I go to the market”). It seems intuitive, then, that the more examples a model has to go on, the better its performance.
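That next-word-guessing intuition can be sketched with a toy bigram counter — a deliberately tiny stand-in for a real language model, using a made-up three-sentence corpus — where each word's most likely successor is simply the one seen most often in the training examples:

```python
from collections import Counter, defaultdict

# Toy corpus: the "model" only knows these examples.
corpus = [
    "i go to the market",
    "i go to the park",
    "we go to the market",
]

# Count bigrams: for each word, tally which word follows it.
follow = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follow[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most common continuation, if any."""
    counts = follow[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("go"))   # "to" — the only continuation ever seen
print(most_likely_next("the"))  # "market" — seen twice vs. "park" once
```

More examples sharpen the counts; words the corpus never covers yield nothing at all — which is the data-scale argument in miniature.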

    “It does seem like the performance gains are coming from data,” Kyle Lo, a senior applied research scientist at the Allen Institute for AI (AI2), an AI research nonprofit, told TechCrunch, “at least once you have a stable training setup.”

    Lo gave the example of Meta’s Llama 3, a text-generating model released earlier this year, which outperforms AI2’s own OLMo model despite being architecturally very similar. Llama 3 was trained on significantly more data than OLMo, which Lo believes explains its superiority on many popular AI benchmarks.

    (I’ll point out here that the benchmarks in wide use in the AI industry today aren’t necessarily the best gauge of a model’s performance, but outside of qualitative tests like our own, they’re one of the few measures we have to go on.)

    That’s not to suggest that training on exponentially larger datasets is a sure-fire path to exponentially better models. Models operate on a “garbage in, garbage out” paradigm, Lo notes, and so data curation and quality matter a great deal, perhaps more than sheer quantity.

    “It is possible that a small model with carefully designed data outperforms a large model,” he added. “For example, Falcon 180B, a large model, is ranked 63rd on the LMSYS benchmark, while Llama 2 13B, a much smaller model, is ranked 56th.”

    In an interview with TechCrunch last October, OpenAI researcher Gabriel Goh said that higher-quality annotations contributed enormously to the enhanced image quality in DALL-E 3, OpenAI’s text-to-image model, over its predecessor DALL-E 2. “I think this is the main source of the improvements,” he said. “The text annotations are a lot better than they were [with DALL-E 2] — it’s not even comparable.”

    Many AI models, including DALL-E 3 and DALL-E 2, are trained by having human annotators label data so that a model can learn to associate those labels with other, observed characteristics of that data. For example, a model that’s fed lots of cat pictures with annotations for each breed will eventually “learn” to associate terms like bobtail and shorthair with their distinctive visual traits.
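The label-to-trait association described above can be illustrated with a hypothetical toy dataset (the trait names and records below are invented for illustration, not drawn from any real training set): each annotated example pairs observed characteristics with a human-written label, and "training" here is just tallying which traits co-occur with which label.

```python
from collections import Counter, defaultdict

# Hypothetical annotated dataset: each record pairs an image's
# observed visual traits with a human annotator's breed label.
annotated_cats = [
    ({"short_tail", "striped"}, "bobtail"),
    ({"short_tail", "solid"}, "bobtail"),
    ({"long_tail", "dense_coat"}, "shorthair"),
    ({"long_tail", "striped"}, "shorthair"),
]

# "Training": tally which traits co-occur with each label.
trait_counts = defaultdict(Counter)
for traits, label in annotated_cats:
    for trait in traits:
        trait_counts[label][trait] += 1

# The learned association: the trait most distinctive of "bobtail".
print(trait_counts["bobtail"].most_common(1)[0][0])  # "short_tail"
```

Real models learn such associations as continuous weights rather than counts, but the principle is the same: better, more consistent annotations yield cleaner associations — which is why Goh credits annotation quality for DALL-E 3's gains.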

    Bad behavior

    Experts like Lo worry that the growing emphasis on large, high-quality training datasets will centralize AI development into the few players with billion-dollar budgets that can afford to acquire these sets. Major innovation in synthetic data or fundamental architecture could disrupt the status quo, but neither appears to be on the near horizon.

    “Overall, entities governing content that’s potentially useful for AI development are incentivized to lock up their materials,” Lo said. “And as access to data closes up, we’re basically blessing a few early movers on data acquisition and pulling up the ladder so nobody else can get access to data to catch up.”

    Indeed, where the race to scoop up more training data hasn’t led to unethical (and perhaps even illegal) behavior like secretly aggregating copyrighted content, it has rewarded tech giants with deep pockets to spend on data licensing.

    Generative AI models such as OpenAI’s are trained mostly on images, text, audio, videos and other data — some copyrighted — sourced from public web pages (including, problematically, AI-generated ones). The OpenAIs of the world assert that fair use shields them from legal reprisal. Many rights holders disagree — but, at least for now, they can’t do much to prevent this practice.

    There are many, many examples of generative AI vendors acquiring massive datasets through questionable means in order to train their models. OpenAI reportedly transcribed more than a million hours of YouTube videos without YouTube’s blessing — or the blessing of creators — to feed to its flagship model GPT-4. Google recently broadened its terms of service in part to be able to tap public Google Docs, restaurant reviews on Google Maps and other online material for its AI products. And Meta is said to have considered risking lawsuits to train its models on IP-protected content.

    Meanwhile, companies large and small are relying on workers in third-world countries paid only a few dollars per hour to create annotations for training sets. Some of these annotators — employed by mammoth startups like Scale AI — work literal days on end to complete tasks that expose them to graphic depictions of violence and bloodshed without any benefits or guarantees of future gigs.

    Growing cost

    In other words, even the more aboveboard data deals aren’t exactly fostering an open and equitable generative AI ecosystem.

    OpenAI has spent hundreds of millions of dollars licensing content from news publishers, stock media libraries and more to train its AI models — a budget far beyond that of most academic research groups, nonprofits and startups. Meta has gone so far as to weigh acquiring the publisher Simon & Schuster for the rights to e-book excerpts (ultimately, Simon & Schuster sold to private equity firm KKR for $1.62 billion in 2023).

    With the market for AI training data expected to grow from roughly $2.5 billion now to close to $30 billion within a decade, data brokers and platforms are rushing to charge top dollar — in some cases over the objections of their user bases.
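For scale, the cited projection — roughly $2.5 billion today to close to $30 billion within a decade — implies a compound annual growth rate of about 28%, as a quick back-of-the-envelope check shows:

```python
# Implied compound annual growth rate (CAGR) of the cited projection:
# market grows from ~$2.5B to ~$30B over ten years.
start, end, years = 2.5, 30.0, 10
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints "28.2%"
```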

    Stock media library Shutterstock has inked deals with AI vendors ranging from $25 million to $50 million, while Reddit claims to have made hundreds of millions from licensing data to orgs such as Google and OpenAI. Few platforms with abundant data accumulated organically over the years haven’t signed agreements with generative AI developers, it seems — from Photobucket to Tumblr to Q&A site Stack Overflow.

    It’s the platforms’ data to sell — at least depending on which legal arguments you believe. But in most cases, users aren’t seeing a dime of the profits. And it’s harming the wider AI research community.

    “Smaller players won’t be able to afford these data licenses, and therefore won’t be able to develop or study AI models,” Lo said. “I worry this could lead to a lack of independent scrutiny of AI development practices.”

    Independent efforts

    If there’s a ray of sunshine through the gloom, it’s the few independent, not-for-profit efforts to create massive datasets anyone can use to train a generative AI model.

    EleutherAI, a grassroots nonprofit research group that began as a loose-knit Discord collective in 2020, is working with the University of Toronto, AI2 and independent researchers to create The Pile v2, a set of billions of text passages primarily sourced from the public domain.

    In April, AI startup Hugging Face released FineWeb, a filtered version of Common Crawl — the dataset of billions upon billions of web pages maintained by the nonprofit of the same name — that Hugging Face claims improves model performance on many benchmarks.

    A few efforts to release open training datasets, like the group LAION’s image sets, have run up against copyright, data privacy and other, equally serious ethical and legal challenges. But some of the more dedicated data curators have pledged to do better. The Pile v2, for example, removes problematic copyrighted material found in its progenitor dataset, The Pile.

    The question is whether any of these open efforts can hope to keep pace with Big Tech. As long as data collection and curation remain a matter of resources, the answer is likely no — at least not until some research breakthrough levels the playing field.



