    Technology

    6 VCs explain how startups can capture and defend marketshare in the AI era

    By Admin | October 13, 2023


    You cannot escape conversations about AI no matter how far or fast you run. Hyperbole abounds around what current AI tech can do (revolutionize every industry!) and what future AI tech will be able to do (take over the world!). Closer to the ground, TechCrunch+ is working to understand where startups might find footholds in the market by leveraging large language models (LLMs), a recent and impactful new method of creating artificially intelligent software.

    How AI will play in Startup Land is not a new topic of conversation. A few years back, one venture firm asked how AI-focused startups would monetize and whether they would suffer from impaired margins due to the costs of running models on behalf of customers. That conversation died down, only to come roaring back in recent quarters as it became clear that while LLM technology is quickly advancing, it’s hardly cheap to run in its present form.

    But costs are only one area where we have unanswered questions. We are also incredibly curious about how startups should approach building tools for AI technologies, how defensible startup-focused AI work will prove, and how upstart tech companies should charge for AI-powered tooling.

    With the amount of capital flowing to startups working with and building AI today, it’s critical that we understand the market as best we can. So we asked a number of venture capitalists who are active in the AI investing space to walk us through what they are seeing in the market today.

    What we learned from the investing side of the house was useful. Rick Grinnell, founder and managing partner at Glasswing Ventures, said that within the new AI tech stack, “most of the opportunity lies in the application layer,” where “the best applications will harness their in-house expertise to build specialized middle-layer tooling and blend them with the appropriate foundational models.” Startups, he added, can use speed to their advantage as they work to “innovate, iterate and deploy solutions” to customers.

    Will that work prove defensible in the long run? Edward Tsai, a managing partner at Alumni Ventures, told us that he had a potentially “controversial opinion that VCs and startups may want to temporarily reduce their focus on defensibility and increase their focus on products that deliver compelling value and focusing on speed to market.” Presuming massive TAM, that could work!

    Read on for answers to all our questions from:

    • Rick Grinnell, founder and managing partner, Glasswing Ventures
    • Lisa Calhoun, a founding managing partner, Valor VC
    • Edward Tsai, a managing partner, Alumni Ventures
    • Wei Lien Dang, a general partner, Unusual Ventures
    • Rak Garg, principal, Bain Capital Ventures
    • Sandeep Bakshi, head of Europe investments, Prosus Ventures

    Rick Grinnell, founder and managing partner, Glasswing Ventures

    There are several layers to the emerging LLM stack, including models, pre-training solutions and fine-tuning tools. Do you expect startups to build striated solutions for individual layers of the LLM stack, or pursue a more vertical approach?

    In our proprietary view of the GenAI tech stack, we categorize the landscape into four distinct layers: foundation model providers, middle-tier companies, end-market or top-layer applications, and full stack or end-to-end vertical companies.

    We think that most of the opportunity lies in the application layer, and within that layer, we believe that in the near future, the best applications will harness their in-house expertise to build specialized middle-layer tooling and blend them with the appropriate foundational models. These are “vertically integrated” or “full-stack” applications. For startups, this approach means a shorter time-to-market. Without negotiating or integrating with external entities, startups can innovate, iterate and deploy solutions at an accelerated pace. This speed and agility can often be the differentiating factor in capturing market share or meeting a critical market need before competitors.

    On the other hand, we view the middle layer as a conduit, connecting the foundational aspects of AI with the refined specialized application layer. This part of the stack includes cutting-edge capabilities, encompassing model fine-tuning, prompt engineering and agile model orchestration. It’s here that we anticipate the rise of entities akin to Databricks. Yet, the competitive dynamics of this layer present a unique challenge. Primarily, the emergence of foundation model providers expanding into middle-layer tools heightens commoditization risks. Additionally, established market leaders venturing into this space further intensify the competition. Consequently, despite a surge in startups within this domain, clear winners still need to be discovered.
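
    To make the “model orchestration” idea concrete, here is a minimal, illustrative sketch of the kind of routing a middle-layer tool might perform. The model names, prices, and quality scores below are hypothetical placeholders, not any vendor’s real catalogue or API.

        from dataclasses import dataclass

        # Hypothetical catalogue of foundation models; names, prices and quality
        # scores are illustrative placeholders, not real vendor figures.
        @dataclass
        class ModelOption:
            name: str
            cost_per_1k_tokens: float  # USD per 1,000 tokens (assumed)
            max_context: int           # context window in tokens (assumed)
            quality_score: float       # 0-1, from internal evals (assumed)

        CATALOGUE = [
            ModelOption("small-fast-model", 0.0005, 8_000, 0.70),
            ModelOption("large-accurate-model", 0.03, 32_000, 0.95),
        ]

        def route(prompt_tokens: int, needs_high_quality: bool) -> ModelOption:
            """Pick the cheapest model that fits the context window and quality bar."""
            candidates = [
                m for m in CATALOGUE
                if m.max_context >= prompt_tokens
                and (not needs_high_quality or m.quality_score >= 0.9)
            ]
            if not candidates:
                raise ValueError("no model satisfies the request constraints")
            return min(candidates, key=lambda m: m.cost_per_1k_tokens)

        print(route(2_000, needs_high_quality=False).name)  # -> small-fast-model
        print(route(2_000, needs_high_quality=True).name)   # -> large-accurate-model

    In practice the routing rules would also weigh latency, data residency, and fine-tuned variants, but the basic shape is the same: a thin layer that sits between the application and several foundation models.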

    Companies like Datadog are building products to support the expanding AI market, including releasing an LLM Observability tool. Will efforts like what Datadog has built (and similar output from large/incumbent tech powers) curtail the market area where startups can build and compete?

    LLM observability falls within the “middle layer” category, acting as a catalyst for specialized business applications to use foundational models. Incumbents like Datadog, New Relic and Splunk have all produced LLM observability tools and do appear to be putting a lot of R&D dollars behind this, which may curtail the market area in the short term.

    However, as we have seen before with the inceptions of the internet and cloud computing, incumbents tend to innovate until innovation becomes stagnant. With AI becoming a household name that finds use cases in every vertical, startups have the chance to come in with innovative solutions that disrupt and reimagine the work of incumbents. It’s still too early to say with certainty who the winners will be, as every day reveals new gaps in existing AI frameworks. Therein lie major opportunities for startups.

    How much room in the market do the largest tech companies’ services leave for smaller companies and startups building tooling for LLM deployment?

    When considering the landscape of foundational layer model providers like Alphabet/Google’s Bard, Microsoft/OpenAI’s GPT-4, and Anthropic’s Claude, it’s evident that the more significant players possess inherent advantages regarding data accessibility, talent pool and computational resources. We expect this layer to settle into an oligopolistic structure like the cloud provider market, albeit with the addition of a strong open-source contingent that will drive considerable third-party adoption.

    As we look at the generative AI tech stack, the largest market opportunity lies above the model itself. Companies that introduce AI-powered APIs and operational layers for specific industries will create brand-new use cases and transform workflows. By embracing this technology to revolutionize workflows, these companies stand to unlock substantial value.

    However, it’s essential to recognize that the market is still far from being crystallized. LLMs are still in their infancy, with adoption at large corporations and startups lacking full maturity and refinement. We need robust tools and platforms to enable broader utilization among businesses and individuals. Startups have the opportunity here to act quickly, find novel solutions to emerging problems, and define new categories.

    Interestingly, even large tech companies recognize the gaps in their services and have begun investing heavily in startups alongside VCs. These companies apply AI to their internal processes and thus see the value startups bring to LLM deployment and integration. Consider the recent investments from Microsoft, NVIDIA, and Salesforce into companies like Inflection AI and Cohere.

    What can be done to ensure industry-specific startups that tune generative AI models for a specific niche will prove defensible?

    To ensure industry-specific startups will prove defensible in the rising climate of AI integration, startups must prioritize collecting proprietary data, integrating a sophisticated application layer and assuring output accuracy.

    We have established a framework to assess the defensibility of application layers of AI companies. Firstly, the application must address a real enterprise pain point prioritized by executives. Secondly, to provide tangible benefits and long-term differentiation, the application should be composed of cutting-edge models that fit the specific and unique needs of the software. It’s not enough to simply plug into OpenAI; rather, applications should choose their models intentionally while balancing cost, compute, and performance.

    Thirdly, the application is only as sophisticated as the data that it is fed. Proprietary data is necessary for specific and relevant insights and to ensure others cannot replicate the final product. To this end, in-house middle-layer capabilities provide a competitive edge while harnessing the power of foundational models. Finally, due to the inevitable margin of error of generative AI, the niche market must tolerate imprecision, which is inherently found in subjective and ambiguous content, like sales or marketing.

    How much technical competence can startups presume that their future enterprise AI customers will have in-house, and how much does that presumed expertise guide startup product selection and go-to-market motion?

    Within the enterprise sector, there’s a clear recognition of the value of AI. However, many lack the internal capabilities to develop AI solutions. This gap presents a significant opportunity for startups specializing in AI to engage with enterprise clients. As the business landscape matures, proficiency in leveraging AI is becoming a strategic imperative.

    McKinsey reports that generative AI alone can add up to $4.4 trillion in value across industries through writing code, analyzing consumer trends, personalizing customer service, improving operating efficiencies, and more. 94% of business leaders agree AI will be critical to all businesses’ success over the next five years, and total global spending on AI is expected to reach $154 billion by the end of this year, a 27% increase from 2022. The next three years are also expected to see a compound annual growth rate of 27%, which would put annual AI spending in 2026 at over $300 billion. Cloud computing remains critical, but AI budgets are now more than double cloud budgets. 82% of business leaders believe the integration of AI solutions will increase their employees’ performance and job satisfaction, and startups should expect a high level of desire for and experience with AI solutions in their future customers.
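
    Those spending figures are internally consistent: compounding roughly $154 billion at a 27% annual rate for three years lands just above $300 billion. A quick arithmetic check, using only the numbers cited above:

        # Sanity check on the cited projection: ~$154B of AI spend this year,
        # compounding at a 27% CAGR for three years to 2026.
        spend_2023_billion = 154
        cagr = 0.27

        spend_2026_billion = spend_2023_billion * (1 + cagr) ** 3
        print(f"Projected 2026 AI spend: ~${spend_2026_billion:.0f}B")  # ~$315B, i.e. over $300B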

    Finally, we’ve seen growth slow in recent quarters for tech products with consumption- or usage-based pricing. Will that fact lead startups building modern AI tools to pursue more traditional SaaS pricing? (OpenAI’s pricing schema, based on tokens and usage, led us to this question.)

    The trajectory of usage-based pricing has organically aligned with the needs of large language models, given that there is significant variation in prompt/output sizes and resource utilization per user. OpenAI itself racks up upwards of $700,000 per day in compute costs, so to achieve profitability, these operating costs need to be allocated effectively.

    Nevertheless, we’ve seen the sentiment that tying all costs to volume is generally unpopular with end users, who prefer predictable systems that allow them to budget more effectively. Furthermore, it’s important to note that many applications of AI don’t rely on LLMs as a backbone and can provide conventional periodic SaaS pricing. Without direct token calls to the model provider, companies engaged in establishing infrastructural or value-added layers for AI are likely to gravitate toward such pricing strategies.

    The technology is still nascent, and many companies will likely find success with both kinds of pricing models. Another possibility as LLM adoption becomes widespread is the adoption of hybrid structures, with tiered periodic payments and usage limits for SMBs and uncapped usage-based tiers tailored to larger enterprises. However, as long as large language technology remains heavily dependent on the inflow of data, usage-based pricing is unlikely to go away completely. The interdependence between data flow and cost structure will maintain the relevance of usage-based pricing for the foreseeable future.
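
    As a rough illustration of the hybrid structure described above, a flat periodic fee with an included usage allowance plus metered overage, here is a minimal sketch. All prices, allowances, and rates are hypothetical.

        # Illustrative hybrid pricing: a flat monthly tier bundles an included
        # token allowance (SMB-style predictability); usage beyond it is metered
        # (usage-based, uncapped). All numbers are hypothetical.
        def monthly_bill(tokens_used: int,
                         base_fee: float = 499.0,           # flat periodic fee, USD
                         included_tokens: int = 5_000_000,  # allowance bundled into the tier
                         overage_per_1k: float = 0.002) -> float:
            overage_tokens = max(0, tokens_used - included_tokens)
            return base_fee + (overage_tokens / 1_000) * overage_per_1k

        print(monthly_bill(3_000_000))   # 499.0 -> stays within the allowance, SaaS-style bill
        print(monthly_bill(20_000_000))  # 529.0 -> 15M overage tokens metered at $0.002/1k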

    Lisa Calhoun, founding managing partner, Valor VC

    There are several layers to the emerging LLM stack, including models, pre-training solutions, and fine-tuning tools. Do you expect startups to build striated solutions for individual layers of the LLM stack, or pursue a more vertical approach?

    While there are startups specializing in parts of the stack (like Pinecone), Valor’s focus is on applied AI, which we define as AI that is solving a customer problem. Saile.ai is a good example: it uses AI to generate closeable leads for the Fortune 500. Or Funding U, which uses its own trained data set to create a more useful credit risk score. Or Allelica, which applies AI to treatment selection based on individual DNA to find the best medical treatment for you personally in a given situation.

    Companies like Datadog are building products to support the expanding AI market, including releasing an LLM Observability tool. Will efforts like what Datadog has built (and similar output from large/incumbent tech powers) curtail the market area where startups can build and compete?

    Tools like Datadog can only help the acceptance of AI tools if they succeed in monitoring AI performance bottlenecks. That in and of itself is probably still largely unexplored territory that will see a lot of change and maturation in the next few years. One key aspect there might be cost monitoring as well, since companies like OpenAI charge largely ‘by the token’, which is a very different metric from most cloud computing pricing.
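
    Per-token cost monitoring can be quite simple in principle; the sketch below keeps a running tally of estimated spend per application feature. The per-1,000-token prices are assumptions for illustration, not any provider’s actual rates.

        from collections import defaultdict

        # Assumed per-1,000-token prices, for illustration only.
        PRICE_PER_1K = {"prompt": 0.01, "completion": 0.03}

        class TokenCostMonitor:
            """Accumulates token usage and estimated spend per calling feature."""

            def __init__(self) -> None:
                self.spend_by_feature: dict[str, float] = defaultdict(float)

            def record(self, feature: str, prompt_tokens: int, completion_tokens: int) -> float:
                # Estimate the cost of one model call and attribute it to the feature.
                cost = (prompt_tokens / 1_000) * PRICE_PER_1K["prompt"] \
                     + (completion_tokens / 1_000) * PRICE_PER_1K["completion"]
                self.spend_by_feature[feature] += cost
                return cost

        monitor = TokenCostMonitor()
        monitor.record("search-summary", prompt_tokens=1_200, completion_tokens=300)
        monitor.record("draft-email", prompt_tokens=800, completion_tokens=600)
        print(dict(monitor.spend_by_feature))  # running estimated cost per feature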

    What can be done to ensure industry-specific startups that tune generative AI models for a specific niche will prove defensible?


