We’ve been chatting with portfolio companies a good deal about their approach to using generative AI, especially large language models (LLMs), internally. 

The responses have been quite divergent. 

On one end, some companies have gone all in: ensuring their entire staff has access to the latest models, revamping their coding practices, and even starting over on their engineering take-home assessments for hiring. On the other end, we’ve spoken to founders who are paying little to no attention to the entire space, or who are actively dismissive of its benefits.

Most founders are somewhere in the middle: paying attention, but still figuring out how much time they should spend on the topic or where to focus. As for us, it turns out we’re pretty engaged at the firm and believe this technology will transform a large number of products and workflows (yes, a bunch of VCs excited about AI. Shocking, I admit).

So I thought I’d articulate some preliminary thoughts I’ve been having about where and how founders should be looking to apply generative AI.

Starting Points

To start, I think it’s useful to articulate a few ways in which startups relate to AI. I’d encourage you to be deeply skeptical of anyone who presents a random taxonomy they made up as reflective of objective reality, but here are a few buckets to at least kick off the decision tree:

  1. Startups that are building some technology core to the creation or delivery of generative AI. Think labeling data (hi Datasaur!), training LLMs, hosting tooling, or the models themselves (hey Paperspace!).
  2. Startups that are building their product around a 3rd party foundation model. Think of a word processor built from the ground up around an LLM, or a new type of image editor with a diffusion model at its core (hello Playground.ai!).
  3. Startups that are building software with mostly non-AI technology but have found some way to leverage a 3rd party generative AI model somewhere in their product.
  4. Startups that don’t use generative AI in their product directly, but use generative AI-based tooling internally.
  5. Startups not currently engaged at all.

The Easy Advice

Ok, so groups 1 and 2 are pretty simple. You don’t need my help here. Get back to work.

Folks in group 3: It sounds like you’re ahead of or at least appropriately positioned on the curve. Good for you.

Honestly, for those in group 5, my take is fairly simple. The future is coming at you fast; this shit is incredibly useful; you should start paying attention soon. You should probably at least give your team the option to have access to cutting-edge LLMs. Be careful about data security, but otherwise, at least get educated. If you’re not trying out LLMs to see where they can be useful, you won’t build fluency, and you’ll fall increasingly behind as things get crazy.

The More Complex Advice

The harder question is, if you’re in group 4, should you be evaluating a move into group 3? That is, if you don’t use generative AI in your product directly, should you find ways to leverage a 3rd party generative AI model somewhere in your product?

Here are two tests that come immediately to mind:

Test 1: Does your product create or allow users to create documents? This could be any format, such as text, image, audio, code, web pages, PDFs, songs, or spreadsheets. A key question to ask is: is there a place in your product where users can write extensive amounts of text, in either a natural or a programming language?

If the answer is yes, and you’re not yet using AI models to either produce content automatically for your users or to assist in the content creation process, then you should seriously consider researching this technology. If you don’t, you run the risk of being outdone by competitors who use these tools to make their users’ lives easier. 

Test 2: Does your product ingest or make sense of a lot of text/images/video/audio?

If the answer is yes, again, you’ve got some research to do. LLMs are proving to be extremely proficient in this area. This is in large part why a lot of machine learning was developed in the first place, e.g., applications of computer vision (hey Standard!) or natural language processing. 

So perhaps it’s not a huge surprise that bigger, better general models are creating shortcuts for the same functionality without requiring extensive in-house machine learning expertise.

Text ingestion via LLMs is probably the low-hanging fruit here; other media will take more work to make useful, or will depend on further advances.
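To make the "text ingestion" idea concrete, here is a minimal sketch of the kind of plumbing involved: before handing a long document to an LLM for summarization or extraction, you typically split it into chunks that fit the model's context window. The `chunk_text` helper below is hypothetical (the chunk sizes, overlap, and paragraph-boundary heuristic are illustrative assumptions, not any particular vendor's API):

```python
# Sketch: split a long document into overlapping chunks sized to fit
# an LLM context window. Sizes here are character-based approximations;
# a real pipeline would count tokens with the model's own tokenizer.

def chunk_text(text: str, max_chars: int = 8000, overlap: int = 500) -> list[str]:
    """Split text into overlapping chunks, preferring to break at
    paragraph boundaries so each chunk stays coherent on its own."""
    if len(text) <= max_chars:
        return [text]
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        if end < len(text):
            # Back up to the last paragraph break inside the window, if any.
            break_at = text.rfind("\n\n", start, end)
            if break_at > start:
                end = break_at
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Overlap chunks slightly so context isn't lost at the seams.
        start = max(end - overlap, start + 1)
    return chunks
```

Each chunk would then be sent to the model with a prompt like "summarize this section," and the per-chunk results combined in a final pass. The overlap is a common trick to avoid cutting a sentence's context in half at a chunk boundary.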

The Part About Coding

Stepping back for a second, one thing I’d say is that if you’re building software, you need to take generative AI tooling for building software very seriously.

There is a version of this where things start moving extremely fast. The current generation of LLMs can do crazy things in terms of writing code on their own, and code completion engines like Copilot are already delivering meaningful productivity gains.

And this is before we even get to agents. They’re early, they do wacky crap, and they need a bunch of tooling scaffolded around them. Still, it seems very likely that we’ll routinely be assigning tickets to AI agents and getting back pull requests before we know it.

So, if you have any general angst about your knowledge workers losing their edge without embracing generative AI, it should feel doubly sickening with regard to your devs.

Alright, So How Urgent Is This?

It’s a strange time because a) sooner is better, but also b) things are going to change a lot, and thus your strategy is going to need to shift. Given the pace of change in the underlying models and related technology, some of this may feel like building on sand.

But as with mastery of all technology, much of the art is developing correct intuitions so you can accelerate learning, keep current, and move faster.

If you assume generative AI is a transformative technology, then even if you expect it to change a lot, the sooner you dive in, the sooner your knowledge and experience compound, and the better off you are long term.