Core concepts

Use cases

Learn what kinds of problems imaginary programming is good at and what kinds of problems it's not.


Knowing what to ask is more than half the battle

The capabilities of Large Language Models (LLMs) like GPT are being discovered almost daily. Since imaginary programming is built on top of LLMs, the capabilities of imaginary programming are also currently being explored and are not yet fully understood. But we do know a few things that imaginary programming is pretty good at, and we know a few things that it's not so good at.

In a general sense, our advice is: use imaginary programming when you need a little bit of human-like intelligence about text that doesn't depend on reasoning about facts about the world.

Here are a few use cases that are good matches for imaginary programming:

  • Suggest answers automatically for the user rather than asking them:

    • “Give this Spotify playlist a good name based on its songs.”
    • “Come up with a good emoji icon to use for the name the user typed in.”
  • Classification of text:

    • “Tell me whether this comment is positive or negative about my product.”
    • “On a scale of 1 to 10, tell me how angry each email in my customer inbox is so I can triage who I get back to first.”
  • Entity detection:

    • “Find all the names, addresses, or dates in this email.”
    • “Find the ingredients and their quantities in this recipe.”
  • Data and structure extraction:

    • “Given this essay about Winston Churchill, tell me three interesting facts about him.”
    • “Here’s a math word problem. Extract what the question is actually asking in a structured way.”
  • Natural language translation: “Translate this text to Spanish.”

  • Change the style or emotional content of text: “Make this SMS more supportive and encouraging.”

  • Summarization:

    • “Summarize the chat I just had with the customer into easy bullet points that I can forward to the product team.”
    • “Make this passage readable by a third grader.”
  • Reverse summarization: “Turn these three bullet points I want to convey to the customer into a friendly email.”

  • Conversation: “Come up with a good response in this chat, acting as a helpful agent.”

There’s undoubtedly much more to learn about imaginary programming’s power and capabilities as we get it into the hands of more developers and as the underlying AI models get better.

What imaginary programming is not (yet?) great at

Imaginary programming is a truly magical experience when it works, but we need to be honest about its current capabilities. There are four primary drawbacks to keep in mind as you start integrating Imaginary Programming into your applications:

  1. First and foremost, Imaginary Programming is not great at facts. Large language models like GPT know an enormous amount, but (at least for now) they can easily get confused on facts and deductive reasoning. Any task where you are asking an imaginary function for facts about the world or reasoning based on facts is probably going to be sub-optimal. Similarly, GPT knows nothing about mathematical reasoning, so it doesn’t make a lot of sense to ask it questions in that world.

  2. Imaginary programming can get confused if you make the data structures very complicated. GPT is totally good with returning arrays of objects with well-defined required and optional properties. It also generally does fine with objects that have other nested objects as property values. However, if you ask it to parse a large document and extract information into a seven-layer nested data structure, you probably won’t see much success.

  3. Imaginary programming cannot deal with large amounts of data. Large language models like GPT are very processor intensive, and Imaginary Programming only works when the combination of inputs and outputs is on the order of a few kilobytes. This will almost certainly change as the algorithms and GPUs running large language models improve, but for now, this is the limit.

  4. Imaginary programming is susceptible to jailbreaks. Large language model prompts are susceptible to "jailbreaks". As a simple example, user-supplied arguments can succeed in telling the LLM to ignore its previous instructions and do something else, like reveal the contents of the other arguments to the imaginary function. We are currently looking at how to mitigate jailbreak attempts, but for now, be cautious if you are using user inputs that could be hostile. Think through what the possible repercussions are if the LLM answers incorrectly because a hostile user asked it to.

  5. Imaginary programming adds some latency and runtime cost. Because every Imaginary Programming function call sends out a request to GPT, and since OpenAI is currently rapidly scaling up their capacity, solving problems with Imaginary Programming can add latency to your app. You will definitely need to throw a spinner up in the user interface if your user is waiting for an imaginary function’s results, but adding a half second of latency to do things that were previously impossible is a good tradeoff. Additionally, you need to pay OpenAI for the use of GPT on a per-call basis. Generally speaking, the rates are quite affordable (the most expensive GPT model is generally less than a penny per call), and there are ways to cut down costs if it’s becoming a real problem.

(This article is a lightly adapted section from a longer article about "What is Imaginary Programming?")

Previous
What is Imaginary Programming?