HyperWrite
Engineering Blog
Accelerating Development with LLMs by Hallucinating Functions
January 24, 2023
Author: Matt Brockman (mattbrockman@othersideai.com)
Recently, we used OpenAI's text-davinci-003 model to prototype a natural-language chatbot for HyperWrite's users. By framing our prompts as code, we can speed up development: we declare functions as pseudo-code, let the model hallucinate their implementations, and have it predict their outputs. This lets us prototype and ship new features quickly, then refine them against the requirements of specific tasks.
Large language models sometimes make things up instead of admitting they don't know. We can leverage this "hallucination" behavior by importing fake functions in our prompts and letting the model predict what they would return. This enables rapid prototyping and customization for content generation, as well as fine-grained control over the format of our outputs.
Over the last few months, several organizations have used hallucinated functions with GPT models in various ways. In December, Jonas Degrave found that OpenAI's ChatGPT can run an entire virtual machine. James Yu has a great Twitter thread on simulating the output of a program by declaring inputs and outputs for generative functions on NLP tasks. There's probably a lot more out there, and we apologize for anything we missed (we're interested in learning more about who's working on this problem; if you know someone who is, let us know and we'd be happy to mention them in this post). In this post, we see how far we can push hallucinated functions for routing and general functionality.
The best way to show what we mean by a hallucinated function is simply to show an example. We'll start by declaring a chat function and see how text-davinci-003 responds.
Prompt
#python
from advancedAi import (chatHandler)

def handleInput(userInput, chatHistory):
    return chatHandler.generateResponse(userInput, chatHistory)

>>> userInput='Hello'
>>> chatHistory=[{'agent':'bot', 'text':'what can I do for u'}]
>>> print('Out: ' + handleInput(userInput, chatHistory))
Out: Hi there!
In this example, we've told GPT that it's inside a Python file. We declared a function handleInput, which takes the user input and the chat history and in turn calls chatHandler. The interesting thing about chatHandler is that it doesn't actually exist anywhere! We take advantage of GPT's assumption that the code is defined somewhere and let it guess what the output should be, which can be quite powerful. We then call print with the string Out: followed by the handleInput call, and by starting the next line with Out: we force GPT to complete with just what the function would output.
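For concreteness, here's a minimal sketch of how a prompt like this can be sent to text-davinci-003 with the openai Python library. The stop sequence and sampling settings below are our own illustrative choices, not something the technique requires.

Example API Call

import openai  # the 0.x openai library, current when this post was written

openai.api_key = "YOUR_API_KEY"

prompt = """#python
from advancedAi import (chatHandler)

def handleInput(userInput, chatHistory):
    return chatHandler.generateResponse(userInput, chatHistory)

>>> userInput='Hello'
>>> chatHistory=[{'agent':'bot', 'text':'what can I do for u'}]
>>> print('Out: ' + handleInput(userInput, chatHistory))
Out:"""

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=256,
    temperature=0,
    stop=["\n>>>"],  # stop before the model invents another REPL turn
)
print(response.choices[0].text.strip())  # e.g. Hi there!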
We can take this hallucination a step further by conditionally calling imaginary functions based on the output of other imaginary functions. For instance, let's say we don't want to allow translation in our chatbot. We simply import a classifier (which doesn't exist) called chatClassifier. In our function, we use it to classify the input and route the user's request through our imaginary functions based on the classifier's imaginary output.
Prompt
#python
from advancedAi import (chatHandler, chatClassifier)

def handleInput(userInput, chatHistory):
    userIntent = chatClassifier.classifyIntent(userInput)
    if userIntent.translate:
        return 'Sorry, ' + chatHandler.deflect(userInput)
    return chatHandler.generateResponse(userInput, chatHistory)

>>> userInput='translate "turtle" to french'
>>> chatHistory=[{'agent':'bot', 'text':'what can I do for u'}]
>>> print('Out: ' + handleInput(userInput, chatHistory))
Out: Sorry, I don't know how to do that.
Normal Usage
>>> userInput="What's the capital of France?"
>>> chatHistory=[{'agent':'bot', 'text':'what can I do for u'}]
>>> print('Out: ' + handleInput(userInput, chatHistory))
Out: The capital of France is Paris.
While blocking translation is useful if it isn't implemented yet, the same pattern also helps keep the chatbot from producing kinds of output you don't want it to produce.
Combining Multiple Hallucinated Functions
In addition to conditionally calling hallucinated functions, we can also chain them. For instance, let's say we want a chatbot that both answers the user and translates its answer. We can combine the hallucinated functions we've already created to do this.
Prompt
#python
from advancedAi import (chatHandler)

def handleInput(userInput, chatHistory):
    initialResponse = chatHandler.respond(userInput, chatHistory)
    translatedResponse = chatHandler.translate(initialResponse, 'french')
    return {
        "initialResponse": initialResponse,
        "translatedResponse": translatedResponse
    }

>>> userInput='how are you'
>>> chatHistory=[{'agent':'bot', 'text':'what can I do for u'}]
>>> print('Out: ' + handleInput(userInput, chatHistory))
Out: {'initialResponse': "I'm doing great, thanks for asking!", 'translatedResponse': 'Je vais bien, merci de demander!'}
In this example, we also added JSON-style formatting, which lets us easily parse out the parts of the output we might want for classification or other tasks. For instance, we can add a classifier call here as well.
Prompt
#python
import json
from advancedAi import (chatHandler, classifier)

def handleInput(userInput, chatHistory):
    initialResponse = chatHandler.respond(userInput, chatHistory)
    translatedResponse = chatHandler.translate(initialResponse, 'french')
    userIntent = classifier.classifyIntent(userInput)  # get an array of tags for what the user asked to do
    return json.dumps({
        "initialResponse": initialResponse,
        "translatedResponse": translatedResponse,
        "labels": userIntent
    })

>>> userInput='how are you'
>>> chatHistory=[{"agent":"bot", "text":"what can I do for u"}]
>>> print('Out: ' + handleInput(userInput, chatHistory))
Out: {"initialResponse": "I'm doing great, thanks for asking!", "translatedResponse": "Je vais bien, merci de demander!", "labels": ["greeting"]}
This formatting makes the output easy to parse, so we can pull out the specific fields we need for downstream use.
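As a rough sketch (assuming the model actually returns valid JSON after Out:, which isn't guaranteed and is worth guarding against), parsing could look like this:

import json

# completion is the raw text the model produced after 'Out: '
completion = '{"initialResponse": "Hi!", "translatedResponse": "Salut!", "labels": ["greeting"]}'

try:
    result = json.loads(completion)
except json.JSONDecodeError:
    result = None  # the model hallucinated invalid JSON; retry or fall back

if result is not None and "greeting" in result.get("labels", []):
    print(result["initialResponse"])  # route on the hallucinated classifier's labels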
Hallucinated Functions in Templates
We use this technique to let users define their own tasks via our custom templates feature. The user tells us the name of their template and what its inputs should look like, giving us a form with these fields:
Title: string
Inputs: string[]
Output: string
We can then create arbitrary GPT functions that use these inputs, building the prompt with a Python function like this:
Code to Make a Prompt from Template
# makePromptFromTemplate.py
prompt = f'''#python
from advancedAI import ai

def {camelCase(template.title)}({','.join([input.name for input in template.inputs])}):
    """
    {template.description}
    Inputs:
    {' '.join([f'{input.name}: {input.description}' for input in template.inputs])}
    Output: {template.output}
    """
    return ai.handle({','.join([input.name for input in template.inputs])})

>>> print('Out: ' + {camelCase(template.title)}({','.join([repr(form.get(input.name)) for input in template.inputs])}))
Out: '''
Essentially, this converts the user's defined inputs into variables that are fed into a magical function, which the prompt claims produces the output. We don't need to define how the hallucinated function actually works; we just let GPT guess what the output would be, much as you might tell someone you had a function with that name and ask them what its output would look like for a given input.
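One detail: the camelCase helper referenced above isn't shown in this post. A minimal version (our own sketch; any implementation that yields a valid identifier would do) might be:

def camelCase(title):
    # 'Write A Poem' -> 'writeAPoem': lowercase the first word, capitalize the rest
    words = title.split()
    return words[0].lower() + ''.join(word.capitalize() for word in words[1:])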
We can feed this prompt template data from a form like the one below, where the user has filled in information about, say, writing a poem:
Input Form
data = {
    'title': 'Write A Poem',
    'description': 'Write a poem about cool stuff!',
    'inputs': [
        {'name': 'style', 'description': 'the style of the poem'},
        {'name': 'topic', 'description': 'the topic of the poem'}
    ],
    'output': 'A really cool poem'
}
Generated Code Prompt from Form
#python
from advancedAI import ai

def writeAPoem(style, topic):
    """
    Write a poem about cool stuff!
    Inputs:
    style: the style of the poem
    topic: the topic of the poem
    Output: A really cool poem
    """
    return ai.handle(style, topic)

>>> print('Out: ' + writeAPoem('limerick', 'hallucinating functions for fun and profit'))
Out: Hallucinating functions for fun and profit,
Can be quite a tricky thing to master.
But with the right tools and a bit of effort,
You'll be coding like a pro in no time after.
When we feed the generated prompt into GPT, it can follow the logic well enough to guess what the output should look like, giving us a poem! There are plenty of formatting tweaks and improvements that can be layered on top, but this shows the basic skeleton of what we're working on.
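To tie the template flow together, here's a hedged sketch of the full loop. buildPrompt is a hypothetical stand-in for the makePromptFromTemplate.py logic above, and the sampling settings are again our own illustrative choices:

import openai

def runTemplate(template, form):
    # buildPrompt: hypothetical wrapper around the makePromptFromTemplate.py logic above
    prompt = buildPrompt(template, form)
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=512,
        temperature=0.7,  # some creativity for generative templates like poems
        stop=["\n>>>"],   # don't let the model start a new REPL turn
    )
    return response.choices[0].text.strip()

From the user's perspective, filling out the form is all it takes; the hallucinated function does the rest.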
We've found that this method significantly reduces the time spent on prompt engineering in the early stages of development. All we need to do is specify the desired outcome, and we have a working prototype, which lets us focus on UI/UX rather than on engineering prompts. Once we have a functional prototype, we can of course keep refining the prompts for better and more efficient results, such as by incorporating information from searches or other sources, but that's a task for later. We suspect there's a lot more to explore here, such as using these systems in conjunction with new types of backends made of GPT completion calls, as demoed in a cool project called backendGPT by @theappletucker, @evanon0ping, and David Yue last weekend.
Hope you find this useful! And check out HyperWrite!