Prompt Fun: Davinci Could Always Do Word in Context
February 28, 2023
Since OpenAI released GPT-3, it's been common knowledge that their Davinci model couldn't do the NLP task "Word in Context". Learn some fun tricks showing that every version of Davinci has actually been capable of this task!
Read More
Accelerating Development with LLMs by Hallucinating Functions
January 24, 2023
Utilizing a code-based theme for prompt engineering with text-davinci-003 enables rapid development by predicting the output of arbitrary function names. This allows us to swiftly prototype and deploy new features, while continuously refining them based on specific task requirements.
Read More
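As a minimal sketch of the idea (the function name, arguments, and prompt wording here are illustrative, not taken from the post), one can build a code-themed prompt that "calls" a function that was never implemented and let the model predict its return value:

```python
def hallucinate_call(func_name: str, args: tuple) -> str:
    """Build a code-themed prompt that pretends to call a function in a
    Python REPL. The function is never implemented; the model's completion
    after the '>>>' line is treated as its return value."""
    call = f"{func_name}({', '.join(repr(a) for a in args)})"
    return f"# Python 3 REPL\n>>> {call}\n"

# 'summarize_review' is a hypothetical function name; sending this prompt
# to text-davinci-003 would yield a plausible "return value" to prototype with.
prompt = hallucinate_call("summarize_review", ("Great battery, weak camera.",))
```

The appeal is that the function's behavior is specified entirely by its name and arguments, so a feature can be prototyped before any real implementation exists.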
Automating Stable Diffusion Prompting
October 10, 2022
Stable Diffusion is an amazing model capable of generating highly detailed images. In its most basic form, it takes a prompt (a short description of the desired image) and produces an image based on that prompt. However, it's not reasonable to expect most people to become expert prompt engineers. We've simplified using Stable Diffusion in the Hyperwrite extension by first passing user input through a GPT formatting step.
Read More
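A rough sketch of that two-step pipeline (the instruction wording and keyword choices below are illustrative assumptions, not the post's actual template): the raw user request is first wrapped in a formatting instruction for GPT, and the GPT completion then becomes the prompt sent to Stable Diffusion.

```python
# Step 1 of the pipeline: wrap the raw user request in an instruction
# prompt for GPT. The GPT completion (not shown here; it would require
# an API call) is what actually gets sent to Stable Diffusion.
FORMAT_TEMPLATE = (
    "Rewrite the following image request as a detailed Stable Diffusion "
    "prompt. Add style, lighting, and quality keywords.\n\n"
    "Request: {user_input}\n"
    "Stable Diffusion prompt:"
)

def build_formatting_prompt(user_input: str) -> str:
    """Produce the GPT prompt that expands a short user request into a
    detailed image-generation prompt."""
    return FORMAT_TEMPLATE.format(user_input=user_input.strip())
```

The design point is that the user never sees the expanded prompt; prompt-engineering expertise lives in the template rather than in the user's head.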
Guiding Large Language Models with JSON
June 7, 2022
It's easy to get language models to generate text; it's much harder to get them to output specific pieces of information given arbitrary inputs. We've found that JSON formatting helps models keep track of what information to include, though it helps larger models more than smaller ones (mostly the Davinci/Curie/Fairseq series).
Read More
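One common way to apply this idea (a sketch under our own assumptions; the post's exact prompt format may differ) is to show the model a JSON skeleton naming the fields you want, then parse and validate its completion:

```python
import json

def extraction_prompt(text: str, fields: list[str]) -> str:
    """Build a prompt whose JSON skeleton tells the model exactly which
    pieces of information to extract from the text."""
    skeleton = json.dumps({f: "..." for f in fields}, indent=2)
    return (
        f"Text: {text}\n\n"
        "Fill in this JSON with information from the text:\n"
        f"{skeleton}\n\nJSON:"
    )

def parse_extraction(completion: str, fields: list[str]) -> dict:
    """Parse the model's completion and check every requested key is present."""
    data = json.loads(completion)
    missing = [f for f in fields if f not in data]
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return data

# Usage with a canned completion (no model call shown):
data = parse_extraction('{"name": "Ada", "city": "London"}', ["name", "city"])
```

Because the keys are fixed up front, a malformed or incomplete completion fails loudly at parse time instead of silently dropping information.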
Prompts
May 27, 2022
We figured we should throw up a blog post explaining what LLMs are and how we can prompt them to generate stuff.
Read More
We Started a Blog
May 27, 2022
Welcome to our blog!
We realized that we wanted to write a bit more about what we're doing and how, so we figured we'd throw this up to share. At some point our designers will catch on and make us fix this, but for now it's a bit bare bones, the way we like it :)
Right now, we're dealing with several engineering challenges that we think are pretty interesting:
- Figuring out how to generate text with large language models
- Figuring out how to search for information related to the current context
- Distributed infrastructure for routing queries to the correct GPU cluster
- Real time analytics of usage
We'll be posting here occasionally as we figure things out as well as discussing some of the challenges we're running into. Feel free to contact us at support@hyperwrite.ai if there's anything we should cover or things that pique your interest.