tl;dr: You can use OpenAI’s ChatGPT to bounce ideas around and write story outlines. Since I got into the field of AI and started working at OpenAI, it’s been interesting to see how things have accelerated. As an author, I’m frequently asked if AI will replace writers altogether. My personal take is that while AI … Continue reading Collaborative Creative Writing with OpenAI’s ChatGPT
OpenAI recently released Whisper, an open source automatic speech recognition model that's incredibly powerful. I'm biased (I'm the Science Communicator for OpenAI), but in my experience it's better than any system or service I've ever used. Best of all, you can use it completely free, either by downloading it to your computer or by running … Continue reading The Easy Guide to Using OpenAI’s Whisper Model to Transcribe Video and Audio
https://media.giphy.com/media/vMFgJ4Uq1yqOtuT1Cc/giphy.gif TL;DR: OpenAI has a new code-generating model that’s improved in a number of ways and can handle nearly two times as much text (4,000 tokens). I built several small games and applications without touching a single line of code. There are limitations, and coding purely by simple text instructions can stretch your imagination, … Continue reading Building games and apps entirely through natural language using OpenAI’s code-davinci model
Large models like GPT-3 can perform a variety of tasks with little instruction. That said, one of the challenges in working with these models is determining the right way to do something. GPT-3 has acquired knowledge from its training data as well as another kind of “intelligence” from learning the various relationships between concepts in … Continue reading Smarter than you think: Crystalline and fluid intelligence in large language models
GPT-3 is an exceptional mimic. It looks at the text input and attempts to respond with what text it thinks best completes the input. If the first line sounds like something from a romance novel it will try to continue writing in that style. If it’s a list of video games, it will try to … Continue reading How to get better Q&A answers from GPT-3
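The mimicry the post describes cuts both ways: you can exploit it by framing your prompt as the kind of text you want back. A minimal sketch of that idea for Q&A, assuming a preamble-plus-transcript format (the wording below is illustrative, not taken from the post):

```python
# Sketch: frame the prompt as an explicit Q&A transcript so the model
# continues in a factual question-answering style, instead of mimicking
# whatever genre the raw question happens to resemble.
# The preamble wording is illustrative, not the post's exact text.

def build_qa_prompt(question: str) -> str:
    preamble = (
        "I am a question answering bot. I answer factually, and if I "
        'don\'t know the answer I say "Unknown".\n\n'
    )
    return preamble + f"Q: {question}\nA:"

prompt = build_qa_prompt("What year was the first Moon landing?")
# This string would be sent as the prompt of a completion request,
# typically with "Q:" as a stop sequence so generation ends before
# the model invents the next question.
```

Ending the prompt with `A:` is the key move: the most plausible continuation of a Q/A transcript is an answer, not more romance-novel prose.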
https://vimeo.com/552634504 GPT-3 can remember hundreds of items and perform completions with them. This is useful if you want to take your prompts to the next level and do more complex operations.
OpenAI's GPT-3 is a highly capable general language model able to talk about almost anything. While this is an advantage on one hand, it can also make keeping GPT-3 focused on one topic a challenge if you’re trying to create a special purpose chatbot. If you want GPT-3 to talk about movies with a user, … Continue reading A simple method to keep GPT-3 focused in a conversation
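One simple way to attempt this, sketched below, is a preamble that names the topic and tells the bot to redirect anything else, with the running transcript appended each turn. The preamble wording and function names here are my own illustration, not necessarily the post's exact method:

```python
# Sketch: keep a general-purpose model on one topic (movies) with a
# topic-restricting preamble plus the conversation history so far.
# Wording and names are illustrative assumptions.

TOPIC_PREAMBLE = (
    "The following is a conversation with a chatbot that only discusses "
    "movies. If the user asks about anything else, the chatbot politely "
    "steers the conversation back to movies.\n\n"
)

def build_chat_prompt(history: list[tuple[str, str]], user_message: str) -> str:
    parts = [TOPIC_PREAMBLE]
    for user_turn, bot_turn in history:
        parts.append(f"User: {user_turn}\nBot: {bot_turn}\n")
    # End on "Bot:" so the completion is the chatbot's next reply.
    parts.append(f"User: {user_message}\nBot:")
    return "".join(parts)

prompt = build_chat_prompt(
    [("Seen any good thrillers?", "Heat is a classic worth revisiting.")],
    "What's the weather like today?",
)
```

Because the preamble is re-sent with every request, the topic constraint stays in front of the model no matter how the user tries to wander.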
We recently added three new endpoints to the API for GPT-3. The Classification Endpoint makes it easy to apply classification from a data set larger than what fits inside a prompt. https://vimeo.com/536638286
TL;DR: In an API call GPT-3 can recall details from a 1,500-word article and even repeat passages verbatim. It can also repeat over 250 items from a list as it creates a completion. The concept of memory with a large language model can be a little fuzzy. There's how much information the model possesses … Continue reading How large is GPT-3’s short term memory?
TL;DR: For many tasks you don’t need to provide GPT-3 with examples because it already understands what you want. If you look closely at the documentation and prompts for GPT-3 provided by OpenAI you’ll notice that a number of them don’t require any examples to show the model what you want. This is because in … Continue reading The GPT-3 Zero Shot approach
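The contrast is easiest to see side by side. Below is a hedged sketch of the same translation task written two ways; the example strings and phrasing are my own, but the shape matches the few-shot prompts in OpenAI's GPT-3 documentation:

```python
# Sketch: the same task as a few-shot prompt (worked examples included)
# and as a zero-shot prompt (instruction only). The word pairs and
# phrasing are illustrative assumptions.

FEW_SHOT = (
    "Translate English to French.\n\n"
    "English: cheese\nFrench: fromage\n\n"
    "English: bread\nFrench: pain\n\n"
    "English: apple\nFrench:"
)

ZERO_SHOT = 'Translate "apple" into French:'

# Either string would be sent as the prompt of a completion request;
# the zero-shot version relies entirely on the instruction, trusting
# that the model already understands the task.
```

When the zero-shot form works, it saves tokens and effort; the examples are only worth adding when the model needs them to pin down the format or the task.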