Building games and apps entirely through natural language using OpenAI’s code-davinci model

TL;DR: OpenAI has a new code generating model that’s improved in a number of ways and can handle nearly two times as much text (4,000 tokens). I built several small games and applications without touching a single line of code. There are limitations, and coding purely by simple text instructions can stretch your imagination, … Continue reading Building games and apps entirely through natural language using OpenAI’s code-davinci model

Smarter than you think: Crystalline and fluid intelligence in large language models

Large models like GPT-3 can perform a variety of tasks with little instruction. That said, one of the challenges in working with these models is determining the right way to do something. GPT-3 has acquired knowledge from its training data as well as another kind of “intelligence” from learning the various relationships between concepts in … Continue reading Smarter than you think: Crystalline and fluid intelligence in large language models

Advanced Prompt Design for GPT-3: How to make a prompt 20x more efficient

TL;DR: GPT-3 is much more capable than people realize when you utilize advanced prompt design that first shows it the task you want performed and then shows it how to perform that task on a list of inputs. One of my favorite prompts in our OpenAI documentation is an example showing how to get GPT-3 … Continue reading Advanced Prompt Design for GPT-3: How to make a prompt 20x more efficient
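The batching idea the excerpt describes can be sketched in plain Python: state the instruction once, then enumerate many inputs so a single completion handles all of them instead of one API call per item. The task wording and inputs below are hypothetical illustrations, not taken from the post.

```python
# Hypothetical sketch of the "task description + list of inputs" prompt style:
# the instruction appears once, followed by numbered inputs, so one completion
# request can answer every item in a numbered list.
task = "Extract the airport codes from each sentence below."

inputs = [
    "I want to fly from Los Angeles to Miami.",
    "I want to fly from Orlando to Boston.",
]

# Number each input so the model can return matching numbered answers.
numbered = "\n".join(f"{i + 1}. {text}" for i, text in enumerate(inputs))

# End the prompt with "1." to cue the model to start the numbered answers.
prompt = f"{task}\n\n{numbered}\n\nAirport codes:\n1."

print(prompt)
```

The resulting string would then be sent as a single completion request; the per-item cost drops roughly in proportion to how many inputs fit in one prompt.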

Overclocking OpenAI’s GPT-3

Here's a fun fact: OpenAI's GPT-3 is actually a family of models (Ada, Babbage, Curie, and Davinci) with different capabilities and speeds. While Davinci gets most of the attention, the other models are amazing in their own way. Davinci is the most generally-capable model that’s exceptional at intuiting what someone wants to accomplish while … Continue reading Overclocking OpenAI’s GPT-3