InferKit's text generation tool takes text you provide and generates what it thinks comes next, using a state-of-the-art neural network. It's configurable and can produce any length of text on practically any topic. An example:
Input: While not normally known for his musical talent, Elon Musk is releasing a debut album
Output: While not normally known for his musical talent, Elon Musk is releasing a debut album. It's called "The Road to Re-Entry," and it features an astounding collection of songs... (continued)
You can also create custom generators for specific kinds of content. Both types of generators can be used by anyone through either the web interface or the developer API. Get started by creating an account.
Creative and fun uses of the network include writing stories, fake news articles, poetry, silly songs, recipes and just about every other type of content. More utilitarian use cases might include autocompletion.
App developers can use the API in their own games or other projects.
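A typical API integration sends a prompt and a desired output length and receives the generated continuation. The sketch below shows one plausible way to build such a request; the endpoint URL, field names, and auth scheme are assumptions for illustration, so check the developer API documentation for the real ones.

```python
import json

# Assumed endpoint -- the real URL may differ; see the API docs.
API_URL = "https://api.inferkit.com/v1/models/standard/generate"

def build_request(prompt, length=200, api_key="YOUR_API_KEY"):
    """Build the headers and JSON body for a generation request.

    Field names ("prompt", "text", "length") are illustrative
    assumptions, not a confirmed schema.
    """
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json",
    }
    body = {
        "prompt": {"text": prompt},  # text the model should continue
        "length": length,            # how much text to generate
    }
    return headers, json.dumps(body)

headers, body = build_request(
    "While not normally known for his musical talent, ")
```

From here you would POST `body` with `headers` to the endpoint using any HTTP client and read the generated text out of the JSON response.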
While the neural network is impressive, it doesn't comprehend text as well as a human, so many applications are still out of reach.
Due to technical limitations, the generator currently goes down sporadically for a total of about 20-30 minutes each day. We're working on improving this. If your application requires higher uptime, let us know.
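Because outages are brief and sporadic, client code can usually ride them out by retrying failed requests with exponential backoff. This is a generic client-side sketch, not part of InferKit itself:

```python
import time

def with_retries(call, max_attempts=4, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff.

    Waits base_delay, 2*base_delay, 4*base_delay... between attempts
    and re-raises the last error if every attempt fails.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))

# Demo: a flaky call that succeeds on the third try.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("service temporarily down")
    return "generated text"

result = with_retries(flaky, sleep=lambda s: None)  # skip real sleeps in the demo
```

Passing `sleep` as a parameter keeps the helper easy to test; in production you would leave it at the default `time.sleep`.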
The generator may produce offensive or sexual content because it was trained on a wide variety of web pages, some of which contained such content. Use at your own risk! You may be able to prevent this by creating a custom generator (see next section).
Modern neural networks can be retrained ("fine-tuned") on custom datasets to produce content similar to those datasets. We support this functionality through custom generators.
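Before fine-tuning, a custom dataset is usually broken into training examples of a manageable size. The helper below is an illustrative preprocessing sketch (the actual custom-generator pipeline may prepare data differently): it splits a corpus into overlapping character chunks so that context straddling a boundary still appears whole in some chunk.

```python
def chunk_corpus(text, chunk_chars=1000, overlap=100):
    """Split a corpus into overlapping chunks for fine-tuning.

    Each chunk is at most `chunk_chars` characters, and consecutive
    chunks share `overlap` characters of context. Illustrative only.
    """
    chunks = []
    step = chunk_chars - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + chunk_chars])
    return chunks
```

For example, a 2,500-character corpus with the defaults yields three chunks, and the last 100 characters of each chunk reappear at the start of the next.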
InferKit was created by @AdamDanielKing. It leverages my experience creating and running one of the biggest AI demo sites on the web, Talk to Transformer. Owing to traffic from The Verge, The Next Web, Wired, the BBC and others, the site has reached millions of users and at times required more than 50 parallel GPUs to serve requests.
We're currently using Megatron-11b, the largest publicly available language model. It was created by Facebook and has 11 billion parameters.
No. The network is already trained and does not learn from the inputs you give it. Nor do we store them.
This seems to be a complicated issue. We can't give legal advice, so if you need a solid answer you'll have to consult a legal professional. We do, however, waive any rights we may have in the text you generate and grant you a license to use it for any purpose (to the extent that we have that right), royalty-free and without warranty. You don't have to credit us anywhere.
To hint at the complexity of this issue, consider that the neural networks: 1) were designed and trained by large tech companies who licensed their code under the MIT license, 2) learned from millions of web pages containing content in which many people hold copyright, 3) are hosted by us (we waive all rights to the generated content), and 4) are conditioned on the prompt you give them (your own content).