DALL-E

DALL-E (stylized DALL·E) is an artificial intelligence program that creates images from textual descriptions, revealed by OpenAI on January 5, 2021.[1][2] It uses a 12-billion-parameter[3] version of the GPT-3 transformer model to interpret natural-language prompts (such as "a green leather purse shaped like a pentagon" or "an isometric view of a sad capybara") and generate corresponding images.[1] It can depict realistic objects ("a stained glass window with an image of a blue strawberry") as well as objects that do not exist in reality ("a cube with the texture of a porcupine").[3]
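Because GPT-3 is an autoregressive transformer, a text-to-image system built on it can be pictured as one model that continues a prompt's text tokens with a sequence of discrete image tokens, which a separate decoder then renders as pixels. The Python sketch below illustrates only that data flow under assumed details; the word-level tokenizer, codebook size, grid size, and toy_next_token sampler are hypothetical stand-ins, not DALL-E's actual components.

import random

TEXT_VOCAB = {"a": 0, "green": 1, "leather": 2, "purse": 3,
              "shaped": 4, "like": 5, "pentagon": 6}
IMAGE_VOCAB_SIZE = 512        # assumed size of a discrete image-token codebook
IMAGE_GRID = 32 * 32          # assumed number of image tokens per picture

def tokenize(prompt):
    # Stand-in for a real subword tokenizer: map known words to integer ids.
    return [TEXT_VOCAB[w] for w in prompt.lower().split() if w in TEXT_VOCAB]

def toy_next_token(context):
    # Stand-in for the transformer itself: a real model would attend over the
    # whole context; here we just draw a pseudo-random token deterministically.
    random.seed(len(context) + sum(context))
    return random.randrange(IMAGE_VOCAB_SIZE)

def generate_image_tokens(prompt):
    # Continue the text tokens with image tokens, one at a time, mirroring the
    # idea of a single autoregressive model over a joint token stream.
    context = tokenize(prompt)
    image_tokens = []
    for _ in range(IMAGE_GRID):
        image_tokens.append(toy_next_token(context + image_tokens))
    return image_tokens   # a separate image decoder would turn these into pixels

if __name__ == "__main__":
    tokens = generate_image_tokens("a green leather purse shaped like a pentagon")
    print(len(tokens), tokens[:8])

Running the sketch prints 1024 image-token ids for the example prompt; in a real system those ids would index a learned codebook and be decoded into an image.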

DALL-E
Original author(s): OpenAI
Initial release: 5 January 2021
Type: Transformer language model
Website: www.openai.com/blog/dall-e/

References

  1. Coldewey, Devin (5 January 2021). "OpenAI's DALL-E creates plausible images of literally anything you ask it to". Retrieved 5 January 2021.
  2. Heaven, Will Douglas (5 January 2021). "This avocado armchair could be the future of AI". MIT Technology Review. Retrieved 5 January 2021.
  3. Johnson, Khari (5 January 2021). "OpenAI debuts DALL-E for generating images from text". VentureBeat. Retrieved 5 January 2021.