
Neural networks have learned to turn text into realistic images: what Google Imagen and DALL-E 2 can do, and why that may be dangerous

Last week, Google unveiled Imagen, an artificial intelligence system that converts textual descriptions into realistic images.

Imagen is the second major neural network built on the text-to-image approach, after OpenAI's DALL-E 2. In the future, such systems could take over much of the work of designers and artists. SPEKA explains what these neural networks are and what ethical issues they raise.
How Imagen works

Text-to-image artificial intelligence models learn the relationship between an image and the words that describe it. The user provides a text description (a prompt), and the system generates images based on its interpretation of that text. The neural network can combine different objects, attributes and styles. Given the description "a photo of a dog", the system produces a realistic image that looks like a real photograph. But change the description to "a dog painted in oil paint", and the result will resemble a painting.
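Neither Imagen nor DALL-E 2 is publicly available, so the sketch below is only an illustration of the same text-to-image idea, not Google's or OpenAI's code. It uses the open-source Hugging Face diffusers library with a Stable Diffusion checkpoint as a stand-in model to show how changing the wording of a prompt changes the style of the generated image.

```python
# A minimal sketch of text-to-image generation, assuming the open-source
# "diffusers" library and a Stable Diffusion checkpoint as a stand-in for
# Imagen / DALL-E 2 (which are not publicly available).
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image pipeline (checkpoint name is illustrative).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use a GPU if one is available

# The same subject rendered in two styles, driven only by the prompt wording.
prompts = [
    "a photo of a dog",           # tends toward a photorealistic image
    "a dog painted in oil paint", # tends toward a painterly image
]

for i, prompt in enumerate(prompts):
    image = pipe(prompt).images[0]  # run the diffusion process for this prompt
    image.save(f"dog_{i}.png")
```

The point of the example is that nothing changes in the model itself between the two outputs; the difference in style comes entirely from how the text description is phrased.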

Google claims that Imagen, its new text-to-image model, has "an unprecedented degree of photorealism and a deep level of language understanding." Below are some examples of text descriptions and the images Imagen created from them.
