Some of our earliest examples of civilisation are marked by the creation of art: for as long as there have been humans, those humans have created art. We encounter images in all facets of our lives, consuming them as we work, shop, engage and relax. Recent advances in artificial intelligence have ushered in a new era in the creation of art: AI-generated design assets. In short, humans have advanced AI to the point that, given a few simple words, this technology can create unique images. Has AI come to take our design jobs? Let’s dive into the latest trend of AI-generated design assets and see whether it lives up to the hype.
What is DALL·E 2?
One of the more recent advances in AI (Artificial Intelligence) is DALL·E 2, an AI system that can generate unique images from nothing more than a text description. Given a simple prompt, the system combines the concepts it describes to produce a fully unique design asset. These images are completely original: they did not exist until a few seconds after you clicked ‘generate’. The technology is far from simple, and its consequences may be far-reaching. To properly weigh the viability of a career in design in the context of AI-generated design assets, we must first explain how this technology produces these images.
Let’s dive into some examples of how this works. Below you will find text descriptions that are fed to the AI, followed by the image generated.
A bowl of soup that is a portal to another dimension
Teddy bears working on AI research on the moon in the 1980s
An astronaut riding a horse in a photorealistic style
I’m sure you’re as floored by the complexity, attention to detail and authentic nature of the images above as I am. What is truly incredible is that these images are generated in a matter of seconds. How is this possible? It’s the result of a combination of powerful computers, huge amounts of data from the internet, and smart algorithm design. It’s difficult to fully explain the mechanisms behind this, so here is a video showing a bit more detail about how AI generates images.
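For readers who want to try this themselves, here is a minimal sketch of how a designer might request an image like the ones above through OpenAI's public Images API. The endpoint, payload fields and response shape reflect the documented API at the time of writing; the helper function name is our own, and the actual network call is skipped unless an API key is configured.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/images/generations"

def build_generation_request(prompt, n=1, size="1024x1024"):
    """Assemble the JSON payload for a text-to-image request."""
    return {"prompt": prompt, "n": n, "size": size}

# One of the example prompts from above.
payload = build_generation_request(
    "An astronaut riding a horse in a photorealistic style"
)

# The real call needs an API key; skip it when none is set.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # The response carries a URL for each generated image.
        print(json.load(resp)["data"][0]["url"])
```

A few seconds after this request is sent, an image that has never existed before comes back, which is exactly the workflow described above.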
The features and uses of DALL·E 2 are extensive, and the more it is used, the more powerful it becomes (in terms of images, not world domination). In other words, it uses machine learning to expand its offering with each engagement, improving its capabilities and extending the range of design assets it can produce. A sophisticated development, DALL·E 2 has multiple functions:
Say, for instance, you have a small picture of a house in a neighbourhood: a single house, the corner of a white picket fence, a tire swing resplendent in the front yard. Feed that image into this tech and not only can it extrapolate and complete the house, it can create an entire neighbourhood. The tech interprets the image and fills in what’s around it, generating the rest of the neighbourhood with the same look and feel, colour and surroundings. It takes what there is and creates what could be.
This element of the tech lets it make drastic edits to existing images without compromising quality or realism. It could, for instance, take a brunette and respond to a request to make that person blonde, with results as natural as if the person had been born with golden hair. The AI makes realistic edits to existing images, adding and removing elements while taking shadows, reflections and textures into account.
Here, I ask you to imagine the Mona Lisa. DALL·E 2 could take an image of the Mona Lisa and produce it in different styles, textures and contexts, all without compromising on the quality of the image. By taking the original image it is able to create different variations of that image, all inspired by the original.
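The three capabilities just described map onto three endpoints in OpenAI's Images API: generations (text only), edits (an image plus a mask and a prompt, covering the neighbourhood-completion and hair-colour examples), and variations (an image only, covering the Mona Lisa example). The routing sketch below is our own illustration; the endpoint paths are the documented ones at the time of writing.

```python
BASE = "https://api.openai.com/v1/images"

def pick_endpoint(prompt=None, image=None, mask=None):
    """Route a request to the matching DALL·E 2 endpoint."""
    if image is not None and prompt is not None:
        # Edit an existing picture: the optional mask marks the region
        # to repaint, and the prompt describes what should appear there.
        return f"{BASE}/edits"
    if image is not None:
        # No prompt: ask for stylistic variations on the original image.
        return f"{BASE}/variations"
    if prompt is not None:
        # Text only: generate a brand-new image from scratch.
        return f"{BASE}/generations"
    raise ValueError("Need a prompt, an image, or both")
```

So "make this person blonde" plus a photo goes to the edits endpoint, while a photo of the Mona Lisa on its own goes to variations.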
The Introduction of AI Image Marketplaces
Context provided, it’s now time to get to the crux of the matter: what does this technology mean for our marketplace? Selling images generated by AI is an interesting concept, but the real point of interest isn’t selling the images themselves; it’s selling the prompts that create those images. The leading website here is PromptBase, and this is their selling point:
PromptBase is a marketplace for buying and selling quality prompts that produce the best results, and save you money on API costs.
What this effectively means is that the service isn’t looking to replace the work of designers, but rather to serve as a tool for those designers, letting them use this technology with assistance from companies such as PromptBase. This isn’t a replacement; it’s a trend in the design industry. Think of it as what the microwave oven was to the cooking industry.
Will DALL·E 2 take your job?
That is the real question. Do the capabilities of modern AI have what it takes to take the jobs of designers, photographers, and artists?
A spokesperson for Getty Images said the company isn’t worried: “Technologies such as DALL-E are no more a threat to our business than the two-decade reality of billions of cellphone cameras and the resulting trillions of images.”
Deep learning models can only extrapolate from the data in their training set. This means the model doesn’t actually understand what the objects it draws mean or do. We as humans have this ability innately, but DALL·E 2 does not: it only creates something visually similar to the photos in its dataset, without understanding their meaning.
To explain this more clearly, below is an image from Spectrum, which asked for “an illustration of the solar system, drawn to scale”.
The conclusion VCS has come to is that AI is a great tool for a design agency to use, but it is not coming for our jobs. We can use these AI-generated design assets alongside professional photography and stock assets to create some incredible designs for our clients. This has always been the end goal: having tools to transform our clients’ ideas and visions into a digital reality.
It is an incredible achievement how far we have come with AI. That being said, AI is not paving the creative path, it is just following us.