Fun AI Apps Are Everywhere Right Now. But a Safety ‘Reckoning’ Is Coming

If you’ve spent any time on Twitter lately, you may have seen a viral black-and-white image depicting Jar Jar Binks at the Nuremberg Trials, or a courtroom sketch of Snoop Dogg being sued by Snoopy.

These surreal creations are the products of Dall-E Mini, a popular web app that creates images on demand. Type in a prompt, and it will rapidly produce a handful of cartoon images depicting whatever you’ve asked for.

More than 200,000 people are now using Dall-E Mini every day, its creator says—a number that is only growing. A Twitter account called “Weird Dall-E Generations,” created in February, has more than 890,000 followers at the time of publication. One of its most popular tweets so far is a response to the prompt “CCTV footage of Jesus Christ stealing [a] bike.”

— Weird Dall-E Mini Generations (@weirddalle) June 14, 2022

If Dall-E Mini seems revolutionary, it’s only a crude imitation of what’s possible with more powerful tools. As the “Mini” in its name suggests, the tool is effectively a copycat version of Dall-E—a much more powerful text-to-image tool created by one of the most advanced artificial intelligence labs in the world.

That lab, OpenAI, boasts online of (the real) Dall-E’s ability to generate photorealistic images. But OpenAI has not released Dall-E for public use, due to what it says are concerns that it “could be used to generate a wide range of deceptive and otherwise harmful content.” It’s not the only image-generation tool that’s been locked behind closed doors by its creator. Google is keeping its own similarly powerful image-generation tool, called Imagen, restricted while it studies the tool’s risks and limitations.

The risks of text-to-image tools, Google and OpenAI both say, include the potential to turbocharge bullying and harassment; to generate images that reproduce racism or gender stereotypes; and to spread misinformation. They could even reduce public trust in genuine photographs that depict reality.

Text could be even more challenging than images. OpenAI and Google have each developed synthetic text generators that chatbots can be built on, and both have chosen not to release them widely to the public amid fears that they could be used to manufacture misinformation or facilitate bullying.

Read more: How AI Will Completely Change the Way We Live in the Next 20 Years

Google and OpenAI have long described themselves as committed to the safe development of AI, pointing to, among other things, their decisions to keep these potentially dangerous tools restricted to a select group of users, at least for now. But that hasn’t stopped them from publicly hyping the tools, announcing their capabilities, and describing how they made them. That has inspired a wave of copycats with fewer ethical hang-ups. Increasingly, tools pioneered inside Google and OpenAI have been imitated by knockoff apps that are circulating ever more widely online, contributing to a growing sense that the public internet is on the brink of a revolution.

“Platforms are making it easier for people …read more

Source:: Time – Technology


