GUEST ESSAY: Why is everyone so angry at artificial intelligence?

The image accompanying this article was copied and pasted from a very good blog about artificial intelligence (AI) called The Sequence, which may in turn have lifted it from somewhere else. Copyright was not an issue, because the image was generated by AI, specifically by a text-to-image generator called Midjourney, which allows its images to be used by anyone. 

The copyright issue is one reason a lot of people are angry, but it is not the only one. Let’s start with it anyway. 

Machine learning, which was once a subset of AI but now has equal billing, has one core requirement to do anything useful. And that is data. In order to learn, it needs to ingest lots and lots of training data, testing data and validation data. The more the better (and the more data, the more computationally expensive the learning task is likely to be). 

A machine learning system takes in all this data and, given some mathematical or programmatic or human guidance, learns from it. It learns and learns and learns, and then eventually (when the stars align) it can generate something entirely new, entirely novel, something useful and very different from the data you fed it. That is why it is called “generative” AI.
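
(For the technically curious, here is a toy sketch of that ingest-learn-generate loop in Python. The libraries, the made-up data and the miniature model are illustrative choices only, nothing like what the systems discussed here actually run.)

```python
# A toy sketch of the ingest-learn-generate loop described above, using numpy and
# scikit-learn, with a small Gaussian mixture standing in for the far larger models
# the essay is talking about.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
data = rng.normal(loc=[0.0, 5.0], scale=1.0, size=(10_000, 2))  # stand-in "artworks"

# The three slices mentioned above: training, validation and test data.
train, holdout = train_test_split(data, test_size=0.3, random_state=0)
validation, test = train_test_split(holdout, test_size=0.5, random_state=0)

model = GaussianMixture(n_components=3, random_state=0).fit(train)  # the "learning"
print(model.score(validation))   # how well the fit holds up on data it never saw

new_points, _ = model.sample(5)  # the "generating": novel samples, not copies
print(new_points)
```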

So, in the case of the image above (and the millions upon millions of new images being spat out continuously by Midjourney and similar “text-to-image” AIs like Stable Diffusion and DALL-E), one can safely assume that this AI-invented young lady, with her interestingly textured face, is some unpredictable and complex entangled amalgam of images, or parts of images, that have previously been fed to the system. 
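
(For concreteness, this is roughly what driving one of these text-to-image systems from code can look like, sketched with the open-source Hugging Face diffusers library and a Stable Diffusion checkpoint. The library, the checkpoint name and the hardware assumptions are illustrative; Midjourney itself works differently, as a closed, hosted service.)

```python
# A rough sketch of prompting an open text-to-image model (Stable Diffusion) through
# the open-source Hugging Face diffusers library. The checkpoint name and the
# assumption of a CUDA GPU are illustrative, not taken from the essay.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed publicly released checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a GPU is available

prompt = "portrait of a young woman with an unusually textured face, oil painting"
image = pipe(prompt).images[0]  # the model synthesises a brand-new image
image.save("generated_portrait.png")
```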

And here is the crux of it. These programs are fed images from public sources, some of them copyrighted and some not. So, for instance, it is likely that these programs have ingested many or even most of the images available from art history books, art catalogues and art websites. Some of that art would presumably be out of copyright (like the Old Masters), but some contemporary art would not.

It is some of these current artists who are in a fury about the use of their images as training data, even if the resulting images are recombinant mishmashes of many different things the AI has “seen” and not necessarily directly related to the artists’ work. A number of lawsuits have been filed against the companies behind these AIs, including one by California-based artists Sarah Andersen, Kelly McKernan and Karla Ortiz, alleging that these organisations have infringed the rights of “millions of artists” by training their AI tools on billions of images scraped from the web “without the consent of the original artists”. 

The basic complaint of these lawsuits is the same: who gave you permission to use my art to stimulate your silicon brain? You must ask for my permission, and pay for it, if I decide to grant it at all. 

Hang on, I demur. All human art students do exactly the same thing. They study art in books and galleries and lecture presentations and online catalogues, and then they make their own art, which must, by definition, have been informed by the art they have seen. 

So, what is the difference between a human-trained artist and an AI-trained artist? Surely they both stood on the shoulders of history and used that view as a guide to their future (and novel) work?

There are other problems with these lawsuits. For instance, the AI did not actually store or reuse the images themselves. The images were disassembled into mathematical bits, pieces, slices and cores, and linked via statistical magic to other bits of mathematics elsewhere in their data libraries to create new stuff. It is difficult to pin plagiarism on that. 
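
(A deliberately crude illustration of that point, using numpy and scikit-learn rather than anything resembling a real diffusion model: a batch of images is boiled down to short lists of numbers, and only a lossy approximation, never an exact copy, can be rebuilt from them.)

```python
# A crude stand-in for the idea that trained models hold mathematical digests of
# images rather than the images themselves. Libraries and data are illustrative only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
images = rng.random((200, 8 * 8))       # 200 fake 8x8 greyscale images, flattened

pca = PCA(n_components=10)              # keep just 10 numbers per image
codes = pca.fit_transform(images)       # each image becomes 10 floats
rebuilt = pca.inverse_transform(codes)  # reconstruction is approximate, not a copy

print(codes.shape)                      # (200, 10): numbers, not pictures
print(np.abs(images - rebuilt).mean())  # non-zero error: the originals are not stored
```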

And then there is a legal concept called “fair use”, which was employed in music’s sampling wars, in which the courts grappled with exactly where the line should be drawn for using another musician’s work without permission. Length of segment? Exact copy? Clarity of the original? Recognisability?

Panic and spittle

I do not know how this copyright matter will shake out, but as I indicated earlier, people are getting pretty worked up about it. Not to mention just getting worked up about AI in general.

I wonder about this; I haven’t seen as much panic and spittle since Trump won the election. Perhaps it has something to do with fear of both high- and low-skill job losses (which will certainly happen); perhaps it has something to do with fear of ethics violations, like incipient racism or sexism (already uncovered over the past few weeks, to the embarrassment of many researchers). 

The extreme reaction to AI makes me think it is deeper than that. It seems to be a territorial reaction of the species, us. No one could possibly out-think us, out-create us, out-innovate us, out-feel us. Out-sentience us. Not now, not ever. Not in a million years. Right? Right? 

I am not so sure.

Back to copyright, our subject of the day. I bring you this, spotted by a sharp-eyed friend of mine. It is a recent academic paper in which researchers from Japan’s Osaka University wondered whether an AI could generate images by simply peering into someone’s brain. No text prompts necessary. The scary title is: “High-resolution image reconstruction with latent diffusion models from human brain activity”. 

This struck me as an OMG experiment. A little further down the research road, a computer might simply riffle your visual memories and produce novel art. 

How does copyright law respond to that? DM

Steven Boykey Sidley is a professor of practice at Johannesburg Business School at the University of Johannesburg.
