Recent innovations in machine learning and data processing have led to the creation of text-to-image models which can produce detailed artistic images in response to text prompts in a matter of minutes. In addition to being widely discussed and circulated online, the images generated by these models have also been used to create magazine covers, to illustrate children’s books, and even to win art contests. Though the “AI art” produced by these text-to-image models is undeniably impressive from a technological standpoint, its creation and use have inspired a variety of criticisms online, especially on social media. The task of this talk is to lay out some of these common criticisms and philosophically evaluate them. The first of these are what I call production objections, which concern the way that AI art is created or produced. The second are structural objections, which hold that AI art is wrongful because it results in harmful effects given the background social setup of our world, for example by replacing the work of vulnerable human artists or by creating images that unfairly represent women or people of color. In the final section of the talk, I propose a different kind of criticism of AI art, which I call the epistemic stance objection. According to this objection, creating AI art may be wrong because the art substantially undermines our ability to rationally evaluate the world around us. In other words, it might be wrong to produce AI art because the mass proliferation of AI-generated artwork forces us to adopt a kind of skeptical stance with respect to art, and to the world more generally. I argue that only this third objection really provides grounds for thinking that AI art gives rise to novel ethical concerns.
