Meet Nightshade, the new tool allowing artists to ‘poison’ AI models with corrupted training data
Franzen, Carl. "Meet Nightshade, the new tool allowing artists to ‘poison’ AI models with corrupted training data." VentureBeat, 23 October 2023. As of October 2023, available here.[1]
From the article:
Nightshade was developed by University of Chicago researchers under computer science professor Ben Zhao and will be added as an optional setting to their prior product Glaze, another online tool that can cloak digital artwork and alter its pixels to confuse AI models about its style.
In the case of Nightshade, the counterattack for artists against AI goes a bit further: it causes AI models to learn the wrong names of the objects and scenery they are looking at.
For example, the researchers poisoned images of dogs to include information in the pixels that made it appear to an AI model as a cat.
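For readers curious how this kind of poisoning can work in principle, below is a minimal, hypothetical sketch in Python (PyTorch). It is not Nightshade's actual algorithm: it simply nudges a "dog" image's pixels, within a small budget, so that a stand-in feature extractor (a stock ResNet-18, chosen only for illustration) produces features resembling those of a "cat" image. The function and parameter names (poison, epsilon, steps, lr) are invented for the example.

# Hypothetical sketch of feature-space data poisoning; NOT Nightshade's method.
# Assumes PyTorch and torchvision (>= 0.13 for the weights API) are installed.
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in feature extractor (an assumption; a real attack would target the
# victim model's own encoder, which the attacker usually cannot access).
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = torch.nn.Identity()          # drop the classifier head, keep features
net.eval()
for p in net.parameters():
    p.requires_grad_(False)           # only the perturbation is optimized

def poison(dog_img, cat_img, epsilon=8 / 255, steps=200, lr=0.01):
    """Return a copy of dog_img whose features approximate cat_img's.

    dog_img, cat_img: tensors of shape (1, 3, H, W) with values in [0, 1].
    epsilon: maximum per-pixel change, keeping the edit visually subtle.
    """
    with torch.no_grad():
        target_feat = net(cat_img)    # features the poisoned image should mimic

    delta = torch.zeros_like(dog_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        poisoned = (dog_img + delta).clamp(0, 1)
        loss = F.mse_loss(net(poisoned), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)   # stay within the pixel budget

    return (dog_img + delta.detach()).clamp(0, 1)

# Example usage (tensors assumed prepared elsewhere):
# poisoned_dog = poison(dog_tensor, cat_tensor)

If many such poisoned images are then scraped into a training set still labeled "dog," a model trained on them can, in principle, start associating cat-like features with that label, which is the mislabeling effect the article describes.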
Again note that "artists" here is limited to visual artists dealing with (static?) images. Users of this Wiki are invited to link this page to coverage of AI and other forms of art (and counter-moves thereon).
+++++++++++++++
Backup link: <https://tinyurl.com/yanf276t>
RDE, with thanks to R. Sanchez, finishing, 29Oct23