AI Transparency

This project uses AI honestly, and we want to be equally honest about that with you.

What was AI-generated?

The initial dataset of extinction events and artwork entries was researched and written with the help of large language models (LLMs). This includes the event descriptions, dates, affected artwork associations, severity assessments, and fix descriptions.

Every entry that was AI-generated is marked with an AI-researched label. This label means the entry has not yet been verified by the community and may contain errors.

Why use AI at all?

Documenting the full scope of net art extinction events is an enormous task. Decades of dependency changes have broken countless artworks, and much of this history is scattered across blog posts, archived web pages, and institutional memory. AI helped us build a comprehensive starting point — a draft dataset that would have taken months to compile manually.

But a starting point is all it is. The goal is for every entry to eventually be validated, corrected, and enriched by the community — the researchers, artists, curators, and archivists who witnessed these events firsthand.

Current status

116 AI-researched events
0 validated events
42 AI-researched artworks
0 validated artworks

How validation works

When a community member verifies an entry — confirming dates, checking links, validating artwork attributions — the ai_generated flag is removed and the AI-researched label disappears. The entry becomes part of the trusted, human-verified dataset.

You can help validate entries by opening an issue with corrections, or by submitting a pull request that removes the ai_generated: true line from a file's frontmatter after verification.
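For illustration, an unvalidated event file's frontmatter might look like the sketch below. Only the `ai_generated: true` flag comes from the process described above; the other field names are hypothetical placeholders, not the project's actual schema.

```yaml
---
# Hypothetical frontmatter for an unvalidated event entry.
# Field names other than ai_generated are illustrative assumptions.
title: "Example extinction event"
date: 2014-06-01
severity: high
ai_generated: true   # delete this line after community verification
---
```

Removing the `ai_generated: true` line in a pull request is what moves an entry from the AI-researched pool into the human-verified dataset.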

What AI cannot replace

AI can compile facts from existing sources, but it cannot replace the lived experience of someone who watched their artwork break overnight, or the institutional knowledge of an archivist who spent years preserving a collection. Those stories and that expertise are what make this archive meaningful. The AI-generated entries are placeholders for that human knowledge — not a substitute for it.

Tools used

This project was built with the assistance of large language models (LLMs), used for initial research, content drafting, and code development. The project source code, infrastructure, and editorial decisions are maintained by Marc Schütze at ZKM | Center for Art and Media Karlsruhe.