Spawning wants to build more ethical AI training datasets


Jordan Meyer and Mathew Dryhurst founded Spawning AI to create tools that help artists exert more control over how their works are used online. Their latest project, called Source.Plus, is intended to curate “non-infringing” media for AI model training.

The Source.Plus project’s first initiative is a dataset seeded with nearly 40 million public domain images and images under Creative Commons’ CC0 license, which allows creators to waive nearly all legal interest in their works. Meyer claims that, despite being substantially smaller than some other generative AI training datasets out there, the Source.Plus dataset is already “high-quality” enough to train a state-of-the-art image-generating model.

“With Source.Plus, we’re building a universal ‘opt-in’ platform,” Meyer said. “Our goal is to make it easy for rights holders to offer their media for use in generative AI training — on their own terms — and frictionless for developers to incorporate that media into their training workflows.”

Rights management

The debate around the ethics of training generative AI models, particularly art-generating models like Stable Diffusion and OpenAI’s DALL-E 3, continues unabated — and has massive implications for artists however the dust ends up settling.

Generative AI models “learn” to produce their outputs (e.g., photorealistic art) by training on a vast quantity of relevant data — images, in this case. Some developers of these models argue that fair use entitles them to scrape data from public sources, regardless of that data’s copyright status. Others have tried to stay on the right side of the line, compensating or at least crediting content owners for their contributions to training sets.

Meyer, Spawning’s CEO, believes that no one’s settled on a best approach — yet.

“AI training frequently defaults to using the easiest available data — which hasn’t always been the most fair or responsibly sourced,” he told TechCrunch in an interview. “Artists and rights holders have had little control over how their data is used for AI training, and developers have not had high-quality alternatives that make it easy to respect data rights.”

Source.Plus, available in limited beta, builds on Spawning’s existing tools for art provenance and usage rights management.

In 2022, Spawning created HaveIBeenTrained, a website that allows creators to opt out of the training datasets used by vendors who’ve partnered with Spawning, including Hugging Face and Stability AI. After raising $3 million in venture capital from investors, including True Ventures and Seed Club Ventures, Spawning rolled out ai.txt, a way for websites to “set permissions” for AI, and a system — Kudurru — to defend against data-scraping bots.

Source.Plus is Spawning’s first effort to build a media library — and curate that library in-house. The initial image dataset, PD/CC0, can be used for commercial or research applications, Meyer says.

The Source.Plus library. Image Credits: Spawning

“Source.Plus isn’t just a repository for training data; it’s an enrichment platform with tools to support the training pipeline,” he continued. “Our goal is to have a high-quality, non-infringing CC0 dataset capable of supporting a powerful base AI model available within the year.”

Organizations including Getty Images, Adobe, Shutterstock and AI startup Bria claim to use only fairly sourced data for model training. (Getty goes so far as to call its generative AI products “commercially safe.”) But Meyer says that Spawning aims to set a “higher bar” for what it means to fairly source data.

Source.Plus filters images for “opt-outs” and other artist training preferences, and shows provenance information about how — and from where — images were sourced. It also excludes images that aren’t licensed under CC0, including those under a Creative Commons BY 1.0 license, which requires attribution. And Spawning says that it’s monitoring for copyright challenges from sources where someone other than the creator is responsible for indicating the copyright status of a work, such as Wikimedia Commons.

“We meticulously validated the reported licenses of the images we collected, and any questionable licenses were excluded — a step that many ‘fair’ datasets don’t take,” Meyer said.
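
To make that filtering concrete, here’s a minimal sketch of what a license and opt-out check of this kind could look like. It isn’t Spawning’s code; the record fields, the allowed-license list and the opt-out lookup are illustrative stand-ins.

```python
# Illustrative sketch (not Spawning's actual code): narrowing a set of
# candidate images down to CC0/public-domain works with no opt-out recorded.
# The record fields and the opt-out lookup are hypothetical stand-ins.
from dataclasses import dataclass

ALLOWED_LICENSES = {"CC0-1.0", "Public Domain"}  # CC BY and similar are excluded

@dataclass
class ImageRecord:
    url: str
    license: str   # license reported by the source (e.g., Wikimedia Commons)
    source: str    # where the image and its license claim came from

def is_opted_out(record: ImageRecord) -> bool:
    """Placeholder for a lookup against an opt-out registry such as
    Spawning's HaveIBeenTrained (assumed interface, not the real API)."""
    return False

def keep(record: ImageRecord) -> bool:
    # Drop anything whose reported license isn't CC0/public domain,
    # including attribution licenses like CC BY 1.0.
    if record.license not in ALLOWED_LICENSES:
        return False
    # Respect creator opt-outs and other training preferences.
    return not is_opted_out(record)

candidates = [
    ImageRecord("https://example.org/a.jpg", "CC0-1.0", "wikimedia"),
    ImageRecord("https://example.org/b.jpg", "CC-BY-1.0", "wikimedia"),
]
dataset = [r for r in candidates if keep(r)]  # keeps only a.jpg
```

The relevant point is that attribution licenses like CC BY fail the check even though they’re nominally open, which is the “higher bar” Meyer describes.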

Historically, problematic images — including violent and pornographic content, as well as sensitive personal images — have plagued training datasets both open and commercial.

The maintainers of the LAION dataset were forced to pull one library offline after reports uncovered medical records and depictions of child sexual abuse; just this week, a study from Human Rights Watch found that one of LAION’s repositories included the faces of Brazilian children without those children’s consent or knowledge. Elsewhere, Adobe’s stock media library, Adobe Stock, which the company uses to train its generative AI models, including the art-generating Firefly Image model, was found to contain AI-generated images from rivals such as Midjourney.

Artwork in the Source.Plus gallery. Image Credits: Spawning

Spawning’s solution is classifier models trained to detect nudity, gore, personally identifiable information and other undesirable bits in images. Recognizing that no classifier is perfect, Spawning plans to let users “flexibly” filter the Source.Plus dataset by adjusting the classifiers’ detection thresholds, Meyer says.
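
Here’s a rough sketch of how that kind of adjustable, threshold-based filtering could work in principle (hypothetical code, not Spawning’s, with classifier scores assumed to be probabilities between 0 and 1):

```python
# Illustrative sketch (not Spawning's classifiers): rejecting images based on
# per-category classifier scores, with thresholds the user can tighten or relax.
# Scores are assumed to be probabilities in [0, 1] produced by upstream models.
DEFAULT_THRESHOLDS = {"nudity": 0.5, "gore": 0.5, "pii": 0.5}

def passes_filters(scores: dict, thresholds: dict = DEFAULT_THRESHOLDS) -> bool:
    """Reject an image if any category's score meets or exceeds its threshold."""
    return all(scores.get(category, 0.0) < limit
               for category, limit in thresholds.items())

image_scores = {"nudity": 0.05, "gore": 0.01, "pii": 0.35}
strict = {"nudity": 0.2, "gore": 0.2, "pii": 0.2}  # a more cautious user

print(passes_filters(image_scores))          # True with the default thresholds
print(passes_filters(image_scores, strict))  # False: the PII score (0.35) >= 0.2
```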

“We employ moderators to verify data ownership,” Meyer added. “We also have remediation features built in, where users can flag offending or possibly infringing works, and the trail of how that data was consumed can be audited.”

Compensation

Most of the programs to compensate creators for their generative AI training data contributions haven’t gone especially well. Some rely on opaque metrics to calculate payouts; others pay out amounts that artists consider unreasonably low.

Take Shutterstock, for example. The stock media library, which has signed deals with AI vendors reportedly worth tens of millions of dollars, pays into a “contributors fund” for artwork it uses to train its generative AI models or licenses to third-party developers. But Shutterstock isn’t transparent about what artists can expect to earn, nor does it allow artists to set their own pricing and terms; one third-party estimate pegs earnings at $15 for 2,000 images, not exactly an earth-shattering amount.

Once Source.Plus exits beta later this year and expands to datasets beyond PD/CC0, it’ll take a different tack than other platforms, allowing artists and rights holders to set their own prices per download. Spawning will charge a fee, but only a flat rate — a “tenth of a penny,” Meyer says.

Customers can also opt to pay Spawning $10 per month — plus the typical per-image download fee — for Source.Plus Curation, a subscription plan that allows them to manage collections of images privately, download the dataset up to 10,000 times a month and gain access to new features, like “premium” collections and data enrichment, early.
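
For a rough sense of how those numbers add up, here’s an illustrative calculation. Only the flat fee, the subscription price and the download cap come from Spawning; the artist’s price is invented for the example, and the math assumes Spawning’s fee is added on top of that price rather than deducted from it.

```python
# Back-of-the-envelope sketch of the pricing described above.
SPAWNING_FLAT_FEE = 0.001          # "a tenth of a penny" per download
CURATION_MONTHLY = 10.00           # Source.Plus Curation subscription
artist_price = 0.05                # hypothetical price set by a rights holder
downloads = 10_000                 # a subscriber using the full monthly cap

artist_revenue = artist_price * downloads                            # $500.00
spawning_revenue = SPAWNING_FLAT_FEE * downloads + CURATION_MONTHLY  # $20.00
buyer_cost = artist_revenue + spawning_revenue                       # $520.00

print(f"artist: ${artist_revenue:,.2f}  spawning: ${spawning_revenue:,.2f}  "
      f"buyer pays: ${buyer_cost:,.2f}")
```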

Image Credits: Spawning

“We will provide guidance and recommendations based on current industry standards and internal metrics, but ultimately, contributors to the dataset determine what makes it worthwhile to them,” Meyer said. “We’ve chosen this pricing model intentionally to give artists the lion’s share of the revenue and allow them to set their own terms for participating. We believe this revenue split is significantly more favorable for artists than the more common percentage revenue split, and will lead to higher payouts and greater transparency.”

Should Source.Plus gain the traction that Spawning is hoping it does, Spawning intends to expand it beyond images to other types of media as well, including audio and video. Spawning is in discussions with unnamed firms to make their data available on Source.Plus. And, Meyer says, Spawning might build its own generative AI models using data from the Source.Plus datasets.

“We hope that rights holders who want to participate in the generative AI economy will have the opportunity to do so and receive fair compensation,” Meyer said. “We also hope that artists and developers who have felt conflicted about engaging with AI will have an opportunity to do so in a way that is respectful to other creatives.”

Certainly, Spawning has a niche to carve out here. Source.Plus seems like one of the more promising attempts to involve artists in the generative AI development process — and let them share in profits from their work.

As my colleague Amanda Silberling recently wrote, the emergence of apps like the art-hosting community Cara, which saw a surge in usage after Meta announced it might train its generative AI on content from Instagram, including artists’ posts, shows that the creative community has reached a breaking point. Artists are desperate for alternatives to companies and platforms they perceive as thieves — and Source.Plus might just be a viable one.

Even if Spawning always acts in the best interests of artists (a big if, considering it’s a VC-backed business), I wonder whether Source.Plus can scale up as successfully as Meyer envisions. If social media has taught us anything, it’s that moderation — particularly of millions of pieces of user-generated content — is an intractable problem.

We’ll find out soon enough.
