Adobe is updating its Terms of Use after ambiguous language sparked backlash from users about the privacy and ownership of their work.
In a blog post on Monday, the company behind the creative software tools Photoshop, Premiere, and InDesign said it would roll out updated language in its user terms by June 18, 2024. “At Adobe, there is no ambiguity in our stance, our commitment to our customers, and innovating responsibly in this space,” wrote EVPs Scott Belsky, who oversees product, and Dana Rao, who oversees legal and policy.
Belsky and Rao wrote that their company has “never trained generative AI on customer content,” nor taken ownership of people’s unpublished work, and that Adobe wasn’t announcing an intention to do these things with the recent ToS update. It’s worth noting that Adobe’s Firefly generative AI models are trained on contributions to Adobe’s stock library along with public domain data, but this is being treated as distinct from content created by users for their own personal and professional purposes.
“That said, we agree that evolving our Terms of Use to reflect our commitments to our community is the right thing to do,” wrote Belsky and Rao.
Last week, a PR fiasco erupted when users were notified of Adobe’s updated Terms of Use. Without a clear explanation of what had changed or why, Adobe users assumed the worst and believed the updated terms gave the company sweeping control over their content. Specifically, users thought Adobe could now access unpublished work to train its Firefly AI models and could even claim ownership of works in progress. The update lacked clarity and transparency at a time when generative AI tools are widely seen as threatening the work and livelihoods of creatives. Swift backlash followed, including promises to abandon the platform.
But as it turns out, the updated policy granting Adobe access to user content was about screening for activity that breaks the law or violates its terms. Adobe said it never intended to train its models on user content or to take control of it. Belsky and Rao also asserted that users have the choice to “not participate in its product improvement program” (sharing content for model training), that its licenses are “narrowly tailored to the activities needed,” such as scanning for inappropriate or illegal behavior, and that Adobe does not scan content stored locally on users’ computers.
So, all of this could have been avoided with clearer communication, but some reputational damage has likely been done.
“We recognize that trust must be earned,” said Belsky and Rao, closing the post.