Adobe expands bug bounty programme to account for GenAI

Adobe has expanded the scope of its bug bounty programme – which is overseen by specialists at HackerOne – to account for the development of generative artificial intelligence (AI), rewarding ethical hackers who discover and responsibly disclose vulnerabilities Adobe Firefly, its generative AI platform, and its Content Credentials service.

The organisation said that as generative AI integrates more closely into people’s daily lives it was becoming ever-more important to understand and mitigate the risks arising, and that by expanding its programme and fostering an open dialogue over safe, secure and trustworthy AI, it hoped to encourage fresh ideas and perspectives, while offering transparency and improving trust.

“The skills and expertise of security researchers play a critical role in enhancing security and now can help combat the spread of misinformation,” Dana Rao, executive vice president, general counsel and chief trust officer at Adobe.

“We are committed to working with the broader industry to help strengthen our Content Credentials implementation in Adobe Firefly and other flagship products to bring important issues to the forefront and encourage the development of responsible AI solutions,” said Rao.

Launched in March 2023 having been developed on its Sensei platform, Adobe Firefly is a family of generative AI models for designers, that has been trained on millions of images from Creative Commons, Wikimedia and Flickr Commons, as well as Adobe Stock and other images in the public domain.

In open the service up to bug bounty hunters, Adobe wants hackers to pay specific attention to the OWASP Top 10 for Large Language Models (LLMs), looking at issues arising from prompt injection, sensitive information disclosure or training data poisoning, to pinpoint weaknesses in Firefly

The second part of the expansion, covering Content Credentials, will supposedly help provide more transparency as to the provenance of items created using Firefly. They are built on the C2PA open standard and serve as tamper-evident metadata about their creation and editing. Content Credentials are also integrated across a number of Adobe products besides Firefly, including Photoshop and Lightroom.

“Building safe and secure AI products starts by engaging experts who know the most about this technology’s risks. The global ethical hacker community helps organizations not only identify weaknesses in generative AI but also define what those risks are,” said Dane Sherrets, senior solutions architect at HackerOne. “We commend Adobe for proactively engaging with the community, responsible AI starts with responsible product owners.”

Ethical hackers interested in taking a look under the bonnet can find more information on Adobe’s dedicated HackerOne page, or if they are interested in joining its private bug bounty programme, can apply here.

For cyber pros making their way to BSides San Francisco on the weekend of 4 and 5 May, Adobe will also be present at the Bug Bounty Village, and sponsoring a “dystopian” Saturday night party at which dancers will “weave an interpretive tale of technology’s ethical struggles”.

Adobe joins a growing number of tech firms taking steps to address the risks of generative AI through bug bounty programmes, among them Google, which expanded its bug bounty scheme, the Vulnerability Rewards Program (VRP) to encompass attack scenarios specific to the generative AI supply chain in October 2023.

Source

Shopping Cart
Scroll to Top