OpenAI recently launched its Bug Bounty Program to reward researchers who find and report security vulnerabilities, helping to make its systems more secure.
OpenAI develops and promotes friendly AI research. Its best-known model, ChatGPT, interacts conversationally: it can answer follow-up questions, admit mistakes, challenge incorrect assumptions, and reject inappropriate requests.
“We believe that transparency and collaboration are crucial to addressing this reality. That’s why we are inviting the global community of security researchers, ethical hackers, and technology enthusiasts to help us identify and address vulnerabilities in our systems. We are excited to build on our coordinated disclosure commitments by offering incentives for qualifying vulnerability information. Your expertise and vigilance will have a direct impact on keeping our systems and users secure,” OpenAI said in its announcement blog post.
Companies must proactively protect their systems and data in today’s AI-driven world, where technology advances rapidly and cyberattacks are ever-present.
A proactive measure like OpenAI’s Bug Bounty Program can make a huge difference to the company’s security.
The Bug Bounty Program rewards depend on the severity of the vulnerability.
OpenAI has partnered with Bugcrowd, the platform through which security researchers submit their vulnerability reports.
Rewards range from $200 to $6,500 per vulnerability, with a maximum of up to $20,000 for high-severity bugs.
Scope and Rewards
The following services and applications are in-scope:
1. The OpenAI APIs
Public cloud resources or infrastructure involved in serving the OpenAI API are also in scope, including:
- Cloud storage accounts (e.g., Azure data blobs)
- Cloud compute servers (e.g., Azure virtual machines)
Note that model safety and content issues are not eligible for a reward in any category.
2. ChatGPT
ChatGPT is in scope, including ChatGPT Plus, logins, subscriptions, OpenAI-created plugins (e.g., Browsing, Code Interpreter), plugins you create yourself, and all other functionality.
Note: You are not authorized to conduct security testing on plugins created by other people.
Examples of things we are interested in:
- Stored or Reflected XSS
- Authentication Issues
- Authorization Issues
- Data Exposure
- Payments issues
- Methods to bypass Cloudflare protection by sending traffic to endpoints that are not protected by Cloudflare
- Ability to run queries on pre-release or private models
- OpenAI-created plugins (e.g., Code Interpreter)
- Security issues with the plugin creation system:
  - Outputs that cause the browser application to crash
  - Credential security
  - Methods to cause the plugin service to make calls to domains unrelated to the one the manifest was loaded from
Note: If your account is approved to create plugins, you may create and test your own plugins, their functionality, and their interactions with ChatGPT. You may sign up for the waitlist, but OpenAI and Bugcrowd cannot expedite waitlist access, so please do not ask. You may not perform security testing on plugins created by other people (e.g., plugins in the plugin store).
- Model issues are strictly out of scope and will not be rewarded; for model-related concerns, please use OpenAI’s separate reporting channels.
3. Third Party Corporate Targets
This target group consists of confidential OpenAI corporate information that may be exposed through third parties. Examples of the types of vendors that would qualify in this category include:
- Google Workspace
- Asana, Trello, Jira, Monday.com
- Notion, Confluence, Evernote
- Intercom, Hubspot, Zendesk
- Stripe, Airbase, Navan
- Tableau, Mode, Charthop, Looker
OpenAI’s Bug Bounty Program encourages researchers to find security vulnerabilities and report them ethically to the company. Researchers can report security bugs to OpenAI via the company’s bug reporting form. Once a report is received, OpenAI’s security team will review it and address the issue. The researcher who reported the vulnerability will receive a reward once it is fixed.
ChatGPT was recently banned in Italy over privacy concerns, but reports now suggest the Italian government may lift the ban and allow the service to resume if OpenAI agrees to its terms.