In a previous post, we explored the vast potential of OpenAI’s GPT-4 in the information security space, specifically how it could augment security teams by running third-party vendor assessments with ChatGPT. We now turn theory into practice by sharing our experience of running a vendor security assessment.
Building a policy library from scratch can be a daunting task, but with GPT-4, we’ve streamlined the process. The key was to converse with the AI model to create a comprehensive set of policies that are easy to understand and align with security best practices. The policies cover topics including Information Security, Change Management, Incident Response, Disaster Recovery, and Third-Party Vendor Assessment.
The Vendor Assessment Policy, available on our GitHub, is one of the highlights of our library. This policy establishes a standard process for evaluating third-party vendors’ security practices and is a crucial tool for any organization that relies on external services. The policy ensures that vendors maintain appropriate security controls and adhere to high standards of data security, just like us.
As an example, let’s walk through a hypothetical scenario of a third-party vendor assessment using our newly minted policy.
A manager within the company has a new vendor request: a tool that integrates with the company’s email client and CRM system to craft and track email campaigns. The tool also requires a browser extension installed on the workstations of the users in that group.
The manager initiates a conversation with ChatGPT to start the vendor assessment process. The AI model begins by asking a series of questions to understand the data involved and the potential security implications.
“Does this service process or store any personally identifiable information (PII)? What kind of access will the service have to our systems?” ChatGPT inquires. The manager explains that the service tracks email interactions with leads and might store some PII. The service will also have access to the company’s CRM and email client APIs.
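To make this intake step repeatable, the conversation can be seeded with the policy itself. Below is a minimal sketch of how one might assemble the messages for a chat-completion call, with the Vendor Assessment Policy as the system prompt. The function name, message wording, and the vendor request text are all illustrative assumptions, not part of the published policy.

```python
# Hypothetical sketch: seeding a chat session with the Vendor Assessment
# Policy so the model can drive the intake questions. Names and wording
# here are illustrative, not taken from the actual policy.

def build_assessment_messages(policy_text: str, vendor_request: str) -> list[dict]:
    """Assemble the message list for a chat-completion call."""
    return [
        {
            "role": "system",
            "content": (
                "You are assisting with a third-party vendor security "
                "assessment. Apply the following policy when asking intake "
                "questions and evaluating answers:\n\n" + policy_text
            ),
        },
        {"role": "user", "content": vendor_request},
    ]

messages = build_assessment_messages(
    policy_text="<contents of the Vendor Assessment Policy>",
    vendor_request=(
        "We want to onboard an email-campaign tool that integrates with "
        "our CRM and email client and installs a browser extension."
    ),
)
# The resulting list can then be passed to a chat-completion API.
```

Keeping the policy in the system prompt means every manager who starts an assessment gets the same question set and evaluation criteria, rather than an ad-hoc conversation.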
ChatGPT then asks about the vendor’s security certifications and the results of their latest audit. The manager provides the vendor’s SOC 2 report, which notes no significant issues. The vendor also encrypts data both in transit and at rest and has a robust vulnerability management program.
Once all the information is gathered, ChatGPT applies the guidelines from the Vendor Assessment Policy to evaluate the vendor. Based on the manager’s responses, the AI model identifies potential risks and provides a list of remediation actions.
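The evaluation step can be sketched as rule-based logic mapping intake answers to a risk level and remediation actions. The rules, thresholds, and field names below are simplified assumptions for illustration; the actual policy’s criteria are not reproduced here.

```python
# Hypothetical sketch of policy-driven vendor evaluation. The rules and
# thresholds below are illustrative assumptions, not the policy's actual
# criteria.

def evaluate_vendor(answers: dict) -> dict:
    """Map intake answers to a risk level and a list of remediation actions."""
    risks = []
    actions = []

    if answers.get("stores_pii"):
        risks.append("PII stored by vendor")
        actions.append("Require a data processing agreement covering PII.")
    if answers.get("api_access"):
        risks.append("API access to internal systems")
        actions.append("Scope API tokens to least privilege and rotate them.")
    if not answers.get("soc2_certified"):
        risks.append("No SOC 2 attestation")
        actions.append("Request an independent security audit report.")
    if not answers.get("encrypts_data"):
        risks.append("Data not encrypted in transit and at rest")
        actions.append("Require encryption before onboarding.")

    # Simple illustrative threshold: three or more findings is high risk.
    level = "high" if len(risks) >= 3 else "medium" if risks else "low"
    return {"risk_level": level, "risks": risks, "remediation": actions}

# The email-campaign vendor from the walkthrough: stores PII and has API
# access, but is SOC 2 certified and encrypts data.
result = evaluate_vendor(
    {"stores_pii": True, "api_access": True,
     "soc2_certified": True, "encrypts_data": True}
)
```

In practice the model applies the policy text directly rather than hard-coded rules, but structuring the output this way (risk level plus concrete remediation actions) is what makes the assessment actionable for the requesting manager.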
Throughout the process, the AI model facilitates the conversation, ensures all necessary aspects are covered, and provides insightful advice based on the policy guidelines. This example highlights how GPT-4 can streamline the vendor assessment process, making it more efficient and less prone to human error.
The use of AI in InfoSec policy creation and management is still in its infancy. However, our experience with GPT-4 shows great promise for the future, bringing efficiency and accuracy to tasks often burdened by complexity.