Data privacy and security in a ChatGPT world

By Brian Weers, Enterprise Principal

What is the status of the unofficial GPT pilot your company is unknowingly running? With all the excitement, energy, and media coverage surrounding AI and generative technologies, we believe you need to be thinking and talking about what approved usage of these technologies looks like at your company.

According to an Increditools article, ChatGPT has been used by over 100 million people worldwide and handles approximately 10 million requests per day, with monthly traffic estimated at around 96 million visitors. Those numbers suggest a reasonable probability that someone in your company is already feeding information into an AI tool, whether during the workday or on company hardware. We are all familiar with “shadow IT” – now, welcome to the world of “shadow AI.”

While exciting, generative AI is not risk-free for your company. At a minimum, some duties should not be entrusted to an AI because the cost of being wrong is simply too high. While handing an AI the response to an HR complaint brought by an employee may seem obviously wrong to you, not everyone is sensitive to that risk. Different tools also have different data policies, so consider how company data or IP could leak depending on which tool is selected and the scenario to which it is applied.

I recently read an article about employees at a technology company who leaked sensitive data by pasting source code into ChatGPT. It made me wonder, “What company info or IP are other folks inadvertently releasing into the wild?”

If that question gave you a little shudder or a cold sweat, here are four steps you can take to start reducing your risk – and bring your pulse rate back down.

1. Establish a GPT use policy. In crafting your policy, consider these questions:

  • Can these tools be accessed on company hardware?
  • How can or should they be used?
  • When, where, and for what should they NEVER be used?
  • What is the new “common sense” around AI usage?

2. Communicate approved tools as well as forbidden tools and practices. Make sure to cover:

  • Which tools should NEVER be used (a small audit sketch follows this list).
  • How much trust to responsibly place in an AI answer, depending on the type of answer.
  • Which settings must be configured to control data sharing.

3. For AI tools you wish to allow in your organization, review the data agreements so you know who gets access to the data you feed the tool and how that data will be used.

4. Finally, establish an approach and cadence to review and iterate on the previous three items.
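To make the enforcement side of step 2 concrete, here is a minimal sketch of what surfacing “shadow AI” traffic might look like: a short Python script that scans a web-proxy log and flags requests to tools on your forbidden list. The log format (space-separated timestamp, user, and destination host), the file name, and the domain list are all illustrative assumptions – substitute your proxy’s actual format and your organization’s real denylist.

```python
# A minimal sketch, assuming a space-separated proxy log of the form:
#   <timestamp> <user> <destination-host>
# The log format and domain list are illustrative assumptions -- substitute
# your proxy's real format and your organization's actual denylist.

from collections import Counter

# Hypothetical denylist -- maintain this alongside your written GPT use policy.
FORBIDDEN = {"example-ai-tool.com"}  # tools your policy bans outright

def audit_proxy_log(path: str) -> Counter:
    """Count requests to forbidden generative AI hosts, keyed by (user, host)."""
    hits = Counter()
    with open(path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 3:
                continue  # skip blank or malformed lines
            _timestamp, user, host = fields[:3]
            if host in FORBIDDEN:
                hits[(user, host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in audit_proxy_log("proxy.log").most_common():
        print(f"{user}: {count} request(s) to forbidden host {host}")
```

Even a rough report like this turns the invisible pilot from the opening question into something you can actually see and govern.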

Certainly, this is a starting point rather than a robust final solution, but if you did not know where to begin, it is a reasonable one. Thanks for reading – I would love to hear your ideas or about your initial efforts to channel AI usage in a safe and secure way.
