
How a private ChatGPT can help your organisation overcome risks from public AI tools


While many organisations and users are starting to see the value that artificial intelligence can bring to everyday workplace tasks, the security concerns are becoming real: organisations are already seeing the consequences of sensitive data being fed into public large language models (LLMs).

In his latest blog on artificial intelligence in the workplace, Head of Service Architecture Tristan Watkins explains how you can overcome the risks of public ChatGPT by using a private AI service for your organisation.

Picture the scene in your workplace. You have a time-poor user who asks ChatGPT to summarise a document. This seemingly innocuous request carries risks that few people understand.

  • What if the document contained confidential information about your organisation?
  • What if it contained personally identifiable information or other sensitive information types? 
  • What if it contained your intellectual property?
  • What if your user is attuned to all these worries – and still makes a mistake? 


All these data loss concerns take a new shape with ChatGPT. The experience of using it feels intimate, and the interaction that triggers the leak is different from the normal data loss pattern. The user isn’t sending an email or sharing a document – they are just asking their virtual assistant for help.

While much of the focus on AI risk has centred on questions of hallucinations, authenticity, bias and disinformation, this new data loss vector is the most immediate AI risk today. Over the last few months, a new approach to introducing controlled usage of generative AI in a Microsoft-orientated organisation has emerged.

The organisational risks of public ChatGPT use

In this blog, I want to focus on the use of public AI services for work by inspecting two examples.

First, we have seen the Samsung data leak, where developers asked ChatGPT for help analysing their source code, resulting in Samsung banning generative AI tools. And we have seen the Italian government impose, then lift, a ban on ChatGPT once the service added an option for users to request removal of their personal data. For the record, OpenAI has since gone beyond that, adding an option to disable chat history (so the data aren’t held for training purposes in the first place).

You might see OpenAI’s response and think the problem is solved, but as an organisation, you can’t rely on each user going through this process. Many users won’t know that it exists, or care.

Chat history is in fact quite useful, so you don’t really want to make users turn it off. But anything a user asks is then stored in ChatGPT, and by default it might be used for training future models – and you probably don’t want your work content in there when a future GPT is released. Additionally, OpenAI’s datacentres all reside in the United States today, which is something to be mindful of if you have data sovereignty obligations.

Azure OpenAI Service: Helping you gain control of generative AI

With all this in mind, we’re seeing some organisations taking a very restrictive step by blocking access to ChatGPT. In my view, this is incomplete for two reasons:

  1. There are other generative AI services to consider.   
  2. Restricting access without enabling a private service will simply drive users to ChatGPT on their own devices, creating a shadow IT problem.


This is not to say that we think blocking access to public generative AI services is a bad idea, just that it is incomplete. A better approach would be to:  
 

  1. Block access to any public generative AI services you don’t fully trust. This can be accomplished through many security technologies, such as a forward proxy or by marking the application as unsanctioned in Microsoft Defender for Endpoint (a sketch of the latter follows this list).
  2. Provide a compelling, private alternative.   


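As a hedged illustration of the first step, the sketch below submits a custom network indicator to the Microsoft Defender for Endpoint indicators API to block a public generative AI domain. It assumes you already hold an Azure AD application token with the Ti.ReadWrite.All permission; the domain, titles and wording are examples only.

```python
# Hedged sketch: block a public generative AI domain with a custom network
# indicator in Microsoft Defender for Endpoint. Assumes an Azure AD app
# registration with the Ti.ReadWrite.All permission and a token in hand.
import requests

ACCESS_TOKEN = "<aad-app-token>"  # placeholder: acquire via your usual OAuth flow

indicator = {
    "indicatorValue": "chat.openai.com",  # example domain to block
    "indicatorType": "DomainName",
    "action": "Block",
    "title": "Block public ChatGPT",
    "description": "Blocked pending rollout of our private Azure OpenAI service.",
    "severity": "Informational",
}

resp = requests.post(
    "https://api.securitycenter.microsoft.com/api/indicators",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=indicator,
)
resp.raise_for_status()  # success means the indicator was created or updated
print(resp.json())
```

A forward proxy rule covering the same domains achieves an equivalent result if you prefer network-level control; the important part is pairing the block with step two.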
These approaches belong together, and Azure OpenAI Service is the most sensible service to underpin the effort for a Microsoft-orientated organisation.

Azure OpenAI Service already ships with the ChatGPT models, and there is a range of interfaces that can be used to provide this private instance of GPT-3.5 or GPT-4 (or other models, such as DALL-E 2 for image generation) directly to your users. At Advania, we are already delivering this approach to some of our clients today.
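For a sense of what the private instance looks like to a developer, here is a minimal sketch of a chat call against an Azure OpenAI deployment, using the openai Python library in its Azure configuration. The resource name, key, API version and deployment name are placeholders for your own values.

```python
# Minimal sketch of a private ChatGPT call against Azure OpenAI Service,
# using the openai Python library (pre-1.0 Azure configuration). The
# resource name, key, API version and deployment name are placeholders.
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"  # your private resource
openai.api_version = "2023-05-15"
openai.api_key = "<your-azure-openai-key>"

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # the *deployment* name you chose in Azure
    messages=[
        {"role": "system", "content": "You are a helpful assistant for our staff."},
        {"role": "user", "content": "Summarise the attached meeting notes."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Whatever chat interface you put in front of users, a call of this shape is what sits behind it, so the interface can be swapped without changing the underlying service.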

Why use Azure OpenAI for your organisation?

If you have decided that you want a private ChatGPT, and you are a Microsoft-orientated organisation, Azure is an obvious choice because the trust, compliance and security considerations have already been addressed. There are many direct integrations in Azure that will become compelling as this initial foundation is extended to reach enterprise data.

Is it possible to use OpenAI’s own services, like ChatGPT for Business once it arrives, instead? Couldn’t we build elsewhere using OpenAI’s own services, later connecting them to your enterprise data? These options are certainly possible, but it’s hard to discern any advantage over Azure OpenAI, and complexity will be much lower if the generative AI lives where the enterprise data lives (which is also where users already do their work).

Additionally, there are specific privacy, security and compliance considerations that distinguish Azure OpenAI Service from alternatives. The most crucial difference is that Azure OpenAI prompts and completions are never retained for future training of OpenAI models.

Beyond that, it’s also important to note:

  • Although both OpenAI and Microsoft store prompts and completions for 30 days for abuse and misuse monitoring (this is important for AI safety), only Azure OpenAI Service lets you apply to opt out of this monitoring.
  • You can tune Content Filters to your needs in Azure OpenAI Service.   
  • If you use Customer-Managed Keys (formerly known as BYOK) in Azure or Microsoft 365, your Azure OpenAI Service stored content can also be protected under keys that you control.   
  • If you control Microsoft’s access to your content with Customer Lockbox, this will also apply to data stored in Azure OpenAI Service.   
  • Where private connectivity in Azure is in place, Azure OpenAI can also benefit from these protections, for instance with Private Endpoints (a provisioning sketch follows this list).
  • Azure OpenAI Service is part of the broader Azure Cognitive Services stack, so as other AI needs emerge, they can be brought together with OpenAI Services most directly. For instance, proximity to Azure Cognitive Search will ease integration.  
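To illustrate the private networking point, the sketch below provisions an Azure OpenAI resource with public network access disabled, using the azure-mgmt-cognitiveservices Python SDK. The subscription, resource group, region and names are placeholder assumptions; treat it as a sketch of the setting, not a full deployment.

```python
# Hedged sketch: provision an Azure OpenAI resource with public network
# access disabled, ready to sit behind a Private Endpoint. Subscription,
# resource group, names and region are placeholders, not prescriptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import Account, AccountProperties, Sku

client = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

poller = client.accounts.begin_create(
    resource_group_name="rg-private-ai",       # hypothetical resource group
    account_name="oai-private-chatgpt",        # hypothetical resource name
    account=Account(
        kind="OpenAI",
        sku=Sku(name="S0"),
        location="uksouth",
        properties=AccountProperties(
            custom_sub_domain_name="oai-private-chatgpt",  # required for AAD/private access
            public_network_access="Disabled",  # traffic only via Private Endpoints
        ),
    ),
)
print(poller.result().name)
```

The Private Endpoint itself, Customer-Managed Keys and Customer Lockbox are then configured as separate resources and settings on top of this foundation.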


All told, we feel the decision-making process should start from the ‘why not Azure?’ position for these reasons, as well as the probability that many more supporting reasons will emerge in the years to come.

Of course, this is all a very technology-orientated, incomplete view of AI risk, but an important part of the entire picture. If you’d like some guidance for your organisation through the wider AI risk landscape, we are keen to help you out.

What about Copilots?

You might be following this line of thinking but wondering whether Microsoft Copilots will solve this problem once they arrive. We don’t believe they will: Copilots provide specific rather than generalised capabilities.

Within their domain, they are more skilled and accurate than a generalised capability like an LLM because they have specific training and narrower parameters. Copilots do not directly merge these specific capabilities with the more general (and sometimes unreliable) knowledge provided by GPT: they use GPT for natural language capabilities, but they do not rely on the language model for knowledge, precisely because GPT has limitations with reliability.

Even if Copilots did solve this generalised need, we think it’s important to provide a private ChatGPT capability today, or our most present risks will remain unaddressed. We do think Copilots will be extremely valuable in their specific domains, but we want Copilots and a more generalised capability like ChatGPT.

How can I use private ChatGPT for work?

If Copilots are good assistants for specific types of work, and it’s possible to provide a private instance of ChatGPT today, how do we connect this private ChatGPT to work data?

The concise answer is that this is also possible today, but it needs more consideration than may be obvious at a glance, and in some cases there is supporting foundational work that needs to come first. In all cases, we think it will be natural to start from the private ChatGPT position and extend to wider enterprise uses once they are well understood and well founded.
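To make the shape of this more concrete, here is a minimal, hedged sketch of the common retrieval pattern: pull relevant passages from an Azure Cognitive Search index, then pass them to the private Azure OpenAI deployment as context. The service names, index name, keys and the ‘content’ field are all placeholder assumptions, not a prescribed design.

```python
# Hedged sketch: ground the private ChatGPT in enterprise data by pulling
# passages from an Azure Cognitive Search index before asking the model
# ("retrieval-augmented generation"). Service names, index, keys and the
# 'content' field are placeholder assumptions.
import openai
import requests

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "<your-azure-openai-key>"

SEARCH_URL = (
    "https://<your-search-service>.search.windows.net"
    "/indexes/<your-index>/docs/search?api-version=2020-06-30"
)

def search_documents(query: str, top: int = 3) -> list:
    """Return the most relevant passages from the Cognitive Search index."""
    resp = requests.post(
        SEARCH_URL,
        headers={"api-key": "<your-search-key>"},
        json={"search": query, "top": top},
    )
    resp.raise_for_status()
    return [doc["content"] for doc in resp.json()["value"]]  # assumes a 'content' field

def ask_with_context(question: str) -> str:
    """Answer a question using the retrieved passages as the model's only context."""
    context = "\n\n".join(search_documents(question))
    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo",  # your Azure deployment name
        messages=[
            {"role": "system",
             "content": "Answer using only the context provided.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(ask_with_context("What is our document retention policy?"))
```

This is where the proximity to Azure Cognitive Search mentioned above pays off: the retrieval layer and the model sit in the same estate, under the same identity, networking and compliance controls.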

A private ChatGPT is the low-hanging fruit – the solution to the current risks – and the most natural first milestone on the journey to wider generative AI capabilities.

Want to find out how you can leverage AI securely within your organisation?

Get in touch – let’s find out what we can do to get you started.
