Corporate espionage is certainly nothing new. The lengths that some countries and organizations will go to in order to know more about the developments of a competitor are, at times, overwhelming. I mean, you’ve seen Mission Impossible, right? Some CEOs must start to get more than a little worried when it’s announced that Tom Cruise is paying a visit to their headquarters.
There’s no doubt the internet has benefited companies and organizations, big and small, the world over. Many businesses have been able to fashion or curate an online image that advertises their services appropriately and shows them in an advantageous light. In short, they’re in control of the narrative.
By the same token, it’s never been easier to highlight practices or find out particular details that some organizations would rather you not know about, but largely it’s, pardon the pun, business as usual.
Now, however, the risks of providing business data to a public generative AI (GAI) product are only just being realized by many CEOs across the board.
Let’s say Pepsi wanted to vanquish its long-time foe, Coca-Cola, once and for all, and establish itself as the king of carbonated brown liquid.
Pepsi could use AI thus:
“How can I create a direct competitor to the company Coke?”
“What are Coke’s specific weaknesses?”
“What are Coke’s greatest strengths?”
“What intellectual property does Coke have?”
“What trade secrets does Coke have?”
“What happened to Tab Clear?”
Now consider how much better the answers the GAI product would provide if Coca-Cola allowed its staff to provide it with company data of any kind.
This is a fairly innocuous example, but the point remains that it could be your company in that exchange above.
Let’s explore these risks in a little more detail and see how they can harm your organization.
Certain GAI products can and do generate text that is remarkably similar to their training data. Without a business owner’s knowledge or permission, a GAI product such as ChatGPT or Google Bard may reproduce such data, violating copyrights or trademarks, or even revealing intellectual property that was intended to stay secret. If it’s been recorded in a digital format and connected to the net, it can be scraped and read by anyone smart enough to find it.
It’s feasible, and there are many examples of it already, that any user of a public GAI product, including your competitors, can ask the GAI direct questions about your company, which could reveal data that was supposed to be kept confidential.
Many public GAI products are designed to prevent such possibilities, but experienced prompt writers may find ways to circumvent these safeguards using strategic questions that exploit weaknesses in a GAI’s guardrails.
Another less obvious but entirely possible risk is that a GAI product could deduce useful knowledge about your business that you weren’t even aware of.
This could arise from the AI being trained on information about your company to generate insights on trends, strategies and tactics.
If the AI was trained on your press releases, financial reports or news articles, it could feasibly reveal patterns or trends, or indicate that you have a focus on a particular market segment or product.
Not only this, but if the AI has been trained on customer data (not infringing on privacy laws, of course), it could in theory generate insights about customer behavior, engagement and satisfaction.
It can also spot patterns and opportunities in broader market trends. What is your company not doing to keep up with these trends, and could a competitor use this to their advantage?
The most important thing to remember is that the AI is only as good as the information it’s trained on, and it’s only privy to information that is already out there.
There’s also a conceivable chance that some of your business’s data, if detailed enough, could be used by bad actors to craft phishing attacks. In theory, these malign groups could introduce malware into your organization, which would then open it up to numerous other threats.
Of course, GAI is only one vehicle such groups use to access a company’s data; techniques such as automated password cracking and network vulnerability scanning remain prevalent risks for any group that wants to use them.
This means your firm has to double down on AI security efforts. Robust data anonymization, advanced encryption techniques, stricter access controls and the training of employees to recognize potential AI-enabled attacks should now be at the top of your to-do list.
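To make the data anonymization point concrete, here is a minimal sketch of how a team might redact sensitive details from text before an employee pastes it into a public GAI product. The pattern names and formats (the email regex, the phone format, the "PRJ-" project-code convention) are illustrative assumptions, not a standard; a real deployment would tailor these to the organization's own data.

```python
import re

# Illustrative redaction patterns -- adjust to your organization's data.
# The "PRJ-1234" project-code format is a hypothetical internal convention.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "PROJECT_CODE": re.compile(r"\bPRJ-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with labeled placeholders
    before the text is sent to a public GAI product."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Ask about PRJ-1234; contact jane.doe@example.com or 555-123-4567."
print(redact(prompt))
```

Simple regex redaction like this won't catch everything, of course, but running employee prompts through even a basic filter is a cheap first layer before the stricter access controls mentioned above.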
These are just some of the ways that almost anyone could attain information about your company.
Head to the Praxi Data website and sign up to receive the ebook, Myths, Promises & Threats: Generative AI for the Enterprise, that deals with AI topics as crucial to your company as this blog.