In 2009, I gave a talk to a group of about 20 Silicon Valley CIOs, who were focusing on a novel question: what to do about all the personal smartphones employees had begun using since the introduction of the iPhone two years earlier.
The inclination was to ban them for business purposes, or at least to tightly restrict their use. After all, employees had almost exclusively used company-issued electronics for their work, and CIOs liked the control that ownership of the devices gave them.
Besides, smartphones introduced all sorts of complications. What if a company sanctioned the use of personal phones and an employee got distracted and had a car accident? Would the company be liable? What about protections for corporate data? If employees routinely used personal devices for work, they could much more easily walk off with company secrets. How would you even incorporate all the different flavors of smartphones into the IT infrastructure?
Now that we're a bit more than two years into the generative AI revolution, we're about to face the same sort of adjustment and discomfort, because "bring your own AI" (BYOAI) will become more and more common at work.
The good news is that, as employers and employees cope with the confusion, we can learn some lessons from what happened with smartphones and other waves of innovation.
Let's have a look.
The first lesson is that employers can't resist the trend toward BYOAI any more than King Canute could command the tide to stop coming in. Current employees are getting accustomed to using generative AI and will want to keep using the same large language model (LLM) when they move to a new employer. And all the digital natives entering the workforce from college will expect access to the AI tools they've been using in coursework or experimenting with on their own.
Employers will have more control with BYOAI than they did with smartphones, because licensing fees for LLMs can be steep, and most employees will rely on their employers for access, rather than just heading to Best Buy and purchasing something on their own. In addition, as insurance companies increasingly develop specialized tools for agents, claims representatives, underwriters, etc., employees will have to fit into the corporate system.
Still, some uses of generative AI, such as individual productivity tools for research and writing, will come down to personal preference, and employers should be prepared to accommodate them. In fact, employers should encourage experimentation with models and the templates built on top of them. That's because, while most innovations need a sponsor in management or even the C-suite, a lot of innovation with generative AI can bubble up from the lower ranks. Yes, rethinking a claims process is a major, departmentwide or companywide endeavor, but individuals are finding ways to improve productivity for themselves and the small groups they work in, and that sort of progress should be encouraged.
A second lesson from earlier technological leaps is that there will be legal issues. There are always legal issues.
One will concern just what I can take with me if I go to a new employer. If I've fine-tuned an editing template on top of an LLM you've bought me access to, and I did the work on your time, is that template my property or yours? I suspect it's yours, but those are the kinds of issues courts will sort out.
Generative AI will also make it easier for people to walk off with information that companies will view as proprietary data and trade secrets. Part of the magic of generative AI is that it can amalgamate data from so many different sources, so quickly, and many companies are using that capability to speed underwriting and other processes. But that ease of access also means many employees may be able to get into far more internal systems than they could in the past.
There will also certainly be copyright violations, because generative AI systems scoop up so much information and sometimes don't flag that what they produce in reply to a prompt is drawn almost verbatim from a text or an image. That's a problem in general, but it will be more severe when LLM use happens outside corporate standards.
The third big lesson I see is that BYOAI will make it harder to have a "single version of the truth," just as spreadsheets and PowerPoint presentations did. With spreadsheets and PowerPoints, individuals would use corporate data but would interpret it in their own way. They would sometimes get the data wrong or misinterpret it. They would also supplement the data with other sources. When they were done, they had come up with their own version of the truth, and it stayed on their computer, generally not available to others and not synched up with others' versions of the truth.
CIOs have been fighting the battle over data consistency for decades, and they'll face it again with generative AI, in general, and with BYOAI, in particular. People will again download information, process it with AI, add their thoughts and produce a version of the truth that will generally stay on their computer. That so many companies operate in the cloud these days could make synching up easier, but CIOs will have to work to make sure they don't face the kind of splintering that happened with earlier technologies.
How, then, should insurance companies react to the issues that BYOAI will raise?
Some MIT researchers offer three general suggestions, as reported in the MIT Sloan Management Review.
They warn, off the top, against banning BYOAI tools:
“If we restrict access to these tools, employees won’t just stop using generative AI. They’ll start looking for workarounds — turning to personal devices and using unsanctioned accounts and hidden tools. [In that case], rather than mitigating risk, we’d have made it harder to detect and manage.”
Then, they recommend:
"1. Build specific guidance.
"Leaders should develop clear guardrails and guidelines that enable employees to experiment safely with generative AI tools. Company experts in technology, law, privacy, and governance should be tapped to develop policies on sanctioned and unsanctioned generative AI use and specify which tools are acceptable and under what conditions....
"One leader on the MIT CISR Data Board [said] their organization clearly communicated approved uses of generative AI to employees, such as using publicly available information in their AI queries, versus off-limits uses, such as uploading data that contains personally identifiable information, strategic information, or proprietary data. The organization also had a clear process in place for anyone who was unsure about whether AI use was appropriate....
"2. Develop training and establish communities of practice.
"Organizations should develop AI direction and evaluation skills to help employees use generative AI tools effectively. This training should cover the AI models that power generative AI tools, ethical and responsible use of AI, and how to critically judge AI-generated content....
"For example, the data and analytics unit at animal health company Zoetis holds twice-weekly office hours during which employees can learn how to start using generative AI tools and ask questions. This helps employees learn, improve over time, and build confidence.....
"3. Authorize certain generative AI tools from trusted vendors.
"Staying current with the ever-evolving market for AI tools requires considerable time and effort. To crowdsource this process, create a cross-functional team tasked with evaluating tools and giving feedback to IT about which tools promise real value....
"To simplify AI access and encourage employee use, Zoetis set up a generative AI app store in which employees can apply for tool licenses and learn about effective and responsible use. For each tool, employees can access guides for getting started, watch training videos, read the organization’s AI policies and guidelines, and more. Employees are also encouraged to submit stories describing how they used a tool. This feedback has helped the organization understand which tools deliver the most value for employees."
I suspect we'll wind up feeling our way through a lot of these issues by trial and error, much as those Silicon Valley CIOs did with smartphones. But at least history can give us a sense of the sorts of issues employers and employees will face as they adjust to generative AI, and we can avoid some of the potholes we've hit in the past.
Cheers,
Paul