Early last month, 1,500 staff at the UK law firm Shoosmiths received some unexpected news.
The firm had created a £1mn bonus pot to be shared among them, provided they collectively used Microsoft Copilot, the generative AI tool the firm has chosen, at least one million times this financial year.
Its chief executive, David Jackson, did not think this would be too difficult.
As he pointed out to colleagues, if everyone used Copilot just four times a day, they would easily hit the 1mn target: 1,500 staff making four prompts each is 6,000 prompts a day, which reaches 1mn in fewer than 170 working days.
To help, the firm tracks the numbers and reports them publicly every month, the better to encourage the use of what Jackson called AI's "powerful enablers".
I did not hear about Shoosmiths' move from Shoosmiths itself, but from two academics at the HEC Paris business school, Cathy Yang and David Restrepo Amariles.
They came across it while preparing to publish some pertinent, and eye-opening, research on the very human ways in which Copilot, ChatGPT and other generative AI products are being used in offices.
Their work shows something that makes perfect sense once you think about it, yet is still unsettling: you can get ahead at work by using AI, as long as you don't tell your boss. What's more, bosses can rarely tell whether AI has been used or not.
The researchers discovered this after deciding to look into why so many companies are hesitant to deploy AI, despite its evident productivity gains.
In an experiment, they asked 130 mid-level managers at a large, unnamed consulting firm to evaluate a series of briefs put together by two junior analysts. The briefs were typical of those produced for prospective clients seeking a consultant for a project.
Some of the briefs were done with the help of ChatGPT and some were not. The managers, it turned out, had little idea which was which.
Managers correctly guessed that ChatGPT had been used in 77% of their ratings, but that was close to the 73% of ratings in which they wrongly said it had been used when it had not.
Even when managers were told that AI had definitely not been used, 44% still thought it had been.
Here is the finding that stayed with me: the ratings managers gave to briefs made with ChatGPT were nearly 10% higher than those given to briefs made by mere humans.
But when managers were told AI had been used, they downgraded their ratings, perhaps on the assumption that the work had taken the analyst less time and effort.
This suggests that, unless you work for an organization that encourages transparent AI use, you may be strongly motivated to use it on the sly. The trouble with this "shadow adoption", as the researchers call covert AI use in the workplace, is that it exposes organizations to serious risks, such as security breaches.
Many companies restrict access to AI tools for fear that staff could inadvertently leak sensitive data by feeding information into the platforms, from where it could find its way to external users.
There is also the problem of staff putting too much trust in generative AI tools that produce biased results and invent "hallucinations". And monitoring employees to see who is or is not using AI risks prompting complaints of intrusive surveillance.
To avoid all this, the HEC researchers believe, employers should develop AI usage guidelines that encourage staff to use AI openly.
Since their research suggests staff risk being marked down for owning up to AI help, they also recommend some form of inducement to encourage disclosure, like the £1mn prompt bonus at the Shoosmiths law firm.
"It's a very sensible incentive because it means people have to report prompts," says Restrepo Amariles.
Shoosmiths says it actually created the bonus because it sees AI as the basis of its future competitiveness and wants to boost its use. So far, Copilot prompts are "broadly on track" to hit the 1mn target, said Tony Randle, the firm's partner for client technology.
"We have one partner who used it 800 times last month," he says, delighted. "AI is not going to replace the legal profession, but lawyers who use AI will replace lawyers who do not."