
Workplace AI policies: Does your company need them?

The majority (68%) of business leaders think employees should not use AI without a manager's permission, according to a survey by technology publication Tech.co.

The survey also found business leaders are divided on who should take responsibility for AI mistakes made in the workplace.

Almost a third of respondents (32%) lay the blame solely on the employees operating the tool, while 26.1% believe that all three parties - the AI tool, the employee, and the manager - share some responsibility for the mistake.

Aaron Drapkin, lead writer at Tech.co, said that publishing an AI workplace policy would reduce risk and allow employees to be more innovative.

Speaking to HR magazine, he said: “If employees know how exactly they’re allowed to use AI tools, they are then able to innovate safely, and more inventive ways of applying these tools can be openly explored.”


More on artificial intelligence:

The UK's first AI employee is now for hire

HR and AI: How can HR use AI effectively and ethically?

Employees comfortable leaning on AI for admin tasks


Matt Hammond, founder of UK-based technology advisory Talk Think Do, said that unregulated AI use could put data security and intellectual property at risk.

Speaking to HR magazine, he said: “Companies need to carefully control the exposure of their intellectual property and data. There is a risk that employees using ChatGPT, for example, with company content could compromise security. 

“A good AI policy needs to cover privacy and security threats as a priority. A known set of data protection policies can give a safe, risk-free environment for leveraging the technology.”

Pete Cooper, director of people partners and analytics at HR software Personio, said AI can be particularly risky for HR applications.

Speaking to HR magazine, he said: “AI lacks the unique flexibility of human touch businesses need for some HR situations - for example, it doesn’t understand empathy or the nuances of human interactions. 

“To use AI in an effective way, businesses should ensure they implement an AI ethics code that ties in with their company-wide communications and training programme to mitigate the risks that come with AI and inject the human touch that AI lacks.”


What to include in a workplace AI policy

Drapkin said policies should require staff to state when they are using AI, and should specify when its use is acceptable.

He said: “It’s always good to have a clause in your AI workplace policy regarding open and transparent AI usage - for example, if employees are using ChatGPT or similar tools, they must disclose how they’re using it. 

“Businesses may also want to include other stipulations, such as allowing ChatGPT to be used to generate copy for internal documents but forbidding its use for anything that will be viewed by external clients or an audience. This could help to limit mistakes that could cause reputational damage.”

He said policies should prioritise data security, as it is unclear how AI companies store inputted data.

“The most important point to include in an AI workplace policy is the type of data that you’re happy with employees including in their inputs to AI tools such as ChatGPT," he added.

“With data breaches being so common, AI policies dictating which data is off-limits for AI inputs are crucial.”

Dan O’Connell, chief AI officer at communication platform Dialpad, added that policies should require AI output to be quality-checked.

Speaking to HR magazine, he said: “We advocate the notion of ‘AI parenting’, which is continually tracking AI systems for bias, contextual understanding and ability to handle complex situations. 

“At a time when AI is being used for many job functions, businesses cannot afford to neglect their human oversight.”