Impact of AI at work

Despite the increasing use of AI at work, only 56% of companies educate their employees on its risks, revealed a new report.

According to the study, carried out by Kolide and Dimensional Research, a staggering 89% of knowledge workers use generative AI tools, such as ChatGPT for writing tasks and GitHub Copilot for coding, at least once a month, yet very few understand the risks. The survey also revealed a concerning gap in AI governance. Many businesses are encouraging or requiring employees to embrace AI, yet failing to provide adequate training on responsible and safe usage, noted the report.

The research findings revealed that:

  • There is a significant gap between the percentage of employees permitted to use AI (68%) and those who actually use it (89%), suggesting that roughly a fifth of workers use AI without sanction and that AI-generated work receives little oversight or scrutiny.
  • Despite the widespread use of AI, only just over half of companies (56%) educate their workforce on its risks.
  • Workers underestimate their colleagues’ AI usage. Nearly half (49%) of workers believe that fewer than 10% of their colleagues use AI-based applications. Only 1% of workers correctly estimate that the real number is closer to 90%.

KEY AI RISKS AT WORK

The report also sheds light on the key risks of AI in the workplace, such as:

  • AI Errors – Generative AI tools, particularly Large Language Models (LLMs), are prone to errors or “hallucinations.” The potential legal, reputational, and financial implications of AI-generated inaccuracies underscore the need for vigilant oversight.
  • AI Plagiarism – The debate over whether AI-generated content constitutes plagiarism or copyright violation is ongoing. Lawsuits involving authors, comedians, and developers highlight the need for visibility into AI use to avoid legal repercussions and maintain the quality of original work.
  • AI Security Risks – AI introduces vulnerabilities and security flaws in generated code, posing a risk of data breaches and hacks. The emergence of malware disguised as AI further complicates the security landscape, necessitating robust measures to protect sensitive information.

CALL TO ACTION

The report urges organisations to implement AI acceptable use policies to mitigate risks. These policies should focus on:

  • Getting visibility into worker AI use by establishing non-judgmental communication channels to understand how and why employees are using AI, facilitating education on risks and safer alternatives.
  • Preventing the riskiest forms of AI by creating enforceable measures against unsafe uses, including blocking unapproved tools and AI-based browser extensions.
  • Getting cross-department input when crafting AI usage policies, so that the resulting policies, both legal and practical, reflect the needs of different departments within the organisation. Workers should also be educated on issues such as data exposure, bias avoidance, and the use of approved AI tools.

By creating a robust framework for responsible AI use, organisations can harness the transformative power of generative AI while mitigating risks and ensuring ethical practices, concluded the report.


