Risks of AI Misuse in the Workplace

Explore top LinkedIn content from expert professionals.

Summary

Understanding the risks of AI misuse in the workplace is critical for organizations as artificial intelligence continues to evolve. These risks include unauthorized AI use, potential data breaches, and malicious activities stemming from unregulated AI models, which can compromise sensitive company and personal information.

  • Educate employees: Provide clear training on acceptable AI usage, potential risks like phishing and fraud, and highlight the importance of adhering to company policies to protect sensitive data.
  • Implement robust AI governance: Create and regularly update policies that define acceptable AI tools, prohibited data usage, and reporting procedures for suspicious activities.
  • Monitor AI activity: Use tools and processes to track AI tool usage across the organization, identify unapproved software, and mitigate risks associated with shadow AI (see the log-scanning sketch after this summary).
Summarized by AI based on LinkedIn member posts
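
To make the "Monitor AI activity" point concrete, here is a minimal sketch of one way to surface shadow AI usage from a web-proxy or firewall log export. The CSV layout (user, timestamp, domain columns), the domain list, and the approved-tool list are illustrative assumptions rather than references to any specific product.

```python
# Minimal sketch: flag potential shadow-AI traffic in a proxy log export.
# Assumptions: the log is a CSV with "user" and "domain" columns, and the
# domain lists below are illustrative, not exhaustive or authoritative.
import csv
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}
APPROVED = {"gemini.google.com"}  # whatever your AI policy has sanctioned

def flag_shadow_ai(log_path: str) -> Counter:
    """Count requests to unapproved AI domains, grouped by (user, domain)."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in AI_DOMAINS and domain not in APPROVED:
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in flag_shadow_ai("proxy_log.csv").most_common(10):
        print(f"{user} -> {domain}: {count} requests")
```

The same idea works against DNS logs, CASB exports, or SSO application inventories; the goal at this stage is visibility rather than enforcement.
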
  • View profile for Melanie Naranjo

    Chief People Officer at Ethena (she/her) | Sharing actionable insights for business-forward People leaders

    70,151 followers

    🧾 Employees using AI to create fraudulent expense receipts
    🤖 Fake or otherwise malicious “candidates” using deepfakes to hide their true identity in remote interviews until they get far enough in the process to hack your data
    🎣 AI-powered phishing scams that are more sophisticated than ever

    Over the past few months, I’ve had to come to terms with the fact that this is our new reality. AI is here, and it is more powerful than ever. And HR professionals who continue to bury their heads in the sand, or stand by while “enabling” others without actually educating themselves, are going to unleash serious risks and oversights across their company. Which means that HR professionals looking to stay on top of the increased risk introduced by AI need to lean into curiosity, education, and intentionality.

    For the record: I’m not anti-AI. AI has helped, and will continue to help, increase output, optimize efficiencies, and free up employees’ time to work on creative and energizing work instead of getting bogged down and burnt out by mind-numbing, repetitive, and energy-draining work. But it’s not without its risks. AI-powered fraud is real, and as HR professionals, it’s our job to educate ourselves — and our employees — on the risks involved and how to mitigate them.

    Not sure where to start? Consider the following:

    📚 Educate yourself on the basics of what AI can do, and partner with your broader HR, Legal, and #Compliance teams to create a plan to share knowledge and stay aware of new risks and AI-related cases of fraud, cyber hacking, etc. (could be as simple as starting a Slack channel, signing up for a newsletter, or subscribing to an AI-focused podcast — you get the point)
    📑 Re-evaluate, update, and create new policies as necessary to make sure you’re addressing these new risks and setting expectations around proper and improper AI usage at work (I’ll link our AI policy template below)
    🧑‍💻 Re-evaluate, update, and roll out new trainings as necessary. Your hiring managers need to be aware of the increase in AI-powered candidate fraud we’re seeing across recruitment, how to spot it, and who to inform. Your employees need to know about the increased sophistication of #phishing scams and how to identify and report them.

    For anyone looking for resources to get you started, here are a few I recommend:
    AI policy template: https://lnkd.in/e-F_A9hW
    AI training sample: https://lnkd.in/e8txAWjC
    AI phishing simulators: https://lnkd.in/eiux4QkN

    What big new scary #AI risks have you been seeing?

  • View profile for AD E.

    GRC Visionary | Cybersecurity & Data Privacy | AI Governance | Pioneering AI-Driven Risk Management and Compliance Excellence

    10,141 followers

    A lot of companies think they’re “safe” from AI compliance risks simply because they haven’t formally adopted AI. But that’s a dangerous assumption—and it’s already backfiring for some organizations.

    Here’s what’s really happening: employees are quietly using ChatGPT, Claude, Gemini, and other tools to summarize customer data, rewrite client emails, or draft policy documents. In some cases, they’re even uploading sensitive files or legal content to get a “better” response. The organization may not have visibility into any of it. This is what’s called Shadow AI—unauthorized or unsanctioned use of AI tools by employees.

    Now, here’s what a #GRC professional needs to do about it:

    1. Start with Discovery: Use internal surveys, browser activity logs (if available), or device-level monitoring to identify which teams are already using AI tools and for what purposes. No blame—just visibility.
    2. Risk Categorization: Document the type of data being processed and match it to its sensitivity. Are they uploading PII? Legal content? Proprietary product info? If so, flag it.
    3. Policy Design or Update: Draft an internal AI Use Policy. It doesn’t need to ban tools outright—but it should define:
       • What tools are approved
       • What types of data are prohibited
       • What employees need to do to request new tools
    4. Communicate and Train: Employees need to understand not just what they can’t do, but why. Use plain examples to show how uploading files to a public AI model could violate privacy law, leak IP, or introduce bias into decisions.
    5. Monitor and Adjust: Once you’ve rolled out your first version of the policy, revisit it every 60–90 days. This field is moving fast—and so should your governance.

    This can happen anywhere: in education, real estate, logistics, fintech, or nonprofits. You don’t need a team of AI engineers to start building good governance. You just need visibility, structure, and accountability. Let’s stop thinking of AI risk as something “only tech companies” deal with. Shadow AI is already in your workplace—you just haven’t looked yet.
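
As an editorial illustration of steps 1 and 2 above (discovery and risk categorization), here is a hedged sketch of how survey or log findings could be recorded and triaged. The data classes, sensitivity tiers, and example entries are assumptions made for illustration and are not part of the original post.

```python
# Hedged sketch: record reported AI usage and flag entries that touch
# sensitive data. Tiers and example rows are illustrative assumptions.
from dataclasses import dataclass

SENSITIVITY = {"public": 0, "internal": 1, "pii": 3, "legal": 3, "proprietary": 3}

@dataclass
class AIUsage:
    team: str        # who reported the usage
    tool: str        # e.g. "ChatGPT", "Claude", "Gemini"
    purpose: str     # what employees said they use it for
    data_class: str  # one of the SENSITIVITY keys

def needs_review(usage: AIUsage, threshold: int = 2) -> bool:
    """Flag any reported use that processes data above the allowed tier."""
    return SENSITIVITY.get(usage.data_class, 0) >= threshold

survey = [
    AIUsage("Support", "ChatGPT", "summarize customer tickets", "pii"),
    AIUsage("Marketing", "Gemini", "draft blog outlines", "public"),
    AIUsage("Legal", "Claude", "rewrite contract clauses", "legal"),
]

for usage in survey:
    if needs_review(usage):
        print(f"FLAG: {usage.team} uses {usage.tool} on {usage.data_class} data ({usage.purpose})")
```

Flagged entries feed directly into step 3: they show which tools to approve, which data types to prohibit, and where training should focus first.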

  • View profile for B. Stephanie Siegmann

    Cyber, National Security and White-Collar Defense Partner | Skilled Litigator and Trusted Advisor in Navigating Complex Criminal and Civil Matters | Former National Security Chief and Federal Prosecutor | Navy Veteran

    6,281 followers

    AI Tools Are Increasingly Going Rogue: As companies rapidly deploy AI tools and systems and new models are released, questions are being raised about humans’ ability to actually control AI and ensure current safety testing and guardrails are sufficient.

    Anthropic’s latest, powerful AI model, Claude 4 Opus, repeatedly attempted to blackmail humans when it feared being replaced or shut down, according to its safety report. And it threatened to leak sensitive information about the developers to avoid termination. Yikes!

    This type of dangerous behavior is not restricted to a single AI model. Anthropic recently published a report that details how 16 leading AI models from different developers engaged in potentially risky and malicious behaviors in a controlled environment. See https://lnkd.in/eatrK_VB. The study found that the models threatened to leak confidential information, engaged in blackmail, compromised security protocols, prioritized the AI’s own goals over the user’s, and, in general, posed an insider threat that could cause harm to an organization. The majority of AI models engaged in blackmail behaviors, though at different rates, when the model’s existence was threatened. Even more concerning, all of the AI models purposefully leaked information in a corporate espionage experiment that the researchers conducted.

    That testing took place in a controlled environment. Last week, however, we saw first-hand in the real world xAI’s chatbot Grok go off the rails, spewing antisemitic hate speech and threatening to rape a user.

    I mentioned the Anthropic report at an IAPP Boston KnowledgeNet event at Hinckley Allen last week and thought others might be interested in hearing about it. The report demonstrates the importance of a robust AI governance framework, risk management measures, and monitoring of AI systems and activities, especially as companies roll out agentic AI systems. Organizations should exercise caution when deploying AI models that have access to sensitive information and ensure there is proper human oversight of AI systems to mitigate liability risks when AI goes wrong.

  • View profile for Bob Carver

    CEO Cybersecurity Boardroom ™ | CISSP, CISM, M.S. Top Cybersecurity Voice

    51,130 followers

    Shadow AI: How your employees may have opened the door to the newest cybercrime - Toronto Star

    As businesses race to adopt artificial intelligence, writes Daina Proctor, hackers are weaponizing corporate vulnerabilities, with shadow AI becoming a golden opportunity for cybercriminals.

    An employee, eager to save time, downloads an AI-powered tool to finish a project faster. It is unapproved and unmonitored, but it gets the job done — until hackers exploit the tool’s vulnerabilities and gain access to sensitive company data. This is shadow AI, and it is quietly costing Canadian businesses millions.

    Artificial intelligence is transforming Canadian workplaces, automating tasks, boosting productivity, and even saving millions in cybersecurity costs. But there is a dangerous flip side. As businesses race to adopt AI, hackers are weaponizing vulnerabilities, and shadow AI is becoming a golden opportunity for cybercriminals. The 2025 IBM Cost of a Data Breach report reveals that shadow AI is driving up the cost of breaches by nearly $308,000 per incident in Canada. This comes at a time when the average cost of a data breach in Canada continues to climb, recently reaching $6.98 million. These shadow AI tools, introduced by employees without approval, often bypass security controls, creating open doors for attackers.

    The risks go beyond the bottom line. For Canadian consumers, breaches lead to higher prices, stolen personal data, and disruptions to everyday life. Imagine a hospital hit by a cyberattack: access to patient records is delayed, jeopardizing critical care. Hackers are also targeting industries where downtime has severe consequences, such as finance, pharmaceuticals, and industrial operations. Financial breaches cost $9.97 million on average, while pharmaceutical breaches can expose intellectual property and disrupt treatment supplies.

    AI is a double-edged sword, capable of driving innovation while introducing new risks. For Canadian businesses, the time to act is now. By governing AI use, investing in security AI, and empowering employees, organizations can turn AI into a competitive advantage instead of a costly liability.

    #cybersecurity #AI #shadowAI #riskmanagement
