Workplace Surveillance Ethics


  • View profile for Roberto Ferraro
    Roberto Ferraro is an Influencer

    Grow and learn with me: personal development, leadership, innovation. I am a project leader, coach, and visual creator, and I share all I learn through my posts and newsletter.

    108,627 followers

    The dark side of employee monitoring: trust, value, and agency 🕵🏻♂️🚫

    🤔 A study found that 80 percent of top US employers use tech to track workers' productivity, often in real time. Does our company monitor our fellow workers and us with high-tech software? Do we even know?

    ➡️ The missed side of value
    Employee monitoring encourages the mentality that the only valuable hours are those we spend in front of our computers; instead, we need to reframe what productivity is.

    ➡️ A trust issue
    "If we can't see our people, how do we know what they're doing?" Digital monitoring is an extreme form of micromanagement, a need for control rooted in a lack of trust: the assumption that when people are not in the office, they are not being "productive."

    ➡️ Monitoring can backfire
    Research suggests that employee monitoring can backfire, making people feel they have no agency and increasing the prevalence of the very behaviors these systems are meant to deter.

    ➡️ Rethinking knowledge work and value
    People may work hard to prove they are working instead of doing valuable work, constantly demonstrating their effort. 🌱 So, how can we create cultures where people are trusted to manage their time and produce quality work?

    ➡️ The potential of people analytics
    If we can solve the trust and transparency issues, people analytics could help employees use their own data to better understand and improve their work patterns.

    Illustration by me 😊 Extract from an article by Rachel Botsman. Link to the complete source in the first comment 👇 #productivity #trust #management

  • View profile for Mohd Suharin Sulaiman Siew

    Lawyer for Employer/Employee

    11,084 followers

    𝗣𝗥𝗜𝗩𝗔𝗖𝗬 𝗔𝗧 𝗧𝗛𝗘 𝗪𝗢𝗥𝗞𝗣𝗟𝗔𝗖𝗘

    In a typical office environment, employees generally have an expectation of privacy. In the case quoted below, the Industrial Court decided that pointing a webcam at a workmate amounts to an invasion of privacy.

    𝗪𝗵𝗮𝘁 𝗮𝗯𝗼𝘂𝘁 𝘁𝗵𝗲 𝗶𝗻𝘀𝘁𝗮𝗹𝗹𝗮𝘁𝗶𝗼𝗻 𝗼𝗳 𝗖𝗖𝗧𝗩 𝗯𝘆 𝗲𝗺𝗽𝗹𝗼𝘆𝗲𝗿𝘀 𝗶𝗻 𝘁𝗵𝗲 𝘄𝗼𝗿𝗸𝗽𝗹𝗮𝗰𝗲? 𝗖𝗼𝘂𝗹𝗱 𝗶𝘁 𝗯𝗲 𝗮𝗻 𝗶𝗻𝘃𝗮𝘀𝗶𝗼𝗻 𝗼𝗳 𝗽𝗿𝗶𝘃𝗮𝗰𝘆 𝘁𝗼𝗼?

    While Malaysia does not have a specific law addressing workplace surveillance, some existing laws and principles can be applied to this situation.

    𝗣𝗲𝗿𝘀𝗼𝗻𝗮𝗹 𝗗𝗮𝘁𝗮 𝗣𝗿𝗼𝘁𝗲𝗰𝘁𝗶𝗼𝗻 𝗔𝗰𝘁 𝟮𝟬𝟭𝟬: while the Act primarily applies to the collection, use, and disclosure of personal data, if an employer uses CCTV to capture personal or sensitive data about its employees, it may violate the privacy provisions of the PDPA. The Act requires that any data collected be fairly collected and processed in accordance with the law.

    𝗧𝗲𝗹𝗲𝗰𝗼𝗺𝗺𝘂𝗻𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 𝗮𝗻𝗱 𝗠𝘂𝗹𝘁𝗶𝗺𝗲𝗱𝗶𝗮 𝗔𝗰𝘁 𝟭𝟵𝟵𝟴: if the CCTV is connected to a network or internet system (which is normally the case) that captures and transmits images or activities of the employees, it could potentially fall under the scope of the Act, which regulates the proper use of telecommunications systems. Depending on how the video feed is transmitted or accessed, this could raise privacy concerns, especially if surveillance is done without the employee's knowledge.

    Therefore, to strike a balance between employers' business purposes and employees' privacy, 𝘁𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆 is the key. Employees ought to be told that CCTV is being used in the office and why it is needed (safety, security and productivity). Employers ought to balance the need for monitoring with employees' right to privacy. Excessive and unnecessary surveillance can be considered intrusive.

    #industrialcourt #employmentlaw #doitright

  • View profile for Richard Coleman MAICD

    Leading change in WHS and Sustainability

    7,298 followers

    So today we have another example of a business leader saying and doing something so unbelievably stupid in relation to WHS that my desk has an indentation where my head has been hitting it since reading the reporting in the AFR.

    A business called Safetrac turned on audio surveillance on the computers of its staff who were working from home… without clear policy, without telling them, and definitely without anything approaching consultation. Apparently Safetrac deployed Teramind to monitor "underperformers," enabling laptop microphones from mid-April to early June, and only expanded its four-sentence surveillance policy at the end of June. On 12 August, WorkCover agent Allianz accepted a mental-injury claim from a worker who developed anxiety after discovering the audio surveillance. Victoria Police is reportedly investigating.

    This is not a grey area of etiquette. It is a failure of process, consultation and risk management. In Victoria, employers must consult with employees and HSRs when identifying or assessing hazards, when deciding on risk controls, and when monitoring the health of employees and workplace conditions. Rolling out intrusive monitoring, especially audio capture, undoubtedly triggers those duties. Consultation isn't a courtesy; it is a statutory requirement.

    But wait, it gets worse… Safetrac's updated policy reportedly asserts that monitoring "in accordance with employment contracts, company policies, and relevant legislation are not considered psychosocial hazards." I will gladly buy a decent bottle of wine for any of my contacts who can point to the law that allows CEOs to arbitrarily define what is and what is not a hazard.

    Thankfully we live in a society where you can't just do stuff to people and arbitrarily decide that what you're doing is not evil, that what you're proposing doesn't have risks, and that in your enlightened and lofty view people should be happy about your decisions. Psychosocial hazards are determined by the nature of work and its impacts, assessed through a risk process with worker consultation, not by policy wording. Attempting to define surveillance out of "hazard" status misses both the law and the science.

    If the AFR reporting is accurate, here's what good governance should have required before any deployment:
    • A formal psychosocial risk assessment with workers and HSRs, and clear, documented consultation.
    • A proportionate purpose test (what problem are we solving?), and strict minimisation (no audio by default).
    • Transparent, specific notices and informed consent, not a retrofit policy.

    Compliance isn't about how cleverly you can write a policy after the fact. It's about whether your decisions respect the law, your people, and the risks you create. On all three counts, this approach fails the test.

  • View profile for Dr. Barry Scannell
    Dr. Barry Scannell is an Influencer

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | PhD in AI & Copyright | LinkedIn Top Voice in AI | Global Top 200 AI Leaders 2025

    56,684 followers

    The AI Act provisions on prohibited AI systems look like they will apply before the end of the year. Many organisations think these won't apply to them… I have set out some of the prohibitions and outlined some real-world use cases of those systems.

    1. Manipulative/Deceptive AI
    The Act bans AI systems that use subliminal, manipulative, or deceptive techniques to significantly alter a person's behavior in ways that impair their ability to make informed decisions, leading to potentially significant harm.
    Eg: Digital advertising platforms using AI to send subliminal messages that exploit psychological vulnerabilities, coercing individuals into making decisions against their best interest, such as unnecessary purchases or unhealthy behaviors.

    2. Exploiting Vulnerabilities
    It's prohibited to use AI to exploit the vulnerabilities of individuals or groups based on age, disability, or socio-economic status, resulting in material distortions of behavior that could cause significant harm.
    Eg: Personal finance AI applications targeting vulnerable elderly users with unsuitable investment advice, leveraging age-related vulnerabilities to influence financial decisions detrimentally.

    3. Sensitive Biometric Categorisation
    The Act outlaws AI systems that categorise individuals based on biometric data to infer sensitive information, such as race or sexual orientation, barring law enforcement applications under strict conditions.
    Eg: AI-driven hiring platforms that use video interview analyses to infer protected characteristics, facilitating covert discriminatory practices by disqualifying candidates on these bases.

    4. Social Scoring
    The legislation bans AI that evaluates or classifies people based on social behavior or inferred characteristics, leading to detrimental or unfair treatment unrelated to the context in which the data was collected.
    Eg: Corporate social credit systems monitoring employees' behavior beyond the workplace, affecting their opportunities or status based on non-work-related activities or in disproportionate responses to their actions.

    5. Untargeted Facial Recognition Databases
    The creation or expansion of facial recognition databases through untargeted scraping of internet or CCTV footage by AI systems is prohibited, addressing privacy and data protection concerns.
    Eg: Applications that build extensive facial recognition databases from online images without consent, posing severe privacy infringements and unauthorised surveillance risks (btw, this has happened before).

    6. Emotion Recognition
    Deploying AI to infer the emotions of individuals in workplaces and educational institutions is banned, except for medical or safety purposes, to protect against unwarranted emotional surveillance.
    Eg: Tools used by employers to monitor and analyse emotional states, allowing employers to weed out "undesirable" or "unenthusiastic" workers.

    It may be worth checking that these prohibitions don't apply to you. Your call!

  • View profile for John Hopkins, PhD
    John Hopkins, PhD is an Influencer

    LinkedIn Top Voice | Top 100 Future of Work Leader | Keynote Speaker | World’s Top 2% of Scientists | Dad

    17,656 followers

    🎙️ 🏡 One of the country's top compliance training companies recorded the conversations of its employees by turning their laptops into covert listening devices while they were at home, in a case that tests the boundaries of workers' privacy.

    Victorian police are investigating claims that Safetrac breached the state's surveillance laws after chief executive Deborah Coram admitted in legal documents that her company recorded the audio and screens of select members of its staff, who work from home.

    The idea of recording workers' conversations, let alone their conversations at home, is unusual. Given how readily employees' home lives leak into their work lives during remote working, the risks are extraordinary.

    Unions are pushing for new laws to guard against unreasonable or excessive monitoring in the workplace, and state Labor governments are considering urgent reforms to update outdated surveillance laws for the WFH era. State work health and safety laws are also starting to recognise that surveillance is a potential psychosocial hazard.

    For the layman, privacy has long been considered an individual right: you waive the rights to your data, or you consent to workplace monitoring. But cases such as Safetrac show that, much like work health and safety laws, privacy can also be understood as a collective right. Privacy is relational. Surveillance can affect not only you but also those around you, including family members, friends and other third parties.

    ❓ Is it ever okay to record the audio and screens of employees when they are working from home, or in other locations outside the traditional workplace? As always, keen to hear your thoughts, opinions and experiences. 🙏

    Link to the full AFR article available in the comments section below 👇 WorkFLEX-Australia Author: David Marin-Guzman The Australian Financial Review #wfh #employeesurveillance #futureofwork

  • View profile for Julian Sng

    🍍 Growth & Strategy Leader | Data-Driven Marketing, Operations & Revenue Acceleration | Advisor | Business Owner | Lecturer | Speaker | Mentor | 13+ yrs China Experience

    9,346 followers

    PwC tells employees it will use location data to police its 'back-to-office' rule. Does this violate privacy laws?

    The company announced in a memo to employees that it will start tracking where its employees in the United Kingdom work, in a bid to dial back its current work-from-home culture. Employees were told they must spend at least three days a week, or 60% of their time, in the office or with clients. The company "will start sharing your individual working location data with you on a monthly basis from January as we do with other data such as chargeable hours. This will help to ensure that the new policy is being fairly and consistently applied across our business."

    It would be interesting to understand what would happen to those who do not comply. For now, a spokeswoman for PwC said: "If the monthly data shows someone is consistently breaching the policy, we'd first want to understand the reasons why."

    I don't know what the legal implications will be. On one hand, whether the company can actually penalize its employees for this, and how (salary reduction, a PIP, etc.). On the other hand, what employees can do in this situation (take legal action, go to their union, etc.).

    Also, by using location data to police staff, is there a breach of privacy laws, specifically the GDPR? Using location data to track employees returning to the office can potentially breach the GDPR if it is done without employees' knowledge, explicit consent, or a valid legal basis. The GDPR requires that data collection be transparent, limited to necessary purposes, and respectful of individuals' privacy rights. If the monitoring is intrusive or disproportionate, it may violate GDPR principles, leading to potential legal consequences for the employer. Implementing such practices must comply with data protection regulations, ensuring employee privacy and data minimization; a sketch of what minimization could look like follows below.

    I suppose this policy would have to be written into future contracts to be fully enforceable in black and white. For current staff, it would probably make sense to have them sign an agreement.

    What do you think about PwC's plan to use location services to police staff on its back-to-office rule?

    #corporatelife #corporateculture #hr #humanresources #companypolicy #policies #remoteworking #wfh #backtooffice #privacy #GDPR
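    To make the data-minimization point concrete, here is a minimal sketch, assuming a hypothetical in-house script that reduces raw daily in-office records to the single monthly percentage a 60% rule actually needs. All names, dates, and records are made up for illustration; this is not PwC's actual system.

    ```python
    from collections import defaultdict

    # Hypothetical raw records: (employee_id, date, worked_in_office).
    # Kept only long enough to compute the monthly aggregate.
    raw_records = [
        ("E001", "2025-01-06", True),
        ("E001", "2025-01-07", False),
        ("E001", "2025-01-08", True),
        ("E002", "2025-01-06", False),
        ("E002", "2025-01-07", False),
    ]

    totals = defaultdict(lambda: [0, 0])  # id -> [office_days, total_days]
    for emp, _day, in_office in raw_records:
        totals[emp][0] += int(in_office)
        totals[emp][1] += 1

    # Only the figure the policy needs (percent of days in office) is
    # retained and shared with the employee, mirroring how chargeable
    # hours are reported monthly.
    monthly_summary = {
        emp: round(100 * office / total)
        for emp, (office, total) in totals.items()
    }
    print(monthly_summary)  # {'E001': 67, 'E002': 0}

    # Raw per-day location data is discarded after aggregation.
    del raw_records
    ```

    Retaining only the aggregate, and being transparent with staff about how it is produced, is the kind of purpose limitation and minimization the GDPR principles mentioned above point to.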

  • View profile for Steven Claes

    CHRO | Introvert Advocate | Career Growth for Ambitious Introverts | HR Leadership Coach | Writer | Newsletter: The A+ Introvert

    148,683 followers

    The most dangerous lie in business today:

    'We need to monitor our people to ensure productivity.'

    A CEO friend shared his 'productivity tracking' results with me. The data was shocking!

    Their most monitored team?
    → Highest turnover rate.
    → Zero innovation.
    → Lowest output.

    And here's a controversial take 🔥 (which I shared with him):

    Every keystroke you track,
    every minute you monitor,
    every bathroom break you log…

    You're not measuring productivity. You're documenting distrust. (A bit black or white, but still…)

    So, what actually drives performance?

    1/ Crystal Clear Expectations
    → Set measurable outcomes
    → No gray zones on deadlines
    → Define what winning looks like

    2/ Trust by Default
    → Zero surveillance
    → Focus on deliverables
    → Celebrate achievements, not hours

    3/ Adult Conversations
    → Quality check-ins
    → Address issues head-on
    → Solutions over surveillance

    Companies still playing digital babysitter? They're losing the war for talent. (And their best people are already interviewing elsewhere.)

    The future belongs to companies that:
    ✓ Trust first
    ✓ Measure impact
    ✓ Enable autonomy

    The harsh reality? Your turnover rate tells the real story.

    P.S. Later, from that same CEO: "Deleted a lot of that monitoring. Our new productivity metric? Trust."

    💭 Are you brave enough to lead with trust?

    —
    👉 Share if you're committed to building better workplaces
    🎯 Follow for more unfiltered leadership insights

  • View profile for Luiza Jarovsky, PhD
    Luiza Jarovsky, PhD is an Influencer

    Co-founder of the AI, Tech & Privacy Academy (1,300+ participants), Author of Luiza’s Newsletter (87,000+ subscribers), Mother of 3

    120,589 followers

    🚨 [AI POLICY] Big! The U.S. Department of Labor published "AI and Worker Well-being: Principles and Best Practices for Developers and Employers," and it's a MUST-READ for everyone, especially ➡️ employers ⬅️.

    8 key principles:

    1️⃣ Centering Worker Empowerment
    "Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace."

    2️⃣ Ethically Developing AI
    "AI systems should be designed, developed, and trained in a way that protects workers."

    3️⃣ Establishing AI Governance and Human Oversight
    "Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace."

    4️⃣ Ensuring Transparency in AI Use
    "Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace."

    5️⃣ Protecting Labor and Employment Rights
    "AI systems should not violate or undermine workers' right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections."

    6️⃣ Using AI to Enable Workers
    "AI systems should assist, complement, and enable workers, and improve job quality."

    7️⃣ Supporting Workers Impacted by AI
    "Employers should support or upskill workers during job transitions related to AI."

    8️⃣ Ensuring Responsible Use of Worker Data
    "Workers' data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly."

    ╰┈➤ This is an essential document, especially when AI development and deployment occur at an accelerated pace, including in the workplace, and not much is said regarding workers' rights and labor law.

    ╰┈➤ AI developers should keep labor law and workers' rights in mind when building AI systems that will be used in the workplace. Additional guardrails might be required.

    ╰┈➤ Employers should be aware of their ethical and legal duties if they decide to use AI in the workplace. AI-powered systems are not "just another technology"; they present specific risks that should be tackled before deployment, especially in the workplace.

    ➡️ Download the document below.

    🏛️ STAY UP TO DATE. AI governance is moving fast: join 36,900+ people in 150+ countries who subscribe to my newsletter on AI policy, compliance & regulation (link below).

    #AI #AIGovernance #AIRegulation #AIPolicy #WorkersRights #LaborLaw

  • View profile for Antonio Grasso
    Antonio Grasso is an Influencer

    Technologist & Global B2B Influencer | Founder & CEO | LinkedIn Top Voice | Driven by Human-Centricity

    39,896 followers

    Safeguarding information while enabling collaboration requires methods that respect privacy, ensure accuracy, and sustain trust. Privacy-Enhancing Technologies (PETs) create conditions where data becomes useful without being exposed, aligning innovation with responsibility.

    When companies exchange sensitive information, the tension between insight and confidentiality becomes evident. Cryptographic PETs apply advanced encryption that allows data to be analyzed securely, while distributed approaches such as federated learning ensure that knowledge can be shared without revealing raw information (see the sketch below).

    The practical benefits are visible in sectors such as banking, healthcare, supply chains, and retail, where secure sharing strengthens operational efficiency and trust. At the same time, adoption requires balancing privacy, accuracy, performance, and costs, which makes strategic choices essential.

    A thoughtful approach begins with mapping sensitive data, selecting the appropriate PETs, and aligning them with governance and compliance frameworks. This is where technological innovation meets organizational responsibility, creating the foundation for trusted collaboration.

    #PrivacyEnhancingTechnologies #DataSharing #DigitalTrust #Cybersecurity
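    As a concrete illustration of the federated-learning idea above, here is a minimal sketch in NumPy, assuming a toy linear model and three parties holding synthetic private datasets. Everything here (party count, model, learning rate, data) is illustrative, not a production PET stack.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def local_gradient(weights, X, y):
        """Computed locally by each party: only this gradient,
        never the raw data, is shared with the coordinator."""
        residuals = X @ weights - y
        return X.T @ residuals / len(y)

    # Three parties, each holding a private dataset that is never pooled.
    true_w = np.array([1.0, -2.0, 0.5, 3.0])
    parties = []
    for _ in range(3):
        X = rng.normal(size=(50, 4))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        parties.append((X, y))

    # Federated averaging: the coordinator aggregates local updates only.
    weights = np.zeros(4)
    lr = 0.1
    for _ in range(200):
        grads = [local_gradient(weights, X, y) for X, y in parties]
        weights -= lr * np.mean(grads, axis=0)

    print("Jointly trained weights:", np.round(weights, 2))  # ~ [1. -2. 0.5 3.]
    ```

    The design point is that the coordinator sees only aggregated model updates, which is the property the post attributes to federated learning; real deployments typically layer secure aggregation or differential privacy on top, since plain gradients can still leak information about the underlying data.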

  • View profile for Randall S. Peterson
    Randall S. Peterson is an Influencer

    Professor of Organisational Behaviour at London Business School | Co-founder of TalentSage | PhD in Social Psychology

    17,971 followers

    Rethinking Performance Measurement in the Hybrid Era

    Gone are the days when productivity was measured by time clocked in or physical presence in the office. In our new world of hybrid work, it's time for a paradigm shift in how we evaluate employee performance. The key? Focus on outputs and objectives, not inputs.

    As leaders, we need to ask ourselves:
    1️⃣ Are we equipped to effectively evaluate our teams in this new landscape?
    2️⃣ How can we ensure fairness and accuracy in performance assessments across different work models?
    3️⃣ What tools and metrics truly reflect productivity in a hybrid environment?

    The challenge lies not just in measurement, but in support. How can we empower our teams to thrive, regardless of their physical location?

    Here are a few strategies to consider:
    ➡️ Set clear, measurable objectives that aren't tied to work hours or location
    ➡️ Implement regular check-ins focused on progress and roadblocks
    ➡️ Utilize technology to track project milestones and collaboration
    ➡️ Prioritize outcomes over activity

    Remember, the goal isn't just to measure performance, but to foster an environment where high performance is possible whether your team is in the office, at home, or anywhere in between.

    I'm curious to hear from fellow leaders. How are you adapting your performance management strategies for the hybrid era? What challenges and successes have you encountered?
