Biometric Security For Offices

Explore top LinkedIn content from expert professionals.

  • View profile for Panagiotis Kriaris
    Panagiotis Kriaris is an Influencer

    FinTech | Payments | Banking | Innovation | Leadership

    149,591 followers

    What seemed like science fiction a few years ago is becoming commonplace today: #payments with the wave of our hand. Let’s take a look.

    We are in the middle of a paradox: the more payments gain importance, the more they take a back seat and become invisible. Nowhere is this more applicable than in biometric payments: the ability to use biometrics such as our face, our fingerprint or even our voice to authenticate (identify) ourselves so that we can make a payment.

    These are real-life examples:
    -   Canada’s RBC bank has for years allowed clients to pay their bills using their voice (via the iPhone's Siri assistant).
    -   Amazon last year rolled out its palm recognition service for identification, payment, loyalty membership, and entry to all 500+ Whole Foods Market stores in the US.
    -   In China, tech giant Tencent is going all in on #biometrics: Weixin Palm Payment allows Weixin users to pay on the subway by hovering their hands over a sensor. Infrared cameras then analyze the individual palm prints and the unique patterns of veins under the skin, allowing each user to be identified and payment to be processed within seconds (source: CNN).
    -   Last year JP Morgan began piloting biometrics-based payments (palm and face identification for in-store payment authentication) with select retailers in the US.
    -   Self-service ordering kiosks and payment terminals with biometric functionality are one of the main use cases gaining ground. In #China such options are available at selected supermarkets and McDonald’s locations.

    But why are biometric payments rising?
    -   In today’s rapidly evolving payments landscape, competition has moved from the infrastructure to the front-end. UX is the name of the game, and biometrics enable better customer experiences.
    -   The rise of mobile and contactless payments is driving demand for biometrics.
    -   Efficiency, which translates into cost benefits. Especially for use cases such as self-service kiosks, biometrics are a reliable, 24/7 alternative that saves costs.
    -   Biometrics significantly simplify and enhance loyalty programs by offering a faster and easier identification and check-out process (Face ID instead of the manual process of pulling out a card and identifying myself).
    -   Biological characteristics are much more difficult to replicate or steal and therefore offer enhanced security, which translates into reduced fraud.

    The numbers are indicative:
    -   Goode Intelligence forecasts that global biometric payments will reach $5.8 trillion and 3 billion users by 2026.
    -   Juniper Research expects biometrics "will authenticate over $3 trillion of payment transactions in 2025," compared to $404 billion in 2020.

    If biometric payments can, in the short term, address issues such as privacy & security, #technology, regulation, accessibility, trust & social acceptance, then their longer-term future looks bright.

    Opinions: Panagiotis Kriaris

  • View profile for Eddie Major

    AI wrangler sharing news about #AIstrategy #AIpolicy #AIgovernance and #AIsafety

    6,522 followers

    Interested in AI + privacy / personal info / biometric data? Australian hardware retailer Bunnings has got itself into some trouble, with Privacy Commissioner Carly Kind yesterday releasing her findings that the company interfered with Australians' privacy via its AI facial recognition system.

    What's the deal?
    ⚫ Bunnings' facial recognition system collected sensitive biometric information from members of the public without notifying them or obtaining their consent
    ⚫ The system ran from 6 Nov 2018 to 30 Nov 2021 and was installed across 62 retail locations
    ⚫ Bunnings used this to create a searchable database, which placed people into categories based on whether Bunnings thought they'd committed "Organised Retail Crime", violent or threatening behaviour, or acts of actual or threatened violence
    ⚫ Bunnings "was unable to provide the total number of enrolled individuals on the Database 'due to [Redacted]'" 🤔

    Why's this interesting?
    🔵 Australia is currently considering its possible legislative approach to regulating AI use
    🔵 Cases like this provide useful insights as to whether our existing legislative framework is robust enough to deal with issues at the intersection of AI + personal privacy
    🔵 While people are currently fixated on ChatGPT and large language AI, I think the main area of AI for legislative focus should be systems that use personal or biometric information to assist in decision-making or provide algorithmically determined insights about individuals or cohorts.

  • View profile for Rajeev Mamidanna Patro
    Rajeev Mamidanna Patro is an Influencer

    Fixing what most tech founders miss out on - Brand Strategy, Marketing Systems & Unified Messaging across Assets in 90 days | We set the foundation & then make your marketing work

    7,341 followers

    Just training employees on password security & hygiene is not enough. Even strong passwords can get compromised. Training & adoption have to move up a level towards Multi-factor Authentication (MFA). Organization-wide.

    Here are 5 best practices for CISOs rolling out MFA successfully.

    1) Start with high-risk users & critical apps
    - Enforce MFA first for privileged users (IT admins, finance, HR)
    - Secure business-critical apps like email, ERP & cloud platforms

    2) Educate employees before enforcing MFA
    - Prevent resistance by explaining the benefits of MFA
    - Show how simple it is to use with step-by-step guidance

    3) Offer multiple MFA options for flexibility
    - Biometrics may not be possible for everyone
    - Evaluate biometrics, security keys, mobile apps & OTPs

    4) Enable adaptive MFA for smarter security
    - Stronger authentication for risky logins (new device, location, behavior)
    - Reduce employee fatigue by avoiding unnecessary prompts

    5) Monitor & review MFA logs regularly
    - Detect & investigate failed MFA attempts
    - These could be attackers probing your defenses

    MFA is a must-have now. If you're a mid-market CISO planning to roll out MFA / change your MFA, we can help. DM me & I will gladly assist you along with my team with a demo.
    ----
    Hi, I’m Rajeev Mamidanna. I help mid-market CISOs strengthen their Cyber Immunity
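    Point 4 (adaptive MFA) can be sketched as a simple risk-scoring rule: step up the factor only when contextual risk signals appear. A minimal illustration; the signals, scoring, and factor names are hypothetical, not from any specific MFA product.

```python
# Minimal sketch of adaptive MFA risk scoring. All names and thresholds
# are illustrative placeholders, not a real product's policy engine.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool
    usual_location: bool
    usual_hours: bool

def risk_score(ctx: LoginContext) -> int:
    """Count one point for each risk signal present in the login context."""
    return sum([not ctx.known_device, not ctx.usual_location, not ctx.usual_hours])

def required_factor(ctx: LoginContext) -> str:
    """Step up authentication strength as the risk score grows."""
    score = risk_score(ctx)
    if score == 0:
        return "password-only"          # low risk: no extra prompt, less fatigue
    if score == 1:
        return "push-notification"      # medium risk: one-tap approval
    return "security-key-or-biometric"  # high risk: strongest available factor

# New device + unusual hours -> two risk signals -> strongest factor required
print(required_factor(LoginContext(known_device=False, usual_location=True, usual_hours=False)))
```

    A real deployment would feed in many more signals (IP reputation, impossible travel, behavioral patterns) and tune the thresholds against false-positive fatigue.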

  • View profile for Gbenga Odugbemi

    Attorney—Cybersecurity, Privacy, & AI

    19,783 followers

    Following up on my last post on DPIA/AIA. There are 4 major ways you can respond to risks discovered when assessing the answers procured from relevant stakeholders after completing a DPIA/AIA questionnaire.

    1. Risk Mitigation: reduce the likelihood/effect of the risk. E.g., on security measures questions, if a stakeholder answered “Yes, we are going to be using encryption to the AES-128 standard”, you can suggest AES-256. Yes, AES-128 has not been cracked, but AES-256 is the stronger standard. Or if encryption is only planned for data-at-rest, you can suggest encryption for data-in-transit as well.

    2. Risk Avoidance: substitute the cause/source of the risk entirely. If biometric data would be processed for users to access a platform, and it might ordinarily cause problems — i.e., more compliance requirements (like BIPA), more cost/resources to comply — avoid the risk biometrics bring by suggesting an alternative, e.g., username and password + MFA.

    3. Risk Transference: if a platform or AI system will process customers’ debit/credit card payments, for example, that suggests the additional need to comply with PCI-DSS. Instead of worrying about this compliance, and still running the risk of liability for a breach of payment card data, engage a payment processing company like Stripe (I know, free commercial), and transfer that compliance risk and any responsibility for breaches, etc., to them via a contract.

    4. Risk Acceptance: if the “cost” of preventing a risk would be higher than the “effect” the risk would have, it might make sense to just accept the risk. Don’t forget to get a sign-off: it’s not your business as a privacy professional to accept risks on the organization’s behalf. But remember to assess the divergence between quantitative vs. qualitative effects of risks — very crucial.
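    The risk-acceptance trade-off in point 4 can be made concrete with a quantitative comparison. A hedged sketch; the figures and function names are invented for illustration, and it deliberately ignores the qualitative effects the post warns you to assess separately.

```python
# Illustrative sketch of the risk-acceptance test: compare the cost of a
# control against the expected loss it prevents. All figures are made up.
def expected_loss(likelihood: float, impact: float) -> float:
    """Quantitative risk: annualized likelihood times impact in dollars."""
    return likelihood * impact

def recommend(control_cost: float, likelihood: float, impact: float) -> str:
    loss = expected_loss(likelihood, impact)
    # If the control costs more than the loss it prevents, acceptance may be
    # reasonable -- but only with a documented sign-off from the risk owner.
    return "accept (get sign-off)" if control_cost > loss else "mitigate"

# A $50k control against a 5% chance of a $200k loss ($10k expected)
print(recommend(control_cost=50_000, likelihood=0.05, impact=200_000))
```

    Qualitative effects (reputational harm, regulatory scrutiny) do not reduce to a dollar figure, which is why the post stresses assessing the divergence between the two.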

  • View profile for Shreya Jain

    Data Privacy Lawyer | Turning Data Laws into Business Confidence | Have you scheduled any appointment yet?

    4,478 followers

    𝑩𝒊𝒐𝒎𝒆𝒕𝒓𝒊𝒄 𝒅𝒂𝒕𝒂 𝒊𝒔𝒏’𝒕 𝒍𝒊𝒌𝒆 𝒂 𝒑𝒂𝒔𝒔𝒘𝒐𝒓𝒅 𝒚𝒐𝒖 𝒄𝒂𝒏 𝒓𝒆𝒔𝒆𝒕. Once it’s exposed, it’s gone forever. That’s why laws like 𝗜𝗹𝗹𝗶𝗻𝗼𝗶𝘀’ 𝗕𝗶𝗼𝗺𝗲𝘁𝗿𝗶𝗰 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻 𝗣𝗿𝗶𝘃𝗮𝗰𝘆 𝗔𝗰𝘁 (𝗕𝗜𝗣𝗔) are so strict—and why founders building with 𝗳𝗶𝗻𝗴𝗲𝗿𝗽𝗿𝗶𝗻𝘁𝘀, 𝗳𝗮𝗰𝗲 𝘀𝗰𝗮𝗻𝘀, 𝗼𝗿 𝘃𝗼𝗶𝗰𝗲 𝗿𝗲𝗰𝗼𝗴𝗻𝗶𝘁𝗶𝗼𝗻 need to think about compliance from day one.

    I’ve put together a playbook especially for startups that explains:
    ➡️ The real risks of non-compliance (including penalties running into millions)
    ➡️ Why investors and users both care about how you handle sensitive data
    ➡️ Practical steps founders can take today: policies, consent, retention, and security
    ➡️ Industry examples from fintech, healthtech, HR tech, gaming, and more

    At 𝗠𝗢𝗦 | 𝗖𝘆𝗯𝗲𝗿𝗗𝗼𝗰𝘀, we work with founders and teams to turn data privacy into a strength, not a hurdle. Done right, compliance builds trust, boosts credibility, and helps you scale with confidence.

    If you’re building products that touch biometric data, this guide is a must-read. You can find it attached in this post. I’d love for you to follow me for more practical insights on data privacy and global compliance laws—and if you need support embedding privacy into your startup’s DNA, feel free to reach out. #DataPrivacy #BiometricData #Compliance #Startups #PrivacyByDesign #MOS #CyberDocs #TrustByDesign

  • View profile for Nick Lambert

    CEO at Dock Labs AG

    5,788 followers

    Binding a biometric to a verifiable credential is the best way to ensure that only the right person can use it. Storing the biometric template as a credential makes it easy to keep the biometric data under the user’s control. Instead of storing sensitive biometric data in centralized systems, we recommend keeping biometrics locally on user devices and issuing verifiable credentials to prove that biometric matching occurred.

    Here's how it works:
    1. When users first install their wallet, they enrol their biometric (facial scan, palm scan, fingerprint, or even signature) directly on their device.
    2. The biometric service issues a cryptographically signed credential into the wallet containing the enrolment data.
    3. During authentication, the wallet performs a local biometric check, verifies the credential hasn't been tampered with, and issues a new credential proving the check passed.
    4. Any relying party can verify this credential without ever accessing the actual biometric data.

    This solves a fundamental problem in digital identity as it:
    > Ensures that only the rightful owner of a credential can use it for verification (even if a bad actor can access the credential wallet)
    > Eliminates privacy risks from centralized biometric storage
    > Reduces liability for organizations handling biometric data
    > Maintains strong authentication without compromising user control

    With Dock Labs, you don't have to choose between robust security and user privacy: you can have both.
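    The four steps above can be sketched in code. This is an illustrative toy, not Dock's actual implementation: HMAC stands in for a real asymmetric digital signature, and an exact hash comparison stands in for fuzzy biometric matching, which in practice never produces bit-identical templates.

```python
# Toy sketch of the enrol -> sign -> local-check -> verify flow.
# HMAC substitutes for real signatures; all names are illustrative.
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # stand-in for the biometric service's signing key

def sign(payload: dict) -> dict:
    """Issue a credential: payload plus a MAC over its canonical JSON form."""
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "sig": hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()}

def verify(credential: dict) -> bool:
    """Check the credential has not been tampered with."""
    body = json.dumps(credential["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

# 1. Enrolment: only a digest of the template leaves the sensor pipeline.
template_hash = hashlib.sha256(b"raw-biometric-template").hexdigest()
# 2. The issuer signs an enrolment credential bound to that digest.
enrolment = sign({"type": "enrolment", "template_hash": template_hash})
# 3. Local check: a fresh scan matches the enrolled template on-device
#    (real matching is fuzzy; exact equality is a simplification here).
fresh_hash = hashlib.sha256(b"raw-biometric-template").hexdigest()
match = verify(enrolment) and fresh_hash == enrolment["payload"]["template_hash"]
# 4. The wallet issues a match credential; a relying party verifies the
#    signature without ever seeing the biometric itself.
proof = sign({"type": "biometric-match", "result": match})
print(verify(proof), proof["payload"]["result"])
```

    The key property: the relying party only ever handles `proof`, which contains no biometric data, yet any tampering with its payload invalidates the signature.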

  • View profile for P. Raquel B.

    Optimizing workflows to maximize returns | Keynote Speaker | Ethical Hacker

    10,559 followers

    Biometric data is at risk. And it's not like passwords. You can't just change your fingerprints.

    Stolen biometric data is a lifelong problem. Hackers can use it forever. A breach can affect millions. In 2019, over 1 million people's data was exposed.

    Hackers can exploit biometric systems in many ways:
    - Accessing biometric databases
    - Manipulating facial recognition
    - Using SQL injections via QR codes

    Deepfake technology makes it worse. Hackers can create convincing replicas of you. Critical facilities worldwide are at risk. Hackers could deploy malware through compromised biometric terminals.

    To stay safe:
    → Use robust encryption.
    → Isolate biometric readers on separate networks.
    → Combine biometrics with multi-factor authentication.

    Biometric security needs serious attention.

  • View profile for Imran Ahmad

    Partner and Head of Technology, Co-Chair Cybersecurity and data privacy at Norton Rose Fulbright

    17,374 followers

    (PRIVACY - Use of biometric data) The Information and Privacy Commissioner of Ontario (IPC) recently issued a decision regarding the use of #FacialRecognitionTechnology. The facts are a bit unique in that some students alleged that "smart" vending machines appeared to use facial recognition technology that was collecting facial images without consent or proper notice.

    Key issues identified:
    - Lack of meaningful consent from individuals whose biometric data was collected.
    - Inadequate notice and transparency regarding the use of facial recognition.
    - Failure to demonstrate necessity and proportionality in deploying such invasive surveillance tools.

    Legal findings: The IPC concluded that the university’s actions contravened the Freedom of Information and Protection of Privacy Act (FIPPA), reinforcing that biometric surveillance must meet stringent legal and ethical standards—especially within public institutions. This decision is consistent with the analysis of the federal privacy commissioner in the Cadillac Fairview decision.

    Legal professionals and privacy officers should take note of the IPC’s emphasis on transparency, consent, and accountability when implementing AI-driven surveillance systems. #PrivacyLaw #Biometrics #FIPPA #OntarioIPC #DataProtection #Cybersecurity https://lnkd.in/gk4B4ewx

  • View profile for Federico Marengo

    Assoc Partner at White Label Consultancy | Privacy & AI | PhD | Lawyer

    34,428 followers

    The Spanish data protection authority (Agencia Española de Protección de Datos - AEPD) imposed a €96,000 fine for privacy violations involving the use of facial recognition technologies. The company deployed a facial recognition system as the sole access method to its sports centers without providing alternatives, infringing Article 9 of the GDPR. A person refused to be subject to facial recognition, and the sports center denied them access to the facilities without allowing the use of any identification alternatives (such as an ID card, driver's licence, etc.).

    The company also failed to conduct a Data Protection Impact Assessment as required by Article 35 and did not adequately inform members about the data processing, violating Article 13. The DPIA is required even in this case, where the processing is for verification/authentication (1:1) rather than biometric identification (1:N).

    The AEPD noted that, for the suitability analysis, the controller should first define the effectiveness and error thresholds for each identification method fulfilling the same purpose of the processing, and determine whether the identification can actually be performed more effectively than with traditional methods that use human intervention. In this case, the fined company did not:
    - analyze the effectiveness thresholds or error rates of the various access systems to the centers it had used in the past (identification cards, fingerprints, facial recognition, and ID display), and
    - indicate why biometric verification using facial recognition is more effective than all other methods in verifying members' identity.

    The company merely stated that it decided to replace a previous system (fingerprint) with facial recognition due to technical flaws detected in its application, without specifying or justifying these flaws. It did not include any analysis of the effectiveness or suitability of the other access methods that do not use biometric systems, such as identification cards and ID display.

    Implications for businesses:
    1) Conduct a DPIA before implementing any biometric technology
    2) Always provide alternative identification methods when implementing biometric systems
    3) Document and analyze effectiveness thresholds and error rates of all potential identification methods
    4) Demonstrate why a biometric solution is more effective and necessary than traditional identification methods that use human intervention or non-biometric alternatives
    5) Provide information about the processing, including data subjects' right to use alternative ID methods
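    The suitability analysis the AEPD requires (point 3 above: documenting effectiveness thresholds and error rates per method) can be sketched as a simple comparison. The figures below are hypothetical placeholders that a real DPIA would have to measure, not data from this case.

```python
# Sketch of a per-method error-rate comparison for a DPIA suitability
# analysis. All rates are invented placeholders for illustration.
methods = {
    "id-card":            {"false_accept": 0.020, "false_reject": 0.010},
    "fingerprint":        {"false_accept": 0.002, "false_reject": 0.030},
    "facial-recognition": {"false_accept": 0.001, "false_reject": 0.050},
}

def total_error(rates: dict) -> float:
    """Crude combined error metric: false accepts plus false rejects."""
    return rates["false_accept"] + rates["false_reject"]

# Pick the method with the lowest combined error under these (made-up) numbers
best = min(methods, key=lambda name: total_error(methods[name]))
print(best, round(total_error(methods[best]), 3))
```

    With these placeholder numbers the non-biometric baseline wins, which is exactly the point: the biometric choice is only justifiable if it demonstrably beats the alternatives on documented thresholds, and even then alternatives must still be offered.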

  • View profile for Jason Makevich, CISSP

    Founder & CEO of PORT1 & Greenlight Cyber | Keynote Speaker on Cybersecurity | Inc. 5000 Entrepreneur | Driving Innovative Cybersecurity Solutions for MSPs & SMBs

    7,088 followers

    Let’s talk passwords: Could biometrics and AI be the end of traditional password security? As cyber threats become more sophisticated, it’s clear that traditional password-based security systems are no longer enough. Here’s why the future might be password-free:

    → The Weakness of Passwords
    Over 80% of data breaches are linked to weak or compromised passwords. This alarming statistic is driving the shift toward password-less authentication methods, with tech giants like Microsoft, Google, and Apple already leading the way with innovations like facial recognition and hardware security keys.

    → The Promise of Biometrics
    Biometric authentication—think fingerprint scanning, facial recognition, and voice ID—offers a level of security passwords can’t match.
    ✨ Enhanced Security: Biometric traits are unique and difficult to replicate, reducing the chances of unauthorized access.
    ✨ Convenience: No more remembering complex passwords! Biometric systems provide quick, seamless access using traits that are always available.

    But the transition isn’t without its challenges:
    🔐 Privacy Concerns: Unlike passwords, biometrics can’t be changed if compromised.
    💡 Access Barriers: Not all devices are equipped with biometric capabilities, limiting widespread adoption.

    → The AI Advantage
    AI is stepping in to revolutionize how we think about authentication:
    ⚙️ Behavioral Biometrics: AI can analyze user behavior—like typing patterns and mouse movements—to continuously authenticate, adding an extra layer of security.
    ⚙️ Real-Time Threat Detection: AI can monitor logins for unusual activity, flagging potential threats in real time.
    ⚙️ Adaptive Authentication: Based on contextual factors (location, device), AI adjusts authentication methods for enhanced security without sacrificing user convenience.

    The future outlook? The shift to password-less authentication is inevitable, driven by the need for more secure and user-friendly digital experiences. However, this transition comes with its own set of hurdles: privacy concerns, accessibility issues, and the need for infrastructure upgrades.

    In conclusion, while biometrics and AI show great promise, achieving a truly password-less future will require addressing these challenges—ensuring privacy, accessibility, and adaptability to evolving threats. The question is no longer if, but when we’ll make the leap.
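    The behavioral-biometrics idea above (typing cadence as a continuous authentication signal) can be sketched with basic statistics: flag a session whose keystroke timing drifts far from the user's enrolled baseline. A toy illustration; the intervals and threshold are invented.

```python
# Toy sketch of behavioral biometrics via typing cadence. Baseline data
# and the z-score threshold are invented for illustration only.
from statistics import mean, stdev

baseline_ms = [105, 98, 110, 102, 95, 108, 100, 97]  # enrolled inter-key delays
mu, sigma = mean(baseline_ms), stdev(baseline_ms)

def anomalous(session_ms: list, z_threshold: float = 3.0) -> bool:
    """True if the session's mean delay is > z_threshold std devs from baseline."""
    z = abs(mean(session_ms) - mu) / sigma
    return z > z_threshold

print(anomalous([101, 104, 99, 107]))   # cadence close to the enrolled user
print(anomalous([190, 210, 185, 200]))  # very different typing rhythm
```

    A production system would model much richer features (digraph timings, mouse dynamics) and use the score to trigger step-up authentication rather than a hard block.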
