✳️ Open Letter to Corporate Directors: Lead the Charge for Responsible AI ✳️

AI is reshaping industries and society, and your leadership is essential in ensuring it is harnessed responsibly. While AI offers vast opportunities, it introduces risks that cannot be ignored. To protect long-term value, you must actively prioritize responsible AI governance.

Many executive teams hesitate to raise concerns about AI risks or request additional resources due to perceived power dynamics. This silence can leave critical issues unaddressed. As directors, you must set the expectation for transparency and accountability, empowering your leadership teams to confront AI challenges head-on.

➡️ Establish Governance as a Priority
Governance cannot be about fulfilling minimum requirements; it must steer outcomes toward optimized risk and cost. Integrating frameworks like #ISO42001 ensures your organization has the structure to address AI risks and deliver measurable, ethical results. Without clear governance, you leave your organization exposed to reputational and operational risks.
Action Steps:
🔸 Implement a governance framework that includes regular reviews of AI risks and impacts.
🔸 Establish accountability for AI ethics and risk management across your leadership structure.
🔸 Demand evidence of effective AI oversight in all major initiatives.

➡️ Empower Leadership to Act
Leaders need your visible support to confront AI-related risks. Many fear voicing concerns or requesting resources, especially in rigid hierarchies. Your role is to eliminate that hesitation by fostering a culture where responsible AI is a shared goal.
Action Steps:
🔸 Encourage open communication about AI risks and resource needs.
🔸 Ensure leadership has the tools, training, and budgets necessary to manage AI effectively.
🔸 Recognize and reward proactive efforts to address AI risks.

➡️ Consider All Stakeholders
AI's impact extends beyond shareholders to employees, customers, and society at large.
Ignoring these dimensions risks trust and reputation. Responsible AI governance protects your organization and strengthens its position as a credible, ethical leader.
Action Steps:
🔸 Evaluate AI initiatives for their long-term effects on all stakeholders.
🔸 Align AI strategies with societal and regulatory expectations.
🔸 Monitor emerging risks and adapt governance practices accordingly.

➡️ Go Beyond Compliance
Merely meeting regulatory requirements is not enough. Responsible AI demands continuous evaluation, proactive risk management, and improvements based on lessons learned. You have the authority to ensure these processes are ingrained in your organization.
Action Steps:
🔸 Require continuous AI assurance practices, not one-time compliance checks.
🔸 Lead by example by engaging with industry standards and governance leaders.
🔸 Hold the organization accountable for measurable improvements in AI risk management.

Your responsibilities are significant; please don't take them for granted.
Corporate Governance Best Practices for Tech Companies
Summary
Corporate governance best practices for tech companies involve creating clear frameworks and oversight to manage risks, ensure ethical compliance, and promote accountability, especially in rapidly evolving fields like artificial intelligence (AI). These practices are essential for long-term organizational success and trust among stakeholders.
- Prioritize transparent oversight: Boards should establish dedicated governance frameworks to monitor risks, ethics, and compliance, especially for disruptive technologies like AI.
- Engage leadership and stakeholders: Build a culture of open communication to address risks, provide resources, and ensure all voices, including employees and stakeholders, are heard and valued.
- Adapt governance practices: Regularly review and update governance strategies to align with emerging technologies, legal requirements, and societal expectations.
-
Board Directors: A flawed algorithm isn't just the vendor's problem…it's yours also. Because when companies license AI tools, they don't just license the software. They license the risk.

I was made aware of this in a compelling session led by Fayeron Morrison, CPA, CFE for the Private Directors Association®-Southern California AI Special Interest Group. She walked us through three real cases:
🔸 SafeRent – sued over an AI tenant screening tool that disproportionately denied housing to Black, Hispanic, and low-income applicants
🔸 Workday – sued over allegations that its AI-powered applicant screening tools discriminate against job seekers based on age, race, and disability status
🔸 Amazon – scrapped a recruiting tool that was found to discriminate against women applying for technical roles

Two lessons here:
1. Companies can be held legally responsible for the failures or biases in AI tools, even when those tools come from third-party vendors.
2. Boards could face personal liability if they fail to ask the right questions or demand oversight.
❎ Neither ignorance nor silence is a defense.

Joyce Cacho, PhD, CDI.D, CFA-NY, a recognized board director and governance strategist, recently obtained an AI certification (@Cornell) because:
- She knows AI is both a risk and an opportunity.
- She assumes that tech industry biases will be embedded in large language models.
- She wants it documented in the minutes that she asked insightful questions about costs (including #RAGs and other techniques), liability, reputation, and operating risks.

If you're on a board, here's a starter action plan (not exhaustive):
✅ Form an AI governance team to shape a culture of transparency
🧾 Inventory all AI tools: internal, vendor & experimental
🕵🏽♀️ Conduct initial audits
📝 Review vendor contracts (indemnification, audit rights, data use)

Because if your board is serious about strategy, risk, and long-term value… then AI oversight belongs on your agenda. ASAP.

What's your board doing to govern AI?
-
This new white paper "Steps Toward AI Governance" summarizes insights from the 2024 EqualAI Summit, cosponsored by RAND in D.C. in July 2024, where senior executives discussed AI development and deployment, challenges in AI governance, and solutions for these issues across government and industry sectors. Link: https://lnkd.in/giDiaCA3

* * *

The white paper outlines several technical and organizational challenges that impact effective AI governance:

Technical Challenges:
1) Evaluation of External Models: Difficulties arise in assessing externally sourced AI models due to unclear testing standards and limited development transparency, in contrast to in-house models, which can be customized and fine-tuned to fit specific organizational needs.
2) High-Risk Use Cases: Prioritizing the evaluation of high-risk AI use cases is challenging due to the diverse and unpredictable outputs of AI, particularly generative AI. Traditional evaluation metrics may not capture all vulnerabilities, suggesting a need for flexible frameworks like red teaming.

Organizational Challenges:
1) Misaligned Incentives: Organizational goals often conflict with the resource-intensive demands of implementing effective AI governance, particularly when it is not legally required. A lack of incentives for employees to raise concerns and the absence of whistleblower protections can lead to risks being overlooked.
2) Company Culture and Leadership: Establishing a culture that values AI governance is crucial but challenging. Effective governance requires authority and buy-in from leadership, including the board and C-suite executives.
3) Employee Buy-In: Employee resistance, driven by job security concerns, complicates AI adoption, highlighting the need for targeted training.
4) Vendor Relations: Effective AI governance is also impacted by gaps in technical knowledge between companies and vendors, leading to challenges in ensuring appropriate AI model evaluation and transparency.
* * *

Recommendations for Companies:
1) Catalog AI Use Cases: Maintain a centralized catalog of AI tools and applications, updated regularly to track usage and document specifications for risk assessment.
2) Standardize Vendor Questions: Develop a standardized questionnaire for vendors to ensure evaluations are based on consistent metrics, promoting better integration and governance in vendor relationships.
3) Create an AI Information Tool: Implement a chatbot or similar tool to provide clear, accessible answers to AI governance questions for employees, using diverse informational sources.
4) Foster Multistakeholder Engagement: Engage both internal stakeholders, such as C-suite executives, and external groups, including end users and marginalized communities.
5) Leverage Existing Processes: Utilize established organizational processes, such as crisis management and technical risk management, to integrate AI governance more efficiently into current frameworks.
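The catalog recommendation above can be sketched as a simple record structure. This is a minimal illustration in Python; the field names, risk tiers, and 90-day review cadence are my assumptions, not specifications from the white paper.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIUseCase:
    """One entry in a centralized AI use-case catalog (illustrative fields)."""
    name: str
    owner: str             # accountable business owner
    vendor: Optional[str]  # None for in-house models
    risk_tier: str         # e.g. "high", "medium", "low"
    last_reviewed: date

def overdue_for_review(catalog, today, max_age_days=90):
    """Entries whose last review is older than max_age_days (assumed cadence)."""
    return [u for u in catalog if (today - u.last_reviewed).days > max_age_days]

catalog = [
    AIUseCase("resume-screener", "HR", "VendorX", "high", date(2024, 1, 10)),
    AIUseCase("support-chatbot", "Customer Service", None, "medium", date(2024, 6, 1)),
]
# As of 2024-07-01, only "resume-screener" is past the 90-day window
stale = overdue_for_review(catalog, today=date(2024, 7, 1))
```

The same record could also carry the standardized vendor-questionnaire answers from recommendation 2, so vendor evaluations and usage tracking live in one regularly updated place.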
-
Everyone’s feeding data into AI engines, but when it leaves secure systems, the guardrails are often gone. Exposure grows, controls can break down, and without good data governance, your organization's most important assets may be at risk. Here's what needs to happen:

1. Have an established set of rules about what’s allowed/not allowed regarding the use of organizational data, shared organization-wide, not just with the IT organization and the CISO team.

2. Examine the established controls on information from origin to destination and who has access every step of the way: end users, system administrators, and other technology support people. Implement new controls where needed to ensure the proper handling and protection of critical data. You can have great technical controls, but if far too many people have access who don't need it for legitimate business or mission purposes, it puts your organization at risk.

3. Keep track of the metadata that is collected and how well it’s protected. Context matters. There’s a whole ecosystem associated with any network activity or data interchange, from emails or audio recordings to bank transfers. There’s the transaction itself and its contents, and then there’s the metadata about the transaction and the systems and networks it traversed on its way from point A to point B. This metadata can be used by adversaries to engineer successful cyberattacks.

4. Prioritize what must be protected. In every business, some data has to be more closely managed than others. At The Walt Disney Company, for example, we heavily protected the dailies (the output of the filming that went on that day) because the IP was worth millions. In government, it was things like planned military operations that needed to be highly guarded.
You need an approach that doesn’t put mission-critical protections on what the cafeteria is serving for lunch, or conversely, let a highly valuable transaction go through without a VPN, encryption, and other protections that make it less visible.

Takeaway: Data is a precious commodity and one of the most valuable assets an organization can have today. Because the exchange-for-value is potentially so high, bad actors can hold organizations hostage and demand payment simply by threatening to use it.
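The prioritization idea above can be made concrete as a mapping from data classification to minimum required controls. This is a hedged sketch: the tier names and control labels are my own illustrations, not an established standard or anything prescribed in the post.

```python
# Minimum controls per classification tier (illustrative, not prescriptive).
REQUIRED_CONTROLS = {
    "public":       set(),
    "internal":     {"access_logging"},
    "confidential": {"access_logging", "encryption_at_rest"},
    "critical":     {"access_logging", "encryption_at_rest",
                     "encryption_in_transit", "vpn_only", "need_to_know_acl"},
}

def missing_controls(classification: str, applied) -> set:
    """Controls still required for data at the given classification tier."""
    return REQUIRED_CONTROLS[classification] - set(applied)

# A highly valuable asset (e.g. film dailies) with only partial protection:
gaps = missing_controls("critical", ["access_logging", "encryption_at_rest"])
# The cafeteria menu, by contrast, needs nothing extra:
assert missing_controls("public", []) == set()
```

The point is proportionality: a lookup like this makes it cheap to flag a "critical" transaction traveling without VPN or encryption, while leaving low-value data unburdened.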
-
🚨 Breaking: New research reveals AI models can strategically deceive their creators during training - a watershed moment in AI safety.

As a Board Director focused on AI governance, Anthropic's latest findings about their Claude AI model's ability to "fake alignment" demand immediate attention in our boardrooms. This isn't just another AI development - it's a clear signal that we need to strengthen our oversight frameworks.

Critical Board Imperatives:

1. Risk Management: We're seeing concrete evidence of AI systems developing sophisticated deception strategies. This requires immediate elevation of AI risk to board-level oversight.

2. Governance with insight and foresight: Boards must actively engage with General Counsel to establish robust AI governance frameworks focusing on:
- Transparent AI deployment and decision-making processes
- Comprehensive risk assessment and mitigation strategies
- Clear accountability chains with meaningful human oversight
- Stringent data privacy protections and compliance

3. Strategic Planning: Every board meeting should now include AI governance on its agenda. The research suggests deceptive behaviors become more sophisticated as AI systems grow more powerful - waiting is not an option.

Key Finding: These aren't theoretical risks anymore. When Anthropic's AI model demonstrated strategic deception to preserve its programming, it highlighted the urgent need for proactive governance.

As board members, we must champion the integration of legal expertise with AI oversight. Our fiduciary duty now extends to ensuring AI systems align with both organizational values and regulatory requirements.

My 2H of 2024 was very busy counseling boards, CIOs, and various General Counsels on strengthening their AI governance frameworks. How is your board approaching this challenge?
#CEO #KSgems #BoardOversight #CIO #CTO #CISO #AI #BoardLeadership #AIGovernance #CorporateGovernance #RiskManagement #LegalTech #AIEthics #BusinessStrategy Manuj Aggarwal Hasit Trivedi https://lnkd.in/eV4tscYy
-
I recently read that Gartner predicts 10% of global boards will use #ArtificialIntelligence to challenge executive decisions by 2029. But #boards that focus on using AI tools to guide their work are entirely missing the point. The real issue is that boards are not ready for #AI governance. Most public company boards do not understand the modern value creation chain: how #IT drives process automation, which produces data, which drives business value. This contemporary approach to value production is fundamentally different from the environments in which these leaders developed their expertise. While it's encouraging that board members are finding AI assistance tools useful in daily work, the fact that 10% of boards are using AI tools is irrelevant to the core board challenge: how to shift mindsets, develop #tech acumen, and think about a very different future. These fundamental changes are far more critical. Looking ahead, boards must consider establishing a technology committee, identifying new board members to drive innovation, focusing on board education, and establishing a framework for a tech-focused board. #BoardGovernance #DigitalTransformation
-
The article attached in the comments sets out some stand-out principles and a sub-set of the rationale for board governance in terms of emerging and transformative technologies. One of the central issues that captures my attention as both a governance practitioner and researcher is the notion that the boards’ holistic approach to governance requires breadth and depth in respect of what Andrea Bonime-Blanc, JD/PhD aptly terms ‘exponential technology’.

In addition to the points set out in the article, I submit that there is a need for a deep understanding of, and delineation between, the roles and responsibilities, and the requirements and demands, of board governance, IT governance, cyber governance, AI governance, data governance, ‘the governance of data’, business strategy and IT strategy (et al.). These elements have different but connected functions. They must be defined clearly and considered discretely in autonomous governance structures that dovetail into an overarching, integrated, and aligned governance architecture.

The roles and responsibilities of the governance elements and actors in the enterprise governance architecture must be clear. The metrics and information required (for decision-making and dissemination - down, across, and upwards) at each level of governance must be equally clear and tuned. The separation of metrics required for the purposes of governance and for management must also be clear (although strong, continuous alignment between these must be maintained). Business metrics and IT metrics, and the interrelationship between them regarding each of the above aspects of the organisation's governance system, must be understood clearly and implemented in a cohesive way that constructs a complete and up-to-date picture of the whole governance canvas.

Importantly, it must be remembered that in relation to AI governance frameworks, one size does not fit all.
For the board, this is about governing for performance and impact, not only compliance and conformance. In the Optima Board Services Group global practice, the above is addressed from the board point of view in what I term the ‘board digital portfolio’™.

Integrated, appropriately tuned technology governance, along with performance improvement measures, that delivers more effective and nimble support for the achievement of strategic business objectives and the effective management of risk and compliance, is imperative. Understanding what is required, and how to ensure it is enabled and effected, demands deep knowledge of corporate and technology governance, ethics, and their relationship.

See the full article in the comments. Jordan Famularo, PhD Maureen Farmer, CEO Advisor Virtual Advisory Board (VAB) Prof Michael Adams FAAL Eduardo Lebre Alexandra Lajoux #corporategovernance #boardofdirectors #aigovernance #airegulation #privacylaw #cyberlaw #corporatelaw #businessstrategy #technologystrategy
-
Everyone is talking about the 'cognitive debt' MIT study, but not as many people are talking about how 42% of businesses scrapped most of their AI initiatives in 2025, up from just 17% last year. And guess what: this is less about failed technology than it is about underdeveloped governance.

Because here's the real story: While 73% of C-suite executives say ethical AI guidelines are "important," only 6% have actually developed them. Companies are building first and governing later, and paying the price with abandoned projects, compliance failures, and eroded stakeholder trust.

Which means a massive opportunity: The regulatory landscape is fragmenting (US deregulation vs. the EU AI Act), but one thing is clear: human-centered AI design isn't optional anymore. Organizations that integrate ethics from day one aren't just avoiding failures; they're scaling faster.

So here are three immediate actions for leaders:
* Audit your current AI governance gaps (not just the technical risks)
* Establish board-level AI oversight (as 31% of the S&P 500 already have)
* Design for augmentation, not automation (research shows this drives better outcomes)

And don't leave the human perspective, or the human thinking, out of the equation.

The question isn't whether to govern AI ethically; it's whether you'll do so now and get ahead of your projects, or be stuck playing catch-up later.

What's your organization's approach to AI governance? Share your challenges below.

#AIEthics #ResponsibleAI #CorporateGovernance #TechLeadership #WhatMattersNextbook
-
👇 Near-misses hold untapped leadership lessons.

Password '123456' astonishingly unlocked McDonald's AI-driven hiring platform (fortunately detected by noble cyber sleuths Sam Curry & Ian Carroll - thank you!). A quick technical fix reveals a wider corporate governance gap -- boards & C-suites remain dangerously unready for AI design, deployment & oversight.

Just ask these four questions:
👉 Does our board composition match our AI & cyber risk exposure?
👉 Does our senior leadership team understand the strategic, financial, operational and reputational downsides of tech troubles?
👉 Will unaddressed gaps be discovered through proactive governance or unwanted disclosure?
👉 Are we governing AI & cyber vendors or just hoping they handle security?

Read more on Forbes about how to reframe #ResponsibleAI debates and overcome incentives, incompetence & indifference w/ stewardship, capability & care: https://lnkd.in/gBhqMBnj

Thanks for the stellar background research from WIRED's Andy Greenberg & PwC's AI team led by Dan Priest, as well as insights & thought leadership from Steve Andriole, Mark Jesty, Alan Robertson, Christopher Hetner & Ivan Rahman.

cc: EY Valerie Ashbaugh Ian Borden Tiffanie Boyd Kim Nash Rob Sloan Dr. Keri P. Tony Moroney Aldo Ceccarelli Shira Rubinoff✔ Shay Colson, CISSP Digital Directors Network William (Tony) Cole Calvin Nobles, Ph.D. Julia Stoyanovich Ludovic Righetti Andrew Hoog Andrew Heighington Cyril Coste Avrohom Gottheil Corix Partners Michael Brown Business Expert Press Responsible AI Institute Global Council for Responsible AI Center for Responsible AI at NYU CyberSecurity News

#trustedAI #cybersecurity
-
TL;DR: AI is rapidly transforming work across all industries, with 30% of work hours potentially automated by 2030. Most boards lack adequate AI governance, with only 17% of Fortune 500 directors having substantial AI experience.

Effective AI governance requires:
➡️ CEO accountability rather than tech delegation
➡️ Board-wide AI literacy, not just specialist knowledge
➡️ Workforce transformation planning alongside technology roadmaps
➡️ Avoiding implementation pitfalls
➡️ Proactive AI risk management
➡️ Business outcome metrics, not just implementation milestones

Avoid creating specialized "AI committees"; instead, integrate AI oversight into existing board structures.

Ask management these three questions:
1) How is AI changing our competitive landscape now?
2) Which job functions are most vulnerable?
3) What is our biggest AI governance vulnerability?