94% of top performers don't have the "required" years of experience.

Steve Jobs. Mark Zuckerberg. Elon Musk. Here's what they had in common: they didn't have the "years of experience."

I have seen so many recruiters and staffing teams use this metric, and it's all wrong. "Years of Experience" as a hiring metric:

➡️ Is a poor predictor of PERFORMANCE
• Fact: a 2019 study found only a 3% correlation between experience and job performance
• Reality: I've seen 2-year "rookies" outperform 10-year "veterans" countless times

➡️ Stifles INNOVATION
• 78% of HR leaders agree: fresh perspectives drive innovation
• Example: would Netflix have disrupted Blockbuster if they only hired "experienced" video rental experts?

➡️ Is particularly flawed in tech
• Tech skills have a half-life of about 5 years
• A developer with 2 years in cutting-edge AI often trumps one with 10 years in legacy systems

➡️ Discriminates against career changers
• 49% of employees will change careers in their lifetime
• By ignoring transferable skills, you're missing out on diverse problem-solving approaches

➡️ Ignores the QUALITY of the experience
• 3 years of high-impact projects > 7 years of routine tasks
• I once hired a 3-year product manager who increased ROI by 200% over a 10-year counterpart

The solution: focus on these instead
✅ Demonstrated skills: use practical assessments
✅ Learning agility: look for continuous self-improvement
✅ Adaptability: ask for examples of quick learning and pivots
✅ Problem-solving ability: present real scenarios in interviews
✅ Cultural add (not just fit): how will they enhance your culture?

Actionable steps:
1. Rewrite job descriptions: replace "X years required" with specific competencies
2. Implement blind resume reviews: test actual abilities, not years accumulated
3. Use skill-based assessments: focus on achievements, not timelines
4. Conduct project-based interviews: see candidates in action
5. Create diverse interview panels: reduce bias and get multiple perspectives

The result?
You'll build more innovative, adaptable, and high-performing teams. What's been your experience? Have you seen "inexperienced" hires shine? #Recruitment #Hiring #HiringandPromotion #Startups #Founders RecruitingSniper and Joshua Talreja
Performance-Based Skill Measurement
Summary
Performance-based skill measurement means assessing an individual’s abilities by observing and measuring what they actually achieve in real work situations, instead of relying on their years of experience or self-reported skills. This approach is reshaping how organizations hire, review, and reward employees by prioritizing real outcomes and demonstrated capabilities.
- Prioritize real outcomes: Focus on what candidates and employees accomplish in practical tasks or projects, rather than simply counting years of experience or relying on interview responses.
- Implement skill assessments: Use hands-on tests, project-based interviews, or work samples to see how people apply their skills to solve problems and contribute to your team.
- Measure progress with data: Track specific, quantifiable results—like revenue growth, productivity improvements, or successful mentorship—to clearly understand and communicate impact.
-
The most valuable private tech company out of Europe right now published its performance management playbook. And IMO every entrepreneur should read it.

There's a lot out there about what Revolut has accomplished ($428m in net profit last year, with $2.2bn in revenue and a global customer base of 45 million, for starters). There's a lot less written about how the Revolut team achieved this level of success. Which makes Nik Storonsky's "Driving High Performance" playbook so valuable. It was co-written by Nik and the team at QuantumLight and somehow manages to condense nearly a decade of Nik's best practices from growing Revolut into a 30-minute read.

What I find most notable about Nik's playbook:

🥷 𝐀 𝐝𝐞𝐝𝐢𝐜𝐚𝐭𝐞𝐝 𝐩𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞 𝐭𝐞𝐚𝐦 𝐭𝐡𝐚𝐭’𝐬 𝐬𝐞𝐩𝐚𝐫𝐚𝐭𝐞 𝐟𝐫𝐨𝐦 𝐇𝐑 𝐚𝐧𝐝 𝐫𝐞𝐩𝐨𝐫𝐭𝐬 𝐝𝐢𝐫𝐞𝐜𝐭𝐥𝐲 𝐭𝐨 𝐭𝐡𝐞 𝐂𝐄𝐎

Nik believes performance management is a science, not an art. It can be standardized, and it should be a top CEO priority. At Revolut, this looks like a team of smart operators who build the process for performance management and constantly fine-tune evaluations and incentives.

🧮 𝐒𝐭𝐚𝐧𝐝𝐚𝐫𝐝𝐢𝐳𝐞𝐝 𝐞𝐯𝐚𝐥𝐮𝐚𝐭𝐢𝐨𝐧𝐬 𝐭𝐡𝐫𝐨𝐮𝐠𝐡 𝐝𝐚𝐭𝐚 𝐚𝐧𝐝 𝐬𝐢𝐦𝐩𝐥𝐞 𝐟𝐨𝐫𝐦𝐮𝐥𝐚𝐬 𝐭𝐨 𝐫𝐞𝐦𝐨𝐯𝐞 𝐛𝐢𝐚𝐬𝐞𝐬 𝐚𝐧𝐝 𝐩𝐨𝐥𝐢𝐭𝐢𝐜𝐚𝐥 𝐢𝐧𝐭𝐞𝐫𝐟𝐞𝐫𝐞𝐧𝐜𝐞𝐬

Performance is assessed across three dimensions — deliverables, skills, and culture — and scorecards are used to describe ideal behavior. Assessment is standardized through yes/no answers. For each seniority level, the performance team sets a bar for expectations and runs a quarterly process to gather performance reviews, calculate grades, calibrate results, and share those results with managers to deliver feedback. There are no exceptions to this process, no matter how junior or senior someone is.

The result of such a mathematical approach? Employees get evaluated on outcomes, not intuition. Which means they spend less time positioning themselves positively and more time improving their metrics.
🥇 𝐃𝐢𝐬𝐩𝐫𝐨𝐩𝐨𝐫𝐭𝐢𝐨𝐧𝐚𝐭𝐞 𝐜𝐨𝐦𝐩𝐞𝐧𝐬𝐚𝐭𝐢𝐨𝐧 𝐟𝐨𝐫 𝐭𝐨𝐩 𝐩𝐞𝐫𝐟𝐨𝐫𝐦𝐞𝐫𝐬 𝐚𝐧𝐝 𝐪𝐮𝐢𝐜𝐤 𝐞𝐱𝐢𝐭𝐬 𝐟𝐨𝐫 𝐛𝐨𝐭𝐭𝐨𝐦 𝐩𝐞𝐫𝐟𝐨𝐫𝐦𝐞𝐫𝐬

When everything that matters gets measured across functions, both A-players and underperformers are easy to spot. Revolut doesn't shy away from giving its top 15-25 percent of employees disproportionate compensation. On the other side of the performance coin, they focus on exiting the bottom 0-10% of performers as quickly as possible.

At a time when all the talk is about founder mode, here is a concrete, actionable playbook for maintaining peak performance at large scale. Is Nik's approach for everyone? No. Can it lead to incredible results for founders who adapt this model to their own culture? Absolutely.

Nik Storonsky and QuantumLight, thanks for sharing your secrets - hopefully it will inspire and help a lot of entrepreneurs.
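As a thought experiment, the yes/no scorecard approach described above can be sketched in a few lines of code. This is an illustrative reconstruction only: the dimensions follow the post (deliverables, skills, culture), but the weights, example answers, and seniority bar are hypothetical values, not figures from the playbook.

```python
# Illustrative sketch of a formula-based performance grade built from
# yes/no scorecard answers. Weights and the seniority bar are hypothetical.

def grade(scorecard: dict[str, list[bool]], weights: dict[str, float]) -> float:
    """Convert yes/no scorecard answers into a single weighted grade in [0, 1]."""
    total = 0.0
    for dimension, answers in scorecard.items():
        share_yes = sum(answers) / len(answers)  # fraction of "yes" answers
        total += weights[dimension] * share_yes
    return total

def meets_bar(score: float, seniority_bar: float) -> bool:
    """Compare a calculated grade against the expectation bar for the level."""
    return score >= seniority_bar

review = {
    "deliverables": [True, True, False, True],  # 3 of 4 ideal behaviors observed
    "skills":       [True, True, True],
    "culture":      [True, False],
}
weights = {"deliverables": 0.5, "skills": 0.3, "culture": 0.2}

score = grade(review, weights)
print(round(score, 3), meets_bar(score, seniority_bar=0.7))
```

Because every input is a yes/no answer and every step is arithmetic, two reviewers filling in the same scorecard get the same grade, which is the bias-removal property the post highlights.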
-
Performance Management Is Changing. Are You Ready?

In the next three years, the way we assess performance will be fundamentally redefined by one major force: AI integration in everyday work. As AI becomes a baseline expectation, not a bonus skill, we must evolve how we define, measure, and reward performance. Historically, performance ratings emphasized behaviors, results, and goals. But now? The “how” -- which includes leveraging technology to amplify impact -- will be just as important as the “what.”

How will future performance evaluations shift?

1 - Integrated Skills Assessment: evaluating both human expertise and how effectively employees use AI tools (e.g., ChatGPT, Copilot, Replit, Claude, etc.) to improve work quality, efficiency, and innovation.
2 - Job-Based AI Expectations: different jobs require different levels of AI fluency. A marketer using AI to generate customer insights is different from a software engineer automating testing scripts. Leaders must tailor benchmarks.
3 - Rewarding Adaptability: the speed at which employees adapt to new tools and workflows will be a key differentiator in performance.
4 - Performance Calibration Will Evolve: managers will need to assess not only results but how AI helped achieve them. Was it used ethically? Was the employee’s judgment applied appropriately?

Future-Focused Performance Rating Scale:

1. Not Meeting Expectations = Struggles to complete job responsibilities, avoids using new tools, and resists tech-enabled workflows. Example: continues using outdated manual processes despite available AI support; misses deadlines and quality standards.
2. Partially Meeting Expectations = Some responsibilities met, but inconsistent application of AI tools limits impact. The learning curve is still steep. Example: tries using AI but produces work that needs frequent rework; hesitant to explore new tech features.
3. Meeting Expectations = Meets job goals, uses AI/tech tools appropriately to support tasks, and demonstrates foundational digital agility. Example: uses AI to draft content or summarize reports; integrates output with sound judgment and team input.
4. Exceeds Expectations = Proactively uses AI and digital tools to improve quality and productivity; mentors others in effective use. Example: automates data workflows, reduces turnaround time by 30%, and helps peers adopt similar approaches.
5. Consistently Exceeds Expectations = Expertly integrates AI into work to drive innovation, transformation, or measurable business impact. Example: creates an AI-driven customer engagement model that increases conversion rates; pilots new tools for cross-functional use.

As tech becomes the partner for most jobs, we must redefine excellence. Are your performance frameworks ready for that shift?

#PerformanceManagement #Compensation #HR #HumanResources #AI #FutureOfWork #TotalRewards #SHRM #WorldatWork #CompensationConsultant #Pay #PerformanceFeedback https://shorturl.at/915OT
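One way to operationalize a five-level scale like the one above is a plain lookup plus a calibration rule. The sketch below is a hypothetical illustration of the idea that the "how" (AI fluency) constrains the overall rating; the specific capping rule is invented for this example, not taken from the post.

```python
# The five-level rating scale from the post as a lookup table.
RATING_SCALE = {
    1: "Not Meeting Expectations",
    2: "Partially Meeting Expectations",
    3: "Meeting Expectations",
    4: "Exceeds Expectations",
    5: "Consistently Exceeds Expectations",
}

def overall_rating(results_score: int, ai_fluency_score: int) -> int:
    """Hypothetical calibration rule: the 'how' caps the 'what'. An employee
    who avoids the tools can't exceed expectations overall, no matter how
    strong the raw results look."""
    return min(results_score, ai_fluency_score + 1)

# Strong results (5) but weak AI fluency (2) calibrates down to level 3.
print(RATING_SCALE[overall_rating(5, 2)])
```

Teams adopting such a rule would of course tune the cap to their own job-based AI expectations, per point 2 above.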
-
I've updated one of my previous articles that's become increasingly relevant in today's challenging hiring environment: "Interviewing By Putting To Work."

In a job market flooded with qualified candidates, where each posting generates hundreds of applications, traditional interviews are proving increasingly inadequate at identifying the right person. They measure interview performance, not job performance.

For years, I've found that having candidates actually do the job—either as short-term contractors or through substantive work samples—provides dramatically better hiring outcomes than question-and-answer sessions alone. This approach reveals not just technical skills but also how people think, collaborate, and handle uncertainty.

I've outlined three practical models for implementing this approach, depending on your constraints and the role you're filling. Each offers a more reliable path to identifying talent that will truly thrive in your specific environment. In my experience, the extra time invested in working interviews pays tremendous dividends in reducing costly hiring mistakes and building higher-performing teams.

The most significant predictor of future performance isn't what candidates say about their past work—it's what you directly observe them doing now.

I've revised and expanded the article with evidence-based approaches and practical implementation guidance: https://lnkd.in/eY3EZdvU

Sometimes our clients at Flatiron Software and Snapshot AI hire us to do a POC project first and then hire us for longer-term projects.

What methods have you found most effective for identifying the right talent in this challenging market?
-
I used to rely on “feeling” like I was making progress in my career. I learned the hard way that this was a big mistake. Here’s why: 👇

🔥 You cannot improve what you cannot measure. Period! 🔥

Early in my career, I’d walk into performance reviews with vague statements like “I worked really hard this year” and “I made a big impact.” Then I realized the professionals who advanced fastest weren’t necessarily the “hardest” workers - they were the best measurers, and they improved what needed to be improved as a result.

🚀 How High Performers Measure Progress:
↳ “I increased team productivity by 23% through process optimization”
↳ “I generated $2.4M in pipeline from my networking efforts”
↳ “I reduced customer churn by 15% in my territory”
↳ “I mentored 6 junior employees, 4 of whom got promoted”

👏 The Career Metrics That Actually Matter:
↳ Revenue Impact: how did your work directly contribute to the bottom line?
↳ Efficiency Gains: what processes did you improve, and by how much?
↳ Team Development: how many people did you help grow or promote?
↳ Problem Solving: what specific challenges did you solve, and what was the measurable outcome?

🙌 Why Measurement Transforms Careers:
↳ Clarity: you know exactly where you’re winning and where you’re losing
↳ Confidence: you can articulate your value with precision during reviews
↳ Course Correction: you can adjust tactics quickly when metrics decline
↳ Credibility: leaders trust people who speak in data, not feelings
↳ Promotion Readiness: you always have concrete examples of your impact

How do you measure progress? Share below 👇
—
↻ ✰ Share to inspire change ✰ + ✰ Follow me for more if you found this useful ✰
-
NFL teams cracked the talent code that most miss: they identify elite performers while everyone else plays guessing games. The best franchises use frameworks to predict execution under pressure — frameworks that work across every industry.

After studying elite talent spotters, I've identified 3 key frameworks that deliver results:

Framework #1: Execution Context Trumps Raw Metrics
Elite operators know impressive credentials mean nothing without contextual execution. Tom Brady ran a painfully slow 5.28-second 40-yard dash, but his decision-making and leadership made him the GOAT. At NextLink Labs, we scrapped credential-based hiring for contextual execution - our project success rate jumped from 62% to 91%. Context beats raw skill - every time.

Framework #2: Cognitive Processing Speed
The best organizations don't measure what people know. They measure how people learn. Research shows that cognitive processing speed predicts QB success 76% of the time, outperforming college statistics (43%) and physical metrics (38%). At NextLink, our top performers process technical information 2.3 times faster and connect concepts in ways others cannot. This pattern emerges in every high-stakes domain.

Framework #3: Character Predicts Execution
Elite NFL teams use AI to analyze social media activity spanning years. Why? Character directly predicts execution. Prospects who consistently demonstrate accountability online perform better under pressure. This isn't about finding "nice" people - it's about aligning individual values with team culture. Misaligned A-players destroy organizations, while the right fit amplifies performance 3-5x.

Implementation:
1. Evaluate skills in context, not on paper
2. Test for learning velocity over knowledge
3. Identify behavior patterns that reveal character

The gap between average and elite isn't marginal - it's fundamental.

Follow me for software, cybersecurity, and execution frameworks that scale.
-
Measuring Success: How Competency-Based Assessments Can Accelerate Your Leadership

If you feel stuck in your career despite putting in the effort, competency-based assessments can help you make measurable progress by tracking your skills development over time.

💢 Why Competency-Based Assessments Matter: they provide measurable insight into where you stand, which areas need improvement, and how to create a focused growth plan. This clarity can break through #career stagnation and ensure continuous development.

💡 Key Action Points:
⚜️ Take Competency-Based Assessments: track your skills and performance against defined standards.
⚜️ Review Metrics Regularly: ensure you’re making continuous progress in key areas.
⚜️ Act on Feedback: focus on areas that need development and take actionable steps for growth.

💢 Recommended Assessments for Leadership Growth: for leaders looking to transition from Team Leader (TL) to Assistant Manager (AM) roles, here are some assessments that can help:
💥 Hogan Leadership Assessment – measures leadership potential, strengths, and areas for development.
💥 Emotional Intelligence (EQ-i 2.0) – evaluates emotional intelligence, crucial for leadership and collaboration.
💥 DISC Personality Assessment – focuses on behavior and communication styles, helping leaders understand team dynamics and improve collaboration.
💥 Gallup CliftonStrengths – identifies your top strengths and how to leverage them for leadership growth.
💥 360-Degree Feedback Assessment – a holistic approach that gathers feedback from peers, managers, and subordinates to give you a well-rounded view of your leadership abilities.

By using these tools, leaders can see where they excel and where they need development, providing a clear path toward promotion and career growth. Start tracking your progress with these competency-based assessments and unlock your full potential.

#CompetencyAssessment #LeadershipGrowth #CareerDevelopment #LeadershipSkills
-
𝗡𝗼𝘃𝗲𝗹 𝘄𝗮𝘆 𝘁𝗼 𝗺𝗲𝗮𝘀𝘂𝗿𝗲 𝗹𝗲𝗮𝗱𝗲𝗿𝘀𝗵𝗶𝗽 𝘀𝗸𝗶𝗹𝗹𝘀 𝘃𝗶𝗮 𝗰𝗮𝘂𝘀𝗮𝗹 𝗶𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲 (𝗮𝗻𝗱 𝗔𝗜) 🤔 It never occurred to me that causal inference principles could be used effectively to measure something like leadership skills, but two pre-registered WIP studies by Weidmann et al. (2024, 2025) illustrate it actually could make good sense. 💡 The authors aimed to identify the causal contribution of managers to team performance by repeatedly randomly assigning managers to multiple teams of human followers and controlling for all individuals’ task-specific skills and fluid intelligence. Each time a leader was assigned to a team, a prediction was made about team performance based on the individual skills of the leader and the followers. Actual team performance was then compared to these predictions—leaders who drove performance above expectations were deemed to have strong leadership skills. In other words: a good manager consistently helps their team outperform the sum of its parts. 🔀 Those familiar with causal inference can clearly hear in this an echo of its central logic—constructing a counterfactual scenario and comparing it to what actually happened. Here, the counterfactual imagines team performance based solely on individual skills, without any added effect from the leader’s leadership skills. 🤖 Given the logistical difficulty of arranging such testing situations with human team members, the authors explored whether large language model (LLM) agents could stand in for real people. So, in a second pre-registered study, they tested this idea and found that leadership skill estimates based on teams consisting of human followers and AI-simulated followers correlated strongly (ρ = 0.81), which could make this assessment method much more practical and scalable. 
Besides that, the studies also uncovered several other interesting insights underlining the value and importance of objective, skill-based selection of future leaders:

➡️ Good managers had roughly twice the impact on team performance compared to good individual contributors.
➡️ People who put themselves forward as leaders tended to perform worse than those chosen at random—partly because self-nominated managers (correlated with extraversion and self-reported people skills) were often overconfident, especially about their social skills.
➡️ Managerial performance was positively linked to economic decision-making ability, social intelligence, and fluid intelligence—but not to gender, age, ethnicity, or education.
➡️ Skills were stronger predictors of managerial performance than personality traits or personal preferences.

Links to the original papers are in the comments.

⚠️ Caveat: the papers referenced were intended for discussion and comment, and have not undergone peer review.

P.S. Thanks to my boss, Nadzeya Laurentsyeva, for pointing me to this interesting research. 🙏
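The counterfactual logic of the design can be sketched in a few lines of code. Note that the prediction model here (team performance predicted as the mean of member skills) is a deliberately simplified stand-in, not the specification Weidmann et al. actually use; the point is the shape of the estimator, not the numbers.

```python
# Sketch of the causal-inference logic: a leader's contribution is estimated
# as the average gap between a team's actual performance and the performance
# predicted from individual skills alone (the counterfactual without the
# leader's added effect). The skill values below are made up for illustration.
from statistics import mean

def predicted_performance(member_skills: list[float]) -> float:
    """Counterfactual: team output expected from individual skills alone.
    (Simplified here to a plain average of member skill scores.)"""
    return mean(member_skills)

def leadership_effect(assignments: list[tuple[list[float], float]]) -> float:
    """Average residual (actual - predicted) over a leader's randomly
    assigned teams. Positive values mean the team consistently
    outperformed the sum of its parts."""
    residuals = [actual - predicted_performance(skills)
                 for skills, actual in assignments]
    return mean(residuals)

# One leader repeatedly assigned to different teams: (member skills, actual output)
teams = [([0.6, 0.7, 0.5], 0.75),
         ([0.4, 0.9, 0.5], 0.70),
         ([0.8, 0.6, 0.7], 0.80)]
print(round(leadership_effect(teams), 3))  # positive: teams beat predictions
```

Repeated random assignment is what makes this causal: averaging residuals over many teams washes out any one team's idiosyncrasies, leaving the leader as the common factor.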
-
I have one framework that turns underperformers into A-players. It's so simple most CEOs never think to use it. But it reveals exactly how someone creates value and shows them the path to leveling up. The best part? Your highest performers will beg you to implement it.

Companies that scale past $50 million do it because they hire people who know exactly how they contribute to winning. They promote based on clear outcomes, not politics. They can tell you which employee moves which needle and by how much.

I've hired over 3,000 people across 6 industries. The pattern is clear: companies that measure progress accelerate it.

Here are the 5 KPIs every role must have:

1. Revenue Impact KPI
How much value does this person create?
• Sales: deals closed, revenue per deal
• Marketing: qualified leads, cost per acquisition
• Operations: cost savings, efficiency gains

2. Quality KPI
How well do they do the work?
• Customer satisfaction scores
• Error rates
• First-time completion rates

3. Speed KPI
How fast do they deliver?
• Response times
• Project completion rates
• Time to resolution

4. Growth KPI
How are they improving?
• Skills acquired
• Certifications earned
• Process improvements implemented

5. Team Impact KPI
How do they elevate others?
• Peer feedback scores
• Knowledge sharing contributions
• Team productivity when they're leading vs. when they're not

Every person gets 3-5 specific metrics. No ambiguity. No interpretation. The scoreboard shows them exactly how they're winning.

We track these weekly. Every team member has a giant 90s-style thermometer posted next to their desk. Everyone can see who's crushing it. When someone hits their KPIs, we celebrate publicly. When someone is struggling, we have a conversation about what support they need. If they improve, they level up. If they can't after clear feedback and resources, we help them find a role where they can win.

Winners love knowing exactly how they're performing. They want to see their progress. They want to know what winning looks like.
The best employees don't fear metrics. They demand them. Scoreboards work at every revenue level. But at $1M+, the cost of not having them becomes catastrophic. One confused employee at $100K costs you time. One confused leader at $5M costs you six figures. I'm hosting a free workshop on the leadership systems that separate $1M companies from $10M companies. If you're ready to scale past your current ceiling, register here: https://buff.ly/B7PphCa
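As a rough sketch, the "3-5 specific metrics per role" scoreboard idea might look like this in code. The KPI names, targets, and weekly values are invented for illustration; the point is that each metric is explicit, numeric, and comparable against its target.

```python
# Minimal KPI scoreboard sketch: each role gets a handful of explicit
# metrics, each with a target, so attainment is unambiguous.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    target: float
    actual: float

    @property
    def attainment(self) -> float:
        """Fraction of target achieved; 1.0 means exactly on target."""
        return self.actual / self.target

# Hypothetical scoreboard for one sales role.
sales_rep = [
    KPI("Revenue impact: pipeline generated ($)", 500_000, 620_000),
    KPI("Quality: customer satisfaction (1-5)", 4.5, 4.2),
    KPI("Speed: proposals delivered on time", 20, 22),
]

for kpi in sales_rep:
    status = "on track" if kpi.attainment >= 1.0 else "needs support"
    print(f"{kpi.name}: {kpi.attainment:.0%} ({status})")
```

The same structure maps directly onto the weekly review described above: anything below 100% attainment triggers the "what support do you need?" conversation rather than guesswork.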
-
Managing Performance in the Age of AI

The way we measure performance has always been tied to the nature of work — and that nature has changed dramatically over time.

First era → Operational work
Tasks were repetitive, well-defined, and measurable. Quantity mattered more than anything else (with a baseline of quality). It was simple: assign the task, check if it’s done, measure output. Performance was visible almost instantly.

Second era → Knowledge work
Things got more complex — expertise and judgment started to matter. Here, quality became far more important than quantity (though both still played a role). Businesses and clients were willing to pay a premium for quality. Timelines stretched, but that was fine if the final project met the standard. Performance measurement was still manageable: look at the outcome, assess quality, and you knew whether the person delivered.

Third era → Novel work (today, in the age of AI)
AI has changed the game — quality and quantity can now be generated at scale, on demand, 24×7. The differentiator isn’t how much you produce, or even how polished it looks — machines already excel at that. The differentiator is creativity: solving new problems in new ways.

And that’s far trickier to measure. Creativity doesn’t run on predictable timelines. You can’t guarantee novelty just by working harder or longer. A person could put in endless hours and still not create something truly new. Which means performance measurement has become slower and riskier. Businesses may have to wait much longer before they know whether someone can deliver in this “novel work” world.

So what’s the answer? I believe we need to shift focus away from lagging indicators (outputs, outcomes) and start looking at leading indicators — early signs that a person has the mindset and skills to thrive in this new environment.

Here are some early indicators:
1. Work ethics – discipline, consistency, ownership of outcomes.
2. Contribution in meetings – do they bring valuable perspectives, even if you need to draw it out of them?
3. Comfort with conflicting thoughts – can they handle ambiguity and contradictions without stalling?
4. Contextual understanding – do they grasp what’s happening around them and shape solutions that actually fit the situation?
5. Articulation – can they communicate ideas with clarity? In tomorrow’s world, articulation will matter more than knowledge, because knowledge is already democratized.

In short → managing performance in the AI era means measuring what machines cannot. Not just IQ, but also EQ and SQ. Not just end results, but the early signals of creativity, adaptability, and alignment.

This is harder — no doubt. But it’s also necessary. Because without evolving our performance measurement, we’ll keep applying old metrics to a new world — and miss what really drives value today.

What do you think — are leaders and organizations ready to rewire performance management for this reality?