For decades, engineering teams have been measured by lines of code, commit counts, and PRs merged—but does more code actually mean more productivity?

🚀 Some of the best developers write LESS code, not more.
🚀 The fastest-moving teams focus on outcomes, not just output.
🚀 High commit counts can mean inefficiency, not impact.

Recent research from DORA, GitHub, and real-world case studies from IT Revolution debunk the myth that developer activity = developer productivity. Here’s why:

🔹 DORA Research: After studying thousands of engineering teams, DORA (DevOps Research & Assessment) found that the best teams optimize for four key engineering performance metrics:
✅ Deployment Frequency → How often do we ship value to users?
✅ Lead Time for Changes → How fast can an idea go from code to production?
✅ Change Failure Rate → Are we improving quality, or just shipping fast?
✅ MTTR (Mean Time to Restore) → Can we recover quickly when things go wrong?
→ Notice what’s missing? Not a single metric is based on lines of code, commits, or individual developer output.

🔹 GitHub’s Data: GitHub found that developers working remotely during 2020 pushed more code than ever—but many felt less productive. Why? Longer workdays masked inefficiencies. More commits ≠ meaningful work; some were just fighting bad tooling or slow reviews. Teams that automated workflows (CI/CD, code reviews) merged PRs faster and felt more productive.

🔹 IT Revolution case studies: High-performing engineering orgs measure outcomes, not just outputs. The best teams:
- Shift from tracking commit counts → to measuring customer value.
- Use DORA metrics to improve DevOps flow, not micromanage engineers.
- View engineering productivity as a team effort, not an individual scoreboard.

If you want a high-performing engineering org, don’t just push developers to write more code. Instead, ask:
✅ Are we shipping value faster?
✅ Are we reducing friction in our workflows?
✅ Are our developers able to focus on meaningful work?
🚨 The takeaway? Great engineering teams don’t write the most code—they deliver the most impact. 📢 What’s the worst “productivity metric” you’ve ever seen? Drop a comment below 👇 #DeveloperProductivity #SoftwareDevelopment #DORA #GitHub #EngineeringLeadership
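The four DORA metrics named above are all computable from basic deployment records. A minimal sketch, assuming hypothetical record fields (`committed`, `deployed`, `failed`, `restored`) — not any official DORA tooling:

```python
from datetime import datetime

# Hypothetical deployment records over a one-week window: when the change was
# committed, when it reached production, whether it caused a failure, and
# (for failures) when service was restored.
deployments = [
    {"committed": datetime(2024, 5, 1, 9), "deployed": datetime(2024, 5, 1, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2024, 5, 2, 10), "deployed": datetime(2024, 5, 3, 10),
     "failed": True, "restored": datetime(2024, 5, 3, 12)},
    {"committed": datetime(2024, 5, 4, 8), "deployed": datetime(2024, 5, 4, 20),
     "failed": False, "restored": None},
]

def dora_metrics(deploys, window_days=7):
    """Compute the four DORA metrics from deployment records."""
    n = len(deploys)
    freq = n / window_days  # deployment frequency (per day)
    lead_times = [(d["deployed"] - d["committed"]).total_seconds() / 3600
                  for d in deploys]
    lead_time_h = sum(lead_times) / n  # mean hours from commit to production
    failures = [d for d in deploys if d["failed"]]
    cfr = len(failures) / n  # change failure rate
    restore_h = [(d["restored"] - d["deployed"]).total_seconds() / 3600
                 for d in failures]
    mttr_h = sum(restore_h) / len(restore_h) if restore_h else 0.0
    return {"deploy_freq_per_day": freq, "lead_time_h": lead_time_h,
            "change_failure_rate": cfr, "mttr_h": mttr_h}

print(dora_metrics(deployments))
```

Note what the function never looks at: lines changed, commit counts, or who authored what — consistent with the post's point that none of the four metrics measures individual output.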
Reasons Traditional Productivity Metrics Are Ineffective
Summary
Traditional productivity metrics in software development, such as lines of code or commit counts, often fail to capture meaningful impact and can lead to inefficiency and frustration. Instead, focusing on outcomes—like user value, quality, and team satisfaction—offers a more accurate and motivating measure of success.
- Measure outcomes, not output: Evaluate success based on user value, business impact, and team alignment, rather than arbitrary metrics like lines of code or hours worked.
- Focus on quality and clarity: Encourage thoughtful problem-solving, scalable solutions, and collaboration over hastily completed tasks or inflated metrics.
- Promote balanced workflows: Prioritize clear goals and flexible processes that prevent burnout and support creativity, ensuring both the team's well-being and productivity.
Intelligent CEOs and CFOs should resist McKinsey's recent offer of a magical framework to measure engineer productivity. It sounds reasonable on the surface, but it will lead to worse outcomes for companies, not better ones. [There is an excellent write-up of this issue by Kent Beck and Gergely Orosz, which you can find at The Pragmatic Engineer.]

Put simply, productivity is the measure of output that can be produced for a given set of inputs. If you're making steel, you can calculate the expenditures on tools, equipment, plant, and raw materials, as well as labor costs, compare that to a given amount of finished goods produced and sold, and measure the productivity of steel workers in Ohio compared with those in Argentina or Japan. These figures are comparable because the making of steel is a well-defined, deterministic process. Making software is not.

In software development, there isn't the same kind of correlation between the efforts, activities, and outputs of engineers and the value produced by software in the market. Engineers simply writing more code, or even shipping features more quickly, does not automatically lead to value for the customer or money in the bank for the business. What does matter in producing value for the business is clear objectives, a high degree of alignment across teams, and a lot of flexibility and autonomy on the ground. Measures that supposedly track productivity will in fact hinder each of these and lead to negative economic outcomes.

The thinking behind such measures is the legacy of Frederick Winslow Taylor and the bygone era of industrial production. McKinsey originated in those years and was a big champion of Taylor's work across many industries. From what we've seen lately, they haven't changed at all. If you are the CEO or CFO of a growth-stage tech company, follow their advice at your own peril.
-- If you do want to probe your organization to see how you can improve outcomes instead of outputs, give our diagnostic tool a try. You can find it here (https://lnkd.in/gMZZyzkm)
-
The best-performing software engineering teams measure both output and outcomes. Measuring only one often means underperforming in the other. While debates persist about which is more important, our research shows that measuring both is critical. Otherwise, you risk landing in Quadrant 2 (building the wrong things quickly) or Quadrant 3 (building the right things slowly and eventually getting outperformed by a competitor). As an organization grows and matures, this becomes even more critical. You can't rely on intuition, politics, or relationships—you need to stop "winging it" and start making data-driven decisions.

How do you measure outcomes? Outcomes are the business results that come from building the right things. These can be measured using product feature prioritization frameworks.

How do you measure output? Measuring output is challenging because traditional methods don’t measure it accurately:
1. Lines of Code: Encourages verbose or redundant code.
2. Number of Commits/PRs: Leads to artificially small commits or pull requests.
3. Story Points: Subjective and not comparable across teams; may inflate task estimates.
4. Surveys: Great for understanding team satisfaction but not for measuring output or productivity.
5. DORA Metrics: Measure DevOps performance, not productivity. Deployment sizes vary within and across teams, and these metrics can be easily gamed when used as productivity measures. Measuring how often you’re deploying is meaningless from a productivity perspective unless you’re also measuring _what_ is being deployed.

We propose a different way of measuring software engineering output. Using an algorithmic model developed from research conducted at Stanford, we quantitatively assess software engineering productivity by evaluating the impact of commits on the software's functionality (i.e., we measure output delivered). We connect to Git and quantify the impact of the source code in every commit.
The algorithmic model generates a language-agnostic metric for evaluating & benchmarking individual developers, teams, and entire organizations. We're publishing several research papers on this, with the first pre-print released in September. Please leave a comment if you’d like to read it. Interested in leveraging this for your organization? Message me to learn more. #softwareengineering #softwaredevelopment #devops
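The post does not disclose its model, but the general shape of a commit-impact metric can be sketched. This toy scorer is purely illustrative — the weights, heuristics, and field names are invented here and are not the authors' method; it only shows how such a metric might discount churn that carries no functional change:

```python
# Toy illustration only: a commit-impact score that tries to reward functional
# change rather than raw line counts. All weights are invented assumptions.

def commit_impact(files_changed):
    """Score one commit from per-file diff stats (list of dicts)."""
    score = 0.0
    for f in files_changed:
        # Ignore generated files and lockfiles entirely: huge churn, no signal.
        if f.get("generated") or f["path"].endswith(".lock") or \
                f["path"].endswith("package-lock.json"):
            continue
        churn = f["added"] + f["deleted"]
        # Down-weight test churn relative to production code (arbitrary choice).
        weight = 0.3 if f["path"].startswith("tests/") else 1.0
        # Diminishing returns: a 1000-line dump is not 100x a 10-line fix.
        score += weight * churn ** 0.5
    return round(score, 2)

commit = [
    {"path": "src/auth.py", "added": 40, "deleted": 9, "generated": False},
    {"path": "tests/test_auth.py", "added": 25, "deleted": 0, "generated": False},
    {"path": "package-lock.json", "added": 3000, "deleted": 2800, "generated": True},
]
print(commit_impact(commit))  # the 5,800 lines of lockfile churn contribute nothing
```

Even this crude sketch shows why raw line counts mislead: the lockfile dominates the diff but adds zero to the score.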
-
Product Management can't make Engineering go faster. That's the responsibility of the CTO or VP of Engineering. And even then, there's a ceiling to how much faster code can be written and tested before quality suffers. (Even with newfangled AI tools.)

Sadly, too many product managers, and even product leaders, are stuck in a cycle of pursuing dev velocity. But going faster to deliver low incremental value isn't the win.

"We increased velocity from 35 points per sprint to 42!" Nice. Unfortunately, those features didn't move the needle for customers or the business. So, was it really worth it?

OTOH, here's an item in the backlog that may take longer to build, but will help us win $1M in business this quarter. Yeah, I'll sacrifice sprint velocity for that. The ROI is self-evident.

Here's the problem with focusing exclusively on sprint velocity:
1. It assumes all features are created equal. (They're not.)
2. It will ultimately max out for every dev team unless more people are added. (Which, ultimately, has diminishing returns.)

Sprint velocity may be a fine productivity metric... but it's not an impact metric. Sprint velocity is a measure of operational optimization... not of return maximization.

What we, in product management, can do is:
✓ Maximize the value being delivered from existing R&D resources.
✓ Deliver healthy economics for the product.

IOW, it's not sprint velocity that should be our focus. It's margin velocity. That is, the speed at which revenue and margin can be realized. Every product manager can impact this. Here's how - including how we can talk about it in performance reviews and job interviews: https://lnkd.in/duB_bRGW

Our work isn't about whipping engineering into shape. Our value comes from maximizing the return on product development efforts. Get that story straight and you'll find your career transformed.

~~~
Help me get to 5,000 followers!
👍 Like this post.
♻️ Repost it to your network.
💭 Comment below.
➕ Follow Shardul Mehta to become a better PM.
-
Your software development team feels overloaded and struggles to deliver. Leadership wants an easy solution: they're considering measuring productivity by lines of code or commit counts.

But here's the problem: Counting lines of code is like judging a chef by how many ingredients they throw in a pot. More ingredients don't always mean a tastier meal. And counting commits? That's like rating a soccer player by how many times they kick the ball, not by whether they score goals.

If you use these flawed metrics:
• You'll encourage unnecessary complexity
• You'll waste valuable time on meaningless work
• Your best developers might become frustrated and leave

The real fix is simple: Measure success by clear, meaningful results.
• Are customers happy?
• Is your team delivering quality products?
• Can they handle the workload without burnout?

Focusing on outcomes, not arbitrary numbers, helps your team thrive. Shift your metrics.
-
𝗥𝗮𝗱𝗶𝗰𝗮𝗹 𝘁𝗵𝗼𝘂𝗴𝗵𝘁 𝗼𝗳 𝘁𝗵𝗲 𝘄𝗲𝗲𝗸: OKRs are a popular tool for driving results, but the side effects can be fatal to your product. How can you measure and drive progress more effectively?

Let's first look at the 𝗸𝗻𝗼𝘄𝗻 𝗮𝗱𝘃𝗲𝗿𝘀𝗲 𝗲𝗳𝗳𝗲𝗰𝘁𝘀:

𝗚𝗼𝗼𝗱𝗵𝗮𝗿𝘁'𝘀 𝗹𝗮𝘄: When a measure becomes a target, it ceases to be a good measure. Charles Goodhart, British economist, wrote about this in 1975 based on his observations in economics and finance. In 1976, David Campbell, a psychologist and social scientist, expanded on this with Campbell's law.

𝗖𝗮𝗺𝗽𝗯𝗲𝗹𝗹'𝘀 𝗹𝗮𝘄: The more a given metric is used to 𝗲𝘃𝗮𝗹𝘂𝗮𝘁𝗲 𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲, the more it’s likely to be 𝗴𝗮𝗺𝗲𝗱 and the 𝗹𝗲𝘀𝘀 𝗿𝗲𝗹𝗶𝗮𝗯𝗹𝗲 it becomes as a measure of success.

𝗧𝗵𝗲 𝗮𝗱𝘃𝗲𝗿𝘀𝗲 𝗲𝗳𝗳𝗲𝗰𝘁𝘀 𝗼𝗳 𝗢𝗞𝗥𝘀 𝗵𝗮𝘃𝗲 𝗯𝗲𝗲𝗻 𝗸𝗻𝗼𝘄𝗻 𝗳𝗼𝗿 𝗱𝗲𝗰𝗮𝗱𝗲𝘀, and we've seen Goodhart's law and Campbell's law play out: when a teacher is measured by test scores, it leads to teaching to the test or even faking results, but it doesn't improve learning in a classroom. Other examples include law enforcement, where the incentive sometimes is to increase arrest rates without improving security, and healthcare, where doctors are sometimes incentivized to maximize their success-rate metric by refusing difficult cases.

Setting targets for product metrics through OKRs or using metrics to evaluate individual performance only guarantees that those metrics will be optimized (or gamed). The OKR approach tempts us to bolster numbers on a failed experiment rather than let go of that metric and try a different experiment altogether. To build successful products, we need to learn what's working in our product and what we need to do better.

𝗦𝗼 𝘄𝗵𝗮𝘁 𝘄𝗼𝘂𝗹𝗱 𝗺𝗮𝗸𝗲 𝗢𝗞𝗥𝘀 𝗺𝗼𝗿𝗲 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲? There's a special place in hell for people who name laws after themselves, but here's 𝗗𝘂𝘁𝘁'𝘀 𝗹𝗮𝘄 𝗱𝗲𝗿𝗶𝘃𝗲𝗱 𝗳𝗿𝗼𝗺 𝗥𝗮𝗱𝗶𝗰𝗮𝗹 𝗣𝗿𝗼𝗱𝘂𝗰𝘁 𝗧𝗵𝗶𝗻𝗸𝗶𝗻𝗴. 😆

𝗗𝘂𝘁𝘁'𝘀 𝗟𝗮𝘄: Metrics can 𝗼𝗻𝗹𝘆 be effective in improving the product if they’re used towards 𝗰𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗶𝘃𝗲 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴.
Instead of setting targets or gauging individual performance through metrics, write hypotheses for every element of your Radical Vision and Strategy that you want to test, and measure to test those hypotheses. Then you can have regular discussions to share metrics and discuss what's working and where you want to course correct. Here's the link to the RPT book (Chapter 6 delves into this topic): https://lnkd.in/gZxpH2GM Share your experiences below with targets and OKRs! #product #radicalproductthinking
-
🤔 The Developer Performance Paradox

I've been fascinated by paradoxes for years, and today I'd like to share one I've discovered (I might not be the first one to discover it, though): "The Developer Measurement Paradox."

Here's the core insight: The moment you decide to use a metric to measure developer performance, that metric becomes unreliable. Conversely, when you decide not to use a metric for performance evaluation, it suddenly becomes a more reliable indicator.

Let me explain: Imagine a company announces they'll evaluate developers based on the number of bugs fixed. Almost immediately, you'll see artificial inflation – developers might intentionally create bugs to fix them later, or split one bug into multiple tickets. However, if this metric isn't tied to performance evaluation, developers have no incentive to game the system, making the bug-fix count potentially more meaningful.

This paradox applies universally to any quantitative metric in software development, whether it's lines of code, commit frequency, or story points. It's relevant for any complex system involving creativity and human ingenuity.

Is there a solution? While we can implement countermeasures (like using multiple metrics or introducing human feedback), these often lead to increasingly complex evaluation systems. The reality is that software engineering performance assessment remains more art than science.

💡 Key Takeaway: In the world of software development, the most reliable performance metrics might be the ones we choose not to rely on.

What are your thoughts on measuring developer productivity? Have you encountered similar paradoxes in your field? Let's discuss in the comments below. #SoftwareDevelopment #Engineering #Performance #ProductivityMetrics #TechIndustry
-
Measuring developer productivity by code output is meaningless. The average developer writes just 6 lines of code per day. When I first learned this, I thought it was ridiculous. How could that be productive? Now, I finally understand why.

The reality is that the best developers spend most of their time:
- Understanding the problem
- Architecting solutions
- Reading existing code
- Planning for scale
- Considering edge cases
- Reviewing others' code
- Mentoring junior developers

The most successful development teams I've led weren't the ones who wrote the most code. They were the ones who solved the right problems in the right way.

Want to build a high-performing development team? Stop counting lines of code. Start measuring impact.
-
Stop Measuring Hours. Start Tracking Results.

Google recently stated: “We measure results, not hours.” Yet so many companies still cling to outdated metrics that reward time over output, leading to inefficiency, burnout, and disengagement. If you’re still tracking time in chairs instead of value created, here’s why it’s time to rethink your approach:

⛔ The Problem with Measuring Hours:
↳ More time ≠ more productivity: Sitting at a desk longer doesn’t mean better work gets done.
↳ Encourages inefficiency: Employees stretch tasks to fill time rather than optimizing for impact.
↳ Fosters burnout, not performance: Long hours for the sake of it = fatigue, poor decision-making, and turnover.
↳ Creates a culture of presenteeism: People stay late to “be seen” rather than produce meaningful work.
↳ Discourages smart automation: Employees may avoid efficiency tools if working faster isn’t rewarded.
↳ Fails to recognize different work styles: Some thrive in bursts, others in structured blocks; hours ignore this.
↳ Penalizes high performers: Someone who delivers top results in half the time shouldn’t be punished for efficiency.
↳ Shifts focus away from real impact: What matters isn’t how long something took, but what was achieved.

✅ The Power of Measuring Results:
↳ Drives true performance: Success is based on impact, not just attendance.
↳ Encourages ownership & accountability: Employees focus on what actually matters.
↳ Boosts engagement & morale: People work smarter, not just longer, leading to happier teams.
↳ Recognizes and rewards efficiency: Top performers get credit for results, not hours logged.
↳ Fosters creativity and problem-solving: Employees focus on finding the best solutions, not just “doing time.”
↳ Supports flexible and remote work: When results matter, employees can work when and how they perform best.
↳ Increases agility and adaptability: Teams focus on outcomes, making them quicker to pivot and innovate.
↳ Creates a high-trust, high-performance culture: People are measured on what they contribute, not how long they sit at a desk.

💡 The Bottom Line
Your company’s or team’s success won’t be defined by hours logged; it will be defined by the quality of the work delivered and the value created. The best leaders and companies in the world know this. Do you?

Drop a comment below to share how you measure the results of the success you’re trying to achieve. 👇
_______
➕ Follow me, John Brewton, for content that Helps.
♻️ Repost to your networks, colleagues, and friends if you think this would help them.
🔗 Subscribe to The Failure Blog, where we learn more from our failure to enable our success, via the link in my profile.
-
I see leaders getting stuck with “my developers are telling me X but my metrics are telling me Y”. Your developers are always right.

Jeff Bezos recently shared, “when the anecdotes and the data disagree, the anecdotes are usually right. There’s something wrong with the way you are measuring.” In my experience, this is always true when it comes to developer productivity.

I met with an organization that was looking into the impact of build times. Some of their developers said: “What are you talking about? I didn’t do a build.” Turns out that their build-speed metrics included robotic background builds that had no impact on developers.

Don’t focus on metrics without also consulting your developers. If the data and stories don’t line up, your developers are likely right.
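The build-time story above comes down to a missing segmentation step. A toy sketch (all field names and durations invented) of how unsegmented telemetry can mislead, and how filtering by trigger fixes it:

```python
# Hypothetical build telemetry: developer-triggered builds mixed with
# automated background builds that no human ever waits on.
builds = [
    {"duration_s": 420, "triggered_by": "developer"},
    {"duration_s": 35,  "triggered_by": "ci-bot"},
    {"duration_s": 610, "triggered_by": "developer"},
    {"duration_s": 40,  "triggered_by": "ci-bot"},
]

def mean_build_time(records, who="developer"):
    """Mean duration of builds actually triggered by `who`."""
    durations = [r["duration_s"] for r in records if r["triggered_by"] == who]
    return sum(durations) / len(durations)

naive = sum(r["duration_s"] for r in builds) / len(builds)  # fast bot builds drag the mean down
real = mean_build_time(builds)  # what developers actually experience

print(naive, real)
```

In this made-up data the naive mean (276.25 s) looks healthy while the developer-experienced mean (515 s) is nearly twice as bad — exactly the "data vs. anecdotes" gap the post describes.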