Don’t Focus Too Much On Writing More Tests Too Soon
📌 Prioritize Quality over Quantity: Make sure the tests you have (even if that is just a single test) are useful, well-written, and trustworthy. Make them part of your build pipeline. Make sure you know who needs to act when a test fails, and who should write the next test.
📌 Test Coverage Analysis: Regularly assess the coverage of your tests to ensure they adequately exercise all parts of the codebase. Code coverage tools can help identify areas where additional testing is needed.
📌 Code Reviews for Tests: Just like code changes, tests should undergo thorough code reviews to ensure their quality and effectiveness. This helps catch issues or oversights in the testing logic before they are integrated into the codebase.
📌 Parameterized and Data-Driven Tests: Incorporate parameterized and data-driven testing techniques to increase the versatility and comprehensiveness of your tests. This lets you cover a wider range of scenarios with minimal additional effort (see the sketch after this list).
📌 Test Stability Monitoring: Monitor the stability of your tests over time to detect flakiness or reliability issues. Continuous monitoring helps you identify and address recurring problems, preserving the trustworthiness of your test suite.
📌 Test Environment Isolation: Run tests in isolated environments to minimize interference from external factors. This keeps test results consistent and reliable, regardless of changes in the development or deployment environment.
📌 Test Result Reporting: Implement robust reporting for test results, including detailed logs and notifications. This enables quick identification and resolution of failures, improving the responsiveness and reliability of the testing process.
📌 Regression Testing: Integrate regression testing into your workflow to detect unintended side effects of code changes. Automated regression tests help ensure that existing functionality remains intact as the codebase evolves, enhancing overall trust in the system.
📌 Periodic Review and Refinement: Regularly review and refine your testing strategy based on feedback and lessons learned from previous testing cycles. This iterative approach continually improves the effectiveness and trustworthiness of your testing process.
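To make the data-driven idea concrete, here is a minimal sketch in plain C. The tenths_to_degrees function, the case table, and the hand-rolled loop are illustrative assumptions for this example only; most test frameworks provide built-in parameterized-test support with better reporting.

```c
/* Minimal data-driven test sketch (hypothetical tenths_to_degrees function;
 * a real test framework would provide nicer assertions and reporting). */
#include <stdio.h>

/* Hypothetical unit under test: converts tenths of a degree to whole degrees,
 * truncating toward zero. */
static int tenths_to_degrees(int tenths)
{
    return tenths / 10;
}

struct test_case {
    const char *name;
    int input;
    int expected;
};

int main(void)
{
    /* One row per scenario; adding coverage means adding a row, not a new test. */
    static const struct test_case cases[] = {
        { "zero",                        0,    0 },
        { "positive truncates",          215,  21 },
        { "negative truncates to zero", -215, -21 },
    };
    int failures = 0;

    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; ++i) {
        int got = tenths_to_degrees(cases[i].input);
        if (got != cases[i].expected) {
            printf("FAIL %s: expected %d, got %d\n",
                   cases[i].name, cases[i].expected, got);
            ++failures;
        }
    }

    if (failures == 0)
        printf("all %zu cases passed\n", sizeof cases / sizeof cases[0]);
    return failures == 0 ? 0 : 1;
}
```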
Test-Driven Development (TDD) Practices
Explore top LinkedIn content from expert professionals.
Summary
Test-driven development (TDD) is a programming approach in which tests are written before the code itself, helping programmers ensure reliability, maintainability, and confidence in their work. With TDD, each feature or change starts with a failing test, guiding the direction of development and building a steady workflow.
- Focus on trustworthiness: Regularly verify that your tests can actually fail if something breaks, so you know they're catching real issues and not providing a false sense of security.
- Start simple: When adding tests to existing code, begin by setting up a single test and addressing each build or compile error one at a time, celebrating milestones along the way.
- Use testing as a workflow: Let the rhythm of writing tests inform your coding decisions, helping you pause, reflect, and improve your design as you go (a minimal sketch of this test-first cycle follows below).
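As a rough illustration of that failing-test-first cycle, here is a minimal sketch in plain C. The clamp_percent function, the deliberately wrong stub, and the plain assert calls are hypothetical choices for this example, not anything prescribed by the posts below.

```c
/* Red-green sketch: the test exists before the real implementation does. */
#include <assert.h>
#include <stdio.h>

/* Hypothetical unit under test; starts as a stub so the first run fails. */
static int clamp_percent(int value)
{
    return value;   /* deliberately incomplete: the test below must fail first */
}

int main(void)
{
    /* Red: the first assertion fails against the stub, proving the test can fail.
     * Green: the simplest fix (clamp below 0 and above 100) makes it pass.
     * Refactor: clean up with the passing test as a safety net. */
    assert(clamp_percent(-5) == 0);
    assert(clamp_percent(150) == 100);
    assert(clamp_percent(42) == 42);
    puts("clamp_percent tests passed");
    return 0;
}
```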
-
After training a group of embedded C programmers in TDD, we usually have to deal with the fact that most of their development work involves existing code. We have to go from the training environment and put the ideas into existing code. My Legacy Code Workshop is where we transition from the ideal training environment to the reality of adding tests to the code that is paying the bills. In the workshop, we get the test environments set up and write a single test case to prove the test runner is under our control (pass a test, fail a test). After that the fun begins.

Embedded C/C++, with target HW and RTOS dependencies, can be very hard to drag into the test environment. Often people want to give up after about 15 minutes. Sorry, that is not an option if I am there. We choose a function to call, and start pulling the code into the test environment one build error at a time. We look at the first error, solve it, then continue. By focusing on only one error at a time, we find the natural order to solve the problem that "this code is not in the test environment". Usually, the code under test is not designed to be tested, and knows too much about the target system. So, build errors are expected and can be discouraging.

Naturally, we first must track down header file dependencies, with an intermediate goal of compiling the production code's header file in the test case. Sometimes we also bump into vendor-specific compiler problems, like non-standard header files for sizing datatypes and keywords that give access to hardware registers. Once the test case compiles, we celebrate. Then we add the production code to the test build. Now we get to chase compile problems again. Eventually we get to linker errors, another milestone to celebrate.

With the code under test's linker errors, we must decide: do we want the real depended-upon code or a test-stub? My legacy-build script makes the choice easy for C code. The script will plug each external dependency with an exploding-fake. An exploding-fake is a test-stub that announces the function's name and fails the test. Now we can run the code and, guess what, it explodes on the first call to an exploding-fake. Decision time: should we add the real depended-upon code, or make a better fake? When the first test encounters exploding-fakes, we keep the fake dumb: hard-code a return value and let the test run.

Eventually, the code builds and we have a test that is executing a path through the function we called. The main frustration in the process is dealing with compiler and linker problems. Once those are solved, we turn our attention to designing tests that force the code through one path at a time, making test-stubs smarter as needed. That first test is expensive. The next tests are a lot cheaper. We consider the cost of adding that first test the cost of doing business. It is a cost associated with the change we are about to make.
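For readers who have not seen the pattern, here is a rough sketch of what an exploding-fake and its dumber replacement can look like in C. The dependency name IO_Read and the use of exit(1) as the failure mechanism are illustrative assumptions, not output from the author's legacy-build script or any particular test framework.

```c
/* Exploding-fake sketch: satisfies the linker, announces itself, and fails
 * the test the moment the code under test actually calls it. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical external dependency the code under test links against. */
uint32_t IO_Read(uint32_t address)
{
    fprintf(stderr, "EXPLODING FAKE: IO_Read(0x%08lX) called but not faked\n",
            (unsigned long)address);
    exit(1);    /* stand-in for the test framework's fail-the-test call */
}

/* Once a test path genuinely needs IO_Read, swap the explosion for the
 * dumbest fake that lets the test run: a hard-coded return value.
 *
 * uint32_t IO_Read(uint32_t address)
 * {
 *     (void)address;
 *     return 0u;   // make this smarter only when a test demands it
 * }
 */
```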
-
So, you've written a test. Your code compiles. The test runs without errors or exceptions, and it produces a green checkmark. All is well, right? Well... maybe. Do you know if your test can actually fail? If it can detect a change in the behaviour of the piece of logic that you're testing? Until you know that to be true, you're not done. TDD is useful for this purpose, in that it starts with writing a failing test, which means that your test was able to fail at some point in time. But does that test retain its ability to fail as your codebase evolves? How do you know? I'd love to see more teams spend a bit more time finding out whether the tests they rely on in their development process and build and deployment pipelines actually deserve that trust. Test your tests. Never trust a test you haven't seen fail. Don't trust a test you haven't seen fail for a while, either. Find out if it can (still) fail. This applies doubly to those tests that are generated for you, through an AI-powered tool or otherwise. Don't blindly trust those tests. Make sure they're testing what you think they're testing. Make sure they can fail. (And yes, mutation testing can help here.)
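One low-tech way to "see a test fail" is to plant a deliberate mutation and confirm the suite goes red. The sketch below does this by hand in C with a hypothetical is_leap_year function and a MUTATE compile flag; dedicated mutation-testing tools automate the same idea across a whole codebase.

```c
/* "Test your tests": a hand-made mutation check. Build normally and the tests
 * pass; build with -DMUTATE and a trustworthy suite must go red. */
#include <assert.h>
#include <stdio.h>

/* Hypothetical unit under test. */
static int is_leap_year(int year)
{
#ifdef MUTATE
    return (year % 4) == 0;    /* planted bug: ignores the century rule */
#else
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
#endif
}

int main(void)
{
    assert(is_leap_year(2000) == 1);
    assert(is_leap_year(1900) == 0);   /* catches the planted bug above */
    assert(is_leap_year(2024) == 1);
    assert(is_leap_year(2023) == 0);
    puts("is_leap_year tests passed");
    return 0;
}
```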
-
If you didn't have to write tests, would you? If you could point a smart AI bot (some future, very capable one) at your code project and it would write tests to cover it? That's a pretty interesting question to me. Do I want to spend time writing tests to validate that my code works? Sort of no, assuming a capable AI bot that could do it. Do I want to spend time writing tests? No... but I use TDD not just to write tests but as an essential part of my programming workflow now. The unit-test artifact is almost just a very valuable fringe benefit.

TDD is the Yang to my Evil Programmer Monkey Brain (EPMB) Yin. TDD keeps my EPMB in check. It is the good voice whispering in my ear about design. Things like: "What's the simplest thing that you could do first?", "Is that really the simplest thing?", or "Hey, pause and reflect after writing that line of code and maybe refactor." TDD inserts a natural cadence and flow into my code writing. That cadence has breaks/pauses that keep me from just layering EPMB code on top of EPMB code. It helps me break the "sunk cost fallacy" with those breaks (I'm much more likely to delete 1 line of code I spent 10 seconds on vs. 10-40 lines of code I've spent an hour on).

When I'm writing the test, because the cadence and process is muscle memory/mastered, I'm actually not thinking about writing the test; I'm thinking meta and in a different headspace, leveraging automaticity. It's like showering and washing my hair. When you are showering and washing your hair, you aren't focusing 100% on everything, you are on auto-pilot. That's why so many people have epiphanies when doing rote tasks like showering. I get focused epiphanies on my coding task while I'm writing the test.

TDD ensures that the code I write is testable and almost SOLID out of the box. Because I offload my code validation to the tests in my process, I can go days without using the console or debugger. My tests give me that, out of the box.

I'm sure there will be coders reading this thinking something like, "I get all that stuff w/o TDD, you're just not as good as me." Sure, that's possible. How do you describe the benefits of caffeine in the morning to someone who's never experienced it and whose life is just fine without it? They wake up and function without it just fine in the morning. Except they could have caffeine! (This is just an example... for the record, I don't do caffeine anymore either.)

So if there were a capable AI bot that could write all the tests for my project, I'd probably use it as a starting point for a legacy project that I was digging into. For writing new code, TDD is so integral to my code-writing process and has so many more benefits than just the test artifact that I'd keep doing it.
-
I don't #TDD everything, but I do TDD almost everything, especially production code. What do I skip?
1. Exploration, learning, discovery: e.g., how do I use the #SpringFramework's WebSocket support with #htmx on the front-end? I create a sample project that has no tests, just the minimal code to create what I want. Once I'm confident with how it all works, I TDD that code into my project. (This is how I implemented my WebSocket-based shared remote timer, which worked exactly as I planned the first time.)
2. HTML (server-side generated UI), mainly because for the types of apps I create, the ROI of HTML testing isn't worth it. Most of the behavior is underneath the UI anyway, and that's TDD'd, and I can test HTML generation more directly if/when I need it.
3. Calling external APIs, including OAuth2/OIDC. The former I test manually once (often using an automated test) and then run the tests again if the SDK or the remote's API changes. For AuthN, I manually test it, then trust the framework to continue to work.