At the beginning of my career, right out of Drexel University (with a year and a half of co-operative education experience in my field), I saw a chart that resonates with me to this day. That chart, included with this article, illustrates how the cost of identifying and fixing a defect increases dramatically through the phases of the Systems Development Life Cycle.
My first project out of school was for a state government agency. I was a member of a team building a client-server application using Object Oriented Development in a waterfall methodology. We were not a cross-functional team but rather a "hodgepodge" of full-time employees from the client side, consultants from my employer's side, contractors, and interns, each with different loyalties, perspectives, cultures, and approaches to development and standardization. Suffice it to say, we had our challenges from the moment of our kickoff meeting. Our project manager was non-technical and concerned only with profit margin and billable hours, not with how we performed, collaborated, or communicated as a team, let alone the quality of our work.
With nobody taking the helm, and with that chart showing the rising cost of fixing a defect fresh in my mind, I assumed a team lead role and instructed each team member to write unit test cases before starting development. I then had another member of the team perform a "sanity check," or peer review, of those test cases to make sure that none was missed and that each was appropriate. Only then did we start development (and I made sure we all used the same standards for naming objects, commenting our code so the client could continue to support the application after we released it to production, and so on). After each developer finished coding a module, they ran those same unit test cases against it to verify that it met the customer's requirements and had no defects, and a peer repeated the testing to confirm that every test case passed. I was then responsible for integrating all of the modules into the main application and performing integration testing to ensure that the modules worked together as well as independently. (A QA lead reviewed my integration test cases and used them to test the application after I had completed my own testing.)
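The test-first sequence described above can be sketched in a few lines. This is a hypothetical example, not code from the actual project: the function, its rules, and its names are illustrative. The point is that the test cases capture the requirements before the implementation exists, and a peer can review them as a checklist.

```python
import unittest

# Hypothetical module under test. In a test-first workflow, the test
# cases below are written (and peer-reviewed) before this body exists.
def calculate_late_fee(days_overdue, daily_rate=0.50, cap=10.00):
    """Return the late fee for an overdue item, capped at a maximum."""
    if days_overdue <= 0:
        return 0.00
    return min(days_overdue * daily_rate, cap)

class TestCalculateLateFee(unittest.TestCase):
    # Each test corresponds to one requirement the peer reviewer can
    # check off against the specification.
    def test_no_fee_when_not_overdue(self):
        self.assertEqual(calculate_late_fee(0), 0.00)

    def test_fee_accrues_per_day(self):
        self.assertEqual(calculate_late_fee(4), 2.00)

    def test_fee_is_capped(self):
        self.assertEqual(calculate_late_fee(100), 10.00)
```

A file like this can be executed with `python -m unittest <filename>`, and the same test cases are rerun unchanged by the developer and then by a peer before the module is submitted for integration.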
The end result was shocking: we found an unexpectedly high number of defects (over 70) during unit testing, and the developers fixed them before submitting their modules to me for integration. By the time I had integrated all of the modules, we had a total of five defects. The project manager had built two months of testing into the project schedule for defect identification, resolution, and retesting. Instead of two months, the application was integration tested and signed off by the QA lead in two WEEKS. The client received a virtually defect-free application a month and a half ahead of schedule. (The consulting firm that employed me was happy that our client was happy, but not thrilled about losing a month and a half of billable hours. But that's another story.)
Quality goals should not be sacrificed in response to time pressure. In 2006, Ken Schwaber advised: "We can only change ourselves (it's your responsibility to fight for quality). We have a professional responsibility to reject delivering poor quality or overcommitting on iterations, not just because of quality, but because it can kill your company." (https://www.infoq.com/news/Ken-Schwaber-Sacrificing-Quality). Quality should never be sacrificed.
So here are some tips on how to preserve and improve quality:
1) Product Owners should not apply excessive time pressure to the development team, as this can reduce quality.
2) One of the three prerogatives of the development team is to produce quality work.
3) Development teams can use technical practices to improve quality, such as continuous integration (which helps detect integration errors early) and refactoring (which improves product quality and reduces the cost of accommodating new features).
4) As Scrum Teams mature, it is expected that their definitions of “Done” will expand to include more stringent criteria for higher quality.
5) During the Sprint Planning meeting, if the Scrum Team does not set aside enough time for technical debt and bugs, then the quality of the Increment and the team's future product delivery capability will suffer.
6) During the Sprint Retrospective, the team applies inspection (one of the three pillars of Scrum) to assess and analyze the quality of the sprint, then creates and implements an action plan to address improvement opportunities and improve the quality of future sprints.
7) Metrics should be created to measure quality. These metrics can be used to track the effectiveness of action plans coming out of the Sprint Retrospective, as well as throughout the course of the sprint. Such metrics may include the build success ratio (working, tested features), issues/defects and related costs by sprint, and the test pass rate (with a target of at least 80 to 95%).
8) The Scrum Team assigns its own tasks and is accountable/responsible for the quality of its work (management is not telling people what to do).
9) When the development team fails to adopt Scrum in its entirety, communication and collaboration may be impacted, which in turn may affect quality.
10) Finding defects sooner results in higher-quality releases, which means lower cost and less technical debt.