Scrum Is Hard
We are always in trouble in software development. Our customers want everything, they want it free, and they want it now. If we are successful, they just give us a harder problem next time.
Thirty years ago when I started in software development, we really didn’t know how long it was going to take to build something big enough to be interesting. We may have had a project plan and some charts, but things never actually worked out the way the plan said they would.
Ron Jeffries often says that a traditional project has no API. You can’t get any real information out of it and you can’t put any guidance into it. We never really know our progress, except that we are not going fast enough.
Knowing Our Progress
One of the reasons we use a process such as Scrum is to understand our progress. We have a new version of our product at the end of each Sprint. We can show it to our customers and have them tell us how they would like it to change. We can all look at it and make some intelligent guesses about how much we are likely to get done by some future date.
For this to work, our product has to actually work the way we all expect it to. The progress we are showing on our burn charts has to be accurate. If our product has bugs, if it has defects, and we don’t know how many there are or how long it will take to correct them, then we don’t know our real progress. Our Product Owners and our Stakeholders will be making product decisions based on false information. All you know is that your true progress is your published progress minus your bugs.
And, you don’t know how many bugs you have.
If you are going to do this Scrum/Agile thing because you want to know your true progress, then you can’t have any bugs in your software! If we are going to get rid of the bugs, first we have to know where they are. This calls for testing. I call these kinds of tests Customer Tests (using the XP term for what Scrum calls Product Owner). These tests belong to the Customer. They let her know that the product we are building does what she wants it to do. The Customer may not be able to build them or run them, but she has to understand them well enough to trust them to do their job. If you don’t have comprehensive Customer Tests, you won’t know your real progress and you will have lost the benefit you are trying to get from Scrum.
Unlike a traditional process, where you wait until the end to test, we have to test every Sprint. Otherwise, we will be over-reporting progress. If we do this testing manually, with testers typing from a script, the effort required to do the testing will increase every Sprint. That is not sustainable. The only workable solution is to automate the tests. You write new tests every time you add a new feature, and run them essentially free thereafter. Nothing else works.
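To make that concrete, here is a minimal sketch of what one automated Customer Test might look like, written in Python. Everything in it is invented for illustration: the Cart class, its add and total methods, and the discount rule stand in for whatever feature your Customer actually asked for. The point is that her expectation gets written down once, in terms she can read and agree to, and then runs essentially for free every Sprint thereafter.

    # Hypothetical Customer Test: "an order of $100 or more gets a 10% discount."
    # Cart, add(), and total() are illustrative names, not a real library.
    import unittest

    class Cart:
        def __init__(self):
            self._items = []

        def add(self, name, price):
            self._items.append((name, price))

        def total(self):
            subtotal = sum(price for _, price in self._items)
            # The business rule the Customer asked for: 10% off at $100 and above.
            return subtotal * 0.9 if subtotal >= 100 else subtotal

    class OrderDiscountTest(unittest.TestCase):
        def test_large_order_gets_ten_percent_discount(self):
            cart = Cart()
            cart.add("keyboard", 60)
            cart.add("mouse", 40)
            self.assertAlmostEqual(cart.total(), 90)

        def test_small_order_pays_full_price(self):
            cart = Cart()
            cart.add("mouse", 40)
            self.assertEqual(cart.total(), 40)

    if __name__ == "__main__":
        unittest.main()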
Delivering At A Constant Rate
Scrum tells us that at the end of every Sprint we are to have a shippable Product Increment. Some folks get confused about shippable. It is not about minimum viable product, it is about quality. If this is all the thing needs to do, then we can ship it now. The idea is that the business people get to decide things like when to ship and the techie folks get to decide things like how long it will take to build a given feature. But, that is a story for another day.
If our team is going to have a shippable Product Increment at the end of the first Sprint, two weeks from when we start, we aren’t going to be able to spend a whole lot of time coming up with an elaborate design, building cool frameworks, and so on. We are going to have to start with a Simple Design: one that only supports the features we are going to build in the first Sprint. We are going to have to elaborate and extend that design as we add new features, so that by the time we really ship, we have an appropriate design and cool frameworks.
When we add a new feature, we are going to have to build it so that it looks like it was supposed to be there. The problem with this is that no one knows how to do it! What happens is that when we add a new feature our design gets a little messy. It gets a little crufty. As we add more and more features, our design deteriorates, so that after a few Sprints it gets harder and harder to add new features.
Pretty soon we have a big ball of mud. We have all seen that happen. It takes longer and longer to get anything done. We change something and then something way over there breaks. Maybe we should have looked at all the features we were going to build and come up with a design to support them all at the beginning?
Congratulations, you have just invented Waterfall!
No, what you have to do is keep the design clean. Every time we add a new feature, we have to clean up our code and improve our design.
We call this Refactoring. It means improving the design of existing code without changing its behavior. If you don’t do this all the time, your design will deteriorate and it will take longer to add new features. If that happens, we can no longer predict our rate of progress. (Remember, that is why we were doing Scrum to start with).
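Here is what that looks like in miniature, with a checkout function and a discount rule invented just for illustration. The behavior before and after is identical; only the design gets better, so the next feature has an obvious place to go.

    # Before refactoring (the behavior we must preserve):
    #
    #     def checkout(items):
    #         subtotal = sum(price for _, price in items)
    #         if subtotal >= 100:
    #             return subtotal - subtotal * 0.10
    #         return subtotal
    #
    # After refactoring: same observable behavior, but the discount rule now
    # has a name and a single home, ready for the next feature to build on.

    def discount(subtotal):
        return subtotal * 0.10 if subtotal >= 100 else 0.0

    def checkout(items):
        subtotal = sum(price for _, price in items)
        return subtotal - discount(subtotal)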
Ok, we’ve got to Refactor. But if Refactoring is changing the design without changing the behavior, how do we know we haven’t changed any behavior? We test. We already have Customer Tests; is that good enough?
No. Those are too coarse grained. We need finer tests. Let’s call those Developer Tests. They belong to the Development Team. They are used to prove to the developers that the code they are writing does what they think it is supposed to do. They explain how the code should be used, and maybe how it shouldn’t be used. They protect the code when things around it change.
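A Developer Test is much smaller than a Customer Test: it pins down one piece of code at a time. A minimal sketch, again with invented names, using Python’s built-in unittest:

    import unittest

    def discount(subtotal):
        # The unit under test: 10% off at $100 and above (an illustrative rule).
        return subtotal * 0.10 if subtotal >= 100 else 0.0

    class DiscountTest(unittest.TestCase):
        # These tests document how discount() should be used and pin down the
        # edge the developers care about: exactly $100.
        def test_no_discount_below_threshold(self):
            self.assertEqual(discount(99), 0.0)

        def test_discount_starts_at_threshold(self):
            self.assertAlmostEqual(discount(100), 10.0)

        def test_discount_above_threshold(self):
            self.assertAlmostEqual(discount(250), 25.0)

    if __name__ == "__main__":
        unittest.main()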
Bringing It All Together
We need a development process with an API. The folks paying our salaries want to know how much we are going to get done by some arbitrary date. In order to know that, we are going to use an Agile process like Scrum. In order for that to actually work, the product we are building can’t have any defects. Otherwise, we will be over-reporting progress. To do this without an ever-growing manual testing effort, we have to have automated Customer Tests. Nothing else works.
We have to start with a Simple Design. There isn’t time to do anything else. We have to keep that design clean as we add new features. This is done by Refactoring as we go. Nothing else works. When we Refactor, we have to be sure we haven’t broken anything. This means we have to have fine grained Developer Tests. Nothing else works. The best way we know to do this is called Test Driven Development, or Test Driven Design.
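The rhythm of Test Driven Development is simple enough to sketch. Here it is in miniature, with an invented slugify function as the feature: write a failing Developer Test first, write just enough code to make it pass, then refactor with the test standing guard.

    import unittest

    # Step 1 (red): write the test first. Run it; it fails, because slugify()
    # doesn't exist yet. That failure tells us the test is really testing something.
    class SlugifyTest(unittest.TestCase):
        def test_spaces_become_dashes_and_case_is_lowered(self):
            self.assertEqual(slugify("Scrum Is Hard"), "scrum-is-hard")

    # Step 2 (green): write just enough code to make the test pass.
    def slugify(title):
        return title.lower().replace(" ", "-")

    # Step 3 (refactor): with the test green, clean up the design, re-running
    # the test after every change to be sure the behavior hasn't moved.

    if __name__ == "__main__":
        unittest.main()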
Once we have all these tests, we would be crazy not to run them. This is called Continuous Integration. CI is a set of tools and practices that ensure every time someone changes the code and checks it in, the product is built from scratch and all of the tests are run. Doing this means that you know about a regression within minutes of its introduction.
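Conceptually, a CI job is nothing fancier than a script that every check-in triggers: build the product from scratch, run every test, and complain loudly if anything fails. Here is a toy sketch of that idea; the two commands are placeholders for whatever build and test tooling your project actually uses.

    # A toy sketch of what a CI job does on every check-in. A real setup would
    # use a CI server and your project's own build and test commands.
    import subprocess
    import sys

    STEPS = [
        ["python", "-m", "compileall", "-q", "."],        # "build" the product from scratch
        ["python", "-m", "unittest", "discover", "-v"],   # run every Customer and Developer Test
    ]

    def main():
        for step in STEPS:
            result = subprocess.run(step)
            if result.returncode != 0:
                # Fail loudly, so the team hears about the regression within minutes.
                print("CI FAILED at: " + " ".join(step), file=sys.stderr)
                sys.exit(result.returncode)
        print("CI passed: the Product Increment is still shippable.")

    if __name__ == "__main__":
        main()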
Teams that don’t do these things do not prosper. They get to the end of a Sprint and they don’t know how much progress they have made. They start out making great progress and then after a few Sprints, they start going slower and slower.
These practices are important. If your team is not using them now, they need to start!
Interview with Chet Hendrickson and Ron Jeffries
If you’d like to learn more from Chet, listen to the interview with him and Ron Jeffries, recorded as part of our Get Agile Podcast.
Chet Hendrickson
Chet has over eighteen years’ experience coaching and training Agile teams. A co-author of Extreme Programming Installed, he is a popular and effective trainer in the Scrum, Extreme Programming, and Agile disciplines. Chet is a well-known contributor to software conferences worldwide. He has a quarter century’s experience in information technology and software development. He has been active in Agile software development since its beginning, and was the team leader on the Chrysler C3 project, the first project to follow all the practices of Extreme Programming. Chet was the first signatory of the Agile Manifesto.