In my last post I wrote about the feedback you lose when you install a bug-fixing team. Recently I encountered another form of missing and late feedback:
One of my customers introduced Scrum in several teams a couple of months ago. One team, and later two of the teams, worked on a self-made framework (I will call them the core-teams), on which the other teams (who I will call the customer-teams for now) built the customer system. As the core-teams worked their way through the framework-to-be, the customer-teams used the newly built functionality to develop their new code.
A common pitfall for technical teams is a heavy dependency between them. And it played out as expected: the customer-teams often couldn't finish their stories because functionality from the core-teams was missing. Even worse, they began to declare their stories "almost done", which reduced the pressure to change the situation. As the release date drew nearer, the pressure from the unfinished stories increased.
The teams suppressed the message ("we have a problem with this dependency") to ease the pain, but the wound (the dependency itself) was still there and etched its way through the release. The missing feedback made the release even more painful because it led to a certain blindness about how far along the teams really were, and that transparency should be one of the core capabilities of Scrum.
The second pitfall compounded the first: a lack of automated tests. When the core-teams changed functionality, they could interfere with stories the customer-teams had already finished. In one case a customer-team could not deliver anything at the end of a sprint because of a change the core-team made shortly before the review. Before that change, the stories would have been declared finished.
While trying to finish their own stories, the core-teams had no feedback at all that their work would break the customer-teams' stories. They tested their features directly, but not indirectly through the features of the customer-teams. If automated tests had been up and running, at least for the newly built functionality of the customer-teams, chances are they would have had that feedback about interference between the features. Automated tests are a very useful and fast feedback tool.
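To sketch what that indirect feedback could look like: the post doesn't say which language the framework was written in, so here is a minimal Python example with entirely hypothetical function names. The idea is that a customer-team test exercises a core-team framework function indirectly, so the core-team sees a red test the moment a framework change breaks a feature built on top of it.

```python
# All names here are invented for illustration; the real framework
# and features belong to the teams described in the post.

def parse_amount(text):
    """Hypothetical core-team framework function."""
    return int(text.replace(",", ""))

def monthly_total(lines):
    """Hypothetical customer-team feature built on the framework."""
    return sum(parse_amount(line) for line in lines)

# A customer-team test. If the core-team also runs it in their build,
# a change to parse_amount that breaks this feature fails immediately,
# instead of surfacing shortly before the sprint review.
assert monthly_total(["1,200", "800"]) == 2000
```

The key design choice is simply that the core-teams run the customer-teams' tests too; the test itself can stay ordinary.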
The third pitfall, however, has nothing to do with how the teams are composed but with how they test. As with many other teams, testing is thought to be useful only at the end of the development cycle. Apart from occasional programmer testing, it takes place only on special QA computers. Combined with rare commits to version control and a build process that involves another department (yes, I know this is common!), the average time between writing code and having it tested is, I'm pretty sure, more than a day.
This is very slow feedback. As a programmer, I always wanted to be sure I hadn't broken anything that worked before I checked in my changes. The faster I got feedback, the faster I could develop, because I didn't have to double-check everything by hand. An automated test suite is a blessing for my nerves, even though I know it will never catch everything that could possibly go wrong.
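A sketch of what that pre-check-in feedback loop can look like, again in Python with a hypothetical framework function, since the post names no concrete code:

```python
# Hypothetical example: a check a programmer runs locally in seconds
# before every check-in, instead of waiting a day for a QA machine.

def format_invoice_number(year, sequence):
    """Hypothetical framework function other teams depend on."""
    return f"INV-{year}-{sequence:05d}"

def test_format_invoice_number():
    # If anyone changes the format, this fails on the developer's
    # machine within seconds of running the suite.
    assert format_invoice_number(2024, 42) == "INV-2024-00042"

if __name__ == "__main__":
    test_format_invoice_number()
    print("all tests passed")
```

The content of the test matters less than where and when it runs: on the programmer's own machine, before the commit.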
The customer now struggles to install automated tests to unburden their testers of time-consuming regression tests, and maybe even to support the programmers, as I would have suggested. But there's another pitfall: working only with a very expensive test tool that acts on the GUI of the system under test, they are bound to the QA computers again. So they won't shorten their feedback cycle. The testers now write masses of tests, but the most recent functionality still isn't covered by automation.
In this case, as in many others, the chosen tool with its very expensive licences prevents the programmers from testing on their own computers before checking in. And that is the worst thing about these tools: they prevent fast feedback. For the programmers they are of little use because the tests cannot drive development. Even for the testers, I consider this feedback way too slow.
Testing through the GUI leads to another pitfall, which I'm not yet sure they will encounter: believing everything has been tested just because the software is tested through the GUI. I once heard somebody say he'd rather not test below the GUI because he might leave something out. That tells me he banks on his GUI tests. But since software is never tested until it is checked and explored, as Elisabeth Hendrickson puts it, I would never expect my test automation to find all the issues in my GUI (if any). GUI tests tend to be hard to set up, brittle, and very slow. I like none of that. And since I expect to explore my software anyway, I am pretty sure to cover GUI issues as well.
Again this is related to feedback: some GUIs are not easily automated (and some programmers don't help by inserting test hooks), so it takes time to work them out. And the GUI slows the feedback down.
There are so many ways to slow down your feedback. Look for them actively! Fast feedback also helps with fast development: it doesn't automatically make you faster, but slow feedback can surely slow you down. And be careful not to ease the pain without curing its source!