Assessment is required every time a decision needs to be made, and decisions need to be made daily during software development. Daily assessment is a process meant to be integrated into the overall development process.
Software testing used to be an activity for lesser programmers. It was tedious, it was repetitive; it was everything a respectable programmer would hate. It took two guys, a couple of classes, and a nice xUnit naming scheme to turn the game around. Now you can tell a respectable programmer by the tests they write.
The battle was not easy. It took a decade of agility to make tests gain their rightful place. And it is not even a closed chapter, as we can still find skeptics pointing their fingers at the cost of writing tests, and at how this takes away from effort that could otherwise be spent on active programming.
Even if writing tests involves extra costs at first, we know they save us great pain later on, because as we write more code we unwittingly break assumptions made by the existing code. In other words, we accept that we cannot control all the details of the system ourselves, and so we make sure that the computer can do it for us. And the funny thing is that thinking in terms of tests turns out to make us better programmers, too.
But what exactly makes unit tests so useful? First, the ability to run them continuously makes for a fantastic feedback machine. Second, because they tend to be extremely contextual, they provide the kind of feedback that can lead to immediate action. And finally, when unit testing is deeply intertwined with daily development it has a multiplying effect: it acts as a checker, it shapes the design of the system, it documents the functional intentions of the system, and it enables communication.
While testing is important, it concerns but one aspect of a software system, namely the expected functionality. However, many other concerns require similar attention. The internal quality of the system's structure is as important. The interaction between the different technologies used is important. The traceability of features is as important. The conformance to external guidelines and constraints is as important. The performance is as important. Security issues can be as important. Even the cleanness of the build system can be an important issue.
The goal of the daily assessment is to identify concerns that are relevant for the team and for the system, to figure out the actions that need to be undertaken to fix the issues, and to integrate these actions in the daily development.
The starting point is identifying and making explicit the concerns that are important for at least one member of the team. These concerns can take various forms, covering a large space from broad architectural decisions to low-level usages of a technology.
Whatever form a concern takes, the important part is that it is important to someone. It does not have to be abstract or clever to be relevant. As soon as there is a stakeholder, the concern is legitimate, because it bears value in the context of the current project.
Once a technical concern is identified, create an automatic checker for it, and integrate it into the continuous integration mechanism. The checker is much like a unit test, only it targets concerns that are not about functionality. The team needs both the skill and a technical infrastructure to do this. Moose is one such infrastructure.
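To make the analogy concrete, here is a minimal sketch of such a checker in Python, written in the shape of a unit test. The concern it encodes is purely illustrative, as are all the names: it assumes a layering rule saying that modules under a hypothetical `ui/` tree must not import the `db` layer directly. The point is only that a structural concern can be verified by the very same machinery that already runs the functional tests.

```python
# A hypothetical architectural checker, written like a unit test.
# Assumed concern (illustrative): code under ui/ must not import the
# db layer directly; it must go through the service layer instead.
import ast
from pathlib import Path

FORBIDDEN_PREFIX = "db"    # the layer that ui/ must not touch (assumed rule)
CHECKED_TREE = Path("ui")  # hypothetical source tree to scan


def forbidden_imports(source: str, filename: str):
    """Return (filename, lineno, module) for every import of the db layer."""
    violations = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            if name == FORBIDDEN_PREFIX or name.startswith(FORBIDDEN_PREFIX + "."):
                violations.append((filename, node.lineno, name))
    return violations


def test_ui_does_not_touch_db_layer():
    all_violations = []
    for path in CHECKED_TREE.rglob("*.py"):
        all_violations += forbidden_imports(path.read_text(), str(path))
    # A failing run lists the exact places to walk through at the stand-up.
    assert not all_violations, f"ui imports db directly: {all_violations}"
```

Wired into continuous integration, such a checker does not merely report that the rule is broken; it points at the exact files and lines, which is precisely the contextual feedback the daily discussion needs.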
Every day, the team holds a short technical assessment stand-up to discuss violations of the concerns. This is not to be confused with the regular stand-up meeting. The assessment stand-up involves only the engineers, because the discussion needs to be deeply technical to be effective.
The stand-up follows a simple ritual. First, the assessment facilitator describes the concerns of the day. Anyone can challenge a concern. In that case, the concern is debated, and the primary characters are the challenger and the stakeholder of the concern (remember that a concern exists only if it has a stakeholder). The debate can end in one of three ways: the concern is confirmed as it stands, it is amended, or it is invalidated.
Once the concern is agreed upon, the discussion focuses on identifying the actions needed to rectify the problems related to it. This discussion must get specific by going through the actual places in the code that exhibit the problem. This is the moment when code reading becomes important for dealing with the little details.
While the main goal is to get to actions, the hidden goal of this exercise is to spread knowledge about important decisions and to help the team reach a consensus. In the grand scheme of things, whether a concern is invalidated or not is much less relevant than the high-level arguments discussed around it. These are the very decisions of which the overall design and architecture are made, and this is what gets discussed.
Finally, when the team knows the path of actions to correct the situation, the effort is estimated. Problems that require significant effort are pushed onto the backlog to be planned for, so as not to interfere with the overall goal of the iteration. Small tasks, however, get fixed the same day. A task is considered small if it can be solved in a short amount of time (typically 15 minutes).
A surprising number of concern violations are solvable in less than 15 minutes. Perhaps a class needs to be renamed, or moved to another package. Maybe the client code is calling a private server component instead of the published API. Or perhaps an annotation does not have the right parameters.
Everyone has 15 minutes. If tasks get distilled into chunks of less than 15 minutes that still offer some feeling of progress, things get fixed. And if this is repeated daily, they get fixed all the time.
We need customized tools that provide contextual feedback. One reason regression tests are so useful is that they provide exactly that. It would be great to create assessment tools just like we now create tests. The only trouble is that the cost of building such tools is too high. Or, at least, it is perceived to be, especially if you have to write them from scratch. A middle ground can be found in having a platform upon which these tools can be built. Drawing on the testing parallel, we would need the equivalent of an xUnit-like platform.
Moose is such a platform, conceived exactly to ease the building of complete and customized assessment tools. While Moose tackles the problem of assessment globally, other solutions exist depending on your goal. For example, for querying you can use text-based tools like grep, or more elaborate ones that deal with AST information, like PMD. Using platforms like these makes tool building practical.
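By way of illustration, the xUnit analogy can be sketched in a few lines of Python: a tiny kernel that registers named concerns as checkers, runs them all, and produces a contextual report. Every name here is hypothetical, and the example concern about `Manager` classes is invented for the sketch; a real platform such as Moose offers a far richer model of the system, but the shape of the idea is the same.

```python
# A minimal sketch of an assessment-platform kernel, by analogy with
# xUnit: concerns are registered as named checkers, run together, and
# reported with enough context to act on. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Violation:
    concern: str   # which concern was violated
    location: str  # where in the system
    detail: str    # what to do about it

CHECKERS: List[Callable[[], List[Violation]]] = []

def concern(name: str):
    """Decorator that registers a function as the checker for a named concern."""
    def register(check: Callable[[], List[Tuple[str, str]]]):
        def run() -> List[Violation]:
            return [Violation(name, loc, detail) for loc, detail in check()]
        CHECKERS.append(run)
        return check
    return register

def run_assessment() -> List[Violation]:
    """Run every registered checker; the report feeds the daily stand-up."""
    report: List[Violation] = []
    for check in CHECKERS:
        report.extend(check())
    return report

# Example checker for a hypothetical naming concern: class names ending in
# "Manager" hide responsibilities. The class list is hardcoded for the sketch.
@concern("no-manager-classes")
def check_names():
    names = ["OrderService", "UserManager"]
    return [(n, "rename to reveal responsibility")
            for n in names if n.endswith("Manager")]
```

The payoff of the platform is that each new concern costs only the body of its checker; registration, execution, and reporting come for free, exactly as xUnit does for functional tests.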
Once you control such an infrastructure, building a small analysis tool costs almost nothing, especially if you compare the cost of building it with the costs saved by using it.
However, one cost still remains, and it should not be taken lightly. Just like in the case of unit testing, the greatest challenge is to shift the state of mind from what to do to how to do what to do.
It might feel clumsy at first, but once you get used to continuous and contextual feedback, it will get you hooked.
Just give it a try. Leap. Daily.