Blog

Architecture happens

Your system's architecture takes shape over time.

It can take shape silently, or it can take shape openly, but it happens either way. The question is: do you do anything about it?

Posted by Tudor Girba at 30 December 2013, 9:48 am with tags assessment, process link

Treat software as data

Suppose I tell you that the data depicted in the graph below comes from a production insurance database, and that it shows how some tens of thousands of nodes are connected by some hundreds of thousands of edges.

[Figure: Argouml.png — the graph in question, with tens of thousands of nodes and hundreds of thousands of edges]

Now suppose that I asked you, every day, to investigate the impact of modifying some variables in this graph. How would you approach the problem? The problem is obviously complicated and holds too many variables. Thus, you would likely not want to do it by manually inspecting textual descriptions, because that would not scale in the long run.

Instead, you would probably want to rely on a tool that enables you to extract and browse the meaningful information. And if you do not have this tool, it would likely be better to build it first, especially if you are a software engineer.


Now, I tell you that the data does not come from an insurance database. It comes from a software system and it depicts a fraction of the connections in that system (classes, methods, and attributes as nodes; containment, inheritance, and calls as edges). If I now ask you to investigate the impact of modifying some methods from this graph, how would you approach the problem?

The problem is obviously complicated and holds too many variables. Thus, you should not want to approach it by manually inspecting textual descriptions, either.
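
One way to make such questions tractable is to treat impact analysis as a reachability problem over the dependency graph. Below is a minimal sketch of that idea in Java; the graph representation (a map from each node to its direct dependents) and the use of plain strings as node names are assumptions made for illustration, not a prescribed design.

import java.util.*;

public class ImpactAnalysis {
  // dependents.get(node) = the nodes that directly depend on node
  // (callers, subclasses, containers) -- assumed to be precomputed
  public static Set<String> impactedBy(String changed, Map<String, Set<String>> dependents) {
    Set<String> impacted = new HashSet<>();
    Deque<String> worklist = new ArrayDeque<>();
    worklist.push(changed);
    while (!worklist.isEmpty()) {
      String current = worklist.pop();
      for (String dependent : dependents.getOrDefault(current, Set.of())) {
        if (impacted.add(dependent)) {
          worklist.push(dependent); // follow transitive dependents
        }
      }
    }
    return impacted;
  }
}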

Posted by Tudor Girba at 2 September 2013, 11:00 pm with tags process, assessment link

Bring your system into the planning meeting

Everyone knows that estimating software projects over the long term is hard. Thus, agile teams mostly estimate in the context of short iterations.

These estimations happen in a dedicated planning meeting. The way I have seen it happen goes like this. The Product Owner spells the story out. The team then tries to figure out the meaning of the story, to understand the implications and, at the end, to estimate.

This exercise has two goals. The most obvious goal is to decide the sprint scope. The less obvious goal is to get the team to build a collective understanding of the stories, to get an overall feeling of the implications, and to identify possible pitfalls.

And it works well. When it works.

Here is the problem. While the theory sounds fine, in practice planning must also take into account the existing reality of the system. Most commonly, the planning meeting relies on the basic assumption that if you put all the right people in one room, they will find the best solution. However, human memory is not reliable enough when you have to deal with millions of details.

[Figure: Unreliable-memory.png — human memory does not scale to millions of details]

The current state of the system matters a great deal when reasoning about a new feature, its possible implementation strategies, and their implications. Thus, the system must be made an integral part of the conversation. The facts must be invited to the round table.

"Wait," you might say, "it's not like people do not want to have accurate facts." Indeed, it's not. But, they still do not do it. The main reason for this is that retrieving facts is perceived to be an expensive operation. Only it does not have to be expensive at all. All it takes is you to invest in your assessment skill.


Let me give you a couple of examples.

In one case, we had to estimate a story around the communication between two different systems. Let's call them the Emitter and the Receiver. The new story required that when the Receiver received a message from the client side of the Emitter, the user interface of the Receiver had to refresh. Nobody in the room knew much about how the communication happened, but we knew that if the communication went directly to the Receiver's server side, we would have to build a whole infrastructure for refreshing the user interface. However, if the communication already passed through the Receiver's client, we could simply hook in. Depending on the answer, the estimation could go anywhere from small to large.

To figure out the best way to go, we set up a little experiment:

  • we placed several breakpoints on the server side of the Receiver;
  • we triggered the behavior from the user interface of the Emitter;
  • in the debugger, we looked up the stack to see whether any class belonging to the Receiver's client appeared (see the sketch below).
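
This kind of probe can even be scripted instead of eyeballed in the debugger. Here is a minimal sketch in Java; the package name com.example.receiver.client is a hypothetical stand-in for wherever the Receiver's client classes actually live.

public class StackProbe {
  // Call this from a server-side breakpoint condition or a temporary
  // log statement to check whether the Receiver's client participates
  // in the current call.
  public static boolean involvesReceiverClient() {
    for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
      if (frame.getClassName().startsWith("com.example.receiver.client")) {
        return true;
      }
    }
    return false;
  }
}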

And we found that the Receiver's client indeed was involved. So, we decided that the solution would be cheap, and we felt good about it (it also turned out that we were right to feel good about it). The whole experiment and the follow-up discussion took less than five minutes. During these five minutes, the discussion continued, and when the information came in, we just picked it up and went with it.

At another time, the team had to approach an older part of the system that had been developed by others. When it came to the stories, the discussion got swamped simply because the engineers could not relate to the topics, and the estimations were all over the place. They could not picture possible solutions because they did not know what existed already. Both the team and the Product Owner got frustrated, and the atmosphere was anything but positive. To break this cycle, we stopped and took some 10 minutes to identify things we would like to know about the system. During this time, we also looked the information up, using both the regular code editor and more sophisticated tools like Moose. We then went back to the drawing board, and suddenly the brainstorming became more fluent and constructive. Those 10 minutes made all the difference.


Your system matters. You can ignore it in the planning room, but the reality will bite you during the sprint. Instead, bring (at least) a laptop with you into the planning meeting and be ready to experiment. Whenever the discussion relates to the state of the system, think of a little experiment or analysis and look up the information. A practical way to avoid having everyone stare at their laptops all the time is to appoint a facilitator who looks up facts quickly, while the rest of the team focuses on the high-level brainstorming.

It won't work flawlessly from the very beginning, but you will be surprised to notice the difference.

Posted by Tudor Girba at 28 August 2013, 6:52 am with tags assessment, process link

How to approach an assessment problem?

Humane assessment covers a large spectrum of software engineering problems that require decision making. In order to design ways to approach these problems, we need to understand their nature, and for that we need to first identify their interesting characteristics.

Classically, people have characterized problems based on the type of analysis being used. Some examples are:

  • static vs. dynamic,
  • history vs. one version,
  • code vs. bytecode, or
  • metrics vs. queries vs. visualizations.

These are all important distinctions from a technological point of view, yet they say very little about the overall context in which they should be utilized.

For example, to solve a performance problem, I recently built a tool that grabbed dynamic information from log files. The log files were populated via several means: bytecode manipulation in one part of the system, and direct insertion of logging instructions into another part. The dynamic data was then linked with the static information about the source code. The performance improvement was tracked through the execution history of the same use cases at different moments in time. Furthermore, the tool computed metrics to summarize execution and offered visualizations that aggregated the data in various views.
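
To give a flavor of the simplest ingredient of such a tool, here is a minimal sketch that aggregates execution metrics from a log. The log format (one line per execution: a duration in milliseconds, a space, then the fully qualified method name) is an assumption made for illustration; the actual tool worked on richer data.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LogMetrics {
  // Sum the logged durations per method, e.g. "12 com.example.Invoice.total".
  public static Map<String, Long> totalMillisPerMethod(Path log) throws IOException {
    try (Stream<String> lines = Files.lines(log)) {
      return lines
        .map(line -> line.split(" ", 2))
        .collect(Collectors.groupingBy(
          parts -> parts[1],
          Collectors.summingLong(parts -> Long.parseLong(parts[0]))));
    }
  }
}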

Is this static analysis? Is it dynamic analysis? Is it history analysis? Is it even code analysis? Is it a metric? Is it a visualization? It's none of them, and all of them.

But, does it matter? The distinction between analysis classes is useful for building up the communities that develop them, or for comparing various technologies, but it is less useful for describing practical problems. In practice, what matters is how practical questions get mapped onto existing techniques.

Splitting problems along technological spaces is good for technologists, but it is a limiting proposition for practitioners. Assessment is a human activity, and as such, it is more productive to start from the nature of the problem than to focus on the technical nature of the solution.

To this end, the simple processes proposed in humane assessment build on two simple questions:

  • How often do you care about the problem?
  • How large is the scope of the problem?

[Figure: Processes.png — the three assessment processes mapped against how often you care about the problem and how large its scope is]

Daily assessment

If your problem is of continuous interest, you need to look at it continuously, regardless of what the issue is. For example, ensuring that the envisaged architecture is followed in the code is a permanent concern, regardless of whether you want to check how you use the Hibernate annotations, or whether you want to ensure that your coarse-grained layers are properly separated. The technical part of identifying problems is actually less interesting, because the problems have been encountered before. More important in this case is the communication side. Architecture is a group artifact that has to be regulated by the group. This is the focus of daily assessment.
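
To show how small such a daily check can be, here is a sketch of a layer-separation rule in plain Java. The package names and source layout (sources under com.example.ui must not import com.example.persistence) are hypothetical; the point is that the rule runs mechanically on every build instead of living in someone's memory.

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class LayerCheck {
  public static void main(String[] args) throws IOException {
    try (Stream<Path> sources = Files.walk(Paths.get("src/main/java/com/example/ui"))) {
      sources
        .filter(path -> path.toString().endsWith(".java"))
        .filter(LayerCheck::importsPersistence)
        .forEach(path -> System.out.println("Layer violation: " + path));
    }
  }

  private static boolean importsPersistence(Path source) {
    try {
      return Files.readAllLines(source).stream()
        .anyMatch(line -> line.startsWith("import com.example.persistence."));
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}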

Spike assessment

If your problem has a singular nature, the granularity of the problem plays a significant role. If the problem is clearly defined technically, you need to focus on getting to the bottom of it as soon as possible. For example, when you have to fix a bug related to a memory leak, finding the root cause is the single most pressing problem. In this situation, your productivity depends on how fast you can generate hypotheses and check them against the system. The faster you can do that, the better your chances of fixing things quickly. This is the focus of spike assessment. Once the problem is solved, you do want to check whether the lessons you learnt during the spike assessment are of continuous interest. For example, suppose you identified the cause of the memory leak to be a missing release of a resource; you can probably institute a checker that identifies such cases in the future. In this case, you may want to make it a concern for daily assessment.
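
To make the memory-leak example tangible, here is a hypothetical instance of the pattern such a spike might uncover, together with the fix that a subsequent daily checker would watch for:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ResourceLeak {
  // The leak: the stream (and its native file handle) is never closed,
  // not even on the happy path.
  static byte[] leaky(String path) throws IOException {
    InputStream in = new FileInputStream(path);
    return in.readAllBytes();
  }

  // The fix: try-with-resources guarantees the release on every path.
  static byte[] safe(String path) throws IOException {
    try (InputStream in = new FileInputStream(path)) {
      return in.readAllBytes();
    }
  }
}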

Strategic assessment

If your problem appears only once but its scope is broad, the focus shifts towards identifying the technical sub-problems and synthesizing the findings in order to reach a decision. This is the focus of strategic assessment. For example, a performance problem stated in terms of "the system is slow" cannot be answered technically in a direct way. What does slow mean? Is it an algorithmic speed problem? Is it a scalability problem? How fast should it be? What parts are actually slow? These are the types of questions that have to be clarified before you can start solving the various issues through spike assessments.


Not all problems are created equal. Humane assessment provides a taxonomy to distinguish three classes of problems, and offers dedicated solutions. Based on my experience, these solutions work consistently. However, the taxonomy should be perceived as a starting point rather than the ultimate answer. I am certain that others can find other meaningful ways of categorizing problems.

After all, a discipline matures only when we can understand the problems in the field from multiple points of view. And, assessment is a discipline.

Posted by Tudor Girba at 24 August 2013, 3:12 pm with tags assessment, economics, process link

The technical and non-technical nature of assessment

Suppose you need to migrate from an old API to a new API that offers similar capabilities. Something like this:

public interface OldAPI {
  public TheReturnType doSomething();
  ...
}
public interface NewAPI {
  public ASlightlyDifferentReturnType doKindOfTheSameThingButDifferently();
  ...
}

The only requirement is that when the next version of your application ships, there should be no trace of the OldAPI. How do you approach this task?

Straightforward. You take the API clients one by one and migrate them to the new API. Of course, if the changes are 100% backward compatible, you can even automate the process. But even if it is not automatable, it is still a straightforward technical problem.
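
One common way to keep such a migration incremental is to bridge the two APIs with an adapter, so that call sites can be switched one at a time. Here is a sketch, restricted to the one method shown above; the convert method is hypothetical, since the mapping depends on what the two return types actually look like.

public class OldApiAdapter implements OldAPI {
  private final NewAPI delegate;

  public OldApiAdapter(NewAPI delegate) {
    this.delegate = delegate;
  }

  @Override
  public TheReturnType doSomething() {
    // delegate to the new API, then map the result back to the old shape
    return convert(delegate.doKindOfTheSameThingButDifferently());
  }

  private TheReturnType convert(ASlightlyDifferentReturnType result) {
    // hypothetical: the actual mapping depends on the two types
    throw new UnsupportedOperationException("fill in the mapping");
  }
}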

Now suppose that the API is used in 100 different places, and that each place requires some 15 minutes to change. The problem is now slightly different: given that you cannot do everything in one batch, you have to plan the effort and track the progress.

But, what happens when the 100 places are spread over 10 different projects? You now need to involve all the teams working on the respective projects, keep track of their effort, and ensure that when the new version of the overall application ships, there will be no trace of the OldAPI. To achieve your goal, you need all projects to pull in the same direction, pretty much at the same time.
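
Tracking that kind of cross-project progress is itself a small assessment problem. Here is a sketch of a tracker that counts the remaining OldAPI references per project; the assumption that a textual search for "OldAPI" finds all uses is a simplification for illustration.

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class MigrationTracker {
  // Count the source files in one project that still mention OldAPI.
  public static long remainingReferences(Path projectRoot) throws IOException {
    try (Stream<Path> sources = Files.walk(projectRoot)) {
      return sources
        .filter(path -> path.toString().endsWith(".java"))
        .filter(MigrationTracker::mentionsOldApi)
        .count();
    }
  }

  private static boolean mentionsOldApi(Path source) {
    try {
      return Files.readAllLines(source).stream()
        .anyMatch(line -> line.contains("OldAPI"));
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}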

All of a sudden the problem becomes an organizational one, and it requires a solution that goes beyond the technical issue and touches the process and the structure of the organization. This is the nature of assessment.

Posted by Tudor Girba at 5 August 2012, 1:31 am with tags assessment, story, process link