Blog

Software assessment is not code reading

Software construction is not writing code. Writing does happen at some point, but it is not the most significant activity.

Software assessment is not reading code. Reading does happen at some point, but it is not the most significant activity.

Do you want proof?

If writing were the dominant software construction skill, software projects would be staffed with typists.

If reading were the dominant software assessment skill, software projects would be staffed with speed readers.

Instead, software projects are staffed with software engineers. Or at least they should be.

Posted by Tudor Girba at 26 December 2013, 10:21 am with tags assessment, economics

Myth: if it's legacy, you do not need to comprehend it

A software engineering myth says that if a system is legacy, you do not need to comprehend it. Instead, you are often advised to wrap it and work with it through a facade, without worrying about what happens inside the black box.

But, as long as you still need something from your legacy system, you still need to understand at least that something. After all, many companies make a living out of charging for support even when they give you the black-box system for free. You do not need to modify that system, but you still need to figure out how it does what it does.

Take something like JBoss. If you use it, you might not consider it legacy, but you likely work with it through the advertised facade. But, once you do anything meaningful with it, you get to learn what it takes to understand what happens in its dungeons.

The only system that does not require comprehension is a dead one.

Instead of pretending that you can avoid comprehension, you had better get good at digging up what you need quickly and cheaply. It's more profitable. And more realistic.

Posted by Tudor Girba at 6 October 2013, 8:30 am with tags assessment, economics

Software architecture is an emergent property

If you pay attention to complexity theorists, you might notice that they start by emphasizing the difference between various classes of problems. For example, the Cynefin framework distinguishes between simple, complicated, complex, and chaotic (there is also disorder, but let's leave that for another time).

The typical issues in software development oscillate between the complicated and complex domains. On the one hand, a complicated problem is a deterministic one. In a complicated domain, you detect problems, devise analyses for them and extract precise answers that you apply directly.

On the other hand, a complex problem is one that is not necessarily repeatable, and it is the result of multiple agents interacting with each other. In a complex domain, it is essentially impossible to predict outcomes precisely. Instead, you start from a hypothesis that seems to make sense, you check it to build an empirically verified theory, and then you react.

Complex is not necessarily worse than complicated, nor the other way around. However, it is important to distinguish between them, because otherwise you risk approaching a complex problem with a complicated mindset.

Controlling software architecture has long been approached as a command-and-control type of problem: the architect lays out a master plan at the beginning, and development follows it in every detail. In the best case, some adjustments are allowed during development, but they are always subject to the architect's approval. According to this view, architecture is a deterministic artifact, and it is approached with a complicated mindset.

A different point of view says that architecture is not a picture, but the structure defined in the code. The current structure of the code is the result of multiple forces. Multiple developers work at the same time in different corners of the system in order to build new functionality. Developers work together with product owners, testers, designers and other stakeholders, and the system is shaped as a result of this interaction - often relying on social behavior rather than on rigorous planning. Furthermore, the actual technical solutions are not built in a vacuum, but are also subject to the constraints posed by the existing state of the system, and by the underlying technologies and frameworks used in the system.

In other words, architecture is an emergent property produced by multiple agents interacting constantly with each other. This places architecture in the complexity realm.

This suggests that approaching architecture through a prescribed, ordered (or, even worse, simple) lens can have only a limited impact. More suitable is to rely on setting boundaries and using attractors. For an entertaining, brief explanation of how to steer emergent behavior, listen to Dave Snowden's comparison of the different ways of organizing a children's party.

A key element of this approach is that neither the boundaries nor the attractors are static. Instead, they are to be reshaped constantly in order to achieve desired emergent properties.

Daily assessment offers a game for steering the architecture. The game combines social interactions with hard checks against the actual data. Architecture is constrained through an explicit contract between all players. This contract changes over time through social interactions and common decisions. No single person is in control, yet the situation is controllable.

An interesting effect of this approach is that if you want to control the quality of the architecture, you focus less on what exists in the system at any given time, and more on the types of problems that are being explicitly looked at. For example, as long as developers only look for granular issues related to low-level technological concerns (such as memory leaks), you should not be too confident about the overall architecture. Once high-level constraints are placed on dependencies, or on domain-specific patterns, your confidence level should rise.
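To make this less abstract, here is a minimal sketch of what one hard dependency check could look like. It is only an illustration, not the tooling behind this post: the layer names and the allowed-dependency table are assumptions, and in practice such a rule would be part of the explicit contract and run as part of daily assessment.

# Minimal sketch: each (hypothetical) top-level package may only import from
# the layers listed in ALLOWED. Layer names and rules are assumptions.
import ast
import pathlib

ALLOWED = {
    "ui": {"ui", "domain"},      # e.g., the UI layer must not touch the database directly
    "domain": {"domain"},
    "db": {"db", "domain"},
}

def imported_modules(tree):
    """Yield (module name, line) for every import in the parsed file."""
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                yield alias.name, node.lineno
        elif isinstance(node, ast.ImportFrom) and node.module:
            yield node.module, node.lineno

def violations(root="src"):
    """Yield (file, line, source layer, target layer) for every forbidden import."""
    for layer, allowed in ALLOWED.items():
        for path in pathlib.Path(root, layer).rglob("*.py"):
            tree = ast.parse(path.read_text(), filename=str(path))
            for module, lineno in imported_modules(tree):
                target = module.split(".")[0]
                if target in ALLOWED and target not in allowed:
                    yield path, lineno, layer, target

if __name__ == "__main__":
    for path, lineno, source, target in violations():
        print(f"{path}:{lineno}: layer '{source}' must not depend on '{target}'")

The interesting part is not the script, but the ALLOWED table: it is the contract, and it is the artifact that the group renegotiates over time.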

In this game, the current state is less important. The level of the game matters more.

Posted by Tudor Girba at 22 September 2013, 3:50 pm with tags assessment, daily, economics

How to approach an assessment problem?

Humane assessment covers a large spectrum of software engineering problems that require decision making. In order to design ways to approach these problems, we need to understand their nature, and for that we need to first identify their interesting characteristics.

Classically, people have characterized problems based on the type of analysis being used. Some examples are:

  • static vs. dynamic,
  • history vs. one version,
  • code vs. bytecode, or
  • metrics vs. queries vs. visualizations.

These are all important distinctions from a technological point of view, yet they say very little about the overall context in which they should be used.

For example, to solve a performance problem, I recently built a tool that grabbed dynamic information from log files. The log files were populated via several means: bytecode manipulation on one part of the system, and direct insertion of logging instructions into another part. The dynamic data was then linked with the static information about the source code. The performance improvement was tracked through the execution history of the same use cases at different moments in time. Furthermore, the tool computed metrics to summarize execution and offered visualizations that aggregated data in various views.

Is this static analysis? Is it dynamic analysis? Is it history analysis? Is it even code analysis? Is it a metric? Is it a visualization? It's none of them, and all of them.

But, does it matter? The distinction between analysis classes is useful for building up communities that develop them or for comparing various technologies, but it is less useful for describing practical problems. In practice, what matters is how these questions are mapped onto existing techniques.

Splitting problems along technological spaces is good for technologists, but it is a limiting proposition for practitioners. Assessment is a human activity, and as such, it is more productive to start from the nature of the problem than to focus on the technical nature of the solution.

To this end, the processes proposed in humane assessment build on two simple questions:

  • How often do you care about the problem?
  • How large is the scope of the problem?

Processes.png

Daily assessment

If your problem is of continuous interest, you need to look at it continuously, regardless of what the issue is. For example, ensuring that the envisaged architecture is followed in the code is a permanent concern, regardless of whether you want to check how you use Hibernate annotations, or whether you want to ensure that your coarse-grained layers are properly separated. The technical part of identifying problems is actually less interesting because the problems have been encountered before. More important in this case is the communication side. Architecture is a group artifact that has to be regulated by the group. This is the focus of daily assessment.
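To make the idea of a repeated check concrete, here is a minimal sketch. It is only an illustration with made-up conventions: it flags classes annotated with JPA's @Entity that live outside a designated model package, and the package name is an assumption standing in for whatever your team's contract says.

# Hypothetical daily check: every class annotated with @Entity must live
# under com.example.model. Both the rule and the package name are assumptions.
import pathlib
import re

ENTITY = re.compile(r"^\s*@Entity\b", re.MULTILINE)
MODEL_PACKAGE = ("com", "example", "model")

def misplaced_entities(source_root="src/main/java"):
    """Yield every Java file with an @Entity class outside the model package."""
    for path in pathlib.Path(source_root).rglob("*.java"):
        if ENTITY.search(path.read_text(encoding="utf-8")):
            if path.parent.parts[-len(MODEL_PACKAGE):] != MODEL_PACKAGE:
                yield path

if __name__ == "__main__":
    for path in misplaced_entities():
        print(f"entity outside the model package: {path}")

The check itself is trivial; what matters is that the group agrees on the rule and sees the result every day.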

Spike assessment

If your problem has a singular nature, the granularity of the problem plays a significant role. If the problem is clearly defined technically, you need to focus on getting to the bottom of it as soon as possible. For example, when you have to fix a bug related to a memory leak, finding the root cause is the single most pressing problem. In this situation, your productivity depends on how fast you can generate hypotheses and check them against the system. The faster you can do that, the better your chances are of fixing things quickly. This is the focus of spike assessment. Once the problem is solved, you do want to check whether the lessons you learnt during the spike assessment are of continuous interest. For example, suppose you identified the cause of the memory leak to be a missing release of a resource; you can probably institute a checker for identifying such cases in the future. In that case you may want to make it a concern for daily assessment, as sketched below.
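As a rough illustration of such a spike lesson turned into a daily check, here is a naive sketch. It is an assumption-laden example, not a real leak detector: it merely flags calls to open() that do not appear directly in a with statement, so a resource released in a try/finally block would be a false positive.

# Naive sketch of a checker grown out of a spike: flag open() calls that are
# not the context expression of a "with" statement. Intentionally simplistic.
import ast
import pathlib

def unreleased_opens(path):
    tree = ast.parse(path.read_text(), filename=str(path))
    managed = set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.With, ast.AsyncWith)):
            for item in node.items:
                managed.add(id(item.context_expr))
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "open"
                and id(node) not in managed):
            yield node.lineno

if __name__ == "__main__":
    for path in pathlib.Path("src").rglob("*.py"):
        for lineno in unreleased_opens(path):
            print(f"{path}:{lineno}: open() outside a with statement")

Once a rule like this is in place, the lesson from the spike keeps paying off without anyone having to remember it.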

Strategic assessment

If your problem appears only once but its scope is broad, the focus shifts towards identifying the technical sub-problems and synthesizing the findings in order to reach a decision. This is the focus of strategic assessment. For example, a performance problem stated as "the system is slow" cannot be answered directly in technical terms. What does slow mean? Is it an algorithmic speed problem? Is it a scalability problem? How fast should it be? What parts are actually slow? These are the types of questions that have to be clarified before you can start solving the various issues through spike assessments.


Not all problems are created equal. Humane assessment provides a taxonomy to distinguish three classes of problems, and offers dedicated solutions. Based on my experience, these solutions work consistently. However, the taxonomy should be perceived as a starting point rather than the ultimate answer. I am certain that others can find other meaningful ways of categorizing problems.

After all, a discipline matures only when we can understand the problems in the field from multiple points of view. And, assessment is a discipline.

Posted by Tudor Girba at 24 August 2013, 3:12 pm with tags assessment, economics, process

The myth of the magic button

In the movie Contact, Jodie Foster plays a scientist, Ellie, studying extraterrestrial activity. After a long time of dreaming, the unexpected seemed to unfold in front of her eyes. She and her team captured a signal from another galaxy. After the original excitement, they started to want more. The signal was played on an audio system, and there was a certain regularity to it. However, they still could not make sense of what it meant. They wanted to decipher it.

They tried all sorts of approaches, but none seemed to get them closer to the meaning. That was until the blind scientist shushed them all and, after listening carefully, conjectured that there were in fact two signals mixed into one: an audio and a video signal. It made sense, and everyone got excited. Now, all they needed to do was to split the signal. Even at that time, they were using dedicated software systems with multiple options for manipulating signals. They did several things, but the key was to use the button that split the fields in the signal (see below). In the end, they managed to get an audio signal and a video signal.

The-magic-decoder-button.png

At first sight, this whole scenario might appear natural. There were two signals, and they chose the relevant splitting. And yet, this scenario is ridiculously lucky. Why lucky? Because the radio button could only distinguish between two signals. What would have happened if there were three signals? Two is just a number. Three would be just as possible.

And how about the way in which the signals were mixed up? And what if the encoding had been different? Just think of the video-codec mess we have to deal with these days just to decipher signals that are known to be video and known to come from Earth.

Ellie and her team were lucky, but let's look at the problem from a different perspective. At some point, someone bought the tool that the team got to use. Can you imagine what it would have been like if we had received a message from outer space with three signals mangled together with a funky codec, and the tool could only distinguish between two standard ones? (Ok, ignore all sorts of details for the sake of the argument.) It would have been a monumental failure.

While not all tools have such a large potential impact, tools still shape your productivity, and the process of acquiring your tools matters. If you only look for standard solutions, you will only solve standard problems. Yet, it is most likely that your value comes precisely from what is not standard.

How can you stay away from this trap? Look for the magic button.

Magic buttons come with great promises and seemingly cheap prices. All you have to do is press the button, and the report will be provided. You hardly even have to think. The magic button automates everything. This is alluring, but its usefulness is limited.

The value of a tool resides in its ability to solve a problem. If you and I share the same magic button, it means that your problem and mine are the same, and so are the solutions. Yet, our systems are not the same at all. Think of the last static analysis tool you used. You likely downloaded it, installed it, pointed it at your system, pressed the magic button (or command), and got a report. No customization needed. It's cheap. But what does your system have in common with mine? Perhaps they share the same programming language. So, the tool will focus on that. It will reveal issues related to language idioms, such as using nulls where you should not. Granted, some problems can be due to not following these idioms, but that is hardly where our significant problems come from. Our systems' value resides in what we build with the language. That value and the associated problems deserve much more attention, and that is what a tool should focus on solving.

If you find that a tool offers no customization possibility, stay away. And if you are a software engineer, you should go further and demand that your tools be programmable. After all, it is programming that makes you valuable.

Posted by Tudor Girba at 21 August 2013, 7:22 am with tags story, economics, tooling