I got asked several times how humane assessment came to be. Here is the short story.
During the previous decade I spent most of my time playing research: I got paid to juggle ideas for making software engineering better. And juggle I did: I implemented, worked with others, and even wrote a bit about all this. The writing was actually the only mandatory research task, because it was what brought in the money to keep the fun going.
Soon, it became clear that there was some economic value in the analyses we were playing with. And Moose became strong enough to look like a suitable vehicle for bringing these ideas into practice. Together with colleagues, I started to look into applying these analyses to real projects. At that time, the focus was on how to build a tool that would be easy to install and use (ideally a one-click tool) and that would bring enough value to practitioners that they would want to invest in it.
There were multiple ideas and attempts. For example, given the long Moose tradition of research around metrics and visualization, these were at the top of our list. However, they did not quite work. It was not that the metrics or visualizations we produced were not nice or accurate; they just did not catch on. After every demo, the feedback from engineers was enthusiastic, yet for some reason the same engineers did not seem to see themselves using these tools in their work. For example, when shown the classic System Complexity visualization (which depicts the hierarchy of classes), they kept asking: Nice, and? For a while, I thought the main reason was that they did not know how to ask the questions, and that they were not prepared to digest the visualizations we showed them.
But some partners did go with our proposal. At first, I assumed that most of the work would be about using the tool. It turned out that only about one third of the effort was spent around the tool. The rest went into figuring out how to manipulate the data, how to interpret it, and how to present it so that customers could use it in their work.
I was intrigued, but even more puzzling was that even though the systems we were facing were somewhat similar (e.g., most projects used Java and roughly the same frameworks), there seemed to be little repeatability in the analyses. The only repeatable thing I could distinguish was that analyses needed to be changed, mostly through programming, to match the context. Only through these changes could I say something sensible about the problem at hand.
After several of these projects, I realized two things:
First, it meant that solely relying on click-only tools does not quite work in practice. Second, it also meant that we needed to describe the assessment activity itself accurately.
This is how humane assessment came to denote the process of making decisions in software engineering, and its essence lies in crafting and integrating custom analysis tools to solve contextual problems.
Ask more questions via @humaneA, @girba, or by email.