jdt2famix 1.0.1

Over the last couple of weeks I worked on a new importer for Java code that can be used with Moose, and I am happy to announce the release of version 1.0.1.

jdt2famix is an open-source project, and the repository can be found on GitHub. It is implemented in Java as a standalone project, and it is based on JDT Core (developed as part of the Eclipse project and available under the EPL license) and Fame for Java (originally developed by Adrian Kuhn and available under the LGPL license).

The current implementation has extensive coverage of the entities it can import, namely:

  • classes, parameterizable classes, parameterized types, enums, anonymous classes, inner classes
  • annotation types, annotation type attributes
  • annotation instances, annotation instance attributes
  • methods, constructors, class initializers
  • parameters, local variables
  • attributes, enum values
  • accesses
  • invocations, constructor invocations, class instantiations
  • thrown exceptions, caught exceptions, declared exceptions
  • comments associated with named entities
  • source anchors for named entities

The simplest way to use it is through the command line:

  1. Download and unzip the self-contained binary 1.0.1 release.
  2. Go to the root folder containing the sources and dependency libraries of a Java system (e.g., mysystem).
  3. Execute /path/to/

The result will be an MSE file with the same name as the folder from which you executed the command (e.g., mysystem.mse).
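The steps above can be sketched as a short shell session. Everything here is hypothetical: the folder name is just an example, and the launcher placeholder stands for wherever you unzipped the release.

```shell
# Hypothetical session following the three steps above.
mkdir -p mysystem          # stand-in for the root folder of your Java system
cd mysystem
# Step 3 would run the launcher from the unzipped release here, e.g.:
# /path/to/<jdt2famix-launcher>
# The importer names its output after the folder it was executed in:
echo "$(basename "$PWD").mse"    # prints mysystem.mse
```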

Let’s look at a concrete example. For this purpose, we pick the Maven project itself. We first download the 3.3.9 sources. These are only the plain sources; to import a complete model, we also need the dependencies available in the same folder.

As Maven is itself a Maven project (no pun intended), we can use the dependency plugin to download the dependencies locally:

mvn dependency:copy-dependencies -DoutputDirectory=dependencies -DoverWriteSnapshots=true -DoverWriteReleases=false

(If you do not have a Maven project, you will have to adapt this step and obtain the dependencies according to the specifics of your project.)
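Whatever mechanism you use, the end state the importer expects is simply that the sources and the dependency jars sit under the same root folder. A minimal sketch of that layout (all folder and jar names here are hypothetical):

```shell
# Hypothetical layout: sources plus a dependencies/ folder holding the jars.
mkdir -p mysystem/src/main/java mysystem/dependencies
touch mysystem/dependencies/some-library.jar   # stand-in for a real jar
# Quick sanity check that the jars sit under the root the importer will scan:
find mysystem -name '*.jar'    # prints mysystem/dependencies/some-library.jar
```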

Then we simply execute:


And we get a maven-3.3.9.mse file that we can load into Moose.


Posted by Tudor Girba at 3 August 2016, 4:08 pm with tags tooling, moose

An informal experiment about not reading code

Over the past year, I conducted an informal study on how developers think about code reading. The study took the form of a dialogue with software engineers, which I had, in various forms, with more than 1000 of them.

The dialogue goes like this:

Me: I help teams to not read code.
Engineer: Not to read code?
Me: Yes.
Engineer: Intriguing. What do you mean?
Me: Well. Would you agree if I said that you spend 50% or more of your time reading code?
Engineer: Hmm. Yes.
Me: Ok. When was the last time you talked about it?
Engineer: About what?
Me: About how you read code?
Engineer: Talk about reading code … I don’t remember … never?
Me: In fact, nobody really talks about it. Is it not strange that we are spending most of our budget on something we never talk about?
Engineer: Indeed. I never thought of it in this way …
Me: I think we have to start talking about it. If we talked about it, we would see that we do not really want to read code. We want to understand code.
Engineer: Right.
Me: This means that reading is just the approach we employ. Yet, reading is the most manual way to retrieve information out of data. And our systems are just that: data. So, not only is code reading expensive, it is also the least scalable approach.
Engineer: Hmm… so, what is the solution?
Me: The solution is to look at programmers as what they are: programmers. Their job is to automate someone else’s decision making, and they can use exactly the skills they are paid for to improve their own decisions...

The amazing thing is that I encountered only minimal deviations from this dialogue regardless of the experience level, technology background, domain, or even country of the interviewee.

Posted by Tudor Girba at 14 June 2016, 2:44 pm

Discovering and managing GTSpotter extensions

GTSpotter is moldable. The default Pharo 5.0 image comes out-of-the-box with 122 distinct types of search processors. Understanding and controlling these processors is key for using Spotter to its full extent.

The first step is to learn what these extensions do. Each extension is defined in a method, and the average size of such a method is 8.5 lines of code (including the header and the annotation line). In most cases, the code of the method is the best description of what the extension does.

Let’s take an example:

TClass>>spotterMethodsFor: aStep
    <spotterOrder: 10>
    aStep listProcessor
            title: 'Instance methods';
            allCandidates: [ self methods ];
            itemName: [ :method | method selector ];
            filter: GTFilterSubstring

As this method is defined in TClass, the search processor gets activated when Spotter reaches a class object. In this case, a search category will appear that:

  • will be entitled 'Instance methods'
  • will be applied to self methods
  • will use the method selector as textual description for the item, and
  • will filter using a GTFilterSubstring strategy.

Most Spotter extensions look like this. Once you know how to read one, you will understand most of the others.

While each search processor can be small and easy to understand in isolation, understanding the overall landscape of all extensions is a problem of its own. To make it easier to grasp what exists, there are several places to look.

First, the Spotter help lists all extensions available in the image, grouped by the name of the corresponding classes. This is a useful entry point for getting a quick overview.


If you are looking for live examples, you can simply inspect GTSpotter, and you will get the list of all extensions.


These two tools allow you to keep track of extensions. Keep in mind that the default start object when opening Spotter is an instance of GTSpotter. Thus, to look only at the top-level extensions, we can inspect this expression:

GTSpotter spotterExtendingMethods select: [ :each | each methodClass = GTSpotter ]


In the Moose image, this returns 25 extensions. But, what if you do not want all these extensions?

For this you can turn to the Settings Browser. Every extension in the image is dynamically associated with a setting element. Unchecking the setting disables the corresponding extension. These settings can also be stored and loaded for use in other images, like any other settings.


This mechanism can also be useful if you want to change the way a search works. For example, the top-level search for classes uses a substring filter strategy:

GTSpotter>>spotterForClassesFor: aStep
    <spotterOrder: 10>
    aStep listProcessor
            allCandidates: [ Smalltalk allClassesAndTraits ];
            title: 'Classes';
            filter: GTFilterSubstring;
            itemIcon: #systemIcon;
            keyBinding: $b meta;
            wantsToDisplayOnEmptyQuery: false

If you want to use a regular expression instead, you can disable the default class search in the Settings Browser and add another extension method in your own package with a regular expression filter:

GTSpotter>>mySpotterForClassesFor: aStep
    <spotterOrder: 10>
    aStep listProcessor
            allCandidates: [ Smalltalk allClassesAndTraits ];
            title: 'Classes';
            filter: GTFilterRegex;
            itemIcon: #systemIcon;
            keyBinding: $b meta;
            wantsToDisplayOnEmptyQuery: false

Done. Now you can search for classes using regular expressions. In the same way, you can manage all other extensions.

Posted by Tudor Girba at 5 June 2016, 8:13 am with tags gt, tooling, pharo

The impact of moldability

We say that Glamorous Toolkit (GT) is a moldable development environment. Recently, a couple of emails on the Pharo mailing list questioned the usefulness of moldability. More specifically, the questions were raised in relation to the newly introduced GTDebugger. The concept of a moldable debugger is new to any IDE, and for this reason we cannot compare it with standard IDE behavior.

Nevertheless, the question is certainly legitimate. Given that we cannot compare with other solutions, we can observe the impact of other moldable interfaces. If we take a step back, the same concept was applied before to the GTInspector and GTSpotter, which we introduced in Pharo 4.0. Let’s look at the impact.

In Pharo 3, the EyeInspector offered a basic extension possibility, and the Pharo image shipped with 8 such extensions. With the introduction of the GTInspector, we shipped 138 extensions. One year later, there are 165 in the core image. In the meantime, there are many more extensions in external packages. For example, the Moose 6.0 image, which is based on Pharo 5.0, ships with about 230 extensions.

Similarly, in Pharo 4 we shipped 92 extensions to GTSpotter; in Pharo 5, there are 122. Again, there are several more extensions in external packages. For example, in Moose 6.0 we have 135, and if we include the generic way of searching through models, we have several hundred more (more about this in a future post).

The explosion of extensions shows that there is a real need for them. This validates a hypothesis put forward long ago by the humane assessment approach, which started from the observation that context is key in software development and that tools should therefore take this context into account. The idea was first explored in the context of Moose, and it stands at the very core of GT. You can see it embodied now in 3 distinct tools, and we will see more as the project proceeds.

Now, why the difference between Pharo 3 and Pharo 4? First, the cost of an inspector extension went down from ~19 lines of code in a separate class to ~9 lines of code in a single method. Second, the value of an extension increased because of the interaction workflow that came with the GTInspector design.

Interestingly, after we introduced the GTInspector, almost all discussions were geared towards the Raw presentation, because it introduced a new kind of interaction. Almost no email was about the extension mechanism. The same pattern happened with GTSpotter, where messages focused almost exclusively on searching classes and methods. And rightly so, as the default behavior is what people see first. We see exactly the same type of issues with the GTDebugger.

Now, in the default Pharo image there are 3 different debuggers: the default one, the bytecode one, and the SUnit one. In the Moose image there are 6 debuggers, and there are a couple more in external packages. For example, here are two screenshots of two such debuggers: one of a PetitParser debugger, and one of a debugger debugging the update of the debugger.


The cost of these debuggers is measured in hundreds of lines of code. We will certainly not see as many debugger extensions as in the case of the inspector, because the granularity is coarser and the cost higher, but we will certainly see more custom debuggers.

We still need to learn how to reach the balance between extensibility and usability. We are at the beginning, but there is clear value in extensibility, and we should not discard it as unimportant. The key here is the ability to create extensions with low effort, and this is unprecedented. Let’s put this in perspective: Eclipse started more than a decade ago with a plugin architecture, and right now the Eclipse marketplace lists 1722 tools. Granted, these are more extensions than we have and they are larger, but at the same time the community that builds them is several orders of magnitude larger than the Pharo one. Yet, we can already compete because of our radically lower cost structure.

Until now we have looked at plain quantitative data. But do these extensions actually affect productivity qualitatively? In my experience they can have a significant impact, and this site and blog feature multiple examples of how. Furthermore, I recently realized that my workflow has changed quite significantly, and that I now spend around 60% of my time in the inspector.

But let me give you another perspective. I went around the world over the past year and asked more than 1000 developers working in various languages whether they agree that they read code for 50% or more of their time. The vast majority agrees (this is on top of research showing the same thing). Yet, when I ask them if they talk about it to find new ways of understanding systems, they acknowledge that they almost never do. This basically means that people are spending half of their budget on something they never talk about. And these are just the direct costs: many systems see some 80% of their overall effort spent on maintenance. Understanding systems is the single most expensive activity, yet the industry does not approach it explicitly, and typical IDEs focus to a large extent on the active part of creating code. For example, is it not ironic that in all IDEs reading happens in an editor? In Pharo, we look at this problem in a novel way, and we have the chance to affect business costs radically.

Moldability is a competitive advantage. This is why we would like to encourage people to play with these mechanisms and push the envelope of software engineering.

Posted by Tudor Girba at 4 June 2016, 6:16 pm with tags assessment, tools, gt, pharo, moose

Communicating Pharo 5.0

Pharo 5.0 is out. This was a great ride.

As has become customary, I worked on capturing one aspect of the work around Pharo 5.0 in a visualization. Choosing only one prominent aspect was not easy this time. For example, the VM just got 30% faster. Just like that. And there is a new moldable debugger. And the reflection mechanisms got a significant boost that makes code instrumentation ridiculously inexpensive. And there is more. All of these make for amazing stories, and we would have been happy with any one of them.

And that is exactly the most interesting story to tell. There used to be a time when one person could keep track of what was going on in the community. Not anymore. We have reached a tipping point at which too many things happen simultaneously. We have seen an amazing surge in the energy invested around Pharo, and it has kept up for some time now.

So, how should we capture this activity? Looking closer, we noticed that the attribution list included 100 distinct developers who contributed code that entered Pharo 5.0. Of course, not all contributed in equal measure, and some actually produced their code a while ago. Still, when so many brains come together to build a common system, amazing things can happen.

100 contributors is a lot, especially in an open-source, distributed environment, and when the effort is about modifying the core of an ecosystem. Enabling all these people to work in concert is a goal of Pharo. That is why Pharo is more than code: Pharo is a project of building an evolving community that reinvents software development.

To expose this effort I built the visualization that accompanied the official release announcement.


The picture depicts in red all authors of new code, using glyphs rendered automatically with a technique called VisualID. This is a new feature available in Roassal2, created mainly by Ignacio Fernandez and Alexandre Bergel. The rendering algorithm requires a way to retrieve the similarity of the rendered entities; in our case, the similarity is determined by the prefixes of the packages that the different authors have touched. The authors are connected to the classes they touched using red edges, and the classes are connected to their packages using gray edges. The overall graph is laid out using a force-based layout.

The main goal of the visualization was to capture the activity of the authors. We can spot different clusters of people, both due to spatial proximity and due to the similarity of glyphs. But the most amazing thing to observe is that only one author (at the top) sits apart from the rest, while all others are linked in one way or another. This is an amazing showcase of collaboration.

As is always the case, building this visualization was not a linear process. The overall effort spanned several days, and dozens of variations. Here are some.


Building each of these did not take long. What took longer was to trim the data and find a balance that captured both the activity and the amount of work that made Pharo 5.0 happen. Building pictures is not difficult. However, building meaningful pictures requires multiple iterations and this is where a live rich environment plays a critical role. I leave it up to you to decide if the end result was worth that effort.

The code that produces the final visualization can be found at, and it also includes the loading of the Roassal visualization engine, the analysis of methods and the extraction and trimming of authors. You can simply paste it in a Spotter opened in a plain Pharo 5.0 image to reproduce the Playground as seen below.


Posted by Tudor Girba at 16 May 2016, 9:00 am