Blog

Tracking a FastTable bug with GTInspector

In a previous article we looked at how moldable tools can change the development experience when tracking down a bug. The bug in question was a duplicated behaviour in GTDebugger: triggering an action from the context menu of the stack triggered that action twice.

A few weeks after that bug was fixed, another interesting bug was reported for GTInspector, affecting the navigation feature of the inspector: when an object is selected in a view, the pane to the right is created four times, as if the user selected that object four times.

In this article we detail the workflow that we followed to understand and fix this bug. The workflow consists in treating software problems as data problems, and in formulating and solving one hypothesis at a time. Hence, we do not try to gain knowledge just by reading code. Instead, we use tools to query and visualize our data, the code. When an appropriate tool is not available, we build it. We then use the gained insight to formulate the next hypothesis, and iterate until we understand the cause of the bug and are able to fix it. As much as possible, we make all hypotheses explicit.

Reproducing the bug

The bug in question only appears when the user selects an object within certain inspector views. For example, we reproduced the bug by creating and inspecting a SharedQueue object and selecting any value in the 'Items' view. To get a visual indication of the bug, as the SharedQueue that we created contains integers, we added an #inform: message to the method Integer>>gtInspectorIntegerIn:, which creates the 'Integer' view. This way we can see that when we select an object in the 'Items' view, the method is called four times:

Inspector_SharedQueue_WrongBehaviour.png
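
For completeness, the set-up can be recreated with a small snippet along these lines (the queue contents are our own choice):

| queue |
queue := SharedQueue new.
1 to: 5 do: [ :each | queue nextPut: each ].
queue inspect.
"With the #inform: probe in place, selecting any integer in the
 'Items' view shows the notification four times."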

However, the bug did not appear when selecting an object in the 'Raw' view of a SharedQueue object:

Inspector_SharedQueue_RawPresentation.png

By looking at the code of SharedQueue>>#gtInspectorItemsIn:, we observe that it uses a Fast Table presentation. Given that the bug that we previously investigated was also related to Fast Table, we first want to check whether this bug is related to Fast Table as well. One way to verify this hypothesis is to create an identical presentation for SharedQueue objects that does not use Fast Table to display the list, but the previous list renderer based on Pluggable Tree Morph. We can view the code of SharedQueue>>#gtInspectorItemsIn: and create the new view directly in the inspector:

SharedQueue_ListView_Code.png
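
For reference, the view we created looks roughly like the sketch below. The selector, the priority and the way we reach the queue's items (through #instVarNamed:) are our own choices and may differ from the code shown in the screenshot:

SharedQueue>>gtInspectorItemsSimpleIn: composite
  <gtInspectorPresentationOrder: 11>
  "Same content as the 'Items' view, but rendered with the
   Pluggable Tree Morph based list instead of Fast Table."
  ^ composite list
    title: 'Items (simple)';
    display: [ self instVarNamed: 'items' ]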

When redoing the previous scenario using the newly created view, we can indeed confirm that the bug is not present; Integer>>gtInspectorIntegerIn: is called only once. Hence, this bug is again related to Fast Table.

Inspector_SharedQueue_PluggableMorphCorrect.png

Comparing a buggy and a correct scenario

Given that we have a correct and a buggy scenario, before going any further, we would like to quickly check whether there are any differences between an execution using Fast Table and an execution using Pluggable Tree Morph. We hypothesize that the problem could be caused by the Glamour renderer for Fast Table not following the same logic as the one for Pluggable Tree Morph. Glamour is the browsing engine on top of which GTInspector is implemented. Glamour provides two distinct renderers for displaying lists, one for Fast Table and one for Pluggable Tree Morph. If we do not see a divergence in the call stacks of the two renderers, we can focus our investigation on the Fast Table renderer.

We could approach this task by opening two debuggers and then manually scrolling through the stacks to find a difference. However, a less manual and error-prone approach consists in creating a custom view that shows the two call stacks using a tree, highlighting the points where the stacks diverge.

To build this tool we first need to log the two stack traces. For that we can rely on the Beacon logging engine and replace the #inform: call from the method Integer>>gtInspectorIntegerIn: with a logging statement that records the stack of method calls (MethodStackSignal emit).

Integer>>gtInspectorIntegerIn: composite
  <gtInspectorPresentationOrder: 30>

  MethodStackSignal emit.

  ^ composite table
    title: 'Integer';
    display: [ | associations | "..." ]
    "..."

We then start the recording using MemoryLogger start, and trigger the two scenarios by selecting an item in the views 'Items (simple)' and 'Items'. Next, we stop the recorder and inspect the collected stacks using the GTInspector:

MemoryLogger instance recordings collect: #stack 
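
The complete recording session looks roughly like this (a sketch; we assume the default MemoryLogger singleton and the instance-side #start/#stop protocol of Beacon loggers):

MemoryLogger instance start.
"select an item in the 'Items (simple)' view, then one in the 'Items' view"
MemoryLogger instance stop.
(MemoryLogger instance recordings collect: #stack) inspect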

We observe that we got five stacks. Four correspond to the buggy scenario, and one to the correct scenario. We also notice that there is a difference in the number of stack frames between the two scenarios.

Beacon_Logging_CorrectWrongStacks.png

Nevertheless, if we briefly look at one correct and one buggy stack trace, we notice that there are a lot of frames that are not related to Glamour, caused by how the two graphical widgets (FTTableMorph used by the Fast Table renderer and PaginatedMorphTreeMorph used by the Pluggable Tree Morph renderer) handle the selection of an element. To verify our hypothesis, we are only interested in those stack frames related to Glamour.

Stack_Inspector_Comparison_CorrectWrong.png

To build the custom view, we switch to the 'Raw' view of the inspector and use a Roassal script. First, we select a correct and a buggy stack trace and remove all stack frames that are not related to Glamour. We rely on the fact that all Glamour classes have the prefix 'GLM'. Then, we add the filtered stack frames to one set, draw edges between consecutive entries, and arrange the graph as a tree. We also attach an index to each method to ensure that multiple occurrences of the same method in the stack have different entries in our view.

| view stacks |
view := RTMondrian new.
stacks := ({self first. self second}) collect: [ :aStack |
  aStack select: [ :frame | frame methodClass name beginsWith: 'GLM' ] ].

stacks := stacks collect: [ :aStack |
  aStack withIndexCollect: [ :aFrame :index |
    index -> aFrame method ] ].

view shape label text: [ :each | each value gtDisplayString truncate: 50 ].
view nodes: (stacks flatCollectAsSet: #yourself).
stacks do: [ :aStack |
  aStack overlappingPairsDo: [ :a :b |
    view edges
      connectFrom: [ :x | b ]
      to: [ :x | a ] ] ].
view layout tree.
view.

Executing the script in place shows us the view in a new pane to the right directly in the inspector:

Stack_Roassal_Comparison_CorrectWrong.png

We immediately notice that most of the two stacks are identical, with some small differences at the top. We zoom in and slightly rearrange the top contexts to better understand what causes this difference:

Stack_Roassal_Comparison_CorrectWrong_Zoom.png

In this case, we discover that the difference appears because the Glamour renderer for Fast Table and the Glamour renderer for Pluggable Tree Morph have a different way of propagating the selection. However, in the end, both renderers call the correct method GLMPresentation>>#selection:. Hence, the problem is most likely related to Fast Table. We use this newly gained insight to steer the focus of our investigation.

Comparing the four incorrect stack traces

Our next hypothesis is that in the four buggy stack traces the execution branches in a certain method because of a loop. Verifying this hypothesis requires a tree visualization that shows execution branches caused by the same method being called on different objects. In the previous view, we only took methods into account. To build this new view, we also need to log the actual objects from the execution stack. We can do this by replacing the MethodStackSignal signal with a ContextStackSignal in the method Integer>>gtInspectorIntegerIn::

Integer>>gtInspectorIntegerIn: composite
  <gtInspectorPresentationOrder: 30>

  ContextStackSignal emit.

  ^ composite table
    title: 'Integer';
    display: [ | associations | "..." ]
    "..."

We then clear and start the MemoryLogger, and select an item in the buggy 'Items' view. Inspecting the log shows the four stack traces:

Beacon_Logging_FourWrongStacks.png

We next create the tree visualization using Roassal and execute it in place. We follow the same steps as for the previous visualization, this time without doing any filtering of the stack frames:

Stack_Roassal_Comparison_FourWrongStacks.png

In the resulting view we can see that we have four stack traces because the execution branches in two places: an initial branch point that occurs only once, and a second one that occurs twice.

Stack_Roassal_Comparison_FourWrongStacks_Full.png

Next, we can zoom in and investigate each stack frame corresponding to a branch point in detail:

StackExploration_BranchPoint_1.png StackExploration_BranchPoint_2.png

In each case we notice that the inspected announcer has some duplicated subscriptions. Having some knowledge of the Glamour renderer, we suspect that this should not be the case; each subscription should be registered only once. We use this insight to continue the investigation.

Reducing the scope

Now that we detected a possible cause for the bug, before going any further, we can devise a simpler example exhibiting the same behaviour:

GLMCompositePresentation new
  with: [ :c |
    c fastList
      send: [ :anInteger | self inform: '#send:'. anInteger ] ];
  openOn: (1 to: 42)

After executing the code above and selecting an element in the list, we get the same four notifications as before.

Simple_Bug_Example.png

Finding the cause of the duplication

To verify whether those announcements need to be registered twice, we need to find the places where the registration happens. Searching for the methods that reference the class FTSelectionChanged (one of the duplicated announcements), we discover that the subscription is registered in the method GLMMorphicFTRenderer>>initializeAnnouncementForDataSource. The fact that the subscription is registered twice indicates that the method is called twice. We proceed as before, and instead of putting a breakpoint in the method, we add a Beacon logging statement and only record events while we open the buggy browser.
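
As a side note, the reference search itself can be scripted instead of being done through the UI (a sketch relying on Pharo's reflective API):

(SystemNavigation default allReferencesTo: FTSelectionChanged binding) inspect

The instrumented method and the recording session look like this: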

GLMMorphicFTRenderer>>initializeAnnouncementForDataSource
  ContextStackSignal emit.
  "..."	

RecordingBeacon new
  runDuring: [
    GLMCompositePresentation new 
      with: [ :c | 
        c fastList 
          send: [ :anInteger | self inform: '#send:'. anInteger ] ];
      openOn: (1 to: 42) ].

We see that indeed there are two calls to #initializeAnnouncementForDataSource. As the log event recorded the entire stack, we can select and explore the two contexts making the call to the method #initializeAnnouncementForDataSource that registers the FTSelectionChanged announcement:

Exploring_ProblematicCall_1.png Exploring_ProblematicCall_2.png

We observe that the first call is made from the method GLMMorphicFastListRenderer>>render:, and the second from the method GLMMorphicFastListRenderer>>dataSourceUpdated:. To understand the relation between these two methods within the execution, we again construct a tree with the two call stacks. We already have a visualization for this task, as we built it in a previous step.

Exploring_Two_Stacks_DataSource.png

We immediately see that the two stacks diverge in the method GLMMorphicFastListRenderer>>render:. This is actually the method that makes the first call to #initializeAnnouncementForDataSource. We can now navigate through the other call stack to understand why the second call was made. We can basically use the GTInspector as a post-mortem debugger!

Exploring_ProblematicCall_Roassal_1.png Exploring_ProblematicCall_Roassal_2.png

Before moving forward, we need to clarify a relevant aspect of the design of the Glamour renderer for Fast Table. GLMMorphicFastListRenderer is the Fast Table renderer. The renderer creates a graphical widget (FTTableMorph) and a data source, and links them together. The widget displays the elements visually; the data source provides the elements to be displayed.
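
Outside of Glamour, the same pairing can be sketched with FastTable's basic data source (FTSimpleDataSource); this is only an illustration, as the Glamour renderer sets up its own data source:

| dataSource table |
dataSource := FTSimpleDataSource elements: (1 to: 42).
table := FTTableMorph new.
table dataSource: dataSource. "the widget asks the data source for the rows it displays"
table openInWindow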

We see in the previous view that the second call to #initializeAnnouncementForDataSource happens when the graphical widget FTTableMorph is initialized. The reason is that whenever the data source is changed within an FTTableMorph widget, the method #initializeAnnouncementForDataSource is called to link the graphical widget with the new data source. We can see this in the method GLMMorphicFastListRenderer>>#dataSourceUpdated:, which is called whenever the data source is changed in the widget, as a result of the GLMDataSourceUpdated announcement (GLMMorphicFastListRenderer>>readyToBeDisplayed):

dataSourceUpdated: announcement
  tableModel ifNotNil: [ self unsubscribeDataSource: tableModel ].
  tableModel := announcement newDataSource.
  self initializeAnnouncementForDataSource

If we look for methods referencing GLMDataSourceUpdated, we discover that the link between the announcement GLMDataSourceUpdated and the method #dataSourceUpdated: is created when the renderer (GLMMorphicFastListRenderer) is initialized:

initializeAnnoucementForPresentation: aPresentation
  aPresentation when: GLMDataSourceUpdated send: #dataSourceUpdated: to: self.
  aPresentation when: GLMContextChanged send: #actOnContextChanged: to: self.
  aPresentation when: GLMPresentationUpdated send: #actOnUpdatedPresentation: to: self 

At this point we have gained a good understanding of the factors causing the bug. To summarize: when a Fast Table view is created, the method GLMMorphicFastListRenderer>>render: instantiates a new data source and calls #initializeTableMorph, which creates a graphical widget and sets its data source. After the graphical widget is initialized, GLMMorphicFastListRenderer>>render: calls #initializeAnnouncementForDataSource to properly set up the announcements between the graphical widget and the data source. However, this method was already executed when the data source was set in the FTTableMorph widget, through #dataSourceUpdated:. Hence, we can fix the bug by removing the explicit call to #initializeAnnouncementForDataSource from GLMMorphicFastListRenderer>>render:.

Documenting our finding

Now that we found and fixed the bug we can document our finding. To ensure that this bug will not reappear in the future, the best solution is to rely on a test. We create a test that verifies that there are no duplicated subscriptions in an announcer. We then apply this test to the two announcers from Glamour that had duplicated subscriptions. This way, if at any point in the future a duplicated subscription is introduced, we will be notified and can check whether the duplication makes sense or whether it is a bug.

testNoDuplicateRegistrationsInFastTableRenderer
  | window table |
  window := GLMCompositePresentation new
    with: [ :c |
      c fastList ];
    openOn: (1 to: 42).

  table := self find: FTTableMorph in: window.
  self assertNoDuplicatedAnnoucementsIn: table announcer.
  self assertNoDuplicatedAnnoucementsIn: table dataSource announcer.
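
The helper itself can be sketched as follows; how we enumerate the registry (via #subscriptions) and what we treat as a duplicate (same announcement class, subscriber and action) are our own assumptions:

assertNoDuplicatedAnnoucementsIn: anAnnouncer
  | signatures |
  "Collect one signature per subscription and fail if any signature repeats."
  signatures := anAnnouncer subscriptions subscriptions collect: [ :each |
    { each announcementClass. each subscriber. each action } ].
  self assert: signatures asSet size equals: signatures size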

Building a toolset

We solved this bug by building custom tools. In this particular case, the tool consisted of a view for displaying stack traces as a tree. When we solved the previous bug related to Fast Table, we also built such a view, but at that time we did not know whether we would ever reuse that view (tool), so we threw it away. Now we had to recreate the view and make a few adaptations: filter stack frames using a condition and create edges based only on method calls. Hence, we can now spend 15 more minutes and transform this view into a tool that we can reuse in the future, whenever we need to compare stack traces.

We transform this view into a tool by putting it into a dedicated class and adding an API for configuring it:

BeaconRTStackViews>>#executionTreeForContextSignals: aCollectionOfSignals
  | stacks |
  stacks := ((aCollectionOfSignals 
    select: [ :each | each isKindOf: ContextStackSignal ])
    collect: #stack).
  ^ self executionTreeForContexts: stacks 
      select: [ :each | true ] 
      transform: [ :each | each ]

BeaconRTStackViews>>#executionTreeForContexts: aCollectionOfStacks select: aFilterBlock transform: aTransformBlock
  | view stacks |

  stacks := aCollectionOfStacks collect: [ :aStack | aStack select: aFilterBlock ].
  stacks := stacks collect: [ :aStack |
    aStack withIndexCollect: [ :aFrame :index | aTransformBlock cull: aFrame cull: index ] ].

  view := RTMondrian new.
  view shape label text: [ :each | each value gtDisplayString truncate: 50 ].
  view nodes: (stacks flatCollectAsSet: #yourself).
  stacks do: [ :aStack |
    aStack overlappingPairsDo: [ :a :b |
      view edges
        connectFrom: [ :x | b ]
        to: [ :x | a ] ] ].
  view layout tree.
  ^ view
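
Opening the tool on a set of recorded signals then becomes a one-liner. Inspecting the resulting RTMondrian (for example, by evaluating the expression in a 'Raw' pane) renders the tree; we assume the recordings from the earlier session are still in the MemoryLogger:

BeaconRTStackViews new
  executionTreeForContextSignals: MemoryLogger instance recordings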

We are still not done. We invested effort into building this tool; however, if other developers are not aware that it exists, they will not use it. Hence, we should invest a few more minutes to address this.

This view is applicable when we inspect a collection of Beacon events of type ContextStackSignal. We extend the GTInspector with a custom action for collections that is only applicable if the collection contains at least one Beacon event of that type. We further implement the action so that the view is opened in a new pane to the right, thus preserving the workflow within the same inspector window.

SequenceableCollection>>#gtInspectorActionExecutionTree
  <gtInspectorAction>
  ^ GLMGenericAction new
    title: 'View contexts execution tree';
    category: 'Beacon';
    action: [ :aPresentation |
      aPresentation selection:
        (BeaconRTStackViews new executionTreeForContextSignals: self) ];
    condition: [ self anySatisfy: [ :each | 
      each isKindOf: ContextStackSignal ] ]

Now, whenever somebody inspects a collection containing stack traces recorded with Beacon, she will be able to discover and open this view directly from the inspector:

Beacon_Custom_Extensions.png

Remarks

We started this session with the goal of understanding the cause of a bug and ended it by adding a custom tool to our environment. We achieved this because we were able to rapidly create the necessary tool and easily incorporate it into the IDE. We could do this because Pharo is a moldable IDE, where creating a new custom extension is just as easy as creating a test.

Posted by Andrei Chis at 25 April 2017, 7:30 am with tags assessment, tools, gt, pharo, moose, story link

Introducing PetitParser2

I am happy to announce the availability of PetitParser2, a redesign of the original PetitParser developed by Jan Kurš. The new version is already present in the latest Moose 6.1 development image.

The main goals of the redesign are performance and flexibility. The speedup ranges between 2x and 5x. This is achieved through the idea of a kind of parser compiler that performs a static analysis of the parser to identify patterns that can be optimized, and then changes the actual parser execution.

The compiler was originally implemented within PetitParser itself. However, the problem was that the resulting parser was not easy to understand. This was the main reason that prompted the redesign of PetitParser. In PetitParser2 the concept of a parser is split in two:

  • Nodes: These essentially provide only the AST of the parser, without actual logic inside. They mirror the previous PPParser classes.
  • Visitors: These provide the actual parsing behavior. Optimizations are implemented in this hierarchy.

For a typical external user, PetitParser2 has the same structure as PetitParser. The main difference is that asParser is changed to asPParser. For example, the following original snippet:

#letter asParser, #any asParser star

Becomes:

#letter asPParser, #any asPParser star

In this way, the two versions of PetitParser can co-exist in the same image to ease the transition. This transition is needed for parsers that rely on deeper constructs from the original PetitParser, such as concrete names of classes.
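
As a slightly larger sketch, and assuming that the combinator protocol (#, , #/ , #star, #parse:) is unchanged from the original PetitParser, a small identifier parser reads:

| identifier |
identifier := #letter asPParser , (#letter asPParser / #digit asPParser) star.
(identifier parse: 'foo42') inspect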

Another new feature is the availability of streams. The original PetitParser required the whole string to be present in memory to allow for indefinite backtracking. PetitParser2 offers stream utilities that apply parsers on a stream by keeping a buffer in memory for backtracking purposes.

Of course, the new version comes with all the development tools that we got used to with PetitParser. The nice thing is that the tools work even when we are using the optimized version of a parser.

For example, the picture below shows the PetitParser browser opened on the Smalltalk parser.

Pp2-default.png

The one addition is a larger run button, which performs the optimized parsing. Triggering it for our case renders the result shown below. The only difference is that the version above involved 907 individual parsers, while the optimized version only used 258 of them.

Pp2-optimized.png

To get a better feeling of how the magic happens, let’s look at a small sequence parser.

#any asPParser star

Inspecting the parser allows us to play with the sampler.

Pp2-simple-default.png

The unoptimized version of the parser triggers independent parsers for each of the letters, and then concatenates the results in another collection. However, triggering the optimized version only involves one parser that knows how to consume the entire sequence in one collection.

Pp2-optimized-default.png

All in all, PetitParser2 is an exciting development. Take a look and let us know what you think.

Posted by Tudor Girba at 8 November 2016, 5:07 pm with tags moose, tooling link

Moose 6.0

We are happy to announce version 6.0 of the Moose Suite, the platform for software and data analysis built in Pharo:
http://moosetechnology.org/#install

Moose-6-0.png

Description

The key highlights are:

  • It is based on Pharo 5.0 including the latest version of the Glamorous Toolkit.
  • It includes the SmaCC parsing framework together with parsers and abstract syntax trees for Java, JavaScript and Swift.
  • Roassal2 comes with several enhancements.
  • Famix features a new and generic query API engine.
  • Moose Finder and GTInspector come with more custom presentations and visualizations.
  • SmaCC comes with a dedicated debugger.
  • The debuggers for Glamour, PetitParser and Announcements received a new update.
  • DeepTraverser is an order of magnitude faster.

Extra highlights:

Installation

The Moose Suite 6.0 comes for each platform as a separate bundle:

The Moose Suite 6.0 can also be loaded in a Pharo 5.0 image either from the Configuration Browser, or by executing the following script:

Metacello new
 smalltalkhubUser: 'Moose' project: 'Moose';
 configuration: 'Moose';
 version: #stable;
 load.

Enjoy,
The Moose team

Posted by Tudor Girba at 15 August 2016, 9:10 am with tags moose, tooling link

jdt2famix 1.0.1

Over the last couple of weeks I worked on a new importer for Java code that can be used for Moose, and I am happy to announce that I released version 1.0.1.

jdt2famix is an open-source project, and the repository can be found on GitHub. It is implemented in Java as a standalone project, and it is based on JDT Core (developed as part of the Eclipse project and available under the EPL license) and Fame for Java (originally developed by Adrian Kuhn and available under the LGPL license).

The current implementation has an extensive coverage of entities that it can import, namely:

  • classes, parameterizable classes, parameterized types, enums, anonymous classes, inner classes
  • annotation types, annotation type attributes
  • annotation instances, annotation instance attributes
  • methods, constructors, class initializers
  • parameters, local variables
  • attributes, enum values
  • accesses
  • invocations, constructor invocations, class instantiations
  • thrown exceptions, caught exceptions, declared exceptions
  • comments associated with named entities
  • source anchors for named entities

The simplest way to use it is through the command line:

  1. Download and unzip the self contained binary 1.0.1 release.
  2. Go to the root folder where you have the sources and dependency libraries of a Java system (e.g., mysystem)
  3. Execute /path/to/jdt2famix.sh

The result will be an MSE file having the same name as the name of the folder from which you executed the command (e.g., mysystem.mse).

Let’s look at a concrete example. For this purpose, we pick the Maven project. We first download the 3.3.9 sources. These are only the plain sources, but to be able to import the complete model, we also need to have the dependencies available in the same folder.

As Maven is a Maven project (no pun intended), we can use the Maven dependency plugin to download the dependencies locally:

mvn dependency:copy-dependencies -DoutputDirectory=dependencies -DoverWriteSnapshots=true -DoverWriteReleases=false

(If you do not have a Maven project, you have to adapt this step and obtain the dependencies depending on the specifics of your project)

Then we simply execute:

/path/to/jdt2famix.sh

And we get a maven-3.3.9.mse file that we can load into Moose.

Maven-3.3.9.png
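
Loading the resulting file can also be scripted from within Moose (a sketch; #importFromMSEStream: and #install reflect the MooseModel API we rely on):

| model |
model := MooseModel new.
model name: 'maven-3.3.9'.
'maven-3.3.9.mse' asFileReference readStreamDo: [ :stream |
  model importFromMSEStream: stream ].
model install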

Posted by Tudor Girba at 3 August 2016, 4:08 pm with tags tooling, moose link

The impact of moldability

We say that Glamorous Toolkit (GT) is a moldable development environment. Recently there were a couple of emails on the Pharo mailing list that questioned the usefulness of moldability. More specifically, the questions were raised in relation to the newly introduced GTDebugger. The concept of a moldable debugger is not present in any other IDE, and for this reason we cannot compare it with the standard IDE behavior.

Nevertheless, the question is certainly legitimate. Given that we cannot compare with other solutions, we can observe the impact of other moldable interfaces. If we take a step back, the same concept was applied before to GTInspector and GTSpotter, which we introduced in Pharo 4.0. Let’s see what the impact is.

In Pharo 3, the EyeInspector offered a basic extension possibility, and the Pharo image shipped with 8 such extensions. Together with the introduction of the GTInspector, we shipped 138 extensions. One year later we have 165 in the core image. In the meantime there are many more extensions in external packages. For example, the Moose 6.0 image which is based on Pharo 5.0 ships with about 230 extensions.

Also, in Pharo 4, we shipped 92 extensions to GTSpotter. In Pharo 5, there are 122 extensions. Similarly, there are several more extensions in external packages. For example, in Moose 6.0 we have 135 and if we include the generic way of searching through models we have several hundred more (more about this in a future post).

The explosion of extensions shows that there is a real need for such extensions. This is a validation of a hypothesis put forward by the humane assessment approach a long time ago, which started from the observation that context is key in software development and that, as such, tools should take this context into account. This idea was first explored in the context of Moose and it stands at the very core of GT. You can see it embodied now in 3 distinct tools, and we will see more as we proceed with the project.

Now, why the difference between Pharo 3 and Pharo 4? First, the cost associated with an inspector extension went down from ~19 lines of code in a separate class to ~9 lines of code in one single method. Second, the value of the extension increased because of the interaction workflow that came with the GTInspector design.
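
For illustration, a typical inspector extension today is a single method of roughly this shape (a made-up example, not one of the extensions shipped with Pharo):

OrderedCollection>>gtInspectorReversedIn: composite
  <gtInspectorPresentationOrder: 50>
  ^ composite fastList
    title: 'Reversed';
    display: [ self reversed ]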

Interestingly, after we introduced the GTInspector, almost all discussions were geared towards the Raw presentation because it introduced a new kind of interaction. Almost no email was about the extension mechanism. The same pattern happened with GTSpotter where messages focused almost exclusively on searching classes and methods. And rightly so, as the default behavior is what people see at first. We have exactly the same type of issues with the GTDebugger.

Now, in the default Pharo image, there are 3 different debuggers: the default one, the bytecode one and the SUnit one. In the Moose image there are 6 debuggers and there are a couple in outside packages. For example, here are two screenshots of two such debuggers: one of a PetitParser debugger, and one of a debugger debugging the update of the debugger.

Custom-debuggers.png

The cost of these debuggers is measured in hundreds of lines of code. We will certainly not see as many debugger extensions as in the case of the inspector because the granularity is larger and because the cost is larger, but we will certainly see more custom debuggers.

We still need to learn more about how to reach the balance between extensibility and usability. We are at the beginning, but there is clear value in extensibility and we should not discard it as unimportant. The key here is the ability to create extensions with low effort, and this is unprecedented. Let’s put this in perspective: Eclipse started more than a decade ago with a plugin architecture. Right now, the Eclipse marketplace (marketplace.eclipse.org) has 1722 tools. Granted, these are more extensions than we have and they are larger, but at the same time the community that builds them is several orders of magnitude larger than the Pharo one. Yet, we can already compete with this because of the radically low cost structures.

Until now we looked at plain quantitative data. Nevertheless, do these extensions actually affect productivity qualitatively? In my experience they can have a significant impact, and this site and blog feature multiple examples of how this is so. Furthermore, I recently realized that my workflow has changed quite significantly: the amount of time I spend in the inspector is around 60%.

But, let me give you another perspective. I went around the world over the past year and I asked directly more than 1000 developers working in various languages if they agree that they read code for 50% or more of their time. The vast majority agrees (this is on top of research showing the same thing). Yet, when I ask them if they talk about it to find new ways of understanding systems, they acknowledge that they almost never do. This basically means that people are spending half of their budget on something they never talk about. These are just the direct costs, and many systems see some 80% of their overall effort spent in maintenance. Understanding systems is the single most expensive activity, but the industry does not approach this explicitly. And typical IDEs focus to a large extent on the active part of creating code. For example, is it not ironic how in all IDEs the reading happens in an editor? In Pharo, we look at this problem in a novel way and we have the chance of affecting business costs radically.

Moldability is a competitive advantage. This is why we would like to encourage people to play with these mechanisms and push the envelope of software engineering.

Posted by Tudor Girba at 4 June 2016, 6:16 pm with tags assessment, tools, gt, pharo, moose link