Is software engineering research relevant?

Tuesday, June 12, 2012
I try to be a software engineering researcher. If I scrutinise myself I am an engineer, not a researcher, and a systems engineer on top of that (but with a clear focus on software-intensive systems).

While trying to finish my thesis I often question whether I investigate relevant problems from an industrial perspective, which I believe is the only valid perspective in software engineering; everything else is just playing around. Since software development is a completely artificial activity there are no natural phenomena to study; everything is the result of human activity (and ingenuity).

After attending my latest conference, which had a mix of industrial presenters (probably not peer reviewed) and academic presenters (peer reviewed, of which I was one), I discussed this with a colleague. The conclusion we reached was that the academic researchers had a lot of substance in their presentations but lacked relevance, while the industry presenters had relevant topics but lacked substance. The latter seemed in many cases more like sales pitches.
I have also heard in panel debates at other conferences that software research is diverging from industry needs. This is serious! But what is the problem? Are researchers really addressing the wrong things (I have already complained about formal methods research)? Or is the step from a published research paper to implementing it beyond a simplified academic setting too large for practitioners to take? Or are there so many conflicting research "results" that it is impossible for a practitioner to draw any conclusions beyond personal heuristics? In structural mechanics the laws of physics constrain what works and what does not; in software there is no such clear boundary.
Personally I believe it is the gap: it is just not possible for practitioners to grasp how the results are relevant in their context, since research tries to be as general as possible, i.e. to cover as many contexts as possible, and in the end is not relevant for any context. Research results only valid in a specific context (e.g. a deep case study) are considered second-rate by the research community and often do not pass through the eye of the needle.
There are many different views on relevance; for example, read this report on the future of software engineering research.

What can be copyrighted?

Tuesday, June 5, 2012
Since taking the course on Ethics and Intellectual Property at Chalmers I have become increasingly interested in how copyright and patent laws affect software development. The United States of America has different patent laws compared to western Europe, usually more permissive about what can be patented, but legal rulings in the former also concern software used in the latter.
Just a few days ago the verdict in the case of Oracle v. Google regarding copyright infringement on the Java API was determined, with the conclusion that it is not possible to copyright an API, but certainly the implementation of it. A summary of the verdict was written by the judge, William H. Alsup, and is most interesting to read. I think it shows a great understanding of the context of software development from a legal perspective.

Watch an interesting film about the American patent system and software patents: Patent Absurdity.

XP 2012

Friday, May 25, 2012
Here are some random notes taken during the 13th International Conference on Agile Software Development in Malmö, Sweden.

This conference has a heavier presence of professionals compared to many academic conferences. To simplify, I would say that presentations at academic conferences are heavy with content but on irrelevant topics, while presentations from professionals are on relevant topics but lacking in substance.

Best quote today, from Jutta Eckstein: “A sign of lack of trust is formalism”. In organisations where lack of trust is pervasive, people tend to rely on formalism, i.e. meeting protocols, process documentation, etc.

Clarke Ching's TDD Test Drive

Tankesaft, a blog about innovation, inspiration and development by Karl-Magnus Möller – only in Swedish.

So far there have been some highlights: Jan Boris-Möller talked about making food from an engineering perspective. He specialises in catering for events, which is quite different from working at a restaurant. It isn’t very agile in the sense that the prerequisites cannot change once the event takes place, but it is very agile in that he does not keep notes/plans/… from one event to the next, avoiding pushing any preconceived solutions on the client. He talked about the order in which a professional chef implements things (colour->…->amount->texture->taste, and not taste first, which is the amateur's starting point). He also talked about balancing quality attributes; the client can only have two of fast, cheap and good, but never all three (if you want it fast and cheap it can never taste good).
Later in the evening he mingled and talked about the necessity of having both experience and knowledge (they are not the same), and the necessity of educating the client/customer about the consequences of what they order in order to meet their expectations. He also likes to tell the guests, before they eat, about the thinking behind the design of the dishes, which he believes enhances the experience. A lot of developers could learn from this. And yeah, the food was excellent: traditional Swedish ingredients and flavours in new ways. Definitely the best conference dinner I have had.

One of the most interesting presentations was from Lars Arrhed, a lawyer, talking about contracts regulating agile development. It contained too much information to convey, but a few points were: Suppliers (developers) and clients talk different languages. Clients often have a vision of what the resulting product/system should be, which may be very vaguely described in the contract (“Fully integrated system” and “functional administrative system” were two examples from real contracts in court cases). Therefore a supplier should show progress instead of talking about techniques and methods.

I attended some workshops and tutorials on testing and test-driven design. It felt good that I could reason about how to refactor a program in a programming language I don't know (Python). On the other hand I was completely stumped when trying to do "wolf-pack programming" in Smalltalk. I could reason about the program in general terms, but it was very hard to write working lines of code.

In test-driven design I understand the value of producing high-quality code, but the very small loop increments seem almost to hinder elegant designs that are easy to understand.
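To make concrete what I mean by small loop increments, here is a toy sketch of my own (not from the tutorials; the example is hypothetical), using Python's unittest. Each test is written first, then just enough code is added to make it pass, then you refactor:

    # My own toy example of TDD increments: each test below was one
    # red-green-refactor loop; the implementation is only "just enough"
    # to make the current tests pass.
    import unittest

    def fizzbuzz(n):
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)

    class FizzBuzzTest(unittest.TestCase):
        def test_plain_number(self):
            self.assertEqual(fizzbuzz(1), "1")

        def test_multiple_of_three(self):
            self.assertEqual(fizzbuzz(3), "Fizz")

        def test_multiple_of_five(self):
            self.assertEqual(fizzbuzz(5), "Buzz")

        def test_multiple_of_both(self):
            self.assertEqual(fizzbuzz(15), "FizzBuzz")

    if __name__ == "__main__":
        unittest.main()

Each test is one tiny loop, and the design only emerges stepwise, which is exactly what makes me wonder whether a more elegant overall design would ever appear this way.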

Where does a good S/W design come from?

Thursday, May 24, 2012
One of the 12 principles behind the agile manifesto says
The best architectures, requirements, and designs emerge from self-organizing teams.
I have seen cases where I think some teams interpret this as meaning that good architecture emerges by itself as long as you follow good practices, e.g. XP, TDD, Scrum, etc. Caveat: I haven’t asked them…
This does not happen in reality. I have experienced colleagues who call this a “manifested architecture” (every system has an architecture, deliberate or not), in a quite derogatory manner. The common problem with a manifested architecture is that it may fulfil all functional requirements (user stories, features, use cases, …) but does not really support any key quality attributes.
Good design, at all levels, is based on a vision of what the system should be, in terms of functionality/properties and structure. In a self-organised team this is hopefully a shared vision. You can appoint a single person to capture and disseminate this vision (commonly known as an architect), or you can use other principles to define and share the vision. The latter is my positive interpretation of the agile principle above. But you cannot just hope it happens without conscious effort and do nothing about it...
So how do you learn good design? The best (only) way to do this is to look at other good designs, with as much detail as available. This provides the “type of context dependent knowledge that research on learning shows to be necessary to allow people to develop from rule-based beginners to virtuoso experts”. (Bent Flyvbjerg)

Is there any use of the V-model?

Wednesday, May 23, 2012
I’m at the 13th International Conference on Agile Software Development in Malmö, Sweden. As usual when I listen to presentations, these tend to trigger new lines of thought instead of pondering the details of what was said. I guess this is a flaw in how I internalise knowledge from others.
Not surprisingly, nobody here mentioned the V-model. Too bad, since I think it is one of the most important metaphors regarding software (and system) development there is. But it is also one of the metaphors that have been used in completely inappropriate ways.
My “condensed understanding” of the strengths of the V-model is this:
It defines a set of activities on the left side of the V, with associated artefacts, aimed at exploring the problem and detailing the solution down to the actual product (the code!). The important part is that these design activities have corresponding verification & validation activities on the right-hand side of the V, in a one-to-one relationship between each design and V&V activity. The number of levels varies, but about 3-5, including the code, is reasonable.
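As a toy illustration of that one-to-one pairing (my own sketch, with hypothetical level names), in Python:

    # A toy sketch (my own, hypothetical level names) of the one-to-one
    # pairing between design activities on the left side of the V and
    # V&V activities on the right side.
    V_MODEL = [
        ("system requirements", "system validation"),
        ("system architecture", "system integration test"),
        ("software design",     "software integration test"),
        ("code",                "unit test"),
    ]

    # Walk down the left leg of the V, then up the right leg.
    for design, _ in V_MODEL:
        print("design:", design)
    for _, vv in reversed(V_MODEL):
        print("verify:", vv)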
Where it goes wrong is when the V is laid out in time as a template for the progress of a project. This may make sense if you are designing manufacturing-heavy products, but it is useless for software; the time between understanding the problem (top-left part of the V) and verifying that your understanding is correct is just way too long to be competitive. This is definitely not news for software developers, and is a key point in agile software development (but probably phrased differently).
But it seems to me that in order to optimise efficiency and speed there is a tendency to downplay the activities above the point of the V (the working code). Big up-front design is seen as such a bad thing that it ends up in no design at all; there is a direct jump between formulating the problem (e.g. user stories) and starting to code.
I believe that in an agile project all activities in a V must be executed in each sprint in order to claim to be “truly agile”. Each sprint means a better understanding of the problem, a better design to solve it, and a better implementation in the code. Of course all of these are verified and validated. That does not mean that these activities are equally large, or that the size of the V is the same in all sprints, but if you leave out conscious architecture work in the sprints you don’t learn to do better in the next ones.
You should design the activities in the V to provide the maximum added value with respect to the effort spent (which obviously depends on thousands of context-specific factors), but saying they are irrelevant is only reasonable for the simplest of systems, where all you need is a definition of the problem and the code.
Personally I would rather define the results of the activities, i.e. the artefacts, and let the teams decide themselves how to populate them. I know blog readers may cry about excessive documentation, but what I am talking about is the necessity to externalise information that needs to be shared between developers and teams, and to preserve this information over time, something which code never can do.
Long rant, and I’m not even close to exhausting my thoughts on the subject….

Working hours

Sunday, April 1, 2012
I am glad I don’t work in the US. This article confirms my personal experience that it is not possible to work long hours and still sustain creativity.

Objectives or activities?

Tuesday, March 27, 2012

Volvo Cars sets objectives for the company each year. Same as I guess almost every other company. Every department is then required to break down these objectives.
One thing I notice is that there is often confusion between what is an objective and what is an activity to achieve that objective.

Concrete vs. abstract and detail vs. overview...

There often seems to be confusion about two concepts relevant to representing design information, including models.
First there is the scale of abstract versus concrete. Second there is the concept of overview versus details.

If one looks at how a design progresses over time, regardless of whether one follows a waterfall or agile process, one moves both from abstract to concrete and from overview to details, but they are not the same thing. And I think that agile and waterfall put different emphasis on how to make these transitions, and where to start and end (a topic for another blog post!).


Going from overview to details is simple: the more you know about the problem and the solution, the more information is added. Think of going from a sketch where you only elaborated the key classes needed to understand the system, to a class diagram with all classes that are implemented. Using a map as an analogy: you increase the scale to see more details, the coastline gets more detail, or you see all roads and not only the big ones.

Going from abstract to concrete is also about adding details, and I think that is where the confusion begins. But the details that are added are not of the same type as in the more abstract representation. Here it could be adding all attributes to classes or ports to components. You don't get this information by adding more classes or structural components. In the map analogy it would mean adding individual houses to a map which previously only had roads, lakes and forests. It is a new type of information which you cannot deduce (transform) from the more abstract representation.
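A hypothetical sketch in Python (the class names are my own invention) of the difference between the two kinds of refinement:

    # My own hypothetical sketch of the two kinds of refinement.

    # Overview: only the key classes, no internals.
    class Engine: ...
    class Gearbox: ...

    # Overview -> details: more classes of the *same* kind are added.
    class Turbocharger: ...
    class Clutch: ...

    # Abstract -> concrete: a *new type* of information (attributes) is
    # added to an existing class; it cannot be deduced from the overview.
    class Engine:  # refined version, replaces the sketch above
        def __init__(self, displacement_cc: int, max_torque_nm: int):
            self.displacement_cc = displacement_cc
            self.max_torque_nm = max_torque_nm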

Testing

Friday, March 23, 2012
I have to admit that I know too little about software testing in practice to be a well-rounded software developer. That does not mean that I am averse to testing; on the contrary, I think that many architectures focus on properties discernible by the end user rather than by the developers, among whom testers are a key role.
So I am curious if there are any patterns for how to design a testable architecture.
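One candidate that comes to mind (a minimal sketch of my own, not an answer from the literature) is dependency injection: if a component receives its collaborators from outside instead of creating them itself, a test can substitute fakes for hardware or slow services:

    # A minimal sketch of dependency injection for testability.
    class Thermometer:
        def read_celsius(self) -> float:
            raise NotImplementedError  # real sensor access goes here

    class ClimateControl:
        def __init__(self, thermometer: Thermometer):
            self._thermometer = thermometer  # injected, not created internally

        def heating_needed(self) -> bool:
            return self._thermometer.read_celsius() < 18.0

    # In a test, a fake stands in for the real sensor:
    class FakeThermometer(Thermometer):
        def __init__(self, value: float):
            self._value = value

        def read_celsius(self) -> float:
            return self._value

    assert ClimateControl(FakeThermometer(15.0)).heating_needed()
    assert not ClimateControl(FakeThermometer(21.0)).heating_needed()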

I enjoy the blog thoughts from the test eye, which also has a blog roll if you want to explore further test blogs.
I also stumbled across a very good PowerPoint summary about testing in industrial settings from Lappeenranta University of Technology.

Quality starts with me

Monday, March 12, 2012
Outside the main dining hall at Volvo Cars R&D headquarters there is some information about ongoing quality work under the heading "Quality starts with me". A quick Google search shows this sentence is not something unique to Volvo Cars, but it got me thinking.
To start with: for software (or any product, I guess) quality can be seen from two viewpoints:
  1. Free from defects
  2. Appropriate according to the needs of the user
In many cases one obviously wants to achieve both these objectives, but the actions to reach them vary. The bad case is when one uses an action/method/principle that supports the first objective while believing it supports the second.
Let's say that I review a software specification to check whether all requirements are SMART requirements (insert alternative acronym if desired). Does this support quality of type 1 or 2 above?

When it comes to reviewing or auditing I think it can be applied towards both objectives above, but it requires a lot more of the reviewer if it is the second objective that is the goal of the audit.

Writing user stories

Thursday, January 26, 2012
The blog is updated very seldom nowadays. It is not because I have lost interest; it is just that I am writing scientific articles at an unprecedented speed in parallel with doing improvement work at Volvo.

I just wanted to put in a reminder of how you define a user story (e.g. for use in a Scrum backlog), shamelessly stolen from a presentation by Dean Leffingwell:

As a <role>
I can <activity>
So that <business value>

"As a Gmail user, I can select and highlight a conversation for further action"