Saturday, June 25, 2011

Software in pictures

Have you ever tried to explain what "bad code" is to someone who does not come from a software background? How do you do that?
In this post, I would like to visualize some bad design and coding using real-life parallels.

But first, a disclaimer: as bad as many of the following examples look, sometimes compromises need to be made - to meet a short-term deadline, to save cost in the short term, due to a temporary lack of expertise, to make some feature work for a customer on the spot, etc.
That is acceptable, as long as you acknowledge in advance that what you did is not the right way to do it, and you are willing to come back and apply the proper solution later...

First, let's take a look at the equivalent of some messy code.
Source: http://bit.ly/jmL7RC

Source: http://bit.ly/kiWmRV

When one of these power lines fails, how long do you think it would take to fix it? Or even just to find it?
The same goes for bugs in code that looks like this...


Now, how about using a workaround rather than upgrading the infrastructure to handle it properly?

Source: http://bit.ly/myfL5v

Sometimes, just because you can make two pieces of software work together doesn't mean you should:

Source: http://bit.ly/lpx3vE

Here is what it looks like when you apply a patch rather than rewriting a broken piece of code:


And finally, this is what it looks like when you leave some useless code lying around:

Source: http://bit.ly/jfFKqE


For contrast, here is a probable visualization of a good piece of software - fast, sleek, great design, and it gets you where you want to go:

Source: http://bit.ly/jpZAGU
Do you have any more similar design or code visualizations?

Friday, June 10, 2011

Your code - keep it covered!

Code coverage by unit tests (or integration tests - we will not discuss the differences here) is the easiest kind to add - in many cases much simpler than adding complex end-to-end system tests - and it provides good protection against many silly, easy-to-detect bugs. Here I am only referring to tests that take just a few minutes to run and can be run as part of the basic build process. No one is saying that this makes the code foolproof, but no one can deny that having tests in place reduces the risk when you have to fix a bug or change the functionality of some code. Code coverage measurement tools can report which portions of the code are covered by these tests.
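To make "tests that take just a few minutes" concrete, here is a minimal sketch of the kind of test I mean. The post does not name a language or tool, so the Python function, the pytest tests, and the coverage.py commands below are purely illustrative assumptions:

# calculator.py - a trivial, hypothetical module, used only to show what gets measured
def divide(a, b):
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# test_calculator.py - fast unit tests that a coverage tool can attribute to the lines above
import pytest
from calculator import divide

def test_divide():
    assert divide(10, 2) == 5

def test_divide_by_zero():
    with pytest.raises(ValueError):
        divide(10, 0)

Running something like "coverage run -m pytest" followed by "coverage report" (assuming coverage.py is installed) produces the per-file coverage percentages that the rest of this post relies on.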

Naturally, some developers add many more tests on average than others. I don't know where the reluctance to write tests comes from - pressure to meet a deadline, lack of skills or experience, laziness, simply not caring, a missing sense of ownership ("it's someone else's problem"), or some other reason... Also, sometimes, for the same reasons, people tend to remove or disable tests rather than fix them - either immediately, or shortly after committing the code that broke those tests. This inevitably drives the coverage percentage down.

For quite some time, I have been thinking about a method that will hopefully increase test coverage continuously. I am sure I am not inventing anything new here, but so far I have not encountered any team that actually uses such a method, and anyway - this is a great way to get feedback on ideas :)

Continuous integration tools, such as Jenkins (formerly Hudson), have plugins for displaying code coverage results over time (see a screenshot example). Moreover, these tools can be configured to mark the build as failed if the code coverage percentage for the current revision is lower than for the previous revision.
This way, if someone commits code with a lower coverage percentage than the current percentage in trunk (by not writing enough tests for the new or fixed code, or by disabling existing tests), the build breaks, just as if that person had committed code that doesn't compile or that caused tests to fail, and it requires a similar response (either a quick fix, or reverting the commit).
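To illustrate, here is a minimal sketch of such a "coverage ratchet" check, written as a standalone Python script that the build could run after the tests. Everything here is an assumption for illustration - the file names coverage.txt and coverage_baseline.txt, and the idea of keeping the last accepted percentage in the build workspace, are not taken from any specific Jenkins plugin:

#!/usr/bin/env python
# Hypothetical "coverage ratchet": fail the build if coverage dropped below
# the last accepted value, otherwise raise the bar to the new value.
import sys

BASELINE_FILE = "coverage_baseline.txt"  # last accepted coverage percentage (assumed location)
CURRENT_FILE = "coverage.txt"            # percentage produced by the current test run (assumed)

def read_percent(path, default=0.0):
    # A missing or malformed file falls back to the default (e.g. on the very first run).
    try:
        with open(path) as f:
            return float(f.read().strip())
    except (IOError, ValueError):
        return default

def main():
    baseline = read_percent(BASELINE_FILE)
    current = read_percent(CURRENT_FILE)
    if current < baseline:
        # Coverage dropped: break the build, just like a compile error or a failing test.
        print("FAIL: coverage dropped from %.2f%% to %.2f%%" % (baseline, current))
        return 1
    # Coverage held or improved: the new value becomes the bar for the next build.
    with open(BASELINE_FILE, "w") as f:
        f.write("%.2f" % current)
    print("OK: coverage %.2f%% (previous bar was %.2f%%)" % (current, baseline))
    return 0

if __name__ == "__main__":
    sys.exit(main())

The Jenkins coverage plugins mentioned above can be configured to enforce this directly; the script is only meant to show how little machinery the ratchet idea actually needs.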

For example, let's say we start the process at the current state, with 40% code coverage by such measurable automatic tests. The process above ensures that we will never go below this percentage. But how do we go up? Well, statistically, some developers will have much higher coverage on their code, moving the overall average higher. This automatically sets a new, higher bar from that moment on.

But what if adding more tests for some specific change is very difficult and not cost-effective?
Well, my suggestion is that in this case, since the developers do not want to break the build, they must compensate by adding tests for different (possibly even unrelated) code, merely so that the overall coverage does not go down. Over time, such an "escape route" ensures that at least all the easy-to-test areas of the code will be well covered.

Of course, reaching 100% coverage is virtually impossible, and usually not cost-effective anyway. So we can set a lower coverage threshold (say 75-85%), and once it is reached, just make sure that the coverage does not drop below it. In addition, at any point in time, the code that remains uncovered will be the code for which writing tests is the least cost-effective, or code that is better covered by more complex system tests that cannot run as part of the continuous integration measurement.
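As a small, hypothetical extension of the ratchet sketch above, the threshold can act as a ceiling for the bar - the 80% figure below is only an illustration of the 75-85% range:

def next_baseline(current, baseline, ceiling=80.0):
    # Ratchet the bar up to the current coverage, but never require more than the ceiling.
    return min(max(current, baseline), ceiling)

# Coverage climbed to 90%: the bar stops at the 80% ceiling.
assert next_baseline(90.0, 78.0) == 80.0
# Coverage dropped to 40%: the bar stays where it was (and the build would fail).
assert next_baseline(40.0, 78.0) == 78.0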

I am sure that such an approach is a bit controversial, so I would be happy to hear feedback and suggestions on how to improve it.
My main concern about adopting such a method is the social one - how well will it be perceived by the developers? Will the developers who lowered the coverage percentage take responsibility and fix it quickly, or will we just end up with a permanently "red" continuous integration status that no one cares about...

I must say that I still do not have any actual experience or results with this method. Hopefully we will be able to try it in our group in the coming months.
I will update on any insights and conclusions as we go...