The issue of ‘bad’ code is always difficult to address in any organization. Relentless pressure to deliver more and more features at a rapid rate forces developers to take shortcuts which, in hindsight, make the code hard to maintain and extend. The developers and testers, on the other hand, will always blame the business side for not giving them ‘enough’ time to write ‘good’ code or refactor ‘bad’ code. The more passionate developers might even feel slighted and get defensive: “How does it matter? It works, doesn’t it? These are all theoretical exercises; let us just get the job done.”
As a result of all this, most software development managers give up on the engineering-excellence aspect of the SDLC. Refactoring, re-architecting, and re-engineering are placed in a bucket of tasks that will happen sometime in the future, forever deferred. And in the meanwhile, life goes on as usual.

Unfortunately, this approach has serious consequences and high costs that will probably be borne by someone in the future. Recently, many software gurus, notably Uncle Bob (Robert C. Martin), have been warning us about a major catastrophe that is just around the corner, the root cause of which is poorly written and poorly tested code. That is a serious matter, and as software professionals who take pride in our craft, we must address the ‘bad’ code elephant head-on.
Before we explore the consequences, let us examine the properties of ‘bad’ code, from a level that is understood by business and project/program managers. ‘Bad’ code primarily has the following attributes:
- It is not readable – if a new person joins the team, it takes them an abnormally long time to become productive (fix bugs, add features, and so on). This aspect of ‘bad’ code can stem from several causes; here are some of them:
- Badly organized code and poorly structured or very large control flows – the largest control-flow sequence I have seen was a straight block of three-level nested if-then-else statements that ran to approximately 5,000 lines, wrapped in a single try-catch block
- Carelessly named classes that give the reader no clue about the programmer’s intent
- Architectural and design issues, such as modules with too many incoming and/or outgoing dependencies, tightly coupled components, and so on
- Poor attention to detail on issues of scalability, security, etc. – again an architectural issue
- It is not testable – I have always maintained that testability is one of the hardest properties for an architect to understand and address. Of course, by looking at the code, one can detect poor testability, for the reasons explained under readability above. But since many project managers and executives do not have the time to engage at this level of detail, they often miss this important non-functional attribute of a system. However, it can easily be observed through the simple activity of measuring the time spent on testing/debugging vs. the time spent on development as the project progresses. This might seem like a blindingly obvious metric to track, but many teams do not have this information readily available.
- It is hard to extend – because of the combined effects of the lack of readability and testability described above. How does one detect it, other than by looking at the code? If, as the project progresses, you find that even trivial features take a long time to deliver, then you have an extensibility problem.
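To make the readability point concrete, here is a minimal sketch in Python (the `process_order` function and its rules are invented purely for illustration) of the same logic written first as nested if-then-else blocks, then flattened with guard clauses:

```python
# Hypothetical example, invented for illustration: the same validation
# logic written with nested conditionals and with guard clauses.

def process_order_nested(order):
    # Three-level nesting: each new rule pushes the happy path deeper.
    if order is not None:
        if order.get("items"):
            if order.get("paid"):
                return "shipped"
            else:
                return "awaiting payment"
        else:
            return "empty order"
    else:
        return "no order"

def process_order_flat(order):
    # Guard clauses: reject invalid states early, keep the happy path flat.
    if order is None:
        return "no order"
    if not order.get("items"):
        return "empty order"
    if not order.get("paid"):
        return "awaiting payment"
    return "shipped"
```

Both functions behave identically, but in the flat version each rule can be read, tested, and extended on its own line; in the nested version, a fourth rule means yet another level of indentation.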
The consequences of these can prove to be extremely frustrating and expensive:
- The general perception of the quality of the output deteriorates as the team spends more and more time fixing bugs rather than creating new features
- In extreme cases, adding features becomes virtually impossible, with seemingly unrelated parts of the system failing after the addition of a feature or, worse, the fix of an earlier bug
- The cost of adding features increases at an alarming rate with each delivery cycle
- Adding more people becomes difficult, as they take a long time to come up to speed; sometimes, new team members are openly afraid to touch large parts of the code due to the fear of failure
- Customer complaints increase, putting intense pressure on the team to write even worse code in order to keep fixing issues quickly
Most seasoned project managers might say that these consequences have been evident for a long time, and question whether they really need to be addressed immediately: software has been written like this for years, so surely we can continue in the same manner forever. Unfortunately, that isn’t the case. The world is increasingly dependent on software, with connected devices, vehicles, wearables, and large software back-ends that are acquiring the characteristics of asynchronous, real-time systems that scale through concurrency. It is not easy to visualize when, where, and what will fail if one part of this interconnected world suddenly malfunctions. The recent hypothesis that poor software was the root cause of the problems in the Boeing 737 MAX aircraft is an early warning to all of us about how severe the issue is.
So, what do we need to do?
Well, we need to get back to basics. Managers and businesses must not accept code that has the characteristics mentioned above, i.e. ‘bad’ code. If they find such code, they must prioritize and fix the problems with the same intensity and seriousness they apply to feature development. Software development teams must be open to adopting automated and instrumented systems for measuring code quality, and managers must keep a close watch on the primary output of the team: the code.
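As one example of such instrumentation, a team could start with something as simple as a nesting-depth check wired into a commit hook or CI job. Here is a minimal, standard-library-only sketch in Python; the threshold and the set of ‘nesting’ statements are illustrative choices, not a standard metric:

```python
import ast

# Illustrative check: flag source whose if/for/while/try/with nesting
# exceeds a chosen threshold. Not a standard metric, just one simple,
# automatable signal of hard-to-read control flow.
NESTING_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With)

def max_nesting_depth(source: str) -> int:
    """Return the deepest nesting of control-flow blocks in the source."""
    def depth(node: ast.AST, current: int = 0) -> int:
        best = current
        for child in ast.iter_child_nodes(node):
            nxt = current + 1 if isinstance(child, NESTING_NODES) else current
            best = max(best, depth(child, nxt))
        return best
    return depth(ast.parse(source))

snippet = """
def handler(x):
    if x:
        for i in range(3):
            if i > 1:
                print(i)
"""
print(max_nesting_depth(snippet))       # 3 levels deep
print(max_nesting_depth(snippet) > 2)   # True: would fail a threshold of 2
```

The point is not this particular metric, but that such checks run automatically on every change, so ‘bad’ code is surfaced as an objective number rather than a matter of opinion.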
But eventually, it is the practitioners (programmers, designers, architects, and testers) who need to take responsibility. This can only be done by acquiring a high level of self-discipline and professionalism in the way code is designed, written, and tested. We must take pride in our craft and ensure that we do not write ‘bad’ code.