Every feature has a cost

This is something we all know, yet we still somehow prefer to add every feature we can, for fear that the user might be missing something. In doing so, we spread our effort across many actions and thought patterns and allow context switching to take ever bigger chunks of our time. When we attempt too many things in parallel, the usability and usefulness of what we release start to suffer. It also becomes harder to understand what we did right or wrong after the fact, even with feedback, because the relationships within the system have become so numerous that it is no longer self-evident what effect each inter-component interaction has on the final outcome.

If we add five people to an existing team of five, the dynamics of the relationships will change considerably. Some relationships may grow stronger, others weaker over time, which is enough to create problems that weren't there initially. A product may be great, but if it is shipped late or advertised in an obtrusive way, the perception of it will change rapidly. A service may have many clients, but a change in its privacy policy may cause people to abandon it if they disagree with the new rules. These are all examples of how adding one more thing can change the outcome in unexpected ways.

Adding more features to our software has similar effects once those features start to talk to each other. If someone is able to eavesdrop on just one communication channel, they may gather information about how to make the system reveal more about itself, which in turn gives them additional ideas for how to exploit it. This is why software shouldn't do more than is absolutely needed; everything extraneous can only expand the attack surface or adversely impact the work of the various stakeholders. Some people go further and repeat, as often as needed, that such a system is destined to fail. Although this sounds like an overstatement, it isn't, because we tend to understand the severity of our problems a bit too late.

We can already read that next year security professionals will be among the most sought-after people to have on a team. We should be asking ourselves to what extent we have created this need through our own actions and our willingness to pile up features and cover every possible need with a single piece of software. An effect always follows its cause, and we usually don't have to wait long for it. When a machine has only a few services running, it is more predictable and safer to work with. As soon as we choose to install a variety of software packages and their patches, we introduce disorder, increase the number of actively running processes, open new ports and expose the system to a wider range of intruders.

This is a basic security principle, but one that could conflict with how we write software in the future. If we need to increase complexity to meet greater challenges, but cannot really afford to, we may be approaching a turning point similar to the one at which we collectively decided that a five-gigahertz processor was infeasible to produce and focused instead on increasing the number of cores. So far there are no hints that the human brain can be made infinitely accommodating. Even when we try to deal with one piece at a time, the number of pieces can become prohibitive.

Each unused feature, or one used by only some of the users, reduces performance for everyone else. We may also more easily forget that such a feature exists, since no feedback about it can be obtained, which in turn increases entropy. For this reason, it is good to tie each feature to the value it creates for the end user. If it doesn't add value, or does so in a questionable way, we should probably remove it.

We may add features on top of each other, which creates problems, especially when the number of layers starts to grow. If we then decide to remove a feature in the middle, we may not have the flexibility to do so, because everything above it would fall. The more features we stack on top of each other, the more inflexible the construction becomes. Each layer needs to communicate with the one on top of it, which increases the time it takes for the system to produce output. A similar problem can appear in a horizontal chain as well, when each action relies on the previous one, which has to be considered when creating long chains of actions.
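To make the vertical coupling concrete, here is a minimal sketch (the layer names and functions are hypothetical, not taken from any particular system): each feature calls only the one directly beneath it, so removing the middle one breaks everything above, and every extra layer adds one more hop before any output appears.

    # Hypothetical stack of features, each built directly on the one below it.
    def storage_layer(record):
        # Bottom layer: pretend to persist the record.
        return {"saved": record}

    def validation_layer(record):
        # Middle layer: everything above depends on it existing.
        if not record:
            raise ValueError("empty record")
        return storage_layer(record)

    def api_layer(record):
        # Top layer: can only talk to the layer directly below.
        return validation_layer(record)

    # Removing validation_layer() would force api_layer() to change as well,
    # and each additional layer adds one more call before output is produced.
    print(api_layer({"id": 1}))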

From a computational standpoint, the cost of a feature is the number of elementary operations that need to be executed. The more complex the feature and the functionality it provides, the longer the computation takes to finish. We often don't have the luxury of waiting forever, but are bounded by the amount of time a human being is willing to wait to see the output. Anything longer than this can mean that the system doesn't fulfill its purpose. Adding too many such features can quickly exceed this time budget.
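One rough way to keep this budget visible is to time the features on the critical path against the amount of waiting a person will tolerate. The sketch below is only an illustration; the budget value and the features themselves are assumptions, not measurements.

    import time

    # Illustrative budget: roughly how long a user is willing to wait.
    TIME_BUDGET_SECONDS = 0.2

    def run_features(features, payload):
        """Run the features in sequence and flag when the total time exceeds the budget."""
        start = time.perf_counter()
        for feature in features:
            payload = feature(payload)
        elapsed = time.perf_counter() - start
        if elapsed > TIME_BUDGET_SECONDS:
            print(f"over budget: {elapsed:.3f}s > {TIME_BUDGET_SECONDS}s")
        return payload

    # Hypothetical features, each adding its own share of elementary operations.
    pipeline = [
        lambda xs: sorted(xs),            # O(n log n)
        lambda xs: [x * x for x in xs],   # O(n)
        lambda xs: sum(xs),               # O(n)
    ]
    print(run_features(pipeline, list(range(100_000))))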

The more features a system has, the more eyes and hands it needs in order to be well maintained. A previously usable system developed by many people during an economic boom may no longer remain so when many of those developers are forced to quit. This suggests that going at a slow but constant average speed, and persisting for longer, is often preferable to using all available resources and relying on them to always be there. The desire to work in parallel can easily lead people to believe that they should each contribute their own features, when a single, collectively developed feature would be more appropriate for the system.

When working on a separate component, it becomes less obvious what other people have already finished and what can be readily reused. This can lead to duplicated functionality or other synchronization costs. Documentation can be of great help here (if not overdone), since it improves communication, but it isn't enough to assume that just because it exists, it will answer every question. Good documentation is a feature in itself.

Features need to be organized in a way that makes them self-explanatory. If someone has to jump 30 lines earlier to see the value with which a non-descriptive variable was initialized, then their productivity will suffer as a result. There is no value in making things hard to understand for others, especially when working on a team.
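A contrived illustration of the difference (the names and numbers are made up): with a non-descriptive name, the reader has to go back to the initialization to understand the check; with a descriptive one, the line explains itself where it is used.

    def expire_session():
        print("session expired")

    elapsed = 90_000  # hypothetical seconds since login

    # Hard to follow: the reader must find where "t" was set to learn what it means.
    t = 86_400
    if elapsed > t:
        expire_session()

    # Self-explanatory at the point of use:
    SESSION_TIMEOUT_SECONDS = 86_400  # one day
    if elapsed > SESSION_TIMEOUT_SECONDS:
        expire_session()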

No single feature's development should become a bottleneck in the team's work. This requires properly organizing and synchronizing how the various pieces will fit together. If a feature on the critical path is unavailable, others who depend on it for their work will have to wait, which puts the project at risk of going over budget or not being finished at all. Assigning a large number of features to a small team can frustrate and paralyze everyone. Having to spend time on features that have a low probability of ever being finished can be demotivating.

A system should exist to solve some kind of problem, and how good it is depends on how deeply its developers understand that problem. This means that features need to be bound to some domain experience, not merely exist on their own just because they can. One piece should follow from the next, even though it is often hard to make code read like logically connected paragraphs. Not thinking about structure leads to code that looks chaotic, written without plan or thought; this happens when we mix or group unrelated functionality together. Such code is hard to understand and can waste other people's effort. We read not only from left to right, but also from top to bottom, and not at random.
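As a small sketch of the difference (the order-handling functions here are invented for the example), compare a routine that interleaves unrelated concerns with one where each step reads like its own paragraph and follows from the previous one.

    # Interleaved concerns: parsing, logging and pricing mixed together.
    def handle_order_mixed(raw):
        parts = raw.split(",")
        print("received", raw)                    # logging
        total = float(parts[1]) * int(parts[2])   # pricing
        print("customer", parts[0])               # more logging
        return total

    # Separated concerns: one piece follows from the next.
    def parse_order(raw):
        name, price, quantity = raw.split(",")
        return name, float(price), int(quantity)

    def order_total(price, quantity):
        return price * quantity

    def handle_order(raw):
        name, price, quantity = parse_order(raw)
        return order_total(price, quantity)

    print(handle_order("alice,19.99,3"))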

It is easy to forget that programming is a creative act when the number of off-the-shelf components we could integrate into our projects grows every minute. The problem of integrating then takes precedence over the problem of creating, which leads many to believe that how we combine the various utilities is what matters most, even when the result merely recreates a different version of a past reality. This in turn feeds the desire to study even more components rather than new ways of thinking about, seeing or experiencing the problems we are trying to solve. Finding new approaches, addressing entire problem hierarchies and combining knowledge from various fields to do so are all valuable skills, and they are exactly the ones we forget to practice when we look for shortcuts. These skills may remain inhibited by our urge to reach for the next component. It is not the actual feature or component that matters, but the depth and richness of the thinking behind it; that is what makes it as beautiful as it is. If someone asks you “How can I do this?”, there is a benefit in taking your time to think the question through instead of rushing to recommend concrete components without explaining the reasoning behind the choice. The latter is much easier, which is why it is so often the preferred answer, but it doesn't contribute to learning anything new.

The number of lines of code is not an indicator of how innovative a solution is. Long, repetitive code may actually teach us less than analyzing a short but sophisticated algorithm. Some beautiful pieces of code are short, yet they give us plenty of new ideas to combine with our existing knowledge. Constraint and restraint seem to work well together in motivating such work. Short pieces of coherent code can be changed at a much faster pace, encouraging experimentation and exploration, which is why they can more often be made to work. Fixing a non-obvious bug in a larger piece of code takes considerably longer, limiting what we can learn in the meantime. A small feature that works is still more valuable than a big one that doesn't, so we can subdivide our work around a collection of small, easily distinguishable features that we join together, rather than trying to create a monolithic piece of code in one sitting. The longer the code, the harder it becomes to operate on it and the smaller and less effective our changes seem.
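One example of the kind of short but dense code this paragraph has in mind (chosen here purely as an illustration) is exponentiation by squaring: a handful of lines that carry a non-obvious idea and are small enough to experiment with freely.

    def power(base, exponent):
        """Compute base**exponent using O(log exponent) multiplications,
        by squaring the base and halving the exponent at each step."""
        result = 1
        while exponent > 0:
            if exponent % 2 == 1:   # odd exponent: take one factor into the result
                result *= base
            base *= base            # square the base
            exponent //= 2          # halve the exponent
        return result

    print(power(3, 13))  # 1594323, with 7 multiplications instead of 12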

We may learn considerably more if we inspect features developed by people working in a different domain, with a different kind of thinking. By repeatedly applying what we already know, we risk turning our existing knowledge into an obstacle that prevents us from learning effectively in the future. We may also start to feel more attached to the features we work on, which becomes a problem when we need to switch to another type of work. If we have experience with three different content management systems, studying a fourth one won't help much. If we have learned three programming languages, studying another one that follows the same paradigm will have diminishing returns. In that case, learning something new like linear algebra may prove a better investment of our time, and we might start by writing simple programs that work with matrices. Then we could eventually study spectral theory to learn how to decompose them or extract the most interesting data from big ones. CSS transformations can also be represented through matrices, which is reflected in matrix3d(). Tensors are generalizations of matrices to higher dimensions, and Google has released TensorFlow for everyone to use. And so on. If we do this long enough, we may start to see hidden connections, interesting ways to combine separate thoughts and channel them together in a given direction. That's when thinking starts to become interesting: when it can attract additional thought particles that sustain and increase its force in that direction. But this won't come to us for free and without effort, because every feature has a cost that we have to pay.
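To make the matrix detour above a little more tangible, here is a minimal sketch (assuming only that NumPy is installed): build a small matrix, decompose it, and keep its strongest component, which is one simple way of extracting the most interesting part of a bigger matrix.

    import numpy as np

    # A small matrix standing in for some larger dataset.
    A = np.array([[2.0, 0.0, 1.0],
                  [0.0, 3.0, 0.0],
                  [1.0, 0.0, 2.0]])

    # Singular value decomposition: A = U @ diag(s) @ Vt.
    U, s, Vt = np.linalg.svd(A)

    # Keep only the largest singular value: a rank-1 summary of the matrix.
    rank_one = s[0] * np.outer(U[:, 0], Vt[0, :])

    print("singular values:", s)
    print("rank-1 approximation:\n", rank_one)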

bit.ly/1IObNuD