Lessons learned from Linux and the Unix Philosophy book

Linux and the Unix Philosophy by Mike Gancarz (2003) is an amazing and worthwhile book. It was life-changing for me: it changed my perspective on software and its development. And not just software; I learned many other useful things from it that apply not only to software development but to life in general. Even though the book is quite dated, its core concepts (essentially the Unix and Linux concepts) are still valid today.

I have highlighted the parts of the book that I found useful, and I am sharing them here so that others can benefit from them too.

Linux and the Unix Philosophy

Disclaimer: The following section of this post consists of excerpts from the book Linux and the Unix Philosophy by Mike Gancarz. It is therefore copyrighted and excluded from the Creative Commons Attribution-ShareAlike 4.0 International License used for this blog (refer to here); all rights are reserved and belong to Mike Gancarz, the author of the book. You can purchase the full book (paper, electronic, or audio) from Amazon or other sellers.

  • The creators of the Unix operating system started with a radical concept: They assumed that the user of their software would be computer literate from the start. The entire Unix philosophy revolves around the idea that the user knows what he is doing. While other operating system designers take great pains to accommodate the breadth of users from novice to expert, the designers of Unix took an inhospitable “if you can’t understand it, you don’t belong here” kind of approach.
  • Again, Thompson had set a precedent that was later adopted by Unix developers: Someone whose back is against the wall often writes great programs. When an application must be written, and (1) it must be done to meet a practical need, (2) there aren’t any “experts” around who would know how to write it, and (3) there is no time to do it “right,” the odds are very good that an outstanding piece of software will be written. In Thompson’s case, he needed an operating system written in a portable language because he had to move his programs from one hardware architecture to another. No self-described portable operating system experts could be found. And he certainly didn’t have time to do it “right.”
  • At this juncture, you may be wondering at what point a small program becomes a large program. The answer is, it depends. Large programs in one environment may be considered average for another. What is spaghetti code to one programmer may be daily pasta to the next. Here are some signs that suggest that your software may be departing from the Unix approach:
    • The number of parameters passed to a function call causes the line length on the screen to be exceeded.
    • The subroutine code exceeds the length of the screen or a standard piece of 8½-by-11-inch paper. Note that smaller fonts and taller windows on a large workstation monitor allow you to comfortably stretch the limit a bit. Just don’t get carried away.
    • You can no longer remember what a subroutine does without having to read the comments in the code.
    • The names of the source files scroll off the screen when you obtain a directory listing.
    • You discover that one file has become too unwieldy for defining the program’s global variables.
    • You’re still developing the program, and you cannot remember what condition causes a given error message.
    • You find yourself having to print the source code on paper to help you organize it better.
  • There exists a kind of software engineer who takes pride in writing large programs that are impossible for anyone but himself to comprehend. He considers such work “job security.” You might say that the only thing bigger than his ego is his last application program. Such software engineers are far too common in traditional software engineering environments.
  • The best program, like Cousteau’s lake fly, performs but one task in its life and does it well. The program is loaded into memory, accomplishes its function, and then gets out of the way to allow the next single-minded program to begin. This sounds simple, yet it may surprise you how many software developers have difficulty sticking to this singular goal.
  • The following group of questions would be a good starting point for deciding.
    • Does the program require user interaction? Could the user supply the necessary parameters in a file or on the command line?
    • Does the program require input data to be formatted in a special way? Are there other programs on the system that could do the formatting?
    • Does the program require the output data to be formatted in a special way? Is plain ASCII text sufficient?
    • Does another program exist that performs a similar function without your having to write a new program?
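
To make the first two questions above concrete, here is a minimal sketch (the script name and its arguments are made up for illustration) of taking parameters from the command line instead of prompting the user interactively, so the program can run unattended and be composed with other tools:

```sh
#!/bin/sh
# resize.sh - hypothetical example: all parameters arrive on the
# command line, so the program needs no user interaction and can be
# driven by scripts and pipelines.

width="$1"; height="$2"; file="$3"

# Validate the input instead of prompting for it.
if [ -z "$width" ] || [ -z "$height" ] || [ -z "$file" ]; then
    echo "usage: $0 width height file" >&2
    exit 1
fi

echo "resizing $file to ${width}x${height}"
```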
  • Software engineering requires more rework than any other engineering discipline, because software deals with abstract ideas. If people have difficulty describing hardware accurately enough to get it right the first time, imagine how difficult it is to describe something that exists only in people’s minds and in electrical patterns traversing a microchip. The admonition, “Abandon all hope, all ye who enter here” comes to mind.
  • When we say “as soon as possible,” we mean as soon as possible. Post haste. Spend a small amount of time planning the application, and then get to it. Write code as if your life depended on it. Strike while the computer is warm. You haven’t a nanosecond to waste.
  • The sooner it begins, the closer you will be to the released product. The prototype shows you what works and, most important, what doesn’t. You need this affirmation or denial of the path you’ve chosen. It is far better to discover a faulty assumption early and suffer only a minor setback than to spend months in development meetings unaware of the Mother of All Design Flaws waiting to ambush you three weeks before the deadline.
  • For every correct design, there are hundreds of incorrect ones. By knocking out a few of the bad ones early, you begin a process of elimination that invariably brings you closer to a quality finished product. You discover algorithms that do not compute, timings that keep missing a beat, and user interfaces that cannot interface. These trials serve to winnow out the chaff, leaving you with solid grain.
  • Man has the capacity to build only three systems. No matter how hard he may try, no matter how many hours, months, or years for which he may struggle, he eventually realizes that he is incapable of anything more. He simply cannot build a fourth. To believe otherwise is self-delusion. Why only three? That is a tough question. One could speculate on several theories drawn from scientific, philosophical, and religious viewpoints. Each could offer a plausible explanation for why this occurs. But the simplest explanation may be that the design process of man’s systems, like man himself, passes through three stages of life: youth, maturity, and old age. In the youthful stage, people are full of vigor. They are the new kids on the block; they exude vitality, crave attention, and show lots of potential. As a person passes from youth to maturity, he or she becomes more useful to the world. Careers take shape. Long-term relationships develop. Influence widens in worldly affairs. The person makes an impact—good, bad, or otherwise. By the time old age sets in, the person has lost many abilities of youth. As physical prowess declines, much of the person’s worldly influence fades as well. One’s career becomes a memory. Resistance to change sets in. What remains is valuable wisdom based on experience.
  • If he had the time to do it right, he wouldn’t be under any deadline pressure. So he has to improvise. But whereas the typical improvisation is one of compromise, this effort roars ahead without compromise—in the wrong direction. At least, that is what his observers conclude. When a developer’s back is against the wall without time to do it right, he tends to break all the rules. It appears to his traditional-thinking coworker that he has lost his marbles under the refrigerator.
  • Critics often rise against him. “He can’t get away with that!” they insist. “He doesn’t know what he’s doing. He’s going about it all wrong.” His response? “Yeah, it’s ugly, but it works!”
  • Man builds the First System alone or, at most, with a small group of people. This is partly because many people in the mainstream have little appreciation for what he’s doing. They have not seen what he has seen from his vantage point, so they have no idea why he’s excited. Hence, they conclude that his work is interesting, but not interesting enough for them to get involved. A second reason that many people avoid working on the First System is more practical: Building the First System involves significant risk. No one knows whether the First System will have the characteristics that lead to the development of the Second System. There always exists a better than 80 percent chance of failure. Being associated with a First System that failed can be “career limiting,” in industry jargon. Consequently, some people would rather wait until the idea is proven. (They usually become the developers of the Second System. More about them later.)
  • One thing is certain: The First System is almost never built by a large group of people. Once the team grows too big for daily personal interaction among its members, productivity wanes. Personalities clash. People carry out hidden agendas. Little fiefdoms emerge as people begin to pursue their selfish interests. These occurrences dilute the goal, making it difficult to reach.
  • The following list names some conceptual fields and technologies in which innovation is setting people’s imaginations on fire, spawning many First Systems today:
    • Artificial intelligence
    • Biotechnology
    • Digital imaging
    • Digital music
    • Electronic monetary systems and a cashless society
    • Genetic engineering and cloning
    • The Internet and the World Wide Web
    • Interactive television
    • The Mars landing
    • Miniature machines
    • Nanotechnology
    • Quality (Six Sigma, Total Quality Management, etc.)
    • Virtual reality
    • Wireless technology
  • “Experts” build the Second System using ideas proven by the First System. Attracted by the First System’s early success, they climb aboard for the ride, hoping to reap rewards by having their names attached to the system. Everyone wants to be associated with a winner. This group of self-proclaimed experts often contains many critics of the First System. These individuals, feeling angry with themselves for not having designed the First System, lash out at its originators, spewing forth claims that they could have done it better. Sometimes they are right. They could have done a better job on certain aspects of the design. Their specialized knowledge can prove very helpful in redesigning several more primitive algorithms found in the First System. Remember: The First System’s designer(s) had little time to do it right. Many of these experts know what is right and have the time and resources to carry it out.
  • Such attitudes often invoke the ire of the First System’s designers. Occasionally they fight back. Bob Scheifler, a pioneer of the popular X Window System, once responded to critics of his early design efforts in handy fashion: “If you don’t like it, you’re free to write your own industry-standard window system.”
  • By the time the Third System arrives, the First System’s originators have disappeared. The most innovative people in the Second System’s development have moved on to more interesting projects as well. No one wants to be associated with a future trailing-edge technology.
  • Unix developers take an alternative view toward detailed functional and design specifications. Although their intent is similar to that of the traditionalists, the order of events differs:
    • Write a short functional specification
    • Write the software
    • Use an iterative test/rewrite process until you get it right
    • Write detailed documentation if necessary
  • A short functional specification here usually means three to four pages or fewer. The rationale behind this is that (1) no one really knows what is wanted, and (2) it’s hard to write about something that doesn’t exist. While a traditionalist may spend weeks or even months writing functional and design specifications, the Unix programmer jots down what is known about the goal at the start and spends the rest of the time building the system.
  • For the Unix user, the iterative design process has begun. He and the developers are proceeding toward the Third System. Once the developers receive the initial reactions from the users, they know whether they are on the right track. Occasionally, the user may inform them that what he wanted is not what he received, resulting in a complete redesign. More often than not, the user will like part of the design and will provide useful commentary on what must be changed. Such cooperation between the developers and the end user is tantamount to producing a Third System that meets the user’s needs in most respects.
  • Unlike most systems planned in the traditional way, Unix evolved from a prototype. It grew out of a series of design iterations that have transformed it from a limited-use laboratory system into one called upon to tackle the most arduous tasks. It is living proof of a design philosophy that, although unorthodox to some, produces excellent results.
  • Software tightly coupled to a hardware platform holds its value only as long as that platform remains competitive. Once the platform’s advantage fades, the software’s worth drops off dramatically. To retain its value, it must be ported from one platform to another as newer, faster models become available. Failure to move rapidly to the next available hardware spells death. Market opportunity windows remain open for short periods before slamming shut. If the software doesn’t appear within its opportunity window, it finds its market position usurped by a competitor. One could even argue that the inability to port their software to the latest platforms has killed more software companies than all other reasons combined.
  • The Unix programmer chooses to make not only the code portable, but the data as well.
  • If you expect to move your data easily, you must make it portable. Any impediments to data movement, whether unintentional or by design, place limits on your data’s potential value. The longer your data must sit somewhere, the less it will be worth when it finally arrives. The problem is, if your data is not in a format that is useful at its destination, it must be converted. That conversion process takes time. Every second spent in data conversion eats away at your data’s value.
  • Text is not necessarily the highest performing format; it’s only the most common one. Other formats have been used in some applications, but none has found such wide acceptance as text. In nearly all cases, data encoded in text can be handled by target platforms.
  • The real power of text files becomes apparent when developing programs that use pipes under Unix. The pipe is a mechanism for passing one program’s output to another program as input without using a temporary file. Many Unix programs are little more than a collection of smaller programs joined by a series of pipes. As developers prototype a program, they can easily check the data for accuracy at each point along the pipeline. If there is a problem, they can interrupt the flow through the pipeline and figure out whether the data or its manipulator is the problem. This greatly speeds up the development process, giving the Unix programmer a significant edge over programmers on other operating systems.
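
A small pipeline in the spirit of that passage; every stage is a separate single-purpose program passing plain text onward, and you can stop the pipeline at any stage to inspect the intermediate data (input.txt is a placeholder file name):

```sh
# Print the ten most frequent words in a text file.
# To debug, truncate the pipeline after any stage and look at the text.
tr -cs '[:alpha:]' '\n' < input.txt |   # split into one word per line
tr '[:upper:]' '[:lower:]' |            # normalize case
sort |                                  # group identical words together
uniq -c |                               # count each distinct word
sort -rn |                              # most frequent first
head -10                                # keep the top ten
```

Because every intermediate format is plain text, each stage can be checked by eye, which is exactly the development edge the book describes.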
  • The best way to write lots of software is to borrow it. By borrowing software, we mean incorporating other people’s modules, programs, and configuration files into your applications. In producing a derivative work, you augment the previous developers’ efforts, carrying their implementations to new heights of utility. Their software becomes more valuable as it finds a home in more applications; your software becomes more valuable because your investment in it has been reduced relative to its return. It’s a mutually beneficial situation.
  • Leveraging other people’s code can result in powerful advantages for the individual programmer, too. Some programmers believe that they protect their job security by writing the code themselves. “Since I write good code, I’ll always have a job,” they reason. The problem is, writing good code takes time. If you have to write every line of code used in an application, you will appear slow and inefficient. The real job security belongs to the software developer who can cut and paste modules together quickly and efficiently. Developers like that often produce so much software in a short time that companies generally consider them indispensable.
  • NIH can be especially dangerous with today’s emphasis on standardization in the software industry. Standards drive software vendors toward commoditization. All spreadsheets begin to look alike, all word processors provide the same capabilities, and so on. The resulting oversupply of available software to accomplish everyday tasks drives prices down, thus limiting profitability.
  • To date no full-featured debugger for shell scripts has emerged. Shell script writers must still rely on primitive mechanisms such as sh -x to display the names of the commands as they execute.
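
For reference, that mechanism looks like this; tracing can also be switched on and off around a suspect region with `set -x` and `set +x` (the script name and paths below are placeholders):

```sh
# Trace a whole script: print each command (after expansion) as it runs.
sh -x ./backup.sh

# Or trace only a suspect region inside a script:
src="/tmp/a"; dst="/tmp/b"   # placeholder paths
set -x                       # start tracing
cp "$src" "$dst"
set +x                       # stop tracing
```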
  • Albert Einstein once said, “I have only seen two miracles in my life, nuclear fusion and compound interest” (italics added). For all of his wonderful theories, these two ideas evidently impressed him most. He understood that a small amount of something, multiplied repeatedly, can grow to miraculous proportions. It took a keen mind like his to recognize the power in this simple idea.
  • The Unix programmer deals with the user interface by avoiding it (i.e., the typical Unix application doesn’t have a command parser). Instead, it expects its operating parameters to be entered on the command line when invoking the command. This eliminates most of the possibilities described above, especially the less graceful ones. For those commands that have many command line options (a cautionary sign to begin with), Unix provides standard library routines for weeding out bad user input. This results in significantly smaller application programs.
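
The standard routines alluded to here are the getopt family (getopt(3) in C); the shell’s getopts builtin follows the same convention. A minimal sketch with invented options:

```sh
#!/bin/sh
# Hypothetical example: accept -v (verbose) and -o <file>, reject
# anything else, using the standard getopts builtin instead of a
# hand-rolled parser.
verbose=0
outfile=""

while getopts "vo:" opt; do
    case "$opt" in
        v) verbose=1 ;;
        o) outfile="$OPTARG" ;;
        *) echo "usage: $0 [-v] [-o outfile] file..." >&2; exit 2 ;;
    esac
done
shift $((OPTIND - 1))    # drop the parsed options; operands remain

[ "$verbose" -eq 1 ] && echo "writing to ${outfile:-stdout}" >&2
```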
  • Why do Linux users persist in using the shorter names today? Certainly today’s PC keyboards can handle much higher speeds, so such brevity is no longer necessary. The reason is that shorter names allow you to cram much more on a command line. Remember that Linux shells have a pipe mechanism that allows you to string the output of one command into another on the command line.
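
A quick illustration of that point: the terse names keep a whole job on one readable line, while equivalent long-form names (invented here for contrast) would sprawl past it (app.log is a placeholder):

```sh
# Ten most common error sources, one readable line:
grep 'ERROR' app.log | cut -d' ' -f1 | sort | uniq -c | sort -rn | head

# The same pipeline with hypothetical verbose names would not fit:
# search-pattern 'ERROR' app.log | extract-columns --delimiter ' ' ...
```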
  • The term WYSIWYG (what you see is what you get) is also WYSIATI (what you see is all there is). What you can click on is what you get and not much else. Attempts to provide graphical shells have produced cumbersome, inflexible user interfaces at best. And the use of “shortcuts” usually only shortens the time it takes to access a single command.
  • There is an old joke in the computer world that goes something like this: If one woman can have a baby in nine months, does that mean that nine women can have a baby in one month? The obvious implication here is that certain tasks must be performed serially due to nature. No attempts to make the process run in parallel will make the result appear any faster.
  • Remember that most software is a compromise in that it is never finished, only released. If, by definition, software can never be finished, one can never develop software that offers a 100-percent implementation. By recognizing the 90-percent solution as a reasonable measure of completeness, it becomes easy to write applications that appeal to most of the user population.
  • Most Unix programmers will agree that applications and systems should be simple, correct, consistent, and complete. The key is how they prioritize those characteristics. Although the proper system designer strives for completeness at the expense of simplicity, Unix developers elevate simplicity to the primary priority. So the proper system designer levels criticism at Unix, not so much because it is improper, but because its priorities are reversed. In that sense, it is worse than the proper system.
  • The old adage, “united we stand, divided we fall” rings true here.
  • Small programs that do one thing well avoid becoming large complex monoliths. Such monoliths often contain “spaghetti code” and are difficult to join with other applications. Small programs acknowledge that there is a tomorrow where today’s functional capabilities will appear incomplete or, worse, obsolete. Whereas monolithic programs cover all known circumstances, small programs freely admit that software evolves. Software is never finished; it is only released.
  • Today, many developers write software for Linux for the sheer fun of it. To them, this is entertainment, geek-style. Beyond a need for survival and social order, people have a built-in need to be entertained. It may seem strange to those outside of the computer world, but many Linux geeks find software development to be a great pastime.
  • This is where Crawford’s approach and the Unix philosophy part company. For while Crawford emphasizes that the most important thing a piece of software can do is to communicate with a human being, a Unix program’s highest priority is to communicate with another program. That the Unix program must eventually interact with a human being is secondary.
  • The Atari approach suggests that if the average person is given a gun, he is likely to shoot himself in the foot. By contrast, the Unix system effectively hands the uninitiated user an assault rifle, plugs in 20 rounds, and points it at his foot. A person with his foot shot off doesn’t walk away from the ride very easily.
  • Just because you have an army of people saying that you’re right doesn’t mean that you’re right. I can go out and get an army of people to say I’m right, too. From a conversation with Don “Smokey” Wallace.
  • The Microsoft Office suite is one example of such a product. Rather than containing a series of plug-in modules that could be inserted as needed, it tries to load “everything but the kitchen sink” when it starts up.
  • While visual and audible content have emotional impact, it is the written word that keeps us coming back for more. A Web site may have glitzy graphics and the coolest sounds, but these will not hold your interest forever. Unless a site has good written content of interest to you, you will not stay for long, and you won’t be back again.
  • Despite the overwhelming influence of Microsoft’s marketing juggernaut, Microsoft’s voice is not the only one in the world of computing. Just because there is an army of people saying that Microsoft Windows is the right approach to computing doesn’t mean that it’s true. I can go out and get an army that says otherwise. The soldiers in my army look like a bunch of penguins. They wear red hats and yellow stars. They adapt like chameleons. They speak a strange language with words like “grep,” “awk,” and “sed.” And they believe in a philosophy called Unix.
  • First they ignore you. Then they ridicule you. Then they fight you. And then you win. Gandhi
  • The Stones’ music was done in a cathedral, while Elvis borrowed his music from the bazaar of American music. Huh? Before you go having your nineteenth nervous breakdown over this one, let me explain. Maybe you’ll see why the Stones, instead of Elvis, were crying in the chapel.
  • Elvis was also a master of reuse. While Mick Jagger and Keith Richards were busy writing good songs, Elvis was busy borrowing them; borrowing them and turning them into hits, that is. While he could have written many of the songs he performed, he chose instead to leverage the work of other songwriters in addition to his own. That allowed him to have a much greater impact on the entertainment world. Yes, he became very wealthy from the sales of his music and merchandise. But many others also shared in his good fortune. That’s what happens in a true open-source collaborative kind of environment. Everyone benefits.
  • Latin once held the place that English holds today. Through the conquests of Rome, Latin usage had steadily grown from about 250 B.C. until the 6th century. Around that time, the Roman Catholic Church pronounced that Latin was the language for scientific, philosophical, and religious writing. So, from that point forward, if you wanted to be a cool priest or scholar, you had to speak Latin. However, with the gradual decline of the Roman Empire, invading barbarians, who, because of their war-like nature had little interest in being intellectually stylish, usually modified Latin to their liking. In adding their own idioms for kill, maim, and plunder, they felt it was their God-given right to subdue the language as well. Meanwhile, the priests and the scholars decried this pollution of their beloved Latin, further continuing to promulgate its importance to an ever-shrinking religious minority, until such time as the only ones who could speak and write Latin did so in the quiet seclusion of cathedrals. On the other hand, English grew to become the international language not because it was pure or holy, but because it was adaptive. It would admit the entrance of practically any foreign word or concept into its everyday usage. It was able to interface with other cultures better than any other language to date. And interface it did. In the wild diversity of the bazaar we call life, English found its place among the largest number of nations primarily because of its ability to connect to and with anything.
  • Eric Raymond referred to these works as software that “scratches a programmer’s personal itch.” What appears to be a common goal of many OSS developers is that they just want to get it written. Their backs are often against the wall because of their daily job pressures, and they don’t have much time for frills. So, they usually skip most of the fancy glittery stuff that the marketing droids love and, instead, do one thing well, which is to produce a lean, mean application that solves a personal need. In the case of the successful ones, these bare-bones solutions strike the matches that set others’ imaginations on fire. That is how the hit OSS applications are born.
  • One area in which the open-source community has made significant advances is marketing. In today’s fiercely competitive computing world, it is not enough to produce high-quality software in accordance with sound design principles. One must tell people that one has done so. You can have the best frazzlefloozle in the world, but if no one knows about your frazzlefloozle, it won’t matter how good it is. It will fade into obscurity or languish as an undiscovered relic for years.
  • A map of the world that does not include Utopia is not worth even glancing at. Oscar Wilde

Inline/featured images credits