Broadly speaking, there are two directions one can go in designing a hierarchy of functions or objects. Which direction you choose, and when, has a profound effect on the layering of your code.
One direction is bottom-up, from concrete to abstract — working up from the specific operations in the problem domain that you know you will need to perform. For example, if one is designing firmware for a disk drive, some of the bottom-level primitives might be ‘seek head to physical block’, ‘read physical block’, ‘write physical block’, ‘toggle drive LED’, etc.
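To make this concrete, here is a minimal sketch in C of what such bottom-level primitives might look like as declarations. The names and types are invented for illustration, not taken from any real firmware:

    /* Hypothetical bottom-level primitives for disk-drive firmware.
       Names and types are invented for illustration. */
    #include <stdint.h>

    typedef uint32_t pblock_t;                             /* physical block number */

    int  seek_head(pblock_t block);                        /* position head over block */
    int  read_physical_block(pblock_t block, void *buf);   /* buf holds one block */
    int  write_physical_block(pblock_t block, const void *buf);
    void toggle_drive_led(void);                           /* flip the activity LED */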
The other direction is top-down, abstract to concrete — from the highest-level specification describing the project as a whole, or the application logic, downwards to individual operations. Thus, if one is designing software for a mass-storage controller that might drive several different sorts of media, one might start with abstract operations like ‘seek logical block’, ‘read logical block’, ‘write logical block’, ‘toggle activity indication’. These would differ from the similarly named hardware-level operations above in that they're intended to be generic across different kinds of physical devices.
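In C, one common way to express operations that must stay generic across several kinds of physical device is a table of function pointers, one table per device type. The sketch below is purely illustrative; every name in it is invented:

    /* Hypothetical top-level interface for a mass-storage controller.
       Each kind of physical medium supplies one of these tables;
       every name here is invented for illustration. */
    #include <stdint.h>

    typedef uint64_t lblock_t;                   /* logical block number */

    struct storage_ops {
        int  (*seek_logical)(void *dev, lblock_t block);
        int  (*read_logical)(void *dev, lblock_t block, void *buf);
        int  (*write_logical)(void *dev, lblock_t block, const void *buf);
        void (*toggle_activity)(void *dev);
    };

A disk driver, a tape driver, and a flash driver could each fill in such a table with their own routines, leaving the code above them unchanged.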
These two examples could be two ways of approaching design for the same collection of hardware. Your choice, in cases like this, is one of these: either abstract the hardware (so the objects encapsulate the real things out there and the program is merely a list of manipulations on those things), or organize around some behavioral model (and then embed the actual hardware manipulations that carry it out in the flow of the behavioral logic).
An analogous choice shows up in a lot of different contexts. Suppose you're writing MIDI sequencer software. You could organize that code around its top level (sequencing tracks) or around its bottom level (switching patches or samples and driving wave generators).
A very concrete way to think about this difference is to ask whether the design is organized around its main event loop (which tends to have the high-level application logic close to it) or around a service library of all the operations that the main loop can invoke. A designer working from the top down will start by thinking about the program's main event loop, and plug in specific events later. A designer working from the bottom up will start by thinking about encapsulating specific tasks and gluing them together into some kind of coherent order later on.
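Here is a toy C sketch of that top-down shape: the main loop and its dispatch structure are written first, and the handlers behind it are stubs to be filled in later. Every name in it is invented for illustration:

    /* Sketch of a top-down skeleton: the main event loop is written first,
       specific handlers are plugged in later.  All names are illustrative. */
    #include <stdio.h>

    enum event_type { EV_KEYPRESS, EV_CLICK, EV_QUIT };

    struct event { enum event_type type; int data; };

    /* Stub event source: in real code this would block on user input. */
    static struct event wait_for_event(void)
    {
        static int n = 0;
        struct event ev = { (n < 2) ? EV_CLICK : EV_QUIT, n };
        n++;
        return ev;
    }

    static void handle_click(struct event *ev)    { printf("click %d\n", ev->data); }
    static void handle_keypress(struct event *ev) { printf("key %d\n", ev->data); }

    int main(void)
    {
        for (;;) {
            struct event ev = wait_for_event();
            switch (ev.type) {
            case EV_KEYPRESS: handle_keypress(&ev); break;
            case EV_CLICK:    handle_click(&ev);    break;
            case EV_QUIT:     return 0;
            }
        }
    }

The important part is the shape: the loop and its dispatch embody the application logic, while everything it calls is detail to be supplied from below.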
For a larger example, consider the design of a Web browser. The top-level design of a Web browser is a specification of the expected behavior of the browser: what types of URL (like http:, ftp:, or file:) it interprets, what kinds of images it is expected to be able to render, whether and with what limitations it will accept Java or JavaScript, etc. The layer of the implementation that corresponds to this top-level view is its main event loop; each time around, the loop waits for, collects, and dispatches on a user action (such as clicking a Web link or typing a character into a field).
But the Web browser has to call a large set of domain primitives to do its job. One group of these is concerned with establishing network connections, sending data over them, and receiving responses. Another set is the operations of the GUI toolkit the browser will use. Yet a third set might be concerned with the mechanics of parsing retrieved HTML from text into a document object tree.
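A hedged sketch of what declarations for those three service libraries might look like in C follows; none of these names come from any real browser, they merely show how the domain primitives fall naturally into layers of their own:

    /* Hypothetical declarations for the three service libraries a browser
       might call down into; none of these names come from real code. */
    #include <stddef.h>

    /* Network primitives */
    int net_connect(const char *host, int port);
    int net_send(int conn, const void *data, size_t len);
    int net_receive(int conn, void *buf, size_t len);

    /* GUI-toolkit primitives */
    void gui_draw_text(int x, int y, const char *text);
    void gui_draw_image(int x, int y, const void *pixels, int w, int h);

    /* HTML-parsing primitives */
    struct html_node;                            /* opaque parsed-document tree */
    struct html_node *html_parse(const char *text, size_t len);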
Which end of the stack you start with matters a lot, because the layer at the other end is quite likely to be constrained by your initial choices. In particular, if you program purely from the top down, you may find yourself in the uncomfortable position that the domain primitives your application logic wants don't match the ones you can actually implement. On the other hand, if you program purely from the bottom up, you may find yourself doing a lot of work that is irrelevant to the application logic — or merely designing a pile of bricks when you were trying to build a house.
Ever since the structured-programming controversies of the 1960s, novice programmers have generally been taught that the correct approach is the top-down one: stepwise refinement, where you specify what your program is to do at an abstract level and gradually fill in the blanks of implementation until you have concrete working code. Top-down tends to be good practice when three preconditions are true: (a) you can specify in advance precisely what the program is to do, (b) the specification is unlikely to change significantly during implementation, and (c) you have a lot of freedom in choosing, at a low level, how the program is to get that job done.
These conditions tend to be fulfilled most often in programs relatively close to the user and high in the software stack — applications programming. But even there those preconditions often fail. You can't count on knowing what the ‘right’ way for a word processor or a drawing program to behave is until the user interface has had end-user testing. Purely top-down programming often has the effect of overinvesting effort in code that has to be scrapped and rebuilt because the interface doesn't pass a reality check.
In self-defense against this, programmers try to do both things — express the abstract specification as top-down application logic, and capture a lot of low-level domain primitives in functions or libraries, so they can be reused when the high-level design changes.
Unix programmers inherit a tradition that is centered in systems programming, where the low-level primitives are hardware-level operations that are fixed in character and extremely important. They therefore lean, by learned instinct, more toward bottom-up programming.
Whether you're a systems programmer or not, bottom-up can also look more attractive when you are programming in an exploratory way, trying to get a grasp on hardware or software or real-world phenomena you don't yet completely understand. Bottom-up programming gives you time and room to refine a vague specification. Bottom-up also appeals to programmers' natural human laziness — when you have to scrap and rebuild code, you tend to have to throw away larger pieces if you're working top-down than you do if you're working bottom-up.
Real code, therefore, tends to be programmed both top-down and bottom-up. Often, top-down and bottom-up code will be part of the same project. That's where ‘glue’ enters the picture.
When the top-down and bottom-up drives collide, the result is often a mess. The top layer of application logic and the bottom layer of domain primitives have to be impedance-matched by a layer of glue logic.
One of the lessons Unix programmers have learned over decades is that glue is nasty stuff and that it is vitally important to keep glue layers as thin as possible. Glue should stick things together, but should not be used to hide cracks and unevenness in the layers.
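Returning to the hypothetical storage-controller sketches above, a thin glue layer might be nothing more than a translation step between logical and physical block numbers before handing off to the hardware primitives. Again, all the names are illustrative:

    /* Hypothetical thin glue for the storage-controller example sketched
       earlier: translate the application-level request into the hardware
       primitives and nothing more.  All names are illustrative. */
    #include <stdint.h>

    typedef uint64_t lblock_t;                   /* logical block number  */
    typedef uint32_t pblock_t;                   /* physical block number */

    /* hardware primitives, supplied by the firmware layer */
    extern int  seek_head(pblock_t block);
    extern int  read_physical_block(pblock_t block, void *buf);
    extern void toggle_drive_led(void);

    /* device-specific mapping, supplied by the driver for each medium */
    extern pblock_t map_logical_to_physical(lblock_t lblock);

    int read_logical_block(lblock_t lblock, void *buf)
    {
        pblock_t pblock = map_logical_to_physical(lblock);

        toggle_drive_led();                      /* activity indication */
        if (seek_head(pblock) != 0)
            return -1;
        return read_physical_block(pblock, buf);
    }

When the glue stays this thin, a change in either layer's vocabulary is a local edit rather than a rewrite.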
In the Web-browser example, the glue would include the rendering code that maps a document object parsed from incoming HTML into a flattened visual representation as a bitmap in a display buffer, using GUI domain primitives to do the painting. This rendering code is notoriously the most bug-prone part of a browser. It attracts into itself kluges to address problems that originate both in the HTML parsing (because there is a lot of ill-formed markup out there) and the GUI toolkit (which may not have quite the primitives that are really needed).
A Web browser's glue layer has to mediate not merely between specification and domain primitives, but between several different external specifications: the network behavior standardized in HTTP, HTML document structure, and various graphics and multimedia formats as well as the users' behavioral expectations from the GUI.
And one single bug-prone glue layer is not the worst fate that can befall a design. A designer who is aware that the glue layer exists, and tries to organize it into a middle layer around its own set of data structures or objects, can end up with two layers of glue — one above the midlayer and one below. Programmers who are bright but unseasoned are particularly apt to fall into this trap; they'll get each fundamental set of classes (application logic, midlayer, and domain primitives) right and make them look like the textbook examples, only to flounder as the multiple layers of glue needed to integrate all that pretty code get thicker and thicker.
The thin-glue principle can be viewed as a refinement of the Rule of Separation. Policy (the application logic) should be cleanly separated from mechanism (the domain primitives), but if there is a lot of code that is neither policy nor mechanism, chances are that it is accomplishing very little besides adding global complexity to the system.
The C language itself is a good example of the effectiveness of thin glue.
Thompson and Ritchie designed C to be a sort of structured assembler for an idealized processor and memory architecture that they expected could be efficiently modeled on most conventional computers. By happy accident, their model for the idealized processor was the PDP-11, a particularly mature and elegant minicomputer design that closely approximated Blaauw & Brooks's classical architecture. By good judgment, Thompson and Ritchie declined to wire into their language most of the few traits (such as little-endian byte order[46]) where the PDP-11 didn't match it.[47]
The PDP-11 became an important model for the following generations of microprocessor architectures. The basic abstractions of C turned out to capture the classical architecture rather neatly. Thus, C started out as a good fit for microprocessors and, rather than becoming irrelevant as its assumptions fell out of date, actually became a better fit as hardware converged more closely on the classical architecture. One notable example of this convergence was when Intel's 386, with its large flat memory-address space, replaced the 286's awkward segmented-memory addressing after 1985; pure C was actually a better fit for the 386 than it had been for the 286.
It is not a coincidence that the experimental era in computer architectures ended in the mid-1980s at the same time that C (and its close descendant C++) were sweeping all before them as general-purpose programming languages. C, designed as a thin but flexible layer over the classical architecture, looks with two decades' additional perspective like almost the best possible design for the structured-assembler niche it was intended to fill. In addition to compactness, orthogonality, and detachment (from the machine architecture on which it was originally designed), it also has the important quality of transparency that we will discuss in Chapter 6. The few language designs since that are arguably better have needed to make large changes (like introducing garbage collection) in order to get enough functional distance from C not to be swamped by it.
This history is worth recalling and understanding because C shows us how powerful a clean, minimalist design can be. If Thompson and Ritchie had been less wise, they would have designed a language that did much more, relied on stronger assumptions, never ported satisfactorily off its original hardware platform, and withered away as the world changed out from under it. Instead, C has flourished — and the example Thompson and Ritchie set has influenced the style of Unix development ever since. As the writer, adventurer, artist, and aeronautical engineer Antoine de Saint-Exupéry once put it, writing about the design of airplanes: «La perfection est atteinte non quand il ne reste rien à ajouter, mais quand il ne reste rien à enlever». (“Perfection is attained not when there is nothing more to add, but when there is nothing more to remove”.)
Ritchie and Thompson lived by this maxim. Long after the resource constraints on early Unix software had eased, they worked at keeping C as thin a layer over the hardware as possible.
The history of C is also a lesson in the value of having a working reference implementation before you standardize. We'll return to this point in Chapter 17 when we discuss the evolution of C and Unix standards.
[46] The terms big-endian and little-endian refer to architectural choices about the order in which bits are interpreted within a machine word. Though it has no canonical location, a Web search for On Holy Wars and a Plea for Peace should turn up a classic and entertaining paper on this subject.
[47] The widespread belief that the autoincrement and autodecrement features entered C because they represented PDP-11 machine instructions is a myth. According to Dennis Ritchie, these operations were present in the ancestral B language before the PDP-11 existed.