Evidence for Design in Physics and Biology: From the Origin of the Universe to the Origin of Life

By: Dr. Stephen C. Meyer; ©2001
Dr. Stephen Meyer explains the evidence for intelligent design from the perspectives of physics and biology.

Pages 53–111 of Science and Evidence for Design in the Universe: The Proceedings of the Wethersfield Institute, by Michael Behe, William A. Dembski, and Stephen C. Meyer (San Francisco: Ignatius Press, 2001). © 2000 Homeland Foundation.

1. Introduction

In the preceding essay, mathematician and probability theorist William Dembski notes that human beings often detect the prior activity of rational agents in the effects they leave behind.[1] Archaeologists assume, for example, that rational agents produced the inscriptions on the Rosetta Stone; insurance fraud investigators detect certain ‘‘cheating patterns’’ that suggest intentional manipulation of circumstances rather than ‘‘natural’’ disasters; and cryptographers distinguish between random signals and those that carry encoded messages.

More importantly, Dembski’s work establishes the criteria by which we can recognize the effects of rational agents and distinguish them from the effects of natural causes. In brief, he shows that systems or sequences that are both ‘‘highly complex’’ (or very improbable) and ‘‘specified’’ are always produced by intelligent agents rather than by chance and/or physical-chemical laws. Complex sequences exhibit an irregular and improbable arrangement that defies expression by a simple formula or algorithm. A specification, on the other hand, is a match or correspondence between an event or object and an independently given pattern or set of functional requirements.

As an illustration of the concepts of complexity and specification, consider the following three sets of symbols:

‘‘inetehnsdysk]idmhcpew,ms.s/a’’
‘‘Time and tide wait for no man.’’
‘‘ABABABABABABABABABAB’’

Both the first and second sequences shown above are complex because both defy reduction to a simple rule. Each represents a highly irregular, aperiodic, and improbable sequence of symbols. The third sequence is not complex but is instead highly ordered and repetitive. Of the two complex sequences, only the second exemplifies a set of independent functional requirements—that is, only the second sequence is specified. English has a number of functional requirements. For example, to convey meaning in English one must employ existing conventions of vocabulary (associations of symbol sequences with particular objects, concepts, or ideas), syntax, and grammar (such as ‘‘every sentence requires a subject and a verb’’). When arrangements of symbols ‘‘match’’ or utilize existing vocabulary and grammatical conventions (that is, functional requirements), communication can occur. Such arrangements exhibit ‘‘specification’’. The second sequence (‘‘Time and tide wait for no man’’) clearly exhibits such a match between itself and the preexisting requirements of English vocabulary and grammar.

Thus, of the three sequences above only the second manifests complexity and specification, both of which must be present for us to infer a designed system according to Dembski’s theory. The third sequence lacks complexity, though it does exhibit a simple pattern, a specification of sorts. The first sequence is complex but not specified, as we have seen. Only the second sequence, therefore, exhibits both complexity and specification. Thus, according to Dembski’s theory, only the second sequence indicates an intelligent cause—as indeed our intuition tells us.

As the above illustration suggests, Dembski’s criteria of specification and complexity bear a close relationship to certain concepts of information. As it turns out, the joint criteria of complexity and specification (or ‘‘specified complexity’’) are equivalent or ‘‘isomorphic’’ with the term ‘‘information content’’,[2] as it is often used.[3] Thus, Dembski’s work suggests that ‘‘high information content’’ indicates the activity of an intelligent agent. Common, as well as scientific, experience confirms this theoretical insight. For example, few rational people would attribute hieroglyphic inscriptions to natural forces such as wind or erosion rather than to intelligent activity.

Dembski’s work also shows how we use a comparative reasoning process to decide between natural and intelligent causes. We usually seek to explain events by reference to one of three competing types of explanation: chance, necessity (as the result of physical-chemical laws), and/or design (that is, as the work of an intelligent agent). Dembski has created a formal model of evaluation that he calls ‘‘the explanatory filter’’. The filter shows that the best explanation of an event is determined by its probabilistic features or ‘‘signature’’. Chance best explains events of small or intermediate probability; necessity (or physical-chemical law) best explains events of high probability; and intelligent design best explains small-probability events that also manifest specificity (of function, for example). His ‘‘explanatory filter’’ constitutes, in effect, a scientific method for detecting the activity of intelligence. When events are both highly improbable and specified (by an independent pattern), we can reliably detect the activity of intelligent agents. In such cases, explanations involving design are better than those that rely exclusively on chance and/or deterministic natural processes.
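Purely as an illustration, the verbal description above can be summarized as a short decision procedure. The sketch below is not Dembski’s formal apparatus: the probability thresholds and the boolean specification test are placeholder assumptions chosen only to mirror the prose.

```python
# Illustrative sketch of the explanatory filter as a decision procedure.
# The thresholds below are placeholder assumptions, not Dembski's definitions.

HIGH_PROBABILITY = 0.5       # assumed cutoff for "events of high probability"
SMALL_PROBABILITY = 1e-150   # assumed cutoff for "small probability" events

def explanatory_filter(probability: float, specified: bool) -> str:
    """Return the type of explanation the filter would favor for an event."""
    if probability >= HIGH_PROBABILITY:
        return "necessity (physical-chemical law)"
    if probability <= SMALL_PROBABILITY and specified:
        return "design (intelligent agency)"
    return "chance"

print(explanatory_filter(0.9, specified=False))      # high probability -> necessity
print(explanatory_filter(1e-160, specified=False))   # improbable but unspecified -> chance
print(explanatory_filter(1e-160, specified=True))    # improbable and specified -> design
```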

Dembski’s work shows that detecting the activity of intelligent agency (‘‘inferring design’’) represents an indisputably common form of rational activity. His work also suggests that the properties of complexity and specification reliably indicate the prior activity of an intelligent cause. This essay will build on this insight to address another question. It will ask: Are the criteria that indicate intelligent design present in features of nature that clearly preexist the advent of humans on earth? Are the features that indicate the activity of a designing intelligence present in the physical structure of the universe or in the features of living organisms? If so, does intelligent design still constitute the best explanation of these features, or might naturalistic explanations based upon chance and/or physico-chemical necessity constitute a better explanation? This paper will evaluate the merits of the design argument in light of developments in physics and biology as well as Dembski’s work on ‘‘the design inference’’. I will employ Dembski’s comparative explanatory method (the ‘‘explanatory filter’’) to evaluate the competing explanatory power of chance, necessity, and design with respect to evidence in physics and biology. I will argue that intelligent design (rather than chance, necessity, or a combination of the two) constitutes the best explanation of these phenomena. I will thus suggest an empirical, as well as a theoretical, basis for resuscitating the design argument.

2.1 Evidence of Design in Physics: Anthropic ‘‘Fine Tuning’’

Despite the long popularity of the design argument in the history of Western thought, most scientists and philosophers had come to reject the design argument by the beginning of the twentieth century. Developments in philosophy during the eighteenth century and developments in science during the nineteenth (such as Laplace’s nebular hypothesis and Darwin’s theory of evolution by natural selection) left most scientists and scholars convinced that nature did not manifest unequivocal evidence of intelligent design.

During the last forty years, however, developments in physics and cosmology have placed the word ‘‘design’’ back in the scientific vocabulary. Beginning in the 1960s, physicists unveiled a universe apparently fine-tuned for the possibility of human life. They discovered that the existence of life in the universe depends upon a highly improbable but precise balance of physical factors.[4] The constants of physics, the initial conditions of the universe, and many other of its features appear delicately balanced to allow for the possibility of life. Even very slight alterations in the values of many factors, such as the expansion rate of the universe, the strength of gravitational or electromagnetic attraction, or the value of Planck’s constant, would render life impossible. Physicists now refer to these factors as ‘‘anthropic coincidences’’ (because they make life possible for man) and to the fortunate convergence of all these coincidences as the ‘‘fine tuning of the universe’’. Given the improbability of the precise ensemble of values represented by these constants, and their specificity relative to the requirements of a life-sustaining universe, many physicists have noted that the fine tuning strongly suggests design by a preexistent intelligence. As well-known British physicist Paul Davies has put it, ‘‘the impression of design is overwhelming.’’[5]

To see why, consider the following illustration. Imagine that you are a cosmic explorer who has just stumbled into the control room of the whole universe. There you discover an elaborate ‘‘universe-creating machine’’, with rows and rows of dials, each with many possible settings. As you investigate, you learn that each dial represents some particular parameter that has to be calibrated with a precise value in order to create a universe in which life can exist. One dial represents the possible settings for the strong nuclear force, one for the gravitational constant, one for Planck’s constant, one for the ratio of the neutron mass to the proton mass, one for the strength of electromagnetic attraction, and so on. As you, the cosmic explorer, examine the dials, you find that they could easily have been tuned to different settings. Moreover, you determine by careful calculation that if any of the dial settings were even slightly altered, life would cease to exist. Yet for some reason each dial is set at just the exact value necessary to keep the universe running. What do you infer about the origin of these finely tuned dial settings?

Not surprisingly, physicists have been asking the same question. As astronomer George Greenstein mused, ‘‘the thought insistently arises that some supernatural agency, or rather Agency, must be involved. Is it possible that suddenly, without intending to, we have stumbled upon scientific proof for the existence of a Supreme Being? Was it God who stepped in and so providentially crafted the cosmos for our benefit?’’[6] For many scientists,[7] the design hypothesis seems the most obvious and intuitively plausible answer to this question. As Sir Fred Hoyle commented, ‘‘a commonsense interpretation of the facts suggests that a super intellect has monkeyed with physics, as well as chemistry and biology, and that there are no blind forces worth speaking about in nature.’’[8] Many physicists now concur. They would argue that, given the improbability and yet the precision of the dial settings, design seems the most plausible explanation for the anthropic fine tuning. Indeed, it is precisely the combination of the improbability (or complexity) of the settings and their specificity relative to the conditions required for a life-sustaining universe that seems to trigger the ‘‘commonsense’’ recognition of design.

2.2 Anthropic Fine Tuning and the Explanatory Filter

Yet several other types of interpretations have been proposed: (1) the so-called weak anthropic principle, which denies that the fine tuning needs explanation; (2) explanations based upon natural law; and (3) explanations based upon chance. Each of these approaches denies that the fine tuning of the universe resulted from an intelligent agent. Using Dembski’s ‘‘explanatory filter’’, this section will compare the explanatory power of competing types of explanations for the origin of the anthropic fine tuning. It will also argue, contra (1), that the fine tuning does require explanation.

Of the three options above, perhaps the most popular approach, at least initially, was the ‘‘weak anthropic principle’’ (WAP). Nevertheless, the WAP has recently encountered severe criticism from philosophers of physics and cosmology. Advocates of WAP claimed that if the universe were not fine-tuned to allow for life, then humans would not be here to observe it. Thus, they claimed, the fine tuning requires no explanation. Yet as John Leslie and William Craig have argued, the origin of the fine tuning does require explanation.[9] Though we humans should not be surprised to find ourselves living in a universe suited for life (by definition), we ought to be surprised to learn that the conditions necessary for life are so vastly improbable. Leslie likens our situation to that of a blindfolded man who has discovered that, against all odds, he has survived a firing squad of one hundred expert marksmen.[10] Though his continued existence is certainly consistent with all the marksmen having missed, it does not explain why the marksmen actually did miss. In essence, the weak anthropic principle wrongly asserts that the statement of a necessary condition of an event eliminates the need for a causal explanation of that event. Oxygen is a necessary condition of fire, but saying so does not provide a causal explanation of the San Francisco fire. Similarly, the fine tuning of the physical constants of the universe is a necessary condition for the existence of life, but that does not explain, or eliminate the need to explain, the origin of the fine tuning.

While some scientists have denied that the fine-tuning coincidences require explanation (with the WAP), others have tried to find various naturalistic explanations for them. Of these, appeals to natural law have proven the least popular, for a simple reason. The precise ‘‘dial settings’’ of the different constants of physics are specific features of the laws of nature themselves. For example, the gravitational constant G determines just how strong gravity will be, given two bodies of known mass separated by a known distance. The constant G is a term within the equation that describes gravitational attraction. In this same way, all the constants of the fundamental laws of physics are features of the laws themselves. Therefore, the laws cannot explain these features; they comprise the features that we need to explain.

As Davies has observed, the laws of physics ‘‘seem themselves to be the product of exceedingly ingenious design’’.[11] Further, natural laws by definition describe phenomena that conform to regular or repetitive patterns. Yet the idiosyncratic values of the physical constants and initial conditions of the universe constitute a highly irregular and nonrepetitive ensemble. It seems unlikely, therefore, that any law could explain why all the fundamental constants have exactly the values they do—why, for example, the gravitational constant should have exactly the value 6.67 × 10⁻¹¹ Newton-meters² per kilogram², and the permittivity constant in Coulomb’s law the value 8.85 × 10⁻¹² Coulombs² per Newton-meter², and the electron charge-to-mass ratio 1.76 × 10¹¹ Coulombs per kilogram, and Planck’s constant 6.63 × 10⁻³⁴ Joule-seconds, and so on.[12] These values specify a highly complex array. As a group, they do not seem to exhibit a regular pattern that could in principle be subsumed or explained by natural law.

Explaining the anthropic coincidences as the product of chance has proven more popular, but this approach has several severe liabilities as well. First, the immense improbability of the fine tuning makes straightforward appeals to chance untenable. Physicists have discovered more than thirty separate physical or cosmological parameters that require precise calibration in order to produce a life-sustaining universe.[13] Michael Denton, in his book Nature’s Destiny (1998), has documented many other necessary conditions for specifically human life from chemistry, geology, and biology. Moreover, many individual parameters exhibit an extraordinarily high degree of fine tuning. The expansion rate of the universe must be calibrated to one part in 10⁶⁰.[14] A slightly more rapid rate of expansion—by one part in 10⁶⁰—would have resulted in a universe too diffuse in matter to allow stellar formation.[15] An even slightly less rapid rate of expansion—by the same factor—would have produced an immediate gravitational recollapse. The force of gravity itself requires fine tuning to one part in 10⁴⁰.[16] Thus, our cosmic explorer finds himself confronted not only with a large ensemble of separate dial settings but with very large dials containing a vast array of possible settings, only very few of which allow for a life-sustaining universe. In many cases, the odds of arriving at a single correct setting by chance, let alone all the correct settings, turn out to be virtually infinitesimal. Oxford physicist Roger Penrose has noted that a single parameter, the so-called ‘‘original phase-space volume’’, required such precise fine tuning that the ‘‘Creator’s aim must have been [precise] to an accuracy of one part in 10^(10^123)’’ (that is, ten raised to the power of 10¹²³). Penrose goes on to remark that ‘‘one could not possibly even write the number down in full . . . [since] it would be ‘1’ followed by 10¹²³ successive ‘0’s!’’—more zeros than the number of elementary particles in the entire universe. Such is, he concludes, ‘‘the precision needed to set the universe on its course’’.[17]

To circumvent such vast improbabilities, some scientists have postulated the existence of a quasi-infinite number of parallel universes. By doing so, they increase the amount of time and the number of possible trials available to generate a life-sustaining universe and thus increase the probability of such a universe arising by chance. In these ‘‘many worlds’’ or ‘‘possible worlds’’ scenarios—which were originally developed as part of the ‘‘Everett interpretation’’ of quantum physics and the inflationary Big Bang cosmology of André Linde—any event that could happen, however unlikely it might be, must happen somewhere in some other parallel universe.[18] So long as life has a positive (greater than zero) probability of arising, it had to arise in some possible world. Therefore, sooner or later some universe had to acquire life-sustaining characteristics. Clifford Longley explains that, according to the many-worlds hypothesis:

There could have been millions and millions of different universes created each with different dial settings of the fundamental ratios and constants, so many in fact that the right set was bound to turn up by sheer chance. We just happened to be the lucky ones.[19]

According to the many-worlds hypothesis, our existence in the universe only appears vastly improbable, since calculations about the improbability of the anthropic coincidences arising by chance only consider the ‘‘probabilistic resources’’ (roughly, the amount of time and the number of possible trials) available within our universe and neglect the probabilistic resources available from the parallel universes. According to the many-worlds hypothesis, chance can explain the existence of life in the universe after all.

The many-worlds hypothesis now stands as the most popular naturalistic explanation for the anthropic fine tuning and thus warrants detailed comment. Though clearly ingenious, the many-worlds hypothesis suffers from an overriding difficulty: we have no evidence for any universes other than our own. Moreover, since possible worlds are by definition causally inaccessible to our own world, there can be no evidence for their existence except that they allegedly render probable otherwise vastly improbable events. Of course, no one can observe a designer directly either, although a theistic designer—that is, God—is not causally disconnected from our world. Even so, recent work by philosophers of science such as Richard Swinburne, John Leslie, Bill Craig,[20] Jay Richards,[21] and Robin Collins has established several reasons for preferring the (theistic) design hypothesis to the naturalistic many-worlds hypothesis.

2.3 Theistic Design: A Better Explanation?

First, all current cosmological models involving multiple universes require some kind of mechanism for generating universes. Yet such a ‘‘universe generator’’ would itself require precisely configured physical states, thus begging the question of its initial design. As Collins describes the dilemma:

In all currently worked out proposals for what this universe generator could be—such as the oscillating big bang and the vacuum fluctuation models . . .—the ‘‘generator’’ itself is governed by a complex set of laws that allow it to produce universes. It stands to reason, therefore, that if these laws were slightly different the generator probably would not be able to produce any universes that could sustain life.[22]

Indeed, from experience we know that some machines (or factories) can produce other machines. But our experience also suggests that such machine-producing machines themselves require intelligent design.

Second, as Collins argues, all things being equal, we should prefer hypotheses ‘‘that are natural extrapolations from what we already know’’ about the causal powers of various kinds of entities.[23] Yet when it comes to explaining the anthropic coincidences, the multiple-worlds hypothesis fails this test, whereas the theistic-design hypothesis does not. To illustrate, Collins asks his reader to imagine a paleontologist who posits the existence of an electromagnetic ‘‘dinosaur-bone-producing field’’, as opposed to actual dinosaurs, as the explanation for the origin of large fossilized bones. While such a field certainly qualifies as a possible explanation for the origin of the fossil bones, we have no experience of such fields or of their producing fossilized bones. Yet we have observed animal remains in various phases of decay and preservation in sediments and sedimentary rock. Thus, most scientists rightly prefer the actual dinosaur hypothesis over the apparent dinosaur hypothesis (that is, the ‘‘dinosaur-bone-producing-field’’ hypothesis) as an explanation for the origin of fossils. In the same way, Collins argues, we have no experience of anything like a ‘‘universe generator’’ (that is not itself designed; see above) producing finely tuned systems or infinite and exhaustively random ensembles of possibilities. Yet we do have extensive experience of intelligent agents producing finely tuned machines such as Swiss watches. Thus, Collins concludes, when we postulate ‘‘a supermind’’ (God) to explain the fine tuning of the universe, we are extrapolating from our experience of the causal powers of known entities (that is, intelligent humans), whereas when we postulate the existence of an infinite number of separate universes, we are not.

Third, as Craig has shown, for the many-worlds hypothesis to suffice as an explanation for anthropic fine tuning, it must posit an exhaustively random distribution of physical parameters and thus an infinite number of parallel universes to insure that a life-producing combination of factors will eventually arise. Yet neither of the physical models that allow for a multiple-universe interpretation—Everett’s quantum-mechanical model or Linde’s inflationary cosmology—provides a compelling justification for believing that such an exhaustively random and infinite number of parallel universes exists, but instead only a finite and nonrandom set.[24] The Everett model, for example, only generates an ensemble of material states, each of which exists within a parallel universe that has the same set of physical laws and constants as our own. Since the physical constants do not vary ‘‘across universes’’, Everett’s model does nothing to increase the probability of the precise fine tuning of constants in our universe arising by chance. Though Linde’s model does envision a variable ensemble of physical constants in each of his individual ‘‘bubble universes’’, his model fails to generate either an exhaustively random set of such conditions or the infinite number of universes required to render probable the life-sustaining fine tuning of our universe.

Fourth, Richard Swinburne argues that the theistic-design hypothesis constitutes a simpler and less ad hoc hypothesis than the many-worlds hypothesis.[25] He notes that virtually the only evidence for many worlds is the very anthropic fine tuning the hypothesis was formulated to explain. On the other hand, the theistic-design hypothesis, though also supported only by indirect evidence, can explain many separate and independent features of the universe that the many-worlds scenario cannot, including the origin of the universe itself, the mathematical beauty and elegance of physical laws, and personal religious experience. Swinburne argues that the God hypothesis is a simpler as well as a more comprehensive explanation because it requires the postulation of only one explanatory entity, rather than the multiple entities—including the finely tuned universe generator and the infinite number of causally separate universes—required by the many-worlds hypothesis.

Swinburne’s and Collins’ arguments suggest that few reasonable people would accept such an unparsimonious and far-fetched explanation as the many-worlds hypothesis in any other domain of life. That some scientists dignify the many-worlds hypothesis with serious discussion may speak more to an unimpeachable commitment to naturalistic philosophy than to any compelling merit for the idea itself. As Clifford Longley noted in the London Times in 1989,[26] the use of the many-worlds hypothesis to avoid the theistic-design argument often seems to betray a kind of special pleading and metaphysical desperation. As Longley explains:

The [anthropic-design argument] and what it points to is of such an order of certainty that in any other sphere of science, it would be regarded as settled. To insist otherwise is like insisting that Shakespeare was not written by Shakespeare because it might have been written by a billion monkeys sitting at a billion keyboards typing for a billion years. So it might. But the sight of scientific atheists clutching at such desperate straws has put new spring in the step of theists.[27]

Indeed, it has. As the twentieth century comes to a close, the design argument has reemerged from its premature retirement at the hands of biologists in the nineteenth century. Physics, astronomy, cosmology, and chemistry have each revealed that life depends on a very precise set of design parameters, which, as it happens, have been built into our universe. The fine-tuning evidence has led to a persuasive reformulation of the design hypothesis, even if it does not constitute a formal deductive proof of God’s existence. Physicist John Polkinghorne has written that, as a result, ‘‘we are living in an age where there is a great revival of natural theology taking place. That revival of natural theology is taking place not on the whole among theologians, who have lost their nerve in that area, but among the scientists.’’[28] Polkinghorne also notes that this new natural theology generally has more modest ambitions than the natural theology of the Middle Ages. Indeed, scientists arguing for design based upon evidence of anthropic fine tuning tend to do so by inferring an intelligent cause as a ‘‘best explanation’’, rather than by making a formal deductive proof of God’s existence. (See Appendix, pp. 213–34, ‘‘Fruitful Interchange or Polite Chitchat: The Dialogue between Science and Theology’’.) Indeed, the foregoing analysis of competing types of causal explanations for the anthropic fine tuning suggests intelligent design precisely as the best explanation for its origin. Thus, fine-tuning evidence may support belief in God’s existence, even if it does not ‘‘prove’’ it in a deductively certain way.

3.1 Evidence of Intelligent Design in Biology

Despite the renewed interest in design among physicists and cosmologists, most biologists are still reluctant to consider such notions. Indeed, since the late nineteenth century, most biologists have rejected the idea that biological organisms manifest evidence of intelligent design. While many acknowledge the appearance of design in biological systems, they insist that purely naturalistic mechanisms such as natural selection acting on random variations can fully account for the appearance of design in living things.

3.2 Molecular Machines

Nevertheless, the interest in design has begun to spread to biology. For example, in 1998 the leading journal Cell featured a special issue on ‘‘Macromolecular Machines’’. Molecular machines are incredibly complex devices that all cells use to process information, build proteins, and move materials back and forth across their membranes. Bruce Alberts, President of the National Academy of Sciences, introduced this issue with an article entitled ‘‘The Cell as a Collection of Protein Machines’’. In it, he stated that:

We have always underestimated cells. . . . The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. . . . Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts.[29]

Alberts notes that molecular machines strongly resemble machines designed by human engineers, although as an orthodox neo-Darwinian he denies any role for actual, as opposed to apparent, design in the origin of these systems.

In recent years, however, a formidable challenge to this view has arisen within biology. In his book Darwin’s Black Box (1996), Lehigh University biochemist Michael Behe shows that neo-Darwinists have failed to explain the origin of complex molecular machines in living systems. For example, Behe looks at the ion-powered rotary engines that turn the whip-like flagella of certain bacteria.[30] He shows that the intricate machinery in this molecular motor—including a rotor, a stator, O-rings, bushings, and a drive shaft—requires the coordinated interaction of some forty complex protein parts. Yet the absence of any one of these proteins results in the complete loss of motor function. To assert that such an ‘‘irreducibly complex’’ engine emerged gradually in a Darwinian fashion strains credulity. According to Darwinian theory, natural selection selects functionally advantageous systems.[31] Yet motor function only ensues after all the necessary parts have independently self-assembled—an astronomically improbable event. Thus, Behe insists that Darwinian mechanisms cannot account for the origin of molecular motors and other ‘‘irreducibly complex systems’’ that require the coordinated interaction of multiple independent protein parts.

To emphasize his point, Behe has conducted a literature search of relevant technical journals.[32] He has found a complete absence of gradualistic Darwinian explanations for the origin of the systems and motors that he discusses. Behe concludes that neo-Darwinists have not explained, or in most cases even attempted to explain, how the appearance of design in ‘‘irreducibly complex’’ systems arose naturalistically. Instead, he notes that we know of only one cause sufficient to produce functionally integrated, irreducibly complex systems, namely, intelligent design. Indeed, whenever we encounter irreducibly complex systems and we know how they arose, they were invariably designed by an intelligent agent. Thus, Behe concludes (on strong uniformitarian grounds) that the molecular machines and complex systems we observe in cells must also have had an intelligent source. In brief, molecular motors appear designed because they were designed.

3.3 The Complex Specificity of Cellular Components

As Dembski has shown elsewhere,[33] Behe’s notion of ‘‘irreducible complexity’’ constitutes a special case of the ‘‘complexity’’ and ‘‘specification’’ criteria that enable us to detect intelligent design. Yet a more direct application of Dembski’s criteria to biology can be made by analyzing proteins, the macromolecular components of the molecular machines that Behe examines inside the cell. In addition to building motors and other biological structures, proteins perform the vital biochemical functions—information processing, metabolic regulation, signal transduction—necessary to maintain and create cellular life.

Biologists, from Darwin’s time to the late 1930s, assumed that proteins had simple, regular structures explicable by reference to mathematical laws. Beginning in the 1950s, however, biologists made a series of discoveries that caused this simplistic view of proteins to change. Molecular biologist Fred Sanger determined the sequence of constituents in the protein molecule insulin. Sanger’s work showed that proteins are made of long nonrepetitive sequences of amino acids, rather like an irregular arrangement of colored beads on a string.[34] Later in the 1950s, work by John Kendrew on the structure of the protein myoglobin showed that proteins also exhibit a surprising three-dimensional complexity. Far from the simple structures that biologists had imagined, Kendrew’s work revealed an extraordinarily complex and irregular three-dimensional shape—a twisting, turning, tangled chain of amino acids. As Kendrew explained in 1958, ‘‘the big surprise was that it was so irregular . . . the arrangement seems to be almost totally lacking in the kind of regularity one instinctively anticipates, and it is more complicated than has been predicted by any theory of protein structure.’’[35]

During the 1950s, scientists quickly realized that proteins possess another remarkable property. In addition to their complexity, proteins also exhibit specificity, both as one-dimensional arrays and as three-dimensional structures. Whereas proteins are built from rather simple chemical building blocks known as amino acids, their function—whether as enzymes, signal transducers, or structural components in the cell—depends crucially upon the complex but specific sequencing of these building blocks.[36] Molecular biologists such as Francis Crick quickly likened this feature of proteins to a linguistic text. Just as the meaning (or function) of an English text depends upon the sequential arrangement of letters in a text, so too does the function of a polypeptide (a sequence of amino acids) depend upon its specific sequencing. Moreover, in both cases, slight alterations in sequencing can quickly result in loss of function.

In the biological case, the specific sequencing of amino acids gives rise to specific three-dimensional structures. This structure or shape in turn (largely) determines what function, if any, the amino acid chain can perform within the cell. A functioning protein’s three-dimensional shape gives it a ‘‘hand-in-glove’’ fit with other molecules in the cell, enabling it to catalyze specific chemical reactions or to build specific structures within the cell. Due to this specificity, one protein cannot usually substitute for another any more than one tool can substitute for another. A topoisomerase can no more perform the job of a polymerase than a hatchet can perform the function of a soldering iron. Proteins can perform functions only by virtue of their three-dimensional specificity of fit with other equally specified and complex molecules within the cell. This three-dimensional specificity derives in turn from a one-dimensional specificity of sequencing in the arrangement of the amino acids that form proteins.

3.4 The Sequence Specificity of DNA

The discovery of the complexity and specificity of proteins has raised an important question. How did such complex but specific structures arise in the cell? This question recurred with particular urgency after Sanger revealed his results in the early 1950s. Clearly, proteins were too complex and functionally specific to arise ‘‘by chance’’. Moreover, given their irregularity, it seemed unlikely that a general chemical law or regularity governed their assembly. Instead, as Nobel Prize winner Jacques Monod recalled, molecular biologists began to look for some source of information within the cell that could direct the construction of these highly specific structures. As Monod would later recall, to explain the presence of the specific sequencing of proteins, ‘‘you absolutely needed a code.’’[37]

In 1953, James Watson and Francis Crick elucidated the structure of the DNA molecule.[38] The structure they discovered suggested a means by which information or ‘‘specificity’’ of sequencing might be encoded along the spine of DNA’s sugar-phosphate backbone.[39] Their model suggested that variations in the sequencing of the nucleotide bases might find expression in the sequencing of the amino acids that form proteins. Francis Crick proposed this idea in 1955, calling it the ‘‘sequence hypothesis’’.[40]

According to Crick’s hypothesis, the specific arrangement of the nucleotide bases on the DNA molecule generates the specific arrangement of amino acids in proteins.[41] The sequence hypothesis suggested that the nucleotide bases in DNA functioned like letters in an alphabet or characters in a machine code. Just as alphabetic letters in a written language may perform a communication function depending upon their sequencing, so too, Crick reasoned, the nucleotide bases in DNA may result in the production of a functional protein molecule depending upon their precise sequential arrangement. In both cases, function depends crucially upon sequencing. The nucleotide bases in DNA function in precisely the same way as symbols in a machine code or alphabetic characters in a book. In each case, the arrangement of the characters determines the function of the sequence as a whole. As Dawkins notes, ‘‘The machine code of the genes is uncannily computerlike.’’[42] Or, as software innovator Bill Gates explains, ‘‘DNA is like a computer program, but far, far more advanced than any software we’ve ever created.’’[43] In the case of a computer code, the specific arrangement of just two symbols (0 and 1) suffices to carry information. In the case of an English text, the twenty-six letters of the alphabet do the job. In the case of DNA, the complex but precise sequencing of the four nucleotide bases adenine, thymine, guanine, and cytosine (A, T, G, and C) stores and transmits genetic information, information that finds expression in the construction of specific proteins. Thus, the sequence hypothesis implied not only the complexity but also the functional specificity of DNA base sequencing.
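As a rough illustration of how alphabets of different sizes can carry information, one can compare the per-symbol capacity of binary code, English text, and DNA. The calculation below assumes, for simplicity, that every symbol is equally likely; this is an illustrative assumption, not a claim from the text.

```python
import math

# Maximum information per symbol (in bits) for an alphabet of a given size,
# assuming equiprobable symbols: log2(alphabet size). Real texts and genomes
# are not equiprobable, so actual per-symbol content is lower.
for name, alphabet_size in [("binary code", 2), ("DNA bases", 4), ("English letters", 26)]:
    bits = math.log2(alphabet_size)
    print(f"{name}: {alphabet_size} symbols -> {bits:.2f} bits per symbol")

# A hypothetical gene of ~1,000 bases could in principle take any of
# 4^1000 (about 10^602) distinct sequences.
print(f"4^1000 ~ 10^{1000 * math.log10(4):.0f}")
```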

4.1 The Origin of Life and the Origin of Biological Information (or Specified Complexity)

Developments in molecular biology have led scientists to ask how the specific sequencing—the information content or specified complexity—in both DNA and proteins originated. These developments have also created severe difficulties for all strictly naturalistic theories of the origin of life. Since the late 1920s, naturalistically minded scientists have sought to explain the origin of the very first life as the result of a completely undirected process of ‘‘chemical evolution’’. In The Origin of Life (1938), Alexander I. Oparin, a pioneering chemical evolutionary theorist, envisioned life arising by a slow process of transformation starting from simple chemicals on the early earth. Unlike Darwinism, which sought to explain the origin and diversification of new and more complex living forms from simpler, preexisting forms, chemical evolutionary theory seeks to explain the origin of the very first cellular life. Yet since the late 1950s, naturalistic chemical evolutionary theories have been unable to account for the origin of the complexity and specificity of DNA base sequencing necessary to build a living cell.[44] This section will, using the categories of Dembski’s explanatory filter, evaluate the competing types of naturalistic explanations for the origin of specified complexity or information content necessary to the first living cell.

4.2 Beyond the Reach of Chance

Perhaps the most common popular view about the origin of life is that it happened by chance. A few scientists have also voiced support for this view at various times during their careers. In 1954 physicist George Wald, for example, argued for the causal efficacy of chance operating over vast expanses of time. As he stated, ‘‘Time is in fact the hero of the plot. . . . Given so much time, the impossible becomes possible, the possible probable, and the probable virtually certain.’’[45] Later Francis Crick would suggest that the origin of the genetic code—that is, the translation system—might be a ‘‘frozen accident’’.[46] Other theories have invoked chance as an explanation for the origin of genetic information, often in conjunction with prebiotic natural selection. (See section 4.3.)

While some scientists may still invoke ‘‘chance’’ as an explanation, most biologists who specialize in origin-of-life research now reject chance as a possible explanation for the origin of the information in DNA and proteins.[47] Since molecular biologists began to appreciate the sequence specificity of proteins and nucleic acids in the 1950s and 1960s, many calculations have been made to determine the probability of formulating functional proteins and nucleic acids at random. Various methods of calculating probabilities have been offered by Morowitz,[48] Hoyle and Wickramasinghe,[49] Cairns-Smith,[50] Prigogine,[51] and Yockey.[52] For the sake of argument, such calculations have often assumed extremely favorable prebiotic conditions (whether realistic or not), much more time than was actually available on the early earth, and theoretically maximal reaction rates among constituent monomers (that is, the constituent parts of proteins, DNA, and RNA). Such calculations have invariably shown that the probability of obtaining functionally sequenced biomacromolecules at random is, in Prigogine’s words, ‘‘vanishingly small . . . even on the scale of . . . billions of years’’.[53] As Cairns-Smith wrote in 1971:

Blind chance . . . is very limited. Low-levels of cooperation he [blind chance] can produce exceedingly easily (the equivalent of letters and small words), but he becomes very quickly incompetent as the amount of organization increases. Very soon indeed long waiting periods and massive material resources become irrelevant.[54]

Consider the probabilistic hurdles that must be overcome to construct even one short protein molecule of about one hundred amino acids in length. (A typical protein consists of about three hundred amino acid residues, and many crucial proteins are very much longer.)[55]

First, all amino acids must form a chemical bond known as a peptide bond so as to join with other amino acids in the protein chain. Yet in nature many other types of chemical bonds are possible between amino acids; in fact, peptide and nonpeptide bonds occur with roughly equal probability. Thus, at any given site along a growing amino acid chain the probability of having a peptide bond is roughly 1/2. The probability of attaining four peptide bonds is (1/2 × 1/2 × 1/2 × 1/2) = 1/16, or (1/2)⁴. The probability of building a chain of one hundred amino acids in which all linkages are peptide bonds is (1/2)¹⁰⁰, or roughly 1 chance in 10³⁰.

Secondly, in nature every amino acid has a distinct mirror image of itself, one left-handed version, or L-form, and one right-handed version, or D-form. These mirror-image forms are called optical isomers. Functioning proteins use only left-handed amino acids, yet the right-handed and left-handed isomers occur in nature with roughly equal frequency. Taking this into consideration compounds the improbability of attaining a biologically functioning protein. The probability of attaining at random only L-amino acids in a hypothetical peptide chain one hundred amino acids long is (1/2)¹⁰⁰, or again roughly 1 chance in 10³⁰. The probability of building a chain one hundred amino acids long at random in which all bonds are peptide bonds and all amino acids are L-form would be roughly 1 chance in 10⁶⁰.

Finally, functioning proteins have a third independent requirement, which is the most important of all: their amino acids must link up in a specific sequential arrangement, just as the letters in a sentence must be arranged in a specific sequence to be meaningful. In some cases, even changing one amino acid at a given site can result in a loss of protein function. Moreover, because there are twenty biologically occurring amino acids, the probability of getting a specific amino acid at a given site is small, that is, 1/20. (Actually the probability is even lower because there are many nonproteineous amino acids in nature.) On the assumption that all sites in a protein chain require one particular amino acid, the probability of attaining a particular protein one hundred amino acids long would be (1/20)¹⁰⁰, or roughly 1 chance in 10¹³⁰. We know now, however, that some sites along the chain do tolerate several of the twenty proteineous amino acids, while others do not. The biochemist Robert Sauer of MIT has used a technique known as ‘‘cassette mutagenesis’’ to determine just how much variance among amino acids can be tolerated at any given site in several proteins. His results have shown that, even taking the possibility of variance into account, the probability of achieving a functional sequence of amino acids[56] in several known proteins at random is still ‘‘vanishingly small’’, roughly 1 chance in 10⁶⁵—an astronomically large number.[57] (There are 10⁶⁵ atoms in our galaxy.)[58]

Moreover, if one also factors in the need for proper bonding and homochirality (the first two factors discussed above), the probability of constructing a rather short functional protein at random becomes so small (1 chance in 10¹²⁵) as to approach the universal probability bound of 1 chance in 10¹⁵⁰, the point at which appeals to chance become absurd given the ‘‘probabilistic resources’’ of the entire universe.[59] Further, making the same calculations for even moderately longer proteins easily pushes these numbers well beyond that limit. For example, the probability of generating a protein of only 150 amino acids in length exceeds (using the same method as above)[60] 1 chance in 10¹⁸⁰, well beyond the most conservative estimates for the small probability bound given our multi-billion-year-old universe.[61] In other words, given the complexity of proteins, it is extremely unlikely that a random search through all the possible amino acid sequences could generate even a single relatively short functional protein in the time available since the beginning of the universe (let alone the time available on the early earth). Conversely, to have a reasonable chance of finding a short functional protein in such a random search would require vastly more time than either cosmology or geology allows.
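For readers who wish to check the arithmetic, the order-of-magnitude figures quoted in the preceding paragraphs can be reproduced in a few lines of Python. The chain length, the 1/2 and 1/20 per-site probabilities, and the use of Sauer’s 1-in-10⁶⁵ sequence figure follow the text; base-10 logarithms are used simply to keep the tiny numbers readable, and the 150-residue line assumes the per-residue figures scale linearly, which is an assumption of this sketch.

```python
import math

n = 100  # length of the hypothetical protein chain, per the text's example

# Work with base-10 exponents so the very small probabilities stay readable.
log_p_peptide = n * math.log10(1 / 2)   # all linkages peptide bonds  -> ~1 in 10^30
log_p_chiral  = n * math.log10(1 / 2)   # all residues left-handed    -> ~1 in 10^30
log_p_exact   = n * math.log10(1 / 20)  # one required residue/site   -> ~1 in 10^130
log_p_sauer   = -65                     # Sauer's tolerance-adjusted sequence figure

print(f"peptide bonds only:            1 in 10^{-log_p_peptide:.0f}")
print(f"all L-form:                    1 in 10^{-log_p_chiral:.0f}")
print(f"exact sequence (no tolerance): 1 in 10^{-log_p_exact:.0f}")

# Bonding + chirality + tolerance-adjusted sequencing gives roughly the
# text's 1 chance in 10^125 for a ~100-residue protein.
combined = log_p_peptide + log_p_chiral + log_p_sauer
print(f"combined (100 residues):       1 in 10^{-combined:.0f}")

# Scaling the same per-residue figures to 150 residues (an assumed linear
# scaling) lands near 1 in 10^188, consistent with "exceeds 1 in 10^180".
print(f"combined (150 residues):       1 in 10^{-(combined * 1.5):.0f}")
```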

More realistic calculations (taking into account the probable presence of nonproteineous amino acids, the need for vastly longer functional proteins to perform specific functions such as polymerization, and the need for multiple proteins functioning in coordination) only compound these improbabilities—indeed, almost beyond computability. For example, recent theoretical and experimental work on the so-called ‘‘minimal complexity’’ required to sustain the simplest possible living organism suggests a lower bound of some 250 to 400 genes and their corresponding proteins.[62] The nucleotide sequence space corresponding to such a system of proteins exceeds 4³⁰⁰⁰⁰⁰. The improbability corresponding to this measure of molecular complexity again vastly exceeds 1 chance in 10¹⁵⁰, and thus the ‘‘probabilistic resources’’ of the entire universe.
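Again as a rough check only, the size of that sequence space can be expressed as a power of ten; the figure of 300,000 nucleotide positions below is simply the exponent quoted above, not an independent estimate.

```python
import math

# 4^300000 expressed as a power of ten; 300,000 nucleotide positions is the
# exponent the text cites for a minimal genome of roughly 250 to 400 genes.
log10_space = 300_000 * math.log10(4)
print(f"4^300000 ~ 10^{log10_space:.0f}")   # about 10^180618

# Far beyond the universal probability bound of 1 chance in 10^150 cited above.
print(log10_space > 150)                    # True
```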
