Part 7: Enzymes Have A VERY Complex System of Working Together

In the last posting, we discussed how improbable it would be for accumulated random mutations to develop into a single functional enzyme. What makes this even more unlikely is that there are thousands of enzymes, which work together closely to produce useful products to maintain life.

Here is an online map that shows how they work together (1):

For example, the rate-limiting enzyme (the “bottleneck”) in the glycolysis pathway, phosphofructokinase-1, is inhibited by citrate, a downstream product of the pathway, as well as by ATP, the pathway’s important final energy product. This feedback keeps the pathway from running when its products are not needed. Do you think complex pathways like this came about by unrelated random mutations? I don’t.
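To illustrate how this kind of feedback inhibition throttles a pathway, here is a minimal toy model; the rate law and every parameter value are hypothetical illustrations, not measured PFK-1 kinetics:

```python
# Toy model of feedback inhibition (illustrative only; the parameter
# values are hypothetical, not measured PFK-1 kinetics).
def rate(substrate, inhibitor, vmax=1.0, km=0.5, ki=0.2):
    """Michaelis-Menten rate scaled by a simple noncompetitive inhibition term."""
    return vmax * substrate / (km + substrate) / (1.0 + inhibitor / ki)

low_atp = rate(substrate=1.0, inhibitor=0.05)   # little end product: pathway runs
high_atp = rate(substrate=1.0, inhibitor=2.0)   # plenty of end product: pathway throttled
print(low_atp > high_atp)  # True: the inhibitor slows the enzyme down
```

The more end product accumulates, the slower the enzyme runs, which is the self-regulating behavior described above.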


Part 6: Individual Enzymes Are Unlikely To Have Evolved by Accumulated Mutations

An enzyme is a protein designed to help a chemical reaction go faster within the body. It does this by making the halfway point in the reaction easier to get to (lowering the activation energy and stabilizing the transition state), as you can see in the picture below:

[Figure: induced fit diagram]


In order to do this, the protein must be properly made and folded to provide the perfect environment for that halfway point to happen. One wrong nucleotide of DNA, one wrong amino acid, and you may have a broken protein, leading to serious diseases such as sickle cell anemia (from a single-letter change in hemoglobin) or cystic fibrosis (from a defect in the CFTR protein). There are times when substitutions or changes can happen with little or no effect on the protein’s effectiveness, and little or no disease as a result; but only a limited number of changes are allowed if the enzyme is to keep doing its job.
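The effect of lowering the activation energy can be sketched with the standard Arrhenius equation; the barrier heights below are illustrative numbers, not measured values for any particular enzyme:

```python
import math

# Arrhenius equation: k = A * exp(-Ea / (R * T)).
# Illustrative numbers only: a hypothetical 50 kJ/mol barrier that an
# enzyme lowers to 30 kJ/mol, evaluated at body temperature.
R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # roughly body temperature, K
A = 1.0        # pre-exponential factor (arbitrary units)

def arrhenius(ea_joules):
    return A * math.exp(-ea_joules / (R * T))

uncatalyzed = arrhenius(50_000)
catalyzed = arrhenius(30_000)
print(catalyzed / uncatalyzed)  # rate speed-up from lowering the barrier
```

Even this modest, hypothetical reduction in the barrier speeds the reaction up by a factor of over a thousand, which is why a properly folded active site matters so much.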

According to evolution, the DNA blueprints for our proteins were developed by the accumulation of random mutations over time. However, this is a very improbable way to make thousands of efficiently working enzymes. Even when scientists with the best equipment have the DNA blueprints, they seriously struggle to predict what the finished protein will look like (1):

In computational biology, de novo protein structure prediction refers to an algorithmic process by which protein tertiary structure is predicted from its amino acid primary sequence [which comes directly from DNA]. The problem itself has occupied leading scientists for decades while still remaining unsolved. According to Science, the problem remains one of the top 125 outstanding issues in modern science. At present, some of the most successful methods have a reasonable probability of predicting the folds of small, single-domain proteins within 1.5 angstroms over the entire structure. [This is a small protein compared to many of those out there.]

De novo methods tend to require vast computational resources, and have thus only been carried out for relatively small proteins. De novo protein structure modeling is distinguished from Template-based modeling (TBM) by the fact that no solved homolog to the protein of interest is known, making efforts to predict protein structure from amino acid sequence exceedingly difficult. Prediction of protein structure de novo for larger proteins will require better algorithms and larger computational resources such as those afforded by either powerful supercomputers (such as Blue Gene or MDGRAPE-3) or distributed computing projects (such as Folding@home, Rosetta@home, the Human Proteome Folding Project, or Nutritious Rice for the World). Although computational barriers are vast, the potential benefits of structural genomics (by predicted or experimental methods) to fields such as medicine and drug design make de novo structure prediction an active research field…

A major limitation of de novo protein prediction methods is the extraordinary amount of computer time required to successfully solve for the native conformation of a protein. Distributed methods, such as Rosetta@home, have attempted to ameliorate this by recruiting individuals who then volunteer idle home computer time in order to process data. Even these methods face challenges, however. For example, a distributed method was utilized by a team of researchers at the University of Washington and the Howard Hughes Medical Institute to predict the tertiary structure of the protein T0283 from its amino acid sequence. In a blind test comparing the accuracy of this distributed technique with the experimentally confirmed structure deposited within the Protein Databank (PDB), the predictor produced excellent agreement with the deposited structure. However, the time and number of computers required for this feat was enormous – almost two years and approximately 70,000 home computers, respectively.

So my question is, if it takes 70,000 home computers two years to figure out how to fold one efficiently working protein (a transmembrane protein for Salmonella typhi), can we reasonably expect that unguided random mutations, without the benefit of computer support, will fold for us 8,000 efficiently working proteins? I say no.
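The scale of the folding search problem can be illustrated with a classic back-of-envelope (Levinthal-style) count. This calculation is not from the quoted article; the number of states per bond is a common textbook assumption:

```python
# Back-of-envelope Levinthal-style count (a classic illustration, not a
# measurement): assume each of a protein's backbone angles can take
# roughly 3 rotational states independently.
residues = 100                  # a small protein
angles_per_residue = 2          # phi and psi backbone angles, roughly
states_per_angle = 3

conformations = states_per_angle ** (residues * angles_per_residue)
print(len(str(conformations)))  # number of digits in 3**200: 96
```

A search space with ninety-odd digits for even a small protein is why brute-force folding is so computationally expensive, and why the distributed-computing effort quoted above took years.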


Part 5: The Genetic Code Appears To Have Been Coded by a Programmer

Human, animal, and bacterial DNA all contain overlapping genes. “Overlapping gene” means that a working gene on one strand of DNA overlaps the “message” (the gene) on the other strand, which is read in the opposite direction. Both strands contain working genes that give instructions on how to make necessary parts of the human body. Here’s an article with more detail about that. (1)

An overlapping gene is like a palindrome. A palindrome is a sentence that reads the same way backwards and forwards. Overlapping genes don’t even have to read the same backwards and forwards; they just need to make sense both ways (the technical name for that is a semordnilap, and semordnilaps seem even harder to make long than conventional palindromes). It’s pretty easy to make a small palindrome, like this:

A man a plan a canal Panama.

When you read it backward, it makes sense (sort of). But it is nearly impossible to write longer palindromes that carry a clear message when read backwards. Search online for “world’s longest palindrome that makes sense” yourself. I couldn’t find one longer than 250 words. (2) Even what I could see of that one was something of a stretch to understand. Of course, people have written much longer ones, but they make no sense at all (3) (4)!
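For what it’s worth, checking whether a sentence is a strict palindrome (ignoring spaces, case, and punctuation) takes only a few lines:

```python
import re

def is_palindrome(text):
    """True if text reads the same backwards, ignoring case and non-letters."""
    letters = re.sub(r'[^a-z]', '', text.lower())
    return letters == letters[::-1]

print(is_palindrome("A man a plan a canal Panama"))  # True
print(is_palindrome("This sentence is not one"))     # False
```

Checking a palindrome is easy; composing a long one that still says something sensible is the hard part, which is the point of the comparison above.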

Back to our genes. Our bodies have 51 genes with overlapping strands of DNA that BOTH code for working (“coding”) genes (that means they make sense), and the bodies of mice have 28, according to the researchers: (1)

Moreover, there are only 51 genes (51/615 = 8.3%) and 28 genes (28/497 = 5.6%) that involve exon-exon overlaps on opposite strands in human and mouse, respectively.

There are even more overlapping genes, if you count the regulatory DNA in between coding segments (5). Counting all of the overlapping different-strand regulatory and coding genes together, we have a total of 438 overlapping genes. (6) How big are these overlaps? Our authors tell us that 57% of the overlapping genes are longer than 1000 “letters” (nucleotides). This would be about 250 overlapping genes that are over 1000 letters long, including 43 longer than 10,000 letters and 22 longer than 20,000 letters (7).

Think about this! Expert palindromists don’t seem to be able to write palindromes that make sense longer than 250 words. But here we have in our own DNA 250 “semordnilaps” (palindromes that say something different, but make sense, when you read them backwards) longer than 1000 letters, including 43 longer than 10,000 letters and 22 longer than 20,000 letters (7).

This is a persuasive indication that our overlapping genes, and the rest of our DNA, didn’t come from random mutations, but was carefully coded.

Technical note:

To arrive at the statistics I just showed you, look at Table 2 in the previously referenced article and note that there are a total of 438 human different-strand (diverging and converging) overlapping genes. We ignore the embedded genes, because evolutionists have easy explanations for how those arrived there. Now look at Figure 1. The article explains how to read the graph (1):

For example, ~43% of the overlap regions of different-strand overlaps are shorter than 1 kb, whereas less than 2% of the overlap regions of same-strand overlaps are shorter than 1 kb. 

This means that 57% of the overlap regions of different-strand overlaps are longer than 1 kb.

According to the graph, 90% of overlapping divergent genes had overlaps less than 10,000 base pairs long. (We are applying the divergent numbers to both divergent and convergent genes, which will give us an underestimate.) This means that 10% of the genes had overlaps longer than 10,000 base pairs. According to Table 2, there are a total of 438 human different-strand (diverging and converging) overlapping genes. 10% of 438 would be about 43 different-strand overlapping genes with overlaps longer than 10,000 base pairs.

In the same way, the graph shows that 95% of divergent different-strand overlapping genes were less than 20,000 base pairs. This means that 5% of them were 20,000 base pairs or more. 5% of 438 is about 22.
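The arithmetic above is easy to verify:

```python
# Checking the overlap-count arithmetic derived from the article's
# Table 2 and Figure 1.
total_overlapping = 438   # human different-strand overlapping genes (Table 2)

longer_than_1kb = round(0.57 * total_overlapping)   # 57% exceed 1 kb
longer_than_10kb = 0.10 * total_overlapping         # 10% exceed 10 kb (about 43)
longer_than_20kb = round(0.05 * total_overlapping)  # 5% exceed 20 kb

print(longer_than_1kb, longer_than_10kb, longer_than_20kb)
```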

The findings were similar in mice and humans.







(6) Ibid.


Part 4: Missing Links are Still Missing

National Geographic’s cover story for its November 1999 issue, “Feathers for T. Rex”, described a newly discovered fossil from China with the following announcement (emphasis original):

“IT’S A MISSING LINK between terrestrial dinosaurs and birds that could actually fly.”

An interesting claim. For over a century, evolutionary paleontologists have sifted through massive numbers of fossils from around the world, and only now have we found a missing link? There should be missing links all over the place from the millions of years required for natural selection and survival of the fittest. Where are they? Why is it so hard to find missing links?

Notably, a year later, in the October 2000 issue of National Geographic, the Society ran another article, describing in detail how it had mistakenly presented a forged fossil (one composed of parts from multiple species) as a “missing link”. This means the missing link is still missing. No cover story since then has announced a new missing link. This suggests that the missing link is simply not there.

Some evolutionary scientists agree that the missing links are often not there. They support an alternate explanation called punctuated equilibrium, which suggests that evolutionary changes happened comparatively rapidly (over thousands or tens of thousands of years, a very short time compared to the millions of years traditional evolution requires). But even Stephen Jay Gould, one of the foremost proponents of punctuated equilibrium, admits that transitional fossils are missing between many species (although he believes they are present between larger groups, such as birds and reptiles) (emphasis added) (1):

Since we proposed punctuated equilibria to explain trends, it is infuriating to be quoted again and again by creationists—whether through design or stupidity, I do not know—as admitting that the fossil record includes no transitional forms. Transitional forms are generally lacking at the species level, but they are abundant between larger groups.  

However, since such a vast number of fossils have been discovered, and most transitional forms are “generally lacking at the species level”, it is appropriate to ask what happened to those species members over thousands and tens of thousands of years. Is not even one of them fossilized? This process would have been repeated thousands of times across the history of life on earth. The fact that even Gould’s species-to-species transitional forms are not present is strong evidence that the framework that predicted them is not accurate.

PS: If you read Gould’s article closely, you will see that he does not strongly advocate instantaneous changes between species, such as a dinosaur hatching a bird’s egg. He mildly favors similar concepts, but they are not part of his main argument, at least in this article. I still judge that, since the transitional forms that should be present according to his framework are missing, we should not accept punctuated equilibrium as a valid explanation of the fossil record. Yet we also need to state his position correctly, for honesty’s sake.


Part 3: Experiments Have Only Found One Universe

Last time, we talked about how many observers notice that the universe seems fine-tuned to accommodate life, “almost as if a Grand Designer had it all figured out” (1). This has led some people to attempt to develop theories to explain the fine-tuning of the Universe which do not require a designer. The most prominent theories call for the existence of a vast number of universes, which could have different physical laws from ours. If they are accurate, they would help to skew the odds in favor of a gradual evolutionary development of life on our planet. In that case, evolution could have taken place in many of these universes, and we would randomly happen to be in the universe that “made it”.

Before accepting the claim that our universe is fine-tuned to support life simply because there are very many randomly formed universes, and we just happened to be in the one capable of supporting life, we need to ask about the scientific basis of this claim. Is there reason to believe that many other universes exist? Is there experimental evidence for them?

Proponents of these theories point to evidence from other parts of physics. Here is one example. Let’s say that we shine a light on a wall, with two plates in between: one with a single slit, and another with two parallel slits, between the light and the wall. In this case, light acts as a wave; it spreads out as it passes through the slits, and the waves emerging from the two slits interfere with each other. Look at the two pictures:

[Figure: double-slit diffraction pattern (photo courtesy of EPZCAW)]

[Figure: second double-slit diffraction pattern]

The most common interpretation of how this happens is that the light is acting like a wave, until it “crashes” when it hits the wall, where it “collapses” into a single spot, which means it is not a wave anymore. (For the physicists reading this, I am attempting to describe the Copenhagen interpretation of quantum mechanics in very simple terms).
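The interference pattern in the pictures above follows the standard textbook two-slit formula; the wavelength and slit spacing below are illustrative values chosen for the example:

```python
import math

# Far-field two-slit interference (standard textbook formula, with
# illustrative values): intensity ~ cos^2(pi * d * sin(theta) / wavelength).
wavelength = 500e-9   # green light, in meters
slit_gap = 50e-6      # distance between the two slits, in meters

def intensity(theta):
    """Relative brightness on the wall at viewing angle theta (radians)."""
    phase = math.pi * slit_gap * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

print(intensity(0.0))  # bright central fringe (maximum brightness, 1.0)
print(intensity(math.asin(wavelength / (2 * slit_gap))))  # first dark fringe (near 0)
```

The alternating bright and dark fringes this formula produces are exactly the wave-like pattern the experiment shows; the interpretive dispute discussed next is about what happens when that wave hits the wall.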

However, another interpretation, offered without supporting evidence, is that when the light hits the wall, some of the properties of its wave (of its wavefunction) stay in this universe, and some automatically go to another universe, which (depending on who you ask) may have been newly created on the spot. Since a lot of light is hitting objects all around us, this explanation predicts a lot of new universes.

The question I would invite you to ask is this: where is the experimental evidence for the existence of any other universe besides ours? We can see in the lab that the light beams interfere with each other, but what reliable evidence do we have that part of that light beam, once it hits the wall, is now in another universe? What evidence is there for any other universes at all?

I have not been able to find any evidence at all. There are quite a few smart people trying to support this experimentally, but if you read carefully, you will see that they are drawing vast conclusions from limited and unclear evidence, or even trying to support this unproven theory with other unproven theories. Here are five examples:

#1: The “first evidence” many people see for the existence of multiple universes was found by a satellite called the Wilkinson Microwave Anisotropy Probe (WMAP), which basically took pictures of invisible microwave “light” coming to us from outer space. Some people say that there are patterns in the microwaves showing other universes as “bubbles” “colliding” into ours, but if you read the actual papers carefully, they actually show that there is NO EVIDENCE FOR OTHER UNIVERSES from WMAP:

“We therefore conclude that this data set does not favor the bubble collision hypothesis for any value of Ns.” (2)

“The WMAP 7-year data-set does not favor the bubble collision hypothesis for any value of Ns”. (3) (see also [4])

#2: Cosmologist Laura Mersini-Houghton has claimed to have unmistakable evidence for the existence of another universe by predicting the discovery of the CMB cold spot. Her rationale for her “wave-like equation” (wavefunction) of the universe, on which her predictions are based, is sensible, but those equations do not appear to assume the existence of other universes. So even if her equations are correct, I see no reason to conclude that her theory demonstrates other universes, although it might explain some things about this one! Besides, her predictions of a northern cold spot and of “dark flow” have not panned out (though several other predictions of hers have) (5) (6) (7).

#3: Scientists in the last few years have realized that the existing data does not support the existence of multiple universes very well. In 2013, everyone was looking forward to the high-definition pictures of the microwave radiation coming from outer space that we were supposed to get from the new, higher-resolution Planck satellite (5):

Future data from the Planck experiment will allow us to greatly improve on these results. If confirmed, the presence of bubble collisions in the CMB would be an extraordinary insight into the origins of our universe.

(Note that the researchers recognize the “great” need to improve their results, and that those results have not yet been confirmed.)

The pictures from Planck were supposed to be much higher resolution than the WMAP pictures. But now that the Planck pictures have come out (they were released on March 21, 2013 [6]), there have been no new discoveries based on them supporting a multiverse. Search for scientific papers on the subject (not newspaper articles!). As of this writing, June 24, 2013, I have found none. In spite of the absence of new experimental findings, British newspapers still claim that the previously known patterns in the microwave background, now seen by Planck, constitute new evidence for multiple universes (8), (9). This is poor science, since all we are seeing is higher-resolution pictures of things we already knew about, with nothing to support new conclusions.

#4: Scientific American recently ran an article by Vilenkin and Tegmark promoting the existence of multiple universes, but the arguments both authors used were not sound, and neither presented experimental evidence in support of his position. Vilenkin uses the example of a broader model of a single universe to predict constants of nature for our local region of the cosmos (this is fair in principle), but then applies this without justification to suggest the existence of universes OUTSIDE of the single universe he just described.

Tegmark enthusiastically defends the multiverse position without providing reasonable, clear evidence of how current theories, robustly supported by experimental evidence, predict the existence of multiple universes (10).

He cites predictions of the density of dark energy, made on the basis of string theory, as evidence of the validity of a nearly infinite number of multiple universes. But string theory has just as little experimental evidence to support it:

“Until some way is found to observe the yet hypothetical higher dimensions, which are needed for consistency reasons, M-theory [a unified form of string theory] has a very difficult time making predictions that can be tested in a laboratory. Technologically, it may never be possible for it to be experimentally confirmed” (11).

“As of 2010, there are no feasible experiments to test the differences between MWI [the many-worlds interpretation of multiple universes] and other theories” (12).

#5: Paul Davies, professor of natural philosophy at the Australian Centre for Astrobiology, argues in the NY Times that although multiple universes are not impossible, they are difficult to prove and should not be taken seriously. Although opposed to the concept of God as creator, he states the following (13):

How seriously can we take this explanation for the friendliness of nature? Not very, I think. For a start, how is the existence of the other universes to be tested? To be sure, all cosmologists accept that there are some regions of the universe that lie beyond the reach of our telescopes, but somewhere on the slippery slope between that and the idea that there are an infinite number of universes, credibility reaches a limit. As one slips down that slope, more and more must be accepted on faith, and less and less is open to scientific verification.

Extreme multiverse explanations are therefore reminiscent of theological discussions. Indeed, invoking an infinity of unseen universes to explain the unusual features of the one we do see is just as ad hoc as invoking an unseen Creator.