Part 6: Individual Enzymes Are Unlikely To Have Evolved by Accumulated Mutations

An enzyme is a protein designed to speed up a chemical reaction within the body. It does this by making the reaction's halfway point, the transition state, easier to reach (lowering the activation energy and stabilizing the transition state), as you can see in the diagram below:

[Figure: Induced fit diagram of an enzyme binding its substrate]
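To give a feel for what "lowering the activation energy" buys, here is a rough sketch using the Arrhenius equation. The barrier reduction of 30 kJ/mol is an assumed, illustrative value, not a figure from any particular enzyme:

```python
import math

# Arrhenius equation: k = A * exp(-Ea / (R * T)).
# Lowering the activation energy Ea by an amount delta_Ea speeds the
# reaction by a factor of exp(delta_Ea / (R * T)).
R = 8.314          # gas constant, J/(mol*K)
T = 298.0          # temperature in kelvin (roughly room/body temperature)
delta_Ea = 30_000  # J/mol; an assumed, illustrative barrier reduction

rate_enhancement = math.exp(delta_Ea / (R * T))
print(f"Rate enhancement: {rate_enhancement:.2e}")  # roughly 1.8e5-fold
```

Even this modest assumed reduction speeds the reaction by a factor of over a hundred thousand, which is why a properly folded active site matters so much.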


In order to do this, the protein must be properly made and folded to provide the perfect environment for the transition state to form. One wrong nucleotide of DNA, one wrong amino acid, and you may have a broken protein; single amino acid errors of this kind underlie serious diseases like sickle cell anemia and cystic fibrosis. There are times when substitutions or changes can happen with little or no effect on the enzyme's effectiveness, and little or no disease as a result; but only a limited number of changes are allowed if the enzyme is to keep doing its job.
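A quick back-of-envelope illustration of why "a limited number of changes are allowed" matters: even if several amino acids are tolerated at each position, the functional sequences make up a vanishing fraction of all possible sequences. The protein length and the number of tolerated residues per site below are assumptions chosen for illustration, not measured values:

```python
# Assumed, illustrative numbers -- not measurements for any real protein.
length = 150            # residues in a modest single-domain protein
tolerated_per_site = 5  # suppose 5 of the 20 amino acids work at each site

# Fraction of all possible sequences that would still function under
# these assumptions: (tolerated / 20) raised to the protein length.
functional_fraction = (tolerated_per_site / 20) ** length
print(f"Fraction of sequence space that works: {functional_fraction:.1e}")
```

Under these generous assumptions the functional fraction comes out around 10^-91, which conveys the scale of the target a sequence has to hit.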

According to evolution, the DNA blueprints for our proteins were developed by the accumulation of random mutations over time. However, this is a very improbable way to make thousands of efficiently working enzymes. Even when scientists with the best equipment have the DNA blueprints, they seriously struggle to know what those blueprints will look like when they are built (1):

In computational biology, de novo protein structure prediction refers to an algorithmic process by which protein tertiary structure is predicted from its amino acid primary sequence [which comes directly from DNA]. The problem itself has occupied leading scientists for decades while still remaining unsolved. According to Science, the problem remains one of the top 125 outstanding issues in modern science. At present, some of the most successful methods have a reasonable probability of predicting the folds of small, single-domain proteins within 1.5 angstroms over the entire structure. [This is a small protein compared to many of those out there.]

De novo methods tend to require vast computational resources, and have thus only been carried out for relatively small proteins. De novo protein structure modeling is distinguished from Template-based modeling (TBM) by the fact that no solved homolog to the protein of interest is known, making efforts to predict protein structure from amino acid sequence exceedingly difficult. Prediction of protein structure de novo for larger proteins will require better algorithms and larger computational resources such as those afforded by either powerful supercomputers (such as Blue Gene or MDGRAPE-3) or distributed computing projects (such as Folding@home, Rosetta@home, the Human Proteome Folding Project, or Nutritious Rice for the World). Although computational barriers are vast, the potential benefits of structural genomics (by predicted or experimental methods) to fields such as medicine and drug design make de novo structure prediction an active research field…

A major limitation of de novo protein prediction methods is the extraordinary amount of computer time required to successfully solve for the native conformation of a protein. Distributed methods, such as Rosetta@home, have attempted to ameliorate this by recruiting individuals who then volunteer idle home computer time in order to process data. Even these methods face challenges, however. For example, a distributed method was utilized by a team of researchers at the University of Washington and the Howard Hughes Medical Institute to predict the tertiary structure of the protein T0283 from its amino acid sequence. In a blind test comparing the accuracy of this distributed technique with the experimentally confirmed structure deposited within the Protein Data Bank (PDB), the predictor produced excellent agreement with the deposited structure. However, the time and number of computers required for this feat was enormous – almost two years and approximately 70,000 home computers, respectively.
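To put the quoted figure in perspective, here is the simple arithmetic behind it. This assumes each machine contributed full-time, so it is an upper bound on the actual idle-time contribution:

```python
# Back-of-envelope for the quoted figure: ~70,000 home computers for ~2 years.
# Assumes full-time contribution from each machine, so this is an upper
# bound on the real (idle-time only) computing effort.
computers = 70_000
years = 2
hours_per_year = 365 * 24  # 8,760 hours in a year

computer_hours = computers * years * hours_per_year
print(f"Upper bound: {computer_hours:.2e} computer-hours")  # about 1.2e9
```

On the order of a billion computer-hours for a single structure, which is the scale the argument below leans on.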

So my question is this: if it takes 70,000 home computers two years to figure out how one efficiently working protein folds (a transmembrane protein from Salmonella typhi), can we reasonably expect unguided random mutations, without the benefit of computer support, to fold 8,000 efficiently working proteins for us? I say no.

