Part 7: Enzymes Have A VERY Complex System of Working Together

In the last posting, we discussed how improbable it would be for accumulated random mutations to develop into a single functional enzyme. What makes this even more unlikely is that there are thousands of enzymes, which work together closely to produce useful products to maintain life.

Here is an online map that shows how they work together (1):

For example, the rate-limiting enzyme (the “bottleneck”) in the glycolysis pathway, phosphofructokinase-1, is inhibited by ATP, one of the important final products of glycolysis, and by citrate, a downstream product of the pyruvate that glycolysis produces. This feedback keeps the pathway from running when its products are not needed. Do you think complex control loops like this came about by unrelated random mutations? I don’t.
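To make the idea of this kind of feedback control concrete, here is a minimal sketch in Python of how a pathway’s rate falls as its own end product builds up. The function, the constants, and the simple inhibition formula are all illustrative assumptions of mine, not measured values for phosphofructokinase-1.

# Toy model of end-product (feedback) inhibition of a pathway's
# rate-limiting enzyme. All constants are made up for illustration.

def pathway_rate(product_conc, vmax=100.0, ki=1.0):
    """Rate of the rate-limiting step, reduced as the end product builds up.
    Uses a simple inhibition factor 1 / (1 + [P]/Ki)."""
    return vmax / (1.0 + product_conc / ki)

for p in [0.0, 0.5, 1.0, 5.0, 10.0]:
    print(f"product = {p:5.1f}  ->  rate = {pathway_rate(p):6.1f}")

As the product concentration climbs from 0 to 10, the computed rate drops from 100 to about 9: the pathway throttles itself when its output is already plentiful, which is exactly the behavior described above.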


Part 6: Individual Enzymes Are Unlikely To Have Evolved by Accumulated Mutations

An enzyme is a protein designed to help a chemical reaction go faster within the body. It does this by making the halfway point in the reaction easier to get to (lowering the activation energy and stabilizing the transition state), as you can see in the picture below:

[Figure: induced fit diagram of an enzyme binding its substrate]


In order to do this, the protein must be properly made and folded to provide the perfect environment for that halfway point to form. One wrong nucleotide of DNA, one wrong amino acid, and you may have a broken protein, leading to serious diseases such as sickle cell anemia (a single amino acid substitution in hemoglobin) or cystic fibrosis. There are times when substitutions or changes can happen with little or no effect on the protein’s effectiveness, and little or no disease as a result; but only a limited number of changes are allowed if the enzyme is to keep doing its job.
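To see why lowering the activation energy matters so much, recall the Arrhenius equation, k = A * exp(-Ea / (R*T)): a modest drop in the energy barrier multiplies the reaction rate enormously. Here is a small sketch; the two barrier heights are round numbers I invented for illustration, not measured values for any particular enzyme.

import math

R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, ~37 C, in kelvin

def rate_constant(ea_kj_per_mol, prefactor=1.0):
    """Arrhenius rate constant k = A * exp(-Ea / RT)."""
    return prefactor * math.exp(-ea_kj_per_mol * 1000.0 / (R * T))

uncatalyzed = rate_constant(80.0)   # hypothetical barrier without the enzyme
catalyzed   = rate_constant(50.0)   # hypothetical lower barrier with the enzyme
print(f"speed-up factor: {catalyzed / uncatalyzed:.2e}")

With these made-up numbers, cutting the barrier from 80 to 50 kJ/mol speeds the reaction up by a factor of about 10^5; real enzymes routinely achieve even larger accelerations.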

According to evolution, the DNA blueprints for our proteins were developed by the accumulation of random mutations over time. However, this is a very improbable way to make thousands of efficiently working enzymes. Even when scientists with the best equipment have the DNA blueprints in hand, they struggle to predict what the finished, folded proteins will look like (1):

In computational biology, de novo protein structure prediction refers to an algorithmic process by which protein tertiary structure is predicted from its amino acid primary sequence [which comes directly from DNA]. The problem itself has occupied leading scientists for decades while still remaining unsolved. According to Science, the problem remains one of the top 125 outstanding issues in modern science. At present, some of the most successful methods have a reasonable probability of predicting the folds of small, single-domain proteins within 1.5 angstroms over the entire structure. [This is a small protein compared to many of those out there.]

De novo methods tend to require vast computational resources, and have thus only been carried out for relatively small proteins. De novo protein structure modeling is distinguished from Template-based modeling (TBM) by the fact that no solved homolog to the protein of interest is known, making efforts to predict protein structure from amino acid sequence exceedingly difficult. Prediction of protein structure de novo for larger proteins will require better algorithms and larger computational resources such as those afforded by either powerful supercomputers (such as Blue Gene or MDGRAPE-3) or distributed computing projects (such as Folding@home, Rosetta@home, the Human Proteome Folding Project, or Nutritious Rice for the World). Although computational barriers are vast, the potential benefits of structural genomics (by predicted or experimental methods) to fields such as medicine and drug design make de novo structure prediction an active research field…

A major limitation of de novo protein prediction methods is the extraordinary amount of computer time required to successfully solve for the native conformation of a protein. Distributed methods, such as Rosetta@home, have attempted to ameliorate this by recruiting individuals who then volunteer idle home computer time in order to process data. Even these methods face challenges, however. For example, a distributed method was utilized by a team of researchers at the University of Washington and the Howard Hughes Medical Institute to predict the tertiary structure of the protein T0283 from its amino acid sequence. In a blind test comparing the accuracy of this distributed technique with the experimentally confirmed structure deposited within the Protein Data Bank (PDB), the predictor produced excellent agreement with the deposited structure. However, the time and number of computers required for this feat was enormous – almost two years and approximately 70,000 home computers, respectively.

So my question is: if it takes 70,000 home computers two years to figure out how to fold one efficiently working protein (a transmembrane protein from Salmonella typhi), can we reasonably expect that unguided random mutations, without the benefit of computer support, would find workable folds for 8,000 efficiently working proteins? I say no.
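Just to put the quoted effort in perspective, here is the raw arithmetic; the assumption that every protein would cost roughly as much computer time as T0283 is mine, made only to get a rough sense of scale.

computers = 70_000          # home computers in the quoted project
years_per_protein = 2       # wall-clock time for one protein (T0283)
hours = computers * years_per_protein * 365 * 24
print(f"~{hours:.2e} computer-hours for one protein")

proteins = 8_000            # rough count used in the text
print(f"~{hours * proteins:.2e} computer-hours if each cost about the same")

That is on the order of a billion computer-hours for a single protein, and nearly 10^13 computer-hours if the 8,000 proteins mentioned above each cost about the same.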


Part 5: The Genetic Code Appears To Have Been Coded by a Programmer

Human, animal, and bacterial DNA all contain overlapping genes. “Overlapping gene” means that a working gene on one strand of DNA is the backwards version of the “message” (the gene) on the other strand. Both strands contain working genes that give instructions on how to make necessary parts of the body. Here’s an article with more detail about that (1).
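To be precise about what “the backwards version on the other strand” means: the opposite strand of DNA, read in its own direction, is the reverse complement of the original sequence (read it right to left and swap A with T and C with G). Here is a minimal sketch, using a made-up snippet of sequence:

# The opposite strand, read in its own 5'-to-3' direction, is the
# reverse complement of the original sequence.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

gene = "ATGGCATTC"                  # hypothetical snippet
print(reverse_complement(gene))     # prints GAATGCCAT

An overlapping gene is one where the sequence and its reverse complement both carry usable instructions.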

An overlapping gene is like a palindrome. A palindrome is a sentence that reads the same backwards and forwards. Overlapping genes don’t even have to read the same backwards and forwards; they just need to make sense in both directions (the technical name for that is a semordnilap, and semordnilaps seem even harder to write at length than conventional palindromes). It’s pretty easy to make a small palindrome, like this:

A man a plan a canal Panama.


When you read it backward, it makes sense (sort of). But it is nearly impossible to write longer ones that make sense and carry a clear message when read backwards. Search online for “world’s longest palindrome that makes sense” yourself. I couldn’t find one longer than 250 words (2), and even what I could see of that one was something of a stretch to understand. Of course, people have written much longer ones, but they make no sense at all (3) (4)!

Back to our genes. Our bodies have 51 genes whose overlapping strands of DNA BOTH code for working (“coding”) genes (that is, both strands make sense), and the bodies of mice have 28, according to the researchers (1):

Moreover, there are only 51 genes (51/615 = 8.3%) and 28 genes (28/497 = 5.6%) that involve exon-exon overlaps on opposite strands in human and mouse, respectively.

There are even more overlapping genes if you count the regulatory DNA in between coding segments (5). Counting all of the overlapping different-strand regulatory and coding genes together, we have a total of 438 overlapping genes (6). How big are these overlaps? Our authors tell us that 57% of the overlapping genes are longer than 1,000 “letters” (nucleotides). That works out to about 250 overlapping genes more than 1,000 letters long, including roughly 43 longer than 10,000 letters and 22 longer than 20,000 letters (7).

Think about this! Expert palindromists don’t seem to be able to write palindromes that make sense longer than 250 words. But here we have in our own DNA 250 “semordnilaps” (palindromes that say something different, but make sense, when you read them backwards) longer than 1000 letters, including 43 longer than 10,000 letters and 22 longer than 20,000 letters (7).

This is a persuasive indication that our overlapping genes, and the rest of our DNA, didn’t come from random mutations, but were carefully coded.

Technical note:

To arrive at the statistics I just showed you, look at Table 2 in the previously referenced article and note that there are a total of 438 human different-strand (diverging and converging) overlapping genes. We ignore the embedded genes, because evolutionists have easy explanations for how they arrived there. Now look at Figure 1. The article explains how to read the graph (1):

For example, ~43% of the overlap regions of different-strand overlaps are shorter than 1 kb, whereas less than 2% of the overlap regions of same-strand overlaps are shorter than 1 kb. 

This means that 57% of the overlap regions of different-strand overlaps are longer than 1 kb.

According to the graph, 90% of overlapping divergent genes had overlaps less than 10,000 base pairs long. (We are applying the divergent figures to both divergent and convergent overlaps, which will give us an underestimate.) This means that 10% of the genes were longer than 10,000 base pairs. According to Table 2, there are a total of 438 human different-strand (diverging and converging) overlapping genes. 10% of 438 is about 43 different-strand overlapping genes longer than 10,000 base pairs.

In the same way, the graph shows that 95% of divergent different-strand overlapping genes overlapped by less than 20,000 base pairs. This means that 5% of them overlapped by 20,000 base pairs or more, and 5% of 438 is about 22.
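Here is the arithmetic of this note gathered in one place; the 438 total comes from the paper’s Table 2, and the percentages are the ones read off its Figure 1 above.

total_different_strand = 438      # Table 2: diverging + converging overlaps

over_1_kb  = 0.57 * total_different_strand   # 100% - 43% shorter than 1 kb
over_10_kb = 0.10 * total_different_strand   # 100% - 90% shorter than 10 kb
over_20_kb = 0.05 * total_different_strand   # 100% - 95% shorter than 20 kb

print(round(over_1_kb), round(over_10_kb), round(over_20_kb))   # 250 44 22

This reproduces, to within rounding, the roughly 250, 43, and 22 genes quoted earlier.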

The findings were similar in mice and humans.









Part 4: Missing Links are Still Missing

National Geographic’s cover story for its November 1999 issue, “Feathers for T. Rex”, described a newly discovered fossil from China with the following announcement (emphasis original):

“IT’S A MISSING LINK between terrestrial dinosaurs and birds that could actually fly.”

Interesting claim. Over a century of evolutionary paleontologists have sifted through massive numbers of fossils from around the world, and only now have we found a missing link? There should be missing links all over the place from the millions of years required for natural selection and survival of the fittest. Where are they? Why is it so hard to find missing links?

Notably, a year later, in the October 2000 issue of National Geographic, the Society ran another article describing in detail how it had mistakenly presented a forged fossil–one composed of parts from multiple species–as a “missing link”. This means the missing link is still missing. No cover story since then has found it, which suggests that the missing link is simply not there.

Some evolutionary scientists agree that the missing links are often not there. They support an alternate explanation called punctuated equilibrium, which suggests that evolutionary changes happened comparatively rapidly (over thousands or tens of thousands of years, a very short time compared to the millions of years traditional evolution requires). But even Stephen Jay Gould, one of the foremost proponents of punctuated equilibrium, admits that transitional fossils are missing between many species (although he believes they are present between larger groups, such as birds and reptiles) (emphasis added) (1):

Since we proposed punctuated equilibria to explain trends, it is infuriating to be quoted again and again by creationists—whether through design or stupidity, I do not know—as admitting that the fossil record includes no transitional forms. Transitional forms are generally lacking at the species level, but they are abundant between larger groups.  

However, since such a vast number of fossils have been discovered, and most transitional forms are “generally lacking at the species level”, it is fair to ask what happened to those species members over thousands and tens of thousands of years. Was not even one of them fossilized? This process would have been repeated thousands of times across the history of life on earth. The fact that even Gould’s species-to-species transitional forms are not present is strong evidence that the framework which predicted them is not accurate.

PS: If you read Gould’s article closely, you will see that he does not strongly advocate instantaneous changes between species, such as a dinosaur hatching a bird’s egg. He mildly favors similar concepts, but they are not part of his main argument, at least in this article. I still judge that, since the transitional forms that should be present according to his framework are missing, we should not accept punctuated equilibrium as a valid explanation of the fossil record. Yet we also need to state his position correctly, for honesty’s sake.


Part 3: Experiments Have Only Found One Universe

Last time, we talked about how many observers notice that the universe seems fine-tuned to accommodate life, “almost as if a Grand Designer had it all figured out” (1). This has led some people to develop theories that explain the fine-tuning of the universe without requiring a designer. The most prominent theories call for the existence of a vast number of universes, which could have different physical laws from ours. If these theories are correct, they would help to skew the odds in favor of a gradual evolutionary development of life on our planet: evolution could have taken place in many of these universes, and we would simply happen to be in the universe that “made it”.

But before accepting that our universe is finely tuned to support life simply because there are very many randomly formed universes, and we just happened to land in one capable of supporting life, we need to ask about the scientific basis of this claim. Is there reason to believe that many other universes exist? Is there experimental evidence for them?

Proponents of these theories point to evidence from other parts of physics. Here is one example. Let’s say that we shine a light on a wall, with two plates in between: one with a single slit, and another with two parallel slits, placed between the light and the wall. In this case, light acts as a wave; the light spreads out as it passes through the slits, and the waves emerging from the two slits interfere with each other. Look at the two pictures:

[Image: double slit diffraction pattern. Photo courtesy of EPZCAW.]

[Image: double slit diffraction, second photograph.]
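For readers who want to see where the striped pattern comes from, the standard two-slit result is that the brightness on the wall varies as cos^2(pi * d * sin(theta) / wavelength), where d is the spacing between the two slits (ignoring the broader single-slit envelope). Here is a small sketch; the wavelength and slit spacing are illustrative numbers of my own, not taken from the photographs above.

import math

wavelength = 500e-9     # green light, ~500 nm (illustrative)
slit_spacing = 50e-6    # 50 micrometres between the two slits (illustrative)

def intensity(theta):
    """Relative two-slit interference intensity (single-slit envelope ignored)."""
    return math.cos(math.pi * slit_spacing * math.sin(theta) / wavelength) ** 2

# Sample a few angles (in milliradians) and print a crude bar chart.
for mrad in range(0, 31, 2):
    theta = mrad / 1000.0
    bar = "#" * int(20 * intensity(theta))
    print(f"{mrad:2d} mrad  {bar}")

The printed bars rise and fall with angle: bright fringes where the two waves arrive in step, dark gaps where they cancel.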

The most common interpretation of how this happens is that the light acts like a wave until it “crashes” into the wall, where it “collapses” into a single spot, which means it is not a wave anymore. (For the physicists reading this, I am attempting to describe the Copenhagen interpretation of quantum mechanics in very simple terms.)

However, another interpretation, offered without supporting evidence, is that when the light hits the wall, some of the properties of its wave (of its wavefunction) stay in this universe, and some automatically go to another universe, which (depending on whom you ask) may have been newly created on the spot. Since a great deal of light is hitting objects all around us, this explanation predicts a very large number of new universes.

The question I would invite you to ask is this: where is the experimental evidence for the existence of any other universe besides ours? We can see in the lab that the light beams interfere with each other, but what reliable evidence do we have that part of that light beam, once it hits the wall, is now in another universe? What evidence is there for any other universes at all?

I have not been able to find any evidence at all. There are quite a few smart people trying to support this experimentally, but if you read carefully, you will see that they are drawing vast conclusions from limited and unclear evidence, or even trying to support this unproven theory with other unproven theories. Here are five examples:

#1: The “first evidence” many people cite for the existence of multiple universes was found by a satellite called the Wilkinson Microwave Anisotropy Probe (WMAP), which basically took pictures of invisible microwave “light” coming to us from outer space. Some people say that there are patterns in the microwaves showing that other universes are like “bubbles” “colliding” into ours, but if you read the actual papers carefully, they actually report that there is NO EVIDENCE FOR OTHER UNIVERSES from WMAP:

“We therefore conclude that this data set does not favor the bubble collision hypothesis for any value of Ns.” (2)

“The WMAP 7-year data-set does not favor the bubble collision hypothesis for any value of Ns”. (3) (see also [4])

#2: Cosmologist Laura Mersini-Houghton has claimed to have unmistakable evidence for the existence of another universe by predicting the discovery of the CMB cold spot. The rationale for her equations for a wavefunction of the universe (on which her predictions are based) is sensible, but those equations do not appear to assume the existence of other universes. So even if her equations are correct, I see no reason to conclude that her theory establishes the existence of other universes, although it might explain some things about this one! Besides, her predictions of a northern cold spot and of “dark flow” have not panned out (though several other predictions of hers have) (5) (6) (7).

#3: Scientists in the last few years have realized that the existing data does not support the existence of multiple universes very well. As of 2013, everyone was looking forward to the high-definition pictures of the microwave radiation coming from outer space, that we were supposed to get from the new, higher-resolution Planck satellite (5):

Future data from the Planck experiment will allow us to greatly improve on these results. If confirmed, the presence of bubble collisions in the CMB would be an extraordinary insight into the origins of our universe.

(Note that the researchers recognize the “great” need to improve their results, and that those results have not yet been confirmed.)

The pictures from Planck were supposed to be much higher resolution than the WMAP pictures. But now that the Planck pictures have come out (they were released on March 21, 2013 [6]), there have been no new discoveries based on them that support a multiverse. Search for scientific papers on the subject (not newspaper articles!); as of this writing, on June 24, 2013, I found none. In spite of the absence of new experimental findings, British newspapers still claim that the previously known patterns in the microwave background, now seen by Planck, constitute new evidence for multiple universes (8), (9). This is poor science, since all we are seeing is higher-resolution pictures of things we already knew about, without anything to justify new conclusions.

#4: Scientific American recently ran an article by Vilenkin and Tegmark promoting the existence of multiple universes, but the arguments both used were not sound, and they did not present experimental evidence in support of their position. Vilenkin uses a broader model of a single universe to predict constants of nature for our local region of the cosmos (this is fair in principle), but then applies it, without justification, to suggest the existence of universes OUTSIDE of the single universe he has just described.

Tegmark enthusiastically defends the multiverse position without providing reasonable, clear evidence of how current theories, robustly supported by experimental evidence, predict the existence of multiple universes (10).

He cites predictions of the density of dark energy, made on the basis of string theory, as evidence of the validity of a nearly infinite number of multiple universes. But string theory has just as little experimental evidence to support it:

“Until some way is found to observe the yet hypothetical higher dimensions, which are needed for consistency reasons, M-theory [a unified form of string theory] has a very difficult time making predictions that can be tested in a laboratory. Technologically, it may never be possible for it to be experimentally confirmed” (11).

“As of 2010, there are no feasible experiments to test the differences between MWI [the many-worlds interpretation of multiple universes] and other theories” (12).

#5: Paul Davies, professor of natural philosophy at the Australian Centre for Astrobiology, argues in the New York Times that although multiple universes are not impossible, they are difficult to prove and should not be taken seriously. Although opposed to the concept of God as creator, he states the following (13):

How seriously can we take this explanation for the friendliness of nature? Not very, I think. For a start, how is the existence of the other universes to be tested? To be sure, all cosmologists accept that there are some regions of the universe that lie beyond the reach of our telescopes, but somewhere on the slippery slope between that and the idea that there are an infinite number of universes, credibility reaches a limit. As one slips down that slope, more and more must be accepted on faith, and less and less is open to scientific verification.

Extreme multiverse explanations are therefore reminiscent of theological discussions. Indeed, invoking an infinity of unseen universes to explain the unusual features of the one we do see is just as ad hoc as invoking an unseen Creator.














Part 1: Evidence-based Faith: An Order-of-Magnitude Estimate for the Random Origin of the First Genetic Material, with Applications to Human Origins

What is an order-of-magnitude estimate? Enrico Fermi famously illustrated this type of estimation with problems like counting the piano tuners in a large city (his classic example was Chicago; let’s use San Francisco). By estimating the population of San Francisco, the percentage of people who own a piano, the percentage of people who get their piano tuned each year, the length of time it takes a tuner to tune a piano (including an allowance for travel time to and from the location), and similar information, we can make an order-of-magnitude estimate of the number of piano tuners in San Francisco. “Order of magnitude” refers to a multiplier of 10 between each number; 1,000 is an order of magnitude larger than 100 and two orders of magnitude larger than 10. From an estimate like this, we can say that there are about 50 piano tuners in San Francisco (probably more than 5, and not as many as 500), but we can’t say for sure whether there are 40, 50, or 60.
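Written out as arithmetic, the piano-tuner estimate looks like this; every input below is the kind of rough guess Fermi had in mind, not a looked-up figure.

population        = 800_000   # rough population of San Francisco
pianos_per_person = 1 / 20    # guess: one piano per 20 people
tunings_per_year  = 1         # each piano tuned about once a year
tunings_per_day   = 4         # what one tuner can do, including travel
working_days      = 250       # working days per year for one tuner

tunings_needed   = population * pianos_per_person * tunings_per_year
tunings_per_tuner = tunings_per_day * working_days
print(round(tunings_needed / tunings_per_tuner), "piano tuners (order of magnitude)")

These guesses land at about 40 tuners, the same order of magnitude as the roughly 50 quoted above; change any input within reason and the answer stays between 5 and 500.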
We can use an order-of-magnitude estimate like this one to evaluate the approximate probability of the formation of the first genetic information. Standard macroevolutionary theory teaches that the first genetic information was formed as RNA (ribonucleic acid) by the random polymerization of the four different types of nucleotides that compose RNA (adenine, cytosine, guanine, and uracil; hereafter abbreviated A, C, G, and U). These random polymerizations are said to have taken place on the surface of certain types of clay, which can be shown in laboratory tests to catalyze (or speed up) the random polymerizing of RNA (and ultimately the formation of the first genetic information). The assumptions we will make in this order-of-magnitude estimate will err on the side of favoring the macroevolutionary scenario; thus, if our probabilities are infinitesimally small, we will be justified in questioning its viability.
Here are the assumptions we will make:

  1. We are trying to produce a genome 6,000 bases long (that is, 6,000 nucleotides matching in a row). This is generous: even the smallest known bacterial genomes run to hundreds of thousands of base pairs, and typical bacteria have millions.
  2. We will assume that if we can get one strand right, the other will automatically complement it. This happens easily, even outside living creatures.
  3. Excess nucleotides will always be present (never mind where they came from–we’ll assume they are not the limiting factor. They must also be produced frequently, since cytosine has a comparatively short half-life.)
  4. The Earth’s entire surface is covered with some clay that promotes polymerization (such as montmorillonite clay–a current favorite).
  5. These nucleotides are allowed to polymerize throughout the Earth’s history, estimated by macroevolutionary geologists at 4.5 billion years (recall that this is unreasonably generous, since it leaves no time for higher forms of life to evolve).
  6. We will assume that there are a billion different viable forms of the bacterial RNA (to allow plenty of room for different species to arise, and also to allow plenty of room for polymorphisms; note that the total number of species of all kinds alive today is estimated at only 2 million to 100 million, including hundreds of thousands of insect species).
  7. For added generosity, let’s assume that there are a billion planets in the universe that have this setup (it will increase the odds that one planet will get a set of bacterial RNA)
  8. Let’s also assume that there are a billion universes with this same setup (macroevolutionary cosmologists and physicists postulate their existence in order to raise the odds of evolution happening on Earth–but such universes have not been observed!)

Now, let us calculate the probability of getting a given viable bacterial RNA. The odds of getting the first nucleotide correct are 1 in 4, and the same holds for the second nucleotide and every one after it. Since each polymerization is independent of the previous one, we multiply the probabilities to get the overall probability, and since the chain is 6,000 nucleotides long, we do this 6,000 times. The odds of doing this correctly for a given RNA sequence are (1/4)^6000. This number is too large for most calculators, but Wolfram-Alpha, a search engine which specializes in scientific and mathematical information, calculated it for me: (1/4)^6000 = 1 in 2.3*10^3612. Since it’s difficult to grasp how large this number is, I asked Wolfram-Alpha to give the full number. Here it is:
2290593203500326442498254071102877992464615830839054768055123450544313 3851077403791573877586580573186350995335624442848376566408900340661545 7341269160953934651531316272895970961099648619548663674165694428394886 9330648470173371350813320809268809952407079715398039210502009557335794 3662055666767306385538495087529677470990968153918788613785751389005221 2385415364000233552517923094155148081278364846747449615787812522617139 5342006341679075520576304970776016746818912261453204962575441115371836 9447156895505073882545721273943517481650733405401933044529879802965087 4661803072896341035911246341091848324390496868908539422798829655406361 3709807896975047594167461331023628146001054998291892885044803396603840 7878196527044715747436853386831577880020356214741210341558715729680198 0525189824097250230848812002387365002027283572275248844963488736471394 3526031912848227248826190464847696594892838239669305251912416877251755 3390869295245378359828370235435165885369163710464894220310701508827933 3805264299792599815801920922903898158871712892609715338272913453162186 5313978608581541705515982751534447133263250347818367765137031003609793 8897585753779083035010667766548311999605347475370343426743825340005381 0997864187276609708209309038066394442278969691365489002023222850825449 7953096787063044370098338492177314930216742550624871750833859476679189 5095680602732346712939153259990811489391303284206503760197305419615240 9217301646404793801369143966718432036059811187775136277557250792266837 4235979682286834034089138475154767372727122932222887885208321879666030 5975797728877829876864681599425995732540887496009877581583503399859516
4751217086975807460294738428018338592485796034133919973077413533686949 1956368516611377674237208178041919106870280789033916144099126661387307 7526600578045242253024373178584527824852295057513761093944464722805553 9117717164315059230286413698788578331540178223949579078165011005988727 4595946783100447198954930537574190738099064718222518825147478490657161 1675484975233339688122794911475119965635459462447339289782867275308572 1621023943443062014490727808446685389294420571986970601078764950034180 6904790181420256733072612769503473201816461274039931292984401423199725 4340930170763466037725337419662914359959934881352713101312534635085302 3203781630211532813886686430142939639476747185671316635043595580465472 5436951706056632361702749907044372801683830358699136529946432620564283 9343150405350488810175472025383807889192539392721103826349328251385543 8169772823869564875140655788823474751813846542682825520838131006911762 5217360239526199430454346435033842859303165451350797675107176380424351 2718983930779120937657434512013867455548820224148073627378623609980111 1130760640189547044207203761774747082024351686619800395756958410106080 4661356296500120146645677141557786648630936176345539004262109110167208 9100758253488015840017224071067971558665492397885347660725631381708401 9127947685341853735187972127773344945050773031895050404703449225069038 7355696568657085290734466234786952456543122517479114466613670208736084 2313671545657762822696089905680216827990227867450866967383478161022109 0005418907699377867277059648206586073751433641713011744511704016132334 9063389003771777472580944833242545989973822564674460973839015552175709 6422261937569234096692347902063011590763830494478011352558782053282752 6432990876482679910153249074963538068771014944040060242262380449774268 2401904233153226013937331725013335198352712395550422922110105171367715 4198166625001314304274403493877643127657624870317305687566284108475166 0001324414350620739304183073837766897250290371164996773381894357892372 5532823256616542654631382911359993958629376.
Wait, though! We’re not done yet. That number hasn’t been adjusted for all our generous assumptions, so let’s do that now. We have assumed that there are a billion different viable forms of this RNA, on a billion planets, in a billion universes. That makes 10^27 simultaneous opportunities for our first RNA to be produced, so we divide the roughly 2.3*10^3612 attempts we would expect to need by 1*10^27: the odds of that first RNA forming somewhere improve to about 1 in 2.3*10^3585. But we still have 4.5 billion years, or 141,912,000,000,000,000 (about 142 quadrillion, or 1.4*10^17) seconds, to do it in. How many chain-assembly attempts would need to be happening per second on these planets? Dividing by the number of seconds, we find that about 1.6*10^3568 attempts would be needed every second. And per square centimeter? The Earth’s surface area is about 5.1*10^18 cm2 (its radius is about 6.4*10^6 meters). Dividing by that, we would need roughly 3*10^3549 chain-assembly attempts per second on every square centimeter of Earth and of all the other Earth-like planets included in this analysis, simply to expect the first bacterial chromosome to appear once.
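These numbers are far too large for ordinary calculators or floating-point arithmetic, but the whole chain of divisions can be checked in a few lines by working with base-10 logarithms; the inputs below are exactly the figures used in the paragraph above.

import math

log_trials_needed = 6000 * math.log10(4)         # 1 chance in 4^6000 ~ 10^3612.4

log_opportunities = 27                            # 10^9 forms x 10^9 planets x 10^9 universes
log_seconds       = math.log10(4.5e9 * 3.156e7)   # ~1.4e17 seconds in 4.5 billion years
log_area_cm2      = math.log10(5.1e18)            # Earth's surface area in cm^2

log_rate = log_trials_needed - log_opportunities - log_seconds - log_area_cm2
print(f"required chain-assembly attempts per second per cm^2: ~10^{log_rate:.1f}")

This prints an exponent of about 3549.5, consistent with the roughly 3*10^3549 attempts per second per square centimeter figure above.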
Researchers observing the polymerization of nucleotides on montmorillonite clay have not reported speeds in excess of about one nucleotide added per thirty minutes (in laboratory environments designed to facilitate the process) [1]. This indicates that the current model does not offer an even remotely plausible account of the origin of life–bacterial or otherwise.
Just to drive this home, let us apply the same reasoning to the development of the human genome. Although developments after the first bacterium appeared would have been more complex, most of them would still have occurred by random single-nucleotide mutations, so our probabilistic calculations can provide a rough estimate of the probability that the human genome arose by random genetic mutation. The human genome consists of roughly 3*10^9 base pairs; to be generous, we will use 2.3*10^9. The probability that such a genome would arise by random mutation is 1 in 4^2300000000, or about 1 in 1.133218×10^1384737980. It’s hard to describe how big this number is. I was going to copy and paste it into a separate document, but it is about 1.4 billion digits long; if we could put 1,000 digits on a page, it would take roughly 1.4 million pages just to hold the number. It should be clear from our previous calculations that even the generous assumptions we added to the standard model are inadequate to allow for the development of the first genetic material by random polymerization and mutation. In the face of these facts, it seems difficult to believe that this first step in the macroevolutionary process could have taken place. Thoughtful observers must continue to search for more viable explanations. Further posts will consider this subject in greater detail, drawing on evidence from a variety of disciplines.
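The same logarithm trick shows how long the human-genome number would be if written out; the inputs are the ones used in the paragraph above.

import math

base_pairs = 2_300_000_000
digits = int(base_pairs * math.log10(4)) + 1    # digits in 4^2,300,000,000
pages  = digits / 1000                          # at 1,000 digits per page

print(f"{digits:,} digits, about {pages:,.0f} pages")

It works out to about 1.4 billion digits, or roughly 1.4 million pages at 1,000 digits per page.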

Part 2: The Universe Has Been Fine-tuned To Support Life

During high school, I studied General Physics through the Educational Program for Gifted Youth (EPGY) at Stanford University. The standard textbook used by the program, though acknowledging evolution, made a fascinating comment at its end, in the chapter on cosmology:
The questions of cosmology are deep ones that fascinate the human intellect. One aspect that is especially intriguing is this: calculations on the formation and evolution of the universe have been performed that deliberately varied the values–just slightly–of certain fundamental physical constants. The result? A universe in which life as we know it could not exist. [For example, if the difference in mass between proton and neutron were zero, or small (less than 0.5 MeV/c2), there would be no atoms: electrons would be captured by protons never to be freed again.] Such results have given rise to the so-called Anthropic principle, which says that if the universe were even slightly different than it is, we couldn’t be here. It’s as if the universe were exquisitely tuned, almost as if to accommodate us.

More will be coming on this subject at a later time. Think about how the fine-tuning of the universe relates to its origin.

Giancoli, D. C. Physics, 5th edition (1998). Upper Saddle River, New Jersey: Prentice Hall, p. 1031.