Sunday, June 28, 2015

Proteins: striking evidence of design

The proteins in living cells are made of just certain kinds of amino acids, those that are “alpha” (short) and “left-handed.” Miller’s “primordial soup” contained many long (beta, gamma, delta) amino acids and equal numbers of both right- and left-handed forms. Problem: just one long or right-handed amino acid inserted into a chain of short, left-handed amino acids would prevent the coiling and folding necessary for proper protein function. What Miller actually produced was a seething brew of potent poisons that would absolutely destroy any hope for the chemical evolution of life. 1

Paper Reports that Amino Acids Used by Life Are Finely Tuned to Explore "Chemistry Space" 4

A recent paper in Nature's journal Scientific Reports, "Extraordinarily Adaptive Properties of the Genetically Encoded Amino Acids," 3  has found that the twenty amino acids used by life are finely tuned to explore "chemistry space" and allow for maximal chemical reactions. Considering that this is a technical paper, they give an uncommonly lucid and concise explanation of what they did:

We drew 10^8 random sets of 20 amino acids from our library of 1913 structures and compared their coverage of three chemical properties: size, charge, and hydrophobicity, to the standard amino acid alphabet. We measured how often the random sets demonstrated better coverage of chemistry space in one or more, two or more, or all three properties. In doing so, we found that better sets were extremely rare. In fact, when examining all three properties simultaneously, we detected only six sets with better coverage out of the 10^8 possibilities tested.

That's quite striking: out of 100 million different sets of twenty amino acids that they measured, only six are better able to explore "chemistry space" than the twenty amino acids that life uses. That suggests that life's set of amino acids is finely tuned to about one part in 16 million. Of course, they only looked at three factors -- size, charge, and hydrophobicity. When we consider other properties of amino acids, perhaps our set will turn out to be the best:

While these three dimensions of property space are sufficient to demonstrate the adaptive advantage of the encoded amino acids, they are necessarily reductive and cannot capture all of the structural and energetic information contained in the 'better coverage' sets.

They attribute this fine-tuning to natural selection, since their approach compares chance and selection as possible explanations of life's set of amino acids:

This is consistent with the hypothesis that natural selection influenced the composition of the encoded amino acid alphabet, contributing one more clue to the much deeper and wider debate regarding the roles of chance versus predictability in the evolution of life.
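The paper's sampling procedure can be sketched in miniature. Everything below is a stand-in: the property values are random numbers, not the paper's measured data for its 1,913 structures; the "coverage" measure is simplified to the range spanned per property; and the "standard" set is just the first 20 entries, so, unlike life's real alphabet, it will not look special here. The point is only the shape of the computation.

```python
import random

random.seed(0)

# Hypothetical library: 1913 candidate amino acids, each with three
# illustrative properties (size, charge, hydrophobicity), drawn at random.
library = [(random.random(), random.random(), random.random())
           for _ in range(1913)]

def coverage(aa_set):
    """Range spanned in each of the three property dimensions
    (a simplified stand-in for the paper's coverage measure)."""
    return tuple(max(p[i] for p in aa_set) - min(p[i] for p in aa_set)
                 for i in range(3))

# Pretend the first 20 entries are the "standard alphabet". With random
# stand-in data this set is NOT special, unlike life's actual alphabet.
standard = library[:20]
std_cov = coverage(standard)

# Draw random 20-member sets and count how often one beats the
# reference set in all three properties simultaneously.
trials, better = 10_000, 0          # the paper used 10^8 trials
for _ in range(trials):
    cand = coverage(random.sample(library, 20))
    if all(c > s for c, s in zip(cand, std_cov)):
        better += 1

print(f"{better} of {trials} random sets beat the reference in all three")
```

The paper's striking result is that, for life's real amino acids and real property data, this count came out to only six in 10^8.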

But selection just means it is optimized and not random. They are only comparing two possible models -- selection and chance. They don't consider the fact that intelligent design is another cause that's capable of optimizing features. The question is: Which cause -- natural selection or intelligent design -- optimized this trait?

To do so, you'd have to consider the complexity required to incorporate a new amino acid into life's genetic code. That in turn would require lots of steps: a new codon to encode that amino acid, and new enzymes and RNAs to help process that amino acid during translation. In other words, incorporating a new amino acid into life's genetic code is a multimutation feature.

The biochemical language of the genetic code uses short strings of three nucleotides (called codons) to symbolize commands -- including start commands, stop commands, and codons that signify each of the 20 amino acids used in life. After the information in DNA is transcribed into mRNA, a series of codons in the mRNA molecule instructs the ribosome which amino acids are to be strung in which order to build a protein. Translation works by using another type of RNA molecule called transfer RNA (tRNA). During translation, tRNA molecules ferry needed amino acids to the ribosome so the protein chain can be assembled.

Each tRNA molecule is linked to a single amino acid on one end, and at the other end exposes three nucleotides (called an anti-codon). At the ribosome, small free-floating pieces of tRNA bind to the mRNA. When the anti-codon on a tRNA molecule binds to matching codons on the mRNA molecule at the ribosome, the amino acids are broken off the tRNA and linked up to build a protein.

For the genetic code to be translated properly, each tRNA molecule must be attached to the proper amino acid that corresponds to its anticodon as specified by the genetic code. If this critical step does not occur, then the language of the genetic code breaks down, and there is no way to convert the information in DNA into properly ordered proteins. So how do tRNA molecules become attached to the right amino acid?

Cells use special proteins called aminoacyl-tRNA synthetase (aaRS) enzymes to attach tRNA molecules to the "proper" amino acid under the language of the genetic code. Most cells use 20 different aaRS enzymes, one for each amino acid used in life. These aaRS enzymes are key to ensuring that the genetic code is correctly interpreted in the cell.
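The two-adaptor scheme described above can be made concrete with a toy model. The tables below cover only three codons and are purely illustrative, not the full genetic code; the point is that translation fails end-to-end if the synthetase step (the charging table) pairs even one tRNA with the wrong amino acid.

```python
# Adaptor 1: aaRS enzymes charge each tRNA (keyed here by its anticodon,
# written 5'->3') with a specific amino acid. Tiny illustrative subset:
charging = {"CAU": "Met", "CCA": "Trp", "AAA": "Phe"}

# Adaptor 2: codon -> anticodon by antiparallel complementary pairing,
# modeled as a simple reverse-complement.
pair = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon(codon):
    return "".join(pair[b] for b in reversed(codon))

def translate(codons):
    """Run both adaptors in sequence: codon -> anticodon -> amino acid."""
    return [charging[anticodon(c)] for c in codons]

print(translate(["AUG", "UGG", "UUU"]))   # ['Met', 'Trp', 'Phe']

# Mischarge one tRNA (as in the cysteine-to-alanine experiment described
# later in the text): every use of that codon now inserts the wrong
# amino acid, even though the codon table itself never changed.
charging["CCA"] = "Ala"
print(translate(["AUG", "UGG", "UUU"]))   # ['Met', 'Ala', 'Phe']
```

This is why the aaRS charging step, not the codon-anticodon pairing, carries the assignment of meaning in the code.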

Yet these aaRS enzymes themselves are encoded by the genes in the DNA. This forms the essence of a "chicken-egg problem": aaRS enzymes themselves are necessary to perform the very task that constructs them.

How could such an integrated, language-based system arise in a step-by-step fashion? If any component is missing, the genetic information cannot be converted into proteins, and the message is lost. The RNA world is unsatisfactory because it provides no explanation for how the key step of the genetic code -- linking amino acids to the correct tRNA -- could have arisen.


Few of the many possible polypeptide chains will be useful to cells

Bruce Alberts writes in Molecular Biology of the Cell:

Since each of the 20 amino acids is chemically distinct and each can, in principle, occur at any position in a protein chain, there are 20 x 20 x 20 x 20 = 160,000 different possible polypeptide chains four amino acids long, or 20^n different possible polypeptide chains n amino acids long. For a typical protein length of about 300 amino acids, a cell could theoretically make more than 10^390 different polypeptide chains. This is such an enormous number that to produce just one molecule of each kind would require many more atoms than exist in the universe. Only a very small fraction of this vast set of conceivable polypeptide chains would adopt a single, stable three-dimensional conformation (by some estimates, less than one in a billion). And yet the vast majority of proteins present in cells adopt unique and stable conformations. How is this possible?
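Alberts's arithmetic here is easy to check. Python integers are arbitrary-precision, so the small case can be computed exactly, and the 300-residue case can be expressed as a power of ten with a logarithm:

```python
import math

# Number of distinct polypeptide chains of length n, with 20 choices
# of amino acid per position (Alberts's 20^n).
def chains(n):
    return 20 ** n

print(chains(4))                       # 160000, as in the text

# For a 300-residue protein, express 20^300 as a power of ten.
exponent = 300 * math.log10(20)        # log10(20^300)
print(f"20^300 ~ 10^{exponent:.0f}")   # ~10^390, matching the quote
```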

The complexity of living organisms is staggering, and it is quite sobering to note that we currently lack even the tiniest hint of what the function might be for more than 10,000 of the proteins that have thus far been identified in the human genome. There are certainly enormous challenges ahead for the next generation of cell biologists, with no shortage of fascinating mysteries to solve.

Now comes Alberts' striking explanation of how the right sequence arose:

The answer lies in natural selection. A protein with an unpredictably variable structure and biochemical activity is unlikely to help the survival of a cell that contains it. Such proteins would therefore have been eliminated by natural selection through the enormously long trial-and-error process that underlies biological evolution. Because evolution has selected for protein function in living organisms, the amino acid sequence of most present-day proteins is such that a single conformation is extremely stable. In addition, this conformation has its chemical properties finely tuned to enable the protein to perform a particular catalytic or structural function in the cell. Proteins are so precisely built that the change of even a few atoms in one amino acid can sometimes disrupt the structure of the whole molecule so severely that all function is lost.

Proteins are not rigid lumps of material. They often have precisely engineered moving parts whose mechanical actions are coupled to chemical events. It is this coupling of chemistry and movement that gives proteins the extraordinary capabilities that underlie the dynamic processes in living cells.

Now think for a moment. It seems that natural selection (doesn't that sound soooo scientific and trustworthy?!) is the key answer to any phenomenon in biology where there is no scientific evidence to make an empirical claim. Much has been written about the fact that natural selection cannot produce coded information. Alberts's short explanation is a prima facie example of how mainstream scientists make "just so" claims without hesitation, unable to provide a shred of evidence, just in order to maintain a paradigm on which the scientific establishment relies, where evolution is THE answer to almost every biochemical phenomenon. The fact is that precision, coded information, stability, interdependence, irreducible complexity, and the like are products of intelligent minds. The author also seems to forget that natural selection cannot occur before the first living cell replicates. Several hundred proteins had to be already in place and fully operating in order to make even the simplest life possible.



Amino acids link together when the amino group of one amino acid bonds to the carboxyl group of another. Notice that water is a by-product of the reaction (called a condensation reaction). 

Stephen Meyer writes in Signature in the Cell:

According to neo-Darwinian theory, new genetic information arises first as random mutations occur in the DNA of existing organisms. When mutations arise that confer a survival advantage on the organisms that possess them, the resulting genetic changes are passed on by natural selection to the next generation. As these changes accumulate, the features of a population begin to change over time. Nevertheless, natural selection can "select" only what random mutations first produce. And for the evolutionary process to produce new forms of life, random mutations must first have produced new genetic information for building novel proteins. That, for the mathematicians, physicists, and engineers at Wistar, was the problem. Why?

The skeptics at Wistar argued that it is extremely difficult to assemble a new gene or protein by chance because of the sheer number of possible base or amino-acid sequences. For every combination of amino acids that produces a functional protein there exists a vast number of other possible combinations that do not. And as the length of the required protein grows, the number of possible amino-acid sequence combinations of that length grows exponentially, so that the odds of finding a functional sequence—that is, a working protein—diminish precipitously.

To see this, consider the following. Whereas there are four ways to combine the letters A and B to make a two-letter combination (AB, BA, AA, and BB), there are eight ways to make three-letter combinations (AAA, AAB, ABB, ABA, BAA, BBA, BAB, BBB), and sixteen ways to make four-letter combinations, and so on. The number of combinations grows geometrically, 2^2, 2^3, 2^4, and so on. And this growth becomes more pronounced when the set of letters is larger. For protein chains, there are 20^2, or 400, ways to make a two-amino-acid combination, since each position could be any one of 20 different alphabetic characters. Similarly, there are 20^3, or 8,000, ways to make a three-amino-acid sequence, and 20^4, or 160,000, ways to make a sequence four amino acids long, and so on. As the number of possible combinations rises, the odds of finding a correct sequence diminish correspondingly. But most functional proteins are made of hundreds of amino acids. Therefore, even a relatively short protein of, say, 150 amino acids represents one sequence among an astronomically large number of other possible sequence combinations (approximately 10^195).


Consider the way this combinatorial problem might play itself out in the case of proteins in a hypothetical prebiotic soup. To construct even one short protein molecule of 150 amino acids by chance within the prebiotic soup there are several combinatorial problems—probabilistic hurdles—to overcome. First, all amino acids must form a chemical bond known as a peptide bond when joining with other amino acids in the protein chain (see Fig. 9.1). If the amino acids do not link up with one another via a peptide bond, the resulting molecule will not fold into a protein. In nature many other types of chemical bonds are possible between amino acids. In fact, when amino-acid mixtures are allowed to react in a test tube, they form peptide and nonpeptide bonds with roughly equal probability. Thus, with each amino-acid addition, the probability of it forming a peptide bond is roughly 1/2. Once four amino acids have become linked, the likelihood that they are joined exclusively by peptide bonds is roughly 1/2 × 1/2 × 1/2 × 1/2 = 1/16, or (1/2)^4. The probability of building a chain of 150 amino acids in which all linkages are peptide linkages is (1/2)^149, or roughly 1 chance in 10^45.

Second, in nature every amino acid found in proteins (with one exception) has a distinct mirror image of itself; there is one left-handed version, or L-form, and one right-handed version, or D-form. These mirror-image forms are called optical isomers (see Fig. 9.2). Functioning proteins tolerate only left-handed amino acids, yet in abiotic amino-acid production the right-handed and left-handed isomers are produced with roughly equal frequency. Taking this into consideration further compounds the improbability of attaining a biologically functioning protein. The probability of attaining, at random, only L-amino acids in a hypothetical peptide chain 150 amino acids long is (1/2)^150, or again roughly 1 chance in 10^45. Starting from mixtures of D-forms and L-forms, the probability of building a 150-amino-acid chain at random in which all bonds are peptide bonds and all amino acids are L-form is, therefore, roughly 1 chance in 10^90.

Functioning proteins have a third independent requirement, the most important of all: their amino acids, like letters in a meaningful sentence, must link up in functionally specified sequential arrangements. In some cases, changing even one amino acid at a given site results in the loss of protein function. Moreover, because there are 20 biologically occurring amino acids, the probability of getting a specific amino acid at a given site is small—1/20. (Actually the probability is even lower because, in nature, there are also many nonprotein-forming amino acids.) On the assumption that each site in a protein chain requires a particular amino acid, the probability of attaining a particular protein 150 amino acids long would be (1/20)^150, or roughly 1 chance in 10^195.
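Meyer's three probabilities, and the combined figure of 1 in 10^90 for the first two, can be checked on a log10 scale (the exact powers are too small for ordinary floats, so logarithms are the natural tool):

```python
import math

log10 = math.log10
bonds     = 149 * log10(1 / 2)    # all 149 bonds are peptide bonds
chirality = 150 * log10(1 / 2)    # all 150 residues are L-form
sequence  = 150 * log10(1 / 20)   # one specific 150-residue sequence

print(f"bonds: 10^{bonds:.0f}, chirality: 10^{chirality:.0f}, "
      f"sequence: 10^{sequence:.0f}")
# -> bonds: 10^-45, chirality: 10^-45, sequence: 10^-195

print(f"peptide bonds and chirality combined: 10^{bonds + chirality:.0f}")
# -> 10^-90, matching the text's "1 chance in 10^90"
```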

How rare, or common, are the functional sequences of amino acids among all the possible sequences of amino acids in a chain of any given length?

Douglas Axe answered this question in 2004. 3 Axe was able to make a careful estimate of the ratio of (a) the number of 150-amino-acid sequences that can perform that particular function to (b) the whole set of possible amino-acid sequences of this length. Axe estimated this ratio to be 1 to 10^77.

This was a staggering number, and it suggested that a random process would have great difficulty generating a protein with that particular function by chance. But I didn't want to know just the likelihood of finding a protein with a particular function within a space of combinatorial possibilities. I wanted to know the odds of finding any functional protein whatsoever within such a space. That number would make it possible to evaluate chance-based origin-of-life scenarios, to assess the probability that a single protein—any working protein—would have arisen by chance on the early earth.

Fortunately, Axe's work provided this number as well. Axe knew that in nature proteins perform many specific functions. He also knew that in order to perform these functions their amino-acid chains must first fold into stable three-dimensional structures. Thus, before he estimated the frequency of sequences performing a specific (beta-lactamase) function, he first performed experiments that enabled him to estimate the frequency of sequences that will produce stable folds. On the basis of his experimental results, he calculated the ratio of (a) the number of 150-amino-acid sequences capable of folding into stable "function-ready" structures to (b) the whole set of possible amino-acid sequences of that length. He determined that ratio to be 1 to 10^74.

In other words, a random process producing amino-acid chains of this length would stumble onto a functional protein only about once in every 10^74 attempts. 

When one considers that Robert Sauer was working on a shorter protein of 100 amino acids, Axe's number might seem a bit less prohibitively improbable. Nevertheless, it still represents a startlingly small probability. In conversations with me, Axe has compared the odds of producing a functional protein sequence of modest (150-amino-acid) length at random to the odds of finding a single marked atom out of all the atoms in our galaxy via a blind and undirected search. Believe it or not, the odds of finding the marked atom in our galaxy are markedly better (about a billion times better) than those of finding a functional protein among all the sequences of corresponding length. 
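Meyer's "about a billion times better" comparison can be reproduced on a log10 scale. The figure of roughly 10^65 atoms in our galaxy is his assumption as reported in the passage, not an independent measurement here:

```python
# Odds compared as powers of ten, using the passage's own round numbers.
functional_protein = 74   # 1 in 10^74 random 150-aa chains fold functionally
atoms_in_galaxy    = 65   # assumed atom count for our galaxy (per Meyer)

# A blind search for one marked atom succeeds with odds 1 in 10^65,
# which is 10^(74-65) = 10^9, about a billion, times better.
advantage = functional_protein - atoms_in_galaxy
print(f"marked-atom search is 10^{advantage} (~a billion) times easier")
```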

1) https://answersingenesis.org/origin-of-life/the-origin-of-life-dna-and-protein/
2) B. Alberts, Molecular Biology of the Cell.
3) http://www.ncbi.nlm.nih.gov/pubmed/15321723
4) http://www.evolutionnews.org/2015/06/paper_reports_t096581.html

http://elshamah.heavenforum.org/t2062-proteins-how-they-provide-striking-evidence-of-design#3552

Origin of translation of the 4 nucleic acid bases and the 20 amino acids, and the universal assignment of codons to amino acids


The cell converts the information carried in an mRNA molecule into a protein molecule. This feat of translation was a focus of attention of biologists in the late 1950s, when it was posed as the “coding problem”: how is the information in a linear sequence of nucleotides in RNA translated into the linear sequence of a chemically quite different set of units—the amino acids in proteins?

The first scientist after Watson and Crick to propose a solution to the coding problem, that is, the relationship between the DNA structure and protein synthesis, was the Russian-born physicist George Gamow. Gamow published in the October 1953 issue of Nature a solution called the "diamond code", an overlapping triplet code based on a combinatorial scheme in which 4 nucleotides arranged 3-at-a-time would specify 20 amino acids. Somewhat like a language, this highly restrictive code was primarily hypothetical, based on then-current knowledge of the behavior of nucleic acids and proteins. 3

The concept of coding applied to genetic specificity was somewhat misleading, as translation between the four nucleic acid bases and the 20 amino acids would obey the rules of a cipher instead of a code. As Crick acknowledged years later, in linguistic analysis, ciphers generally operate on units of regular length (as in the triplet DNA scheme), whereas codes operate on units of variable length (e.g., words, phrases). But the code metaphor worked well, even though it was literally inaccurate, and in Crick’s words, “‘Genetic code’ sounds a lot more intriguing than ‘genetic cipher’.”

An mRNA Sequence Is Decoded in Sets of Three Nucleotides

Once an mRNA has been produced by transcription and processing, the information present in its nucleotide sequence is used to synthesize a protein. Transcription is simple to understand as a means of information transfer: since DNA and RNA are chemically and structurally similar, the DNA can act as a direct template for the synthesis of RNA by complementary base-pairing. As the term transcription signifies, it is as if a message written out by hand is being converted, say, into a typewritten text. The language itself and the form of the message do not change, and the symbols used are closely related.

In contrast, the conversion of the information in RNA into protein represents a translation of the information into another language that uses quite different symbols. Moreover, since there are only 4 different nucleotides in mRNA and 20 different types of amino acids in a protein, this translation cannot be accounted for by a direct one-to-one correspondence between a nucleotide in RNA and an amino acid in protein. The nucleotide sequence of a gene, through the intermediary of mRNA, is translated into the amino acid sequence of a protein. This code was deciphered in the early 1960s.

Question: how did the translation from triplet anticodons to amino acids, and its assignment, arise? There is no physical affinity between the anticodon and the amino acid. What must be explained is the arrangement of the codon "words" in the standard codon table, which is highly non-random, redundant, and optimal, and serves to translate the information into the amino acid sequence to make proteins, as well as the origin of the assignment of the 64 triplet codons to the 20 amino acids. That is, the origin of its translation. The origin of an alphabet through the triplet codons is one thing, but on top of that, it has to be translated into another "alphabet" constituted by the 20 amino acids. That is like explaining the origin of the capability to translate English into Chinese. We have to constitute the English and Chinese languages and symbols first, in order to know their equivalence. That is a mental process.


The sequence of nucleotides in the mRNA molecule is read in consecutive groups of three. RNA is a linear polymer of four different nucleotides, so there are 4 x 4 x 4 = 64 possible combinations of three nucleotides: the triplets AAA, AUA, AUG, and so on. However, only 20 different amino acids are commonly found in proteins. Either some nucleotide triplets are never used, or the code is redundant and some amino acids are specified by more than one triplet. The second possibility is, in fact, the correct one, as shown by the completely deciphered genetic code shown below:








Each group of three consecutive nucleotides in RNA is called a codon, and each codon specifies either one amino acid or a stop to the translation process.
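The codon count and the pigeonhole argument for redundancy can be checked directly. The stop codons and the leucine codons listed below are standard genetic-code assignments:

```python
from itertools import product

bases = "ACGU"
codons = ["".join(t) for t in product(bases, repeat=3)]
print(len(codons))                         # 4^3 = 64, as in the text

# 61 sense codons must cover only 20 amino acids, so by the pigeonhole
# principle some amino acids necessarily get several codons (redundancy).
stops = {"UAA", "UAG", "UGA"}              # the three stop codons
sense = len(codons) - len(stops)
print(sense, "sense codons for 20 amino acids")

# One standard example of that redundancy:
leucine = {"UUA", "UUG", "CUU", "CUC", "CUA", "CUG"}
print(f"leucine alone is specified by {len(leucine)} codons")
```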

In principle, an RNA sequence can be translated in any one of three different reading frames, depending on where the decoding process begins (Figure below). However, only one of the three possible reading frames in an mRNA encodes the required protein. We see later how a special punctuation signal at the beginning of each RNA message sets the correct reading frame at the start of protein synthesis.
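Frame-dependence is easy to demonstrate: the same nucleotide string parses into entirely different codon series depending on where reading begins. The sequence below is arbitrary, chosen only for illustration:

```python
def codons(mrna, frame):
    """Split an mRNA string into triplets starting at offset `frame`
    (0, 1, or 2), discarding any incomplete codon at the end."""
    return [mrna[i:i + 3] for i in range(frame, len(mrna) - 2, 3)]

seq = "AUGGCAUCCGAU"   # arbitrary illustrative sequence
for f in range(3):
    print(f"frame {f}: {codons(seq, f)}")
# frame 0: ['AUG', 'GCA', 'UCC', 'GAU']
# frame 1: ['UGG', 'CAU', 'CCG']
# frame 2: ['GGC', 'AUC', 'CGA']
```

Only one of these three parses corresponds to the intended protein, which is why a start signal fixing the frame is essential.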

AUG is the Universal Start Codon. Nearly every organism (and every gene) that has been studied uses the three ribonucleotide sequence AUG to indicate the "START" of protein synthesis (Start Point of Translation).

The same question arises here: Why and how should natural processes have "chosen" to insert a punctuation signal, a universal start codon, in order for the ribosome to "know" where to start translation? This is essential for the machinery to start translating at the correct place.



Note that three codons are referred to as STOP codons: UAA, UAG, and UGA. These are used to terminate translation; they indicate the end of the gene's coding region. 




tRNA Molecules Match Amino Acids to Codons in mRNA

The codons in an mRNA molecule do not directly recognize the amino acids they specify: the group of three nucleotides does not, for example, bind directly to the amino acid. Rather, the translation of mRNA into protein depends on adaptor molecules that can recognize and bind both to the codon and, at another site on their surface, to the amino acid. These adaptors consist of a set of small RNA molecules known as transfer RNAs (tRNAs), each about 80 nucleotides in length.

RNA molecules can fold into precise three-dimensional structures, and the tRNA molecules provide a striking example. Four short segments of the folded tRNA are double-helical, producing a molecule that looks like a cloverleaf when drawn schematically. See below:





For example, a 5′-GCUC-3′ sequence in one part of a polynucleotide chain can form a relatively strong association with a 5′-GAGC-3′ sequence in another region of the same molecule. The cloverleaf undergoes further folding to form a compact L-shaped structure that is held together by additional hydrogen bonds between different regions of the molecule. Two regions of unpaired nucleotides situated at either end of the L-shaped molecule are crucial to the function of tRNA in protein synthesis. One of these regions forms the anticodon, a set of three consecutive nucleotides that pairs with the complementary codon in an mRNA molecule. The other is a short single-stranded region at the 3′ end of the molecule; this is the site where the amino acid that matches the codon is attached to the tRNA.

The genetic code is redundant; that is, several different codons can specify a single amino acid. This redundancy implies either that there is more than one tRNA for many of the amino acids or that some tRNA molecules can base-pair with more than one codon. In fact, both situations occur. Some amino acids have more than one tRNA and some tRNAs are constructed so that they require accurate base-pairing only at the first two positions of the codon and can tolerate a mismatch (or wobble) at the third position. See below:



Wobble base-pairing between codons and anticodons. If the nucleotide listed in the first column is present at the third, or wobble, position of the codon, it can base-pair with any of the nucleotides listed in the second column. Thus, for example, when inosine (I) is present in the wobble position of the tRNA anticodon, the tRNA can recognize any one of three different codons in bacteria and either of two codons in eucaryotes. The inosine in tRNAs is formed from the deamination of guanine, a chemical modification that takes place after the tRNA has been synthesized. The nonstandard base pairs, including those made with inosine, are generally weaker than conventional base pairs. Note that codon–anticodon base pairing is more stringent at positions 1 and 2 of the codon: here only conventional base pairs are permitted. The differences in wobble base-pairing interactions between bacteria and eucaryotes presumably result from subtle structural differences between bacterial and eucaryotic ribosomes, the molecular machines that perform protein synthesis. 

(Adapted from C. Guthrie and J. Abelson, in The Molecular Biology of the Yeast Saccharomyces: Metabolism and Gene Expression, pp. 487–528. Cold Spring Harbor, New York: Cold Spring Harbor Laboratory Press, 1982.)

This wobble base-pairing explains why so many of the alternative codons for an amino acid differ only in their third nucleotide. In bacteria, wobble base-pairings make it possible to fit the 20 amino acids to their 61 codons with as few as 31 kinds of tRNA molecules. The exact number of different kinds of tRNAs, however, differs from one species to the next. For example, humans have nearly 500 tRNA genes but, among them, only 48 different anticodons are represented.
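The wobble rules described above can be encoded to show why one tRNA can cover several codons. The rule table below follows the standard bacterial rules as the text describes them (strict pairing at codon positions 1 and 2, wobble at position 3, inosine reading three bases); treat it as a sketch of the caption's table, not a complete model:

```python
# Wobble pairing: first (5') anticodon base -> codon third-position
# bases it can read (standard bacterial rules; eukaryotes differ slightly).
wobble = {
    "C": {"G"},             # conventional pair only
    "A": {"U"},             # conventional pair only
    "G": {"C", "U"},        # G can also wobble-pair with U
    "U": {"A", "G"},        # U can also wobble-pair with G
    "I": {"U", "C", "A"},   # inosine reads three different codons
}

def codons_read(anticodon):
    """Codons recognized by an anticodon (5'->3'): strict pairing at
    codon positions 1-2, wobble rules at position 3."""
    strict = {"A": "U", "U": "A", "G": "C", "C": "G"}
    # Anticodon bases 3 and 2 pair codon positions 1 and 2; base 1 wobbles.
    first, second = strict[anticodon[2]], strict[anticodon[1]]
    return {first + second + third for third in wobble[anticodon[0]]}

# A tRNA with inosine in the wobble position covers three codons at once,
# all specifying leucine in the standard code:
print(sorted(codons_read("IAG")))   # ['CUA', 'CUC', 'CUU']
```

Counting codon coverage this way is how one arrives at figures like 31 tRNAs sufficing for 61 sense codons.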

Specific Enzymes Couple Each Amino Acid to Its Appropriate tRNA Molecule

We have seen that, to read the genetic code in DNA, cells make a series of different tRNAs. We now consider how each tRNA molecule becomes linked to the one amino acid in 20 that is its appropriate partner. Recognition and attachment of the correct amino acid depends on enzymes called aminoacyl-tRNA synthetases, which covalently couple each amino acid to its appropriate set of tRNA molecules 





Most cells have a different synthetase enzyme for each amino acid (that is, 20 synthetases in all); one attaches glycine to all tRNAs that recognize codons for glycine, another attaches alanine to all tRNAs that recognize codons for alanine, and so on. Many bacteria, however, have fewer than 20 synthetases, and the same synthetase enzyme is responsible for coupling more than one amino acid to the appropriate tRNAs. In these cases, a single synthetase places the identical amino acid on two different types of tRNAs, only one of which has an anticodon that matches the amino acid. A second enzyme then chemically modifies each “incorrectly” attached amino acid so that it now corresponds to the anticodon displayed by its covalently linked tRNA. The synthetase-catalyzed reaction that attaches the amino acid to the 3′ end of the tRNA is one of many reactions coupled to the energy-releasing hydrolysis of ATP, and it produces a high-energy bond between the tRNA and the amino acid. The energy of this bond is used at a later stage in protein synthesis to link the amino acid covalently to the growing polypeptide chain. The aminoacyl-tRNA synthetase enzymes and the tRNAs are equally important in the decoding process.





These enzymes are not gentle with tRNA molecules. The structure of glutaminyl-tRNA synthetase with its tRNA (entry 1gtr) is a good example (see above). The enzyme firmly grips the anticodon, spreading the three bases widely apart for better recognition. At the other end, the enzyme unpairs one base at the beginning of the chain, seen curving upward here, and kinks the long acceptor end of the chain into a tight hairpin, seen here curving downward. This places the 2′ hydroxyl on the last nucleotide in the active site, where ATP and the amino acid (not present in this structure) are bound.

The tRNA and ATP fit precisely in the active site of the enzyme, and the structure is configured and designed to function in a finely tuned manner. How could such a functional device be the result of random, unguided forces and chemical reactions without an end goal?


The genetic code is translated by means of two adaptors that act one after another. The first adaptor is the aminoacyl-tRNA synthetase, which couples a particular amino acid to its corresponding tRNA; the second adaptor is the tRNA molecule itself, whose anticodon forms base pairs with the appropriate codon on the mRNA. An error in either step would cause the wrong amino acid to be incorporated into a protein chain. In the sequence of events shown, the amino acid tryptophan (Trp) is selected by the codon UGG on the mRNA.

This was established by an experiment in which one amino acid (cysteine) was chemically converted into a different amino acid (alanine) after it already had been attached to its specific tRNA. When such “hybrid” aminoacyl-tRNA molecules were used for protein synthesis in a cell-free system, the wrong amino acid was inserted at every point in the protein chain where that tRNA was used. Although, as we shall see, cells have several quality control mechanisms to avoid this type of mishap, the experiment establishes that the genetic code is translated by two sets of adaptors that act sequentially. Each matches one molecular surface to another with great specificity, and it is their combined action that associates each sequence of three nucleotides in the mRNA molecule—that is, each codon—with its particular amino acid.
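The two-adaptor logic, including the cysteine-to-alanine experiment just described, can be sketched in a few lines of Python. This is a toy model: the codon and anticodon tables are illustrative fragments, not the full genetic code.

```python
# Adaptor 1: a synthetase charges each tRNA (keyed here by its anticodon,
# written 5'->3') with its assigned amino acid.
charging_rules = {"ACA": "Cys", "CCA": "Trp"}

# Adaptor 2: the ribosome pairs each mRNA codon with the matching anticodon.
codon_to_anticodon = {"UGU": "ACA", "UGG": "CCA"}

def translate(codons, charged_trnas):
    """The ribosome checks only the codon-anticodon pairing; it never
    inspects the amino acid the tRNA actually carries."""
    return [charged_trnas[codon_to_anticodon[c]] for c in codons]

print(translate(["UGU", "UGG"], charging_rules))  # ['Cys', 'Trp']

# The hybrid-tRNA experiment: the amino acid on the Cys-tRNA is chemically
# converted to alanine AFTER charging; the ribosome inserts it anyway.
hybrid_rules = dict(charging_rules, ACA="Ala")
print(translate(["UGU", "UGG"], hybrid_rules))    # ['Ala', 'Trp']
```

Because the ribosome verifies only the codon-anticodon match, whatever amino acid the tRNA happens to carry is what gets incorporated, which is exactly what the cell-free experiment demonstrated.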

Editing by tRNA Synthetases Ensures Accuracy

Several mechanisms working together ensure that the tRNA synthetase links the correct amino acid to each tRNA. The synthetase must first select the correct amino acid, and most synthetases do so by a two-step mechanism. First, the correct amino acid has the highest affinity for the active-site pocket of its synthetase and is therefore favored over the other 19. In particular, amino acids larger than the correct one are effectively excluded from the active site. However, accurate discrimination between two similar amino acids, such as isoleucine and valine (which differ by only a methyl group), is very difficult to achieve by a one-step recognition mechanism. A second discrimination step occurs after the amino acid has been covalently linked to AMP. When tRNA binds the synthetase, it tries to force the amino acid into a second pocket in the synthetase, the precise dimensions of which exclude the correct amino acid but allow access by closely related amino acids. Once an amino acid enters this editing pocket, it is hydrolyzed from the AMP (or from the tRNA itself if the aminoacyl-tRNA bond has already formed), and is released from the enzyme. This hydrolytic editing, which is analogous to the exonucleolytic proofreading by DNA polymerases , raises the overall accuracy of tRNA charging to approximately one mistake in 40,000 couplings.
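A quick back-of-the-envelope calculation shows what the quoted charging accuracy means at the level of whole proteins. This is a sketch; the 400-residue protein length is an assumed, typical value, not from the source text.

```python
per_site_error = 1 / 40_000   # quoted accuracy of tRNA charging after editing
protein_length = 400          # assumed length of a typical protein

# Probability that every residue in the chain was charged correctly.
p_error_free = (1 - per_site_error) ** protein_length
print(f"{p_error_free:.3f}")  # ~0.990: roughly 99% of chains have no mischarged residue
```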

Editing significantly decreases the frequency of errors and is important for translational quality control, and many details of the various editing mechanisms and their effect on different cellular systems are now starting to emerge. 8

High Fidelity

Aminoacyl-tRNA synthetases must perform their tasks with high accuracy. Every mistake they make will result in a misplaced amino acid when new proteins are constructed. These enzymes make about one mistake in 10,000. For most amino acids, this level of accuracy is not too difficult to achieve. Most of the amino acids are quite different from one another, and, as mentioned before, many parts of the different tRNA are used for accurate recognition. But in a few cases, it is difficult to choose just the right amino acids and these enzymes must resort to special techniques. 

Isoleucine is a particularly difficult example. It is recognized by an isoleucine-shaped hole in the enzyme, which is too small to fit larger amino acids like methionine and phenylalanine, and too hydrophobic to bind anything with polar sidechains. But the slightly smaller amino acid valine, different by only a single methyl group, also fits nicely into this pocket, binding in place of isoleucine about 1 in 150 times. This is far too many errors, so corrective steps must be taken. Isoleucyl-tRNA synthetase (PDB entry 1ffy) solves this problem with a second active site, which performs an editing reaction. Isoleucine does not fit into this site, but errant valine does. The mistake is then cleaved away, leaving the tRNA ready for a properly placed isoleucine amino acid. This proofreading step improves the overall error rate to about 1 in 3,000.  9
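The two quoted error rates let us estimate how effective the editing site must be; this is simple arithmetic on the figures above, nothing more.

```python
raw_misbinding = 1 / 150   # valine accepted in place of isoleucine (quoted)
final_error = 1 / 3_000    # residual error after hydrolytic editing (quoted)

# Fraction of errant valines the editing site must catch and cleave away
# to get from the raw rate down to the final rate.
fraction_edited = 1 - final_error / raw_misbinding
print(f"{fraction_edited:.0%}")  # 95%
```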

This is an amazing error-proofreading technique, which adds to the other repair mechanisms in the cell. Once again the question arises: how could these precise molecular machines have arisen by natural means, without intelligence involved? This seems to me one more striking example of highly sophisticated molecular nanomachinery designed to fulfill its task with a high degree of fidelity and error minimization, which can arise only through the foresight of an incredibly intelligent creator.





A new peer-reviewed paper in the journal Frontiers in Genetics, "Redundancy of the genetic code enables translational pausing," finds that so-called "redundant" codons may actually serve important functions in the genome. Redundant (also called "degenerate") codons are those triplets of nucleotides that encode the same amino acid. For example, in the genetic code, the codons GGU, GGC, GGA, and GGG all encode the amino acid glycine. While it has been shown (see here) that such redundancy is actually optimized to minimize the impact of mutations resulting in amino acid changes, it is generally assumed that synonymous codons are functionally equivalent. They just encode the same amino acid, and that's it.  5
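The redundancy is easy to see by enumeration: 64 possible triplets must map onto only 20 amino acids plus a stop signal, so synonymous codons are unavoidable. A minimal Python sketch, using the glycine family mentioned above:

```python
from itertools import product

BASES = "UCAG"
codons = ["".join(triplet) for triplet in product(BASES, repeat=3)]
print(len(codons))  # 64 triplets for 21 meanings (20 amino acids + stop)

# All four GG- codons encode glycine: the third position is fully
# redundant ("four-fold degenerate") for this amino acid.
glycine_codons = [c for c in codons if c.startswith("GG")]
print(glycine_codons)  # ['GGU', 'GGC', 'GGA', 'GGG']
```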

The ribosome is capable of reading both sets of commands -- as they put it, "[t]he ribosome can be thought of as an autonomous functional processor of data that it sees at its input." To put it another way, the genetic code is "multidimensional," a code within a code. This multidimensional nature exceeds the complexity of computer codes generated by humans, which lack the kind of redundancy of the genetic code. As the abstract states:

The codon redundancy ("degeneracy") found in protein-coding regions of mRNA also prescribes Translational Pausing (TP). When coupled with the appropriate interpreters, multiple meanings and functions are programmed into the same sequence of configurable switch-settings. This additional layer of Ontological Prescriptive Information (PIo) purposely slows or speeds up the translation decoding process within the ribosome. Variable translation rates help prescribe functional folding of the nascent protein. Redundancy of the codon to amino acid mapping, therefore, is anything but superfluous or degenerate. Redundancy programming allows for simultaneous dual prescriptions of TP and amino acid assignments without cross-talk. This allows both functions to be coincident and realizable. We will demonstrate that the TP schema is a bona fide rule-based code, conforming to logical code-like properties. Second, we will demonstrate that this TP code is programmed into the supposedly degenerate redundancy of the codon table. We will show that algorithmic processes play a dominant role in the realization of this multi-dimensional code.

The paper even suggests, "Cause-and-effect physical determinism...cannot account for the programming of sequence-dependent biofunction."


Progressive development of the genetic code is not realistic 7

In view of the many components involved in implementing the genetic code, origin-of-life researchers have tried to see how it might have arisen in a gradual, evolutionary manner. For example, it is usually suggested that to begin with the code applied to only a few amino acids, which then gradually increased in number. But this sort of scenario encounters all sorts of difficulties with something as fundamental as the genetic code.

First, it would seem that the early codons need only have used two bases (which could code for up to 16 amino acids); but a subsequent change to three bases (to accommodate 20) would seriously disrupt the code. Recognising this difficulty, most researchers assume that the code used 3-base codons from the outset, which was remarkably fortuitous, or implies some measure of foresight on the part of evolution (which, of course, is not allowed).
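The capacity argument in the paragraph above is straightforward combinatorics; here is a minimal check:

```python
AMINO_ACIDS = 20  # the standard amino acid alphabet

for length in (1, 2, 3):
    capacity = 4 ** length  # four possible bases per codon position
    verdict = "enough" if capacity >= AMINO_ACIDS else "not enough"
    print(f"{length}-base codons: {capacity} combinations ({verdict})")
```

Doublet codons top out at 16 combinations, so a triplet code (64 combinations) is the smallest uniform codon length that covers all 20 amino acids; switching from two bases to three midway would reassign the reading frame of every existing message.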

Much more serious are the implications for proteins based on a severely limited set of amino acids. In particular, if the code was limited to only a few amino acids, then it must be presumed that early activating enzymes comprised only that limited set of amino acids, and yet had the necessary level of specificity for reliable implementation of the code. There is no evidence of this; and subsequent reorganization of the enzymes as they made use of newly available amino acids would require highly improbable changes in their configuration. Similar limitations would apply to the protein components of the ribosomes which have an equally essential role in translation.

Further, tRNAs tend to have atypical bases which are synthesized in the usual way but subsequently modified. These modifications are carried out by enzymes, so these enzymes too would need to have started life based on a limited number of amino acids; or it has to be assumed that these modifications are later refinements - even though they appear to be necessary for reliable implementation of the code.

Finally, what is going to motivate the addition of new amino acids to the genetic code? They would have little if any utility until incorporated into proteins - but that will not happen until they are included in the genetic code. So the new amino acids must be synthesised and somehow incorporated into useful proteins (by enzymes that lack them), and all of the necessary machinery for including them in the code (dedicated tRNAs and activating enzymes) put in place – and all done opportunistically! Totally incredible!

What must be explained is the arrangement of the codons in the standard codon table, which is highly non-random and serves to translate into the amino acid sequences that make proteins, and the origin of the assignment of the 64 triplet codons to the 20 amino acids; that is, the origin of its translation. The origin of an alphabet through the triplet codons is one thing, but on top of that it has to be translated into another "alphabet" constituted by the 20 amino acids. It is like explaining the origin of the capability to translate English into Chinese. Beyond that, the machinery that carries out the process, the hardware, also has to be explained. When humans translate English into Chinese, for example, we recognise the English word, and the translator knows the equivalent Chinese symbol and writes it down. In the cell, aminoacyl-tRNA synthetases recognise the anticodon of the tRNA and attach the equivalent amino acid to the tRNA. How could random chemical reactions have produced this recognition? Some theories try to explain the mechanism, but they all remain unsatisfactory. Obviously. Furthermore, aminoacyl-tRNA synthetases are complex enzymes. For what reason would they have come to be, if their function could only be employed after the whole translation process was set in place, with a fully functional ribosome able to do its job? Remember the catch-22: they are themselves made through the very process in question. Why is it not rational to conclude that the code itself, the software, as well as the hardware, are best explained by the invention of a highly intelligent being rather than by random chemical affinities and reactions? Questions: what good would the ribosome be without tRNAs? Without amino acids, which are the product of enormously complex chemical processes and pathways?
What good would the machinery be if the code were not established, and neither the assignment of each codon to its respective amino acid? Did the software and the hardware not have to be in place at the same time? Were all the parts not functional only when fully developed, interlocked, set up, and tuned to do their job with precision, like a man-made motor? And even if, let us say, the whole thing were fully working and in place, what good would it be without all the other required parts: the DNA double helix, its compaction through histones, chromatin, and chromosomes, and its highly complex mechanism of information extraction and transcription into mRNA? Did the whole process, that is, INITIATION OF TRANSCRIPTION, CAPPING, ELONGATION, SPLICING, CLEAVAGE, POLYADENYLATION AND TERMINATION, EXPORT FROM THE NUCLEUS TO THE CYTOSOL, INITIATION OF PROTEIN SYNTHESIS (TRANSLATION), COMPLETION OF PROTEIN SYNTHESIS AND PROTEIN FOLDING, with its respective machinery, not have to be all in place? Does that not constitute an interdependent and irreducibly complex system?

Stephen Meyer writes the following in the BIO-Complexity paper

"Can the Origin of the Genetic Code Be Explained by Direct RNA Templating?" 1 :

Among the main naturalistic concepts of the origin and evolution of the code is the stereochemical theory, according to which codon assignments are dictated by physico-chemical affinity between amino acids and the cognate codons (anticodons).

The genetic code as we observe it today is a semantic (symbol-based) relation between (a) amino acids, the building blocks of proteins, and (b) codons, the three-nucleotide units in messenger RNA specifying the identity and order of different amino acids in protein assembly. The actual physical mediators of the code, however, are transfer RNAs (tRNAs) that, after being charged with their specific amino acids by enzymes known as aminoacyl transfer RNA synthetases (aaRSs), present the amino acids for peptide bond formation in the peptidyl-transferase (P) site of the ribosome, the molecular machine that constructs proteins.


When proteins are produced in cells based on the "genetic code" of codons, there is a precise process under which molecules called transfer RNA (tRNA) bind to specific amino acids and then transport them to cellular factories called ribosomes where the amino acids are placed together, step by step, to form a protein. Mistakes in this process, which is mediated by enzymes called synthetases, can be disastrous, as they can lead to improperly formed proteins. Thankfully, the tRNA molecules are matched to the proper amino acids with great precision, but we still lack a fundamental understanding of how this selection takes place. 4



The secondary structure of a typical tRNA (see figure below) reveals the coding (semantic) relations that Yarus et al. are trying to obtain from chemistry alone, a quest Yockey has compared to latter-day alchemy.


At the end of its 3' arm, the tRNA binds its cognate amino acid via the universally conserved CCA sequence. Some distance away—about 70 Å—in loop 2, at the other end of the inverted cloverleaf, the anticodon recognizes the corresponding codon in the mRNA strand. (The familiar ‘cloverleaf’ shape represents only the secondary structure of tRNA; its three-dimensional form more closely resembles an “L” shape, with the anticodon at one end and an amino acid at the other.) Thus, in the current genetic code, there is no direct chemical interaction between codons, anticodons, and amino acids. The anticodon triplet and amino acid are situated at opposite ends of the tRNA: the mRNA codon binds not to the amino acid directly, but rather to the anticodon triplet in loop 2 of the tRNA.

Since all twenty amino acids, when bound to their corresponding tRNA molecules, attach to the same CCA sequence at the end of the 3’ arm, the stereochemical properties of that nucleotide sequence clearly do not determine which amino acids attach, and which do not. The CCA sequence is indifferent, so to speak, to which amino acids bind to it.

Nevertheless, tRNAs are informationally (i.e., semantically) highly specific: protein assembly and biological function—but not chemistry—demand such specificity. As noted, in the current code, codon-to-amino acid semantic mappings are mediated by tRNAs, but also by the enzymatic action of the twenty separate aminoacyl-tRNA synthetases 

Aminoacyl tRNA synthetase

An aminoacyl tRNA synthetase (aaRS) is an enzyme that catalyzes the esterification of a specific cognate amino acid or its precursor to one of all its compatible cognate tRNAs to form an aminoacyl-tRNA. In other words, aminoacyl tRNA synthetase attaches the appropriate amino acid onto its tRNA.
This is sometimes called "charging" or "loading" the tRNA with the amino acid. Once the tRNA is charged, a ribosome can transfer the amino acid from the tRNA onto a growing peptide, according to the genetic code. Aminoacyl-tRNA therefore plays an important role in translation, the expression of genes to create proteins. 2

This set of twenty enzymes knows what amino acid to fasten to one end of a transfer-RNA (tRNA) molecule, based on the triplet codon it reads at the other end. It's like translating English to Chinese. A coded message is complex enough, but the ability to translate a language into another language bears the hallmarks of intelligent design. 6

Most cells use twenty aaRS enzymes, one for each amino acid. Each of these proteins recognizes a specific amino acid and the specific anticodons it binds to within the code. They then bind amino acids to the tRNA that bears the corresponding anticodon. 

Thus, instead of the code reducing to a simple set of stereochemical affinities, biochemists have found a functionally interdependent system of highly specific molecules, including mRNA, a suite of tRNAs, and twenty specific aaRS enzymes, each of which is itself constructed from information stored on the very DNA strands that the system as a whole decodes. 

Attempts to explain one part of the integrated complexity of the gene-expression system, namely the genetic code, by reference to simple chemical affinities lead not to simple rules of chemical attraction, but instead to an integrated system of multiple large molecular components. While this information-transmitting system exploits (i.e., uses) chemistry, it is not reducible to direct chemical affinities between codons or anticodons and their cognate amino acids.

The DRT model and the sequencing problem

One further aspect of Yarus’s work needs clarification and critique. One of the longest-standing and most vexing problems in origin-of-life research is known as the sequencing problem, the problem of explaining the origin of the specifically-arranged sequences of nucleotide bases that provide the genetic information or instructions for building proteins.
Yet, in addition to its other deficiencies it is important to point out that Yarus et al. do not solve the sequencing problem, although they do claim to address it indirectly. Instead, Yarus et al. attempt to explain the origin of the genetic code—or more precisely, one aspect of the translation system, the origin of the associations between certain RNA triplets and their cognate amino acids. 

Yarus et al. want to demonstrate that particular RNA triplets show chemical affinities to particular amino acids (their cognates in the present-day code). They try to do this by showing that in some RNA strands, individual triplets and their cognate amino acids bind preferentially to each other. They then envision that such affinities initially provided a direct (stereochemical) template for amino acids during protein assembly.

Since Yarus et al. think that stereochemical affinities originally caused protein synthesis to occur by direct templating, they also seem to think that solving the problem of the origin of the code would also simultaneously solve the problem of sequencing. But this does not follow. Even if we assume that Yarus et al. have succeeded in establishing a stereochemical basis for the associations between RNA triplets and amino acids in the present-day code (which they have not done; see above), they would not have solved the problem of sequencing.

The sequencing problem requires that long RNA strands would need to contain triplets already arranged to bind their cognate amino acids in the precise order necessary to assemble functional proteins. Yarus et al. analyzed RNA strands enriched in specific code-relevant triplets, and claim to have found that these strands show a chemical affinity with their cognate amino acids. But they did not find RNA strands with a properly sequenced series of triplets, each forming an association with a code-relevant amino acid as the DRT model would require, and arranged in the kind of order required to make functional proteins. To synthesize proteins by direct templating (even assuming the existence of all necessary affinities), the RNA template must have many properly sequenced triplets, just as we find in the actual messenger RNA transcripts.

sábado, 27 de junho de 2015

Why are there no peer-reviewed scientific papers on intelligent design in mainstream scientific journals?

The reasoning here is that if creation or intelligent design were scientific, then it would be included in peer-reviewed journals. Since it does not appear in peer-reviewed journals, then it must not be scientific. The problem with this reasoning is the circular process by which papers are accepted for inclusion in such journals. The scientists in authoritative positions have established their own preconceived definition for science. “To be scientific in our era is to search for solely natural explanations” (Hewlett and Peters, 2006, p. 75, emp. added). Thus, if a paper even hints at something other than a “natural” explanation, it is rejected as “unscientific” regardless of the facts or research presented in the paper. Creationists’ papers are not allowed in peer-reviewed journals, not because they are poorly written or documented, but because they do not offer “solely natural explanations.”

http://elshamah.heavenforum.org/t1700-why-isn-t-intelligent-design-found-published-in-peer-reviewed-science-journals#3540

Explaining the origin of the triplet code, its translation, and the machinery to make proteins

In order to explain the origin of protein manufacture, the origin of the hardware must be accounted for: the numerous enzymes and proteins involved, especially the enormously complex RNA polymerases, transcription factors, repair enzymes, and the ribosome, as well as the DNA double helix, mRNAs, and the amino acids themselves. But above all, the origin of the code itself must be explained, and how the translation of the triplet anticodon to amino acids, and its assignment, arose. There is no physical affinity between the anticodon and the amino acids. What must be explained is the arrangement of the codons in the standard codon table, which is highly non-random, redundant, and optimal, and serves to translate the information into the amino acid sequences that make proteins, and the origin of the assignment of the 64 triplet codons to the 20 amino acids; that is, the origin of its translation. The origin of an alphabet through the triplet codons is one thing, but on top of that it has to be translated into another "alphabet" constituted by the 20 amino acids. It is like explaining the origin of the capability to translate English into Chinese. We have to constitute the English and Chinese languages and symbols first, in order to know their equivalence. That is a mental process.

On top of that, the machinery that carries out the process, the hardware, also has to be explained. When humans translate English into Chinese, we recognise the English word, and the translator knows the equivalent Chinese symbol and writes it down to form the sentence. In the cell, aminoacyl-tRNA synthetases, special enzymes each assigned to a specific amino acid, recognise the anticodon of the tRNA and attach the equivalent amino acid to it; the ribosome afterwards bonds that amino acid to the next one in the chain. How could random chemical reactions have produced this recognition? Some theories try to explain the mechanism, but they all remain unsatisfactory. Obviously. Furthermore, aminoacyl-tRNA synthetases are complex enzymes. For what reason would they arise, if their function could only be employed after the whole translation process was set in place, with a fully functional ribosome able to do its job? Remember the catch-22: they are themselves made through the very process in question. Why is it not rational to conclude that the code itself, the software, as well as the hardware, are best explained by the creative act of a highly intelligent creator, rather than by random chemical affinities and reactions? Question: what good would the ribosome be without tRNAs? Without amino acids, which are the product of enormously complex chemical processes and pathways?

 What good would the machinery be if the code were not established, and neither the assignment of each codon to its respective amino acid? Did the software and the hardware not have to be in place at the same time? Were all the parts not functional only when fully developed, interlocked, set up, and tuned to do their job with precision, far better and more advanced than a man-made motor? And even if, let us say, the whole thing were fully working and in place, what good would it be without all the other required parts: the DNA double helix, its compaction through histones, chromatin, and chromosomes, and its highly complex mechanism of information extraction and transcription into mRNA? Did the whole process, that is, INITIATION OF TRANSCRIPTION, CAPPING, ELONGATION, SPLICING, CLEAVAGE, POLYADENYLATION AND TERMINATION, EXPORT FROM THE NUCLEUS TO THE CYTOSOL, INITIATION OF PROTEIN SYNTHESIS (TRANSLATION), COMPLETION OF PROTEIN SYNTHESIS AND PROTEIN FOLDING, with its respective machinery, not have to be all in place? Does that not constitute an interdependent and irreducibly complex system?

 http://elshamah.heavenforum.org/t2057-origin-of-translation-of-the-4-nucleic-acid-bases-and-the-20-amino-acids-and-the-universal-assignment-of-codons-to-amino-acids

sexta-feira, 26 de junho de 2015

DNA repair


Rod Carty: DNA repair mechanisms make no sense under an evolutionary presupposition. Error correction requires error detection, and that requires the detection process to be able to compare the DNA as it is to the way it ought to be.

Kunkel, T.A., DNA Replication Fidelity, J. Biological Chemistry 279:16895–16898, 23 April 2004. 

This machinery keeps the error rate down to less than one error per 100 million letters 
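To put that fidelity in perspective, here is a rough illustration; the genome size is an assumed round figure for a human cell, not a number from the cited paper.

```python
error_rate = 1e-8     # quoted: fewer than one error per 100 million letters
genome_size = 3.2e9   # assumed: approximate human genome size in base pairs

# Upper bound on replication errors per full copy of the genome.
errors_per_replication = error_rate * genome_size
print(errors_per_replication)  # an upper bound of roughly 32 errors per copy
```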

Maintaining the genetic stability that an organism needs for its survival requires not only an extremely accurate mechanism for replicating DNA, but also mechanisms for repairing the many accidental lesions that occur continually in DNA. Most such spontaneous changes in DNA are temporary because they are immediately corrected by a set of processes that are collectively called DNA repair. Of the thousands of random changes created every day in the DNA of a human cell by heat, metabolic accidents, radiation of various sorts, and exposure to substances in the environment, only a few accumulate as mutations in the DNA sequence. For example, we now know that fewer than one in 1000 accidental base changes in DNA results in a permanent mutation; the rest are eliminated with remarkable efficiency by DNA repair. The importance of DNA repair is evident from the large investment that cells make in DNA repair enzymes. For example, analysis of the genomes of bacteria and yeasts has revealed that several percent of the coding capacity of these organisms is devoted solely to DNA repair functions.
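Combining the figures quoted in this post (roughly 60,000 lesions per mammalian cell per day, a figure cited further down, and fewer than one in 1,000 lesions escaping repair) gives a crude upper bound on fixed mutations:

```python
lesions_per_day = 60_000  # quoted figure for a mammalian cell
escape_rate = 1 / 1_000   # quoted upper bound on lesions that persist

mutation_bound = lesions_per_day * escape_rate
print(mutation_bound)  # at most ~60 permanent changes per cell per day
```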

Without DNA repair, spontaneous DNA damage would rapidly change DNA sequences

Although DNA is a highly stable material, as required for the storage of genetic information, it is a complex organic molecule that is susceptible, even under normal cell conditions, to spontaneous changes that would lead to mutations if left unrepaired.

DNA damage is an alteration in the chemical structure of DNA, such as a break in a strand of DNA, a base missing from the backbone of DNA, or a chemically changed base. 15
Naturally occurring DNA damages arise more than 60,000 times per day per mammalian cell. 

DNA damage appears to be a fundamental problem for life. DNA damages are a major primary cause of cancer. DNA damages give rise to mutations and epimutations that, by a process of natural selection, can cause progression to cancer. 16

Different pathways to repair DNA

DNA repair mechanisms fall into 2 categories 

– Repair of damaged bases
– Repair of incorrectly basepaired bases during replication

Cells have multiple pathways to repair their DNA using different enzymes that act upon different kinds of lesions.

At least four excision repair pathways exist to repair single stranded DNA damage:

Nucleotide excision repair (NER)
Base excision repair (BER)
DNA mismatch repair (MMR)
Repair through alkyltransferase-like proteins (ATLs)

In most cases, DNA repair is a multi-step process

– 1. An irregularity in DNA structure is detected
– 2. The abnormal DNA is removed
– 3. Normal DNA is synthesized

DNA bases are also occasionally damaged by an encounter with reactive metabolites produced in the cell (including reactive forms of oxygen) or by exposure to chemicals in the environment. Likewise, ultraviolet radiation from the sun can produce a covalent linkage between two adjacent pyrimidine bases in DNA to form, for example, thymine dimers. This type of damage occurs in the DNA of cells exposed to ultraviolet radiation (as in sunlight). A similar dimer will form between any two neighboring pyrimidine bases (C or T residues) in DNA. (see below)




If left uncorrected when the DNA is replicated, most of these changes would be expected to lead either to the deletion of one or more base pairs or to a base-pair substitution in the daughter DNA chain. ( see below ) The mutations would then be propagated throughout subsequent cell generations. Such a high rate of random changes in the DNA sequence would have disastrous consequences for an organism


It is evident that the repair mechanism is essential for the cell to survive. It could not have evolved after life arose, but must have come into existence before. The mechanism is highly complex and elaborate; as a consequence, the design inference is justified and seems to be the best way to explain its existence.


The DNA double helix is readily repaired

The double-helical structure of DNA is ideally suited for repair because it carries two separate copies of all the genetic information, one in each of its two strands. Thus, when one strand is damaged, the complementary strand retains an intact copy of the same information, and this copy is generally used to restore the correct nucleotide sequences to the damaged strand. An indication of the importance of a double-stranded helix to the safe storage of genetic information is that all cells use it; only a few small viruses use single-stranded DNA or RNA as their genetic material. The types of repair processes described in this section cannot operate on such nucleic acids, and once damaged, the chance of a permanent nucleotide change occurring in these single-stranded genomes of viruses is thus very high. It seems that only organisms with tiny genomes (and therefore tiny targets for DNA damage) can afford to encode their genetic information in any molecule other than a DNA double helix. Below are shown two of the most common pathways. In both, the damage is excised, the original DNA sequence is restored by a DNA polymerase that uses the undamaged strand as its template, and a remaining break in the double helix is sealed by DNA ligase.



DNA ligase.

The reaction catalyzed by DNA ligase. This enzyme seals a broken phosphodiester bond. As shown, DNA ligase uses a molecule of ATP to activate the 5' end at the nick (step 1) before forming the new bond (step 2). In this way, the energetically unfavorable nick-sealing reaction is driven by being coupled to the energetically favorable process of ATP hydrolysis.


The two main pathways differ in the way in which they remove the damage from DNA. The first pathway is called

Base excision repair (BER) 9


It involves a battery of enzymes called DNA glycosylases, each of which can recognize a specific type of altered base in DNA and catalyze its hydrolytic removal. There are at least six types of these enzymes, including those that remove deaminated Cs, deaminated As, different types of alkylated or oxidized bases, bases with opened rings, and bases in which a carbon-carbon double bond has been accidentally converted to a carbon-carbon single bond.


How is an altered base detected within the context of the double helix? A key step is an enzyme-mediated "flipping-out" of the altered nucleotide from the helix, which allows the DNA glycosylase to probe all faces of the base for damage (see above image). It is thought that these enzymes travel along DNA using base-flipping to evaluate the status of each base. Once an enzyme finds the damaged base that it recognizes, it removes the base from its sugar. The "missing tooth" created by DNA glycosylase action is recognized by an enzyme called AP endonuclease (AP for apurinic or apyrimidinic, endo to signify that the nuclease cleaves within the polynucleotide chain), which cuts the phosphodiester backbone, after which the damage is removed and the resulting gap repaired (see figure below). Depurination, which is by far the most frequent type of damage suffered by DNA, also leaves a deoxyribose sugar with a missing base. Depurinations are directly repaired beginning with AP endonuclease.

While the BER pathway can recognize specific non-bulky lesions in DNA, it can correct only damaged bases that are removed by specific glycosylases. Similarly, the MMR pathway only targets mismatched Watson-Crick base pairs. 2

Molecular lesion

A molecular lesion or point lesion is damage to the structure of a biological molecule such as DNA, enzymes, or proteins that results in reduction or absence of normal function or, in rare cases, the gain of a new function. Lesions in DNA consist of breaks and other changes in the chemical structure of the helix (see types of DNA lesions), while lesions in proteins consist of both broken bonds and improper folding of the amino acid chain. 6

DNA-N-glycosylases

Base excision repair (BER) involves a category of enzymes known as DNA-N-glycosylases. These enzymes can recognize a single damaged base and cleave the bond between it and the sugar in the DNA. The pathway then removes the damaged base, excises several nucleotides around it, and replaces them with new ones, with DNA polymerase adding to the 3' end and ligase sealing the nick at the 5' end.

DNA glycosylases  are a family of enzymes involved in base excision repair, classified under EC number EC 3.2.2. Base excision repair is the mechanism by which damaged bases in DNA are removed and replaced. DNA glycosylases catalyze the first step of this process. They remove the damaged nitrogenous base while leaving the sugar-phosphate backbone intact, creating an apurinic/apyrimidinic site, commonly referred to as an AP site. This is accomplished by flipping the damaged base out of the double helix followed by cleavage of the N-glycosidic bond. Glycosylases were first discovered in bacteria, and have since been found in all kingdoms of life. 8
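The steps just described (base-flipping scan, glycosylase excision of the base, AP endonuclease cleavage, polymerase fill-in, ligation) can be caricatured in a short Python sketch. This is a toy string model, not real enzymology; the function names and the 'U' lesion symbol (a deaminated C) are invented for illustration:

```python
# Toy model of base excision repair (BER) on a string "duplex".
# 'U' stands for a deaminated C, the kind of lesion a uracil-DNA
# glycosylase removes. All names here are illustrative only.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def glycosylase_scan(strand, lesion="U"):
    """Base-flipping scan: return the index of the first damaged base,
    or None if the strand is clean."""
    for i, base in enumerate(strand):
        if base == lesion:
            return i
    return None

def base_excision_repair(strand, template):
    """Excise the lesion (glycosylase + AP endonuclease), then let a
    'polymerase' fill the gap by reading the undamaged template strand."""
    i = glycosylase_scan(strand)
    if i is None:
        return strand                  # nothing to repair
    bases = list(strand)
    bases[i] = PAIR[template[i]]       # polymerase copies from the template
    return "".join(bases)              # ligase seals the remaining nick

damaged  = "ATGUCGT"                   # C deaminated to U at index 3
template = "TACGGCA"                   # intact complementary strand
print(base_excision_repair(damaged, template))  # ATGCCGT
```

In the cell, the backbone cut, gap-filling, and resealing are distinct chemical steps carried out by separate enzymes; the sketch only conveys the flow of information from the intact strand to the repaired one.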

DNA's automatic error-correction utilities are enough to stagger the imagination. There are dozens of repair mechanisms to shield our genetic code from damage; one of them was portrayed in Nature in terms that should inspire awe. 10

How do DNA-repair enzymes find aberrant nucleotides among the myriad of normal ones? 
One enzyme has been caught in the act of checking for damage, providing clues to its quality-control process.

From Nature's article :
Structure of a repair enzyme interrogating undamaged DNA elucidates recognition of damaged DNA 11

How DNA repair proteins distinguish between the rare sites of damage and the vast expanse of normal DNA is poorly understood. Recognizing the mutagenic lesion 8-oxoguanine (oxoG) represents an especially formidable challenge, because this oxidized nucleobase differs by only two atoms from its normal counterpart, guanine (G).  The X-ray structure of the trapped complex features a target G nucleobase extruded from the DNA helix but denied insertion into the lesion recognition pocket of the enzyme. Free energy difference calculations show that both attractive and repulsive interactions have an important role in the preferential binding of oxoG compared with G to the active site. The structure reveals a remarkably effective gate-keeping strategy for lesion discrimination and suggests a mechanism for oxoG insertion into the hOGG1 active site.

Of the four bases in DNA (C, G, A, and T), cytosine (C) is always supposed to pair with guanine (G), and adenine (A) is always supposed to pair with thymine (T). The enzyme studied by Banerjee et al. in Nature is one of a host of molecular machines called BER glycosylases; this one is called human oxoG glycosylase repair enzyme (hOGG1), and it is specialized for finding a particular type of error: an oxidized G base (guanine). Oxidation damage can be caused by exposure to ionizing radiation (like sunburn) or free radicals roaming around in the cell nucleus. The normal G becomes oxoG, making it very slightly out of shape. There might be one in a million of these on a DNA strand. While it seems like a minor typo, the resulting copying error can cause the wrong amino acid to be inserted into a protein, with disastrous results, such as colorectal cancer. 12

The machine latches onto the DNA double helix and works its way down the strand, feeling every base on the way.  As it proceeds, it kinks the DNA strand into a sharp angle.  It is built to ignore the T and A bases, but whenever it feels a C, it knows there is supposed to be a G attached.  The machine has precision contact points for C and G.  When the C engages, the base paired to it is flipped up out of the helix into a slot inside the enzyme that is finely crafted to mate with a pure, clean G.  If all is well, it flips the G back into the DNA helix and moves on.  If the base is an oxoG, however, that base gets flipped into another slot further inside, where powerful forces yank the errant base out of the strand so that other machines can insert the correct one.
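As a caricature of this scanning logic, here is a toy Python sketch. The '8' symbol for oxoG and the function name are invented; real hOGG1 discriminates by shape and binding energetics, not by symbol comparison:

```python
# Toy sketch of hOGG1-style proofreading: walk the duplex, slip past
# A-T pairs, and at every C flip the partner base out of the "helix"
# to test it against the clean-G recognition pocket. '8' marks oxoG.

def hogg1_scan(strand, partner):
    """Return the positions where the base opposite a C fails the
    G-recognition pocket and would be routed to the excision pocket."""
    flagged = []
    for i, (base, opposite) in enumerate(zip(strand, partner)):
        if base != "C":
            continue              # the enzyme ignores T and A bases
        extruded = opposite       # flip the partner base out of the helix
        if extruded != "G":       # only a clean G fits the first pocket
            flagged.append(i)     # oxoG: send on to the damage pocket
    return flagged

strand  = "ACTCGCA"
partner = "TGA8CGT"               # oxoG ('8') opposite the C at index 3
print(hogg1_scan(strand, partner))  # [3]
```

Note how every C triggers an inspection while A and T positions are skipped, mirroring the "train that stops only at certain locations" description below.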

Now this is all wonderful stuff so far, but as with many things in living cells, the true wonder is in the details.  The thermodynamic energy differences between G and oxoG are extremely slight – oxoG contains only one extra atom of oxygen – and yet this machine is able to discriminate between them to high levels of accuracy.

The author, David, says in the Nature article : 

Structural biology:  DNA search and rescue 

DNA-repair enzymes amaze us with their ability to search through vast tracts of DNA to find subtle anomalies in the structure. The human repair enzyme 8-oxoguanine glycosylase (hOGG1) is particularly impressive in this regard because it efficiently removes 8-oxoguanine (oxoG), a damaged guanine (G) base containing an extra oxygen atom, and ignores undamaged bases.

The team led by Anirban Banerjee of Harvard, using a clever new stop-action method of imaging, caught this little enzyme in the act of binding to a bad guanine, helping scientists visualize how the machinery works. Some other amazing details are mentioned about this molecular proofreader. It checks every C-G pair, but slips right past the A-T pairs.  The enzyme, “much like a train that stops only at certain locations,” pauses at each C and, better than any railcar conductor inspecting each ticket, flips up the G to validate it.  Unless it conforms to the slot perfectly – even though G and oxoG differ in their match by only one hydrogen bond – it is ejected like a freeloader in a Pullman car and tossed out into the desert.  David elaborates:

Calculations of differences in free energy indicate that both favourable and unfavourable interactions lead to preferential binding of oxoG over G in the oxoG-recognition pocket, and of G over oxoG in the alternative site.  This structure [the image resolved by the scientific team] captures an intermediate that forms in the process of finding oxoG, and illustrates that the damaged base must pass through a series of ‘gates’, or checkpoints, within the enzyme; only oxoG satisfies the requirements for admission to the damage-specific pocket, where it will be clipped from the DNA.  Other bases (C, A and T) may be rejected outright without extrusion from the helix because hOGG1 scrutinizes both bases in each pair, and only bases opposite a C will be examined more closely.

 

Natural selection cannot act without accurate replication, yet the protein machinery for the level of accuracy required is itself built by the very genetic code it is designed to protect. That's a catch-22. It would have been challenging enough to explain accurate transcription and translation alone by natural means, but an unprotected genome, exposed to UV radiation, would quickly have been destroyed through the accumulation of errors. So accurate replication and proofreading are required for the origin of life. How on earth could proofreading enzymes emerge, especially with this degree of fidelity, when they depend on the very information that they are designed to protect? Think about it: this is one more prima facie example of a chicken-and-egg situation. What is the alternative explanation to design? Proofreading DNA by chance? And a complex suite of translation machinery without a designer?

I enjoy learning about the wonder of these incredible mechanisms. If the apostle Paul could understand that creation demands a Creator, as he wrote in Romans chapter one, 18 how much more can we today, with all the revelations of cell biology and molecular machines?

Since the editing machinery itself requires proper proofreading and editing during its manufacturing, how would the information for the machinery be transmitted accurately before the machinery was in place and working properly? Lest it be argued that the accuracy could be achieved stepwise through selection, note that a high degree of accuracy is needed to prevent ‘error catastrophe’ in the first place—from the accumulation of ‘noise’ in the form of junk proteins specified by the damaged DNA. 18



Depending on the species, this repair system can eliminate abnormal bases such as uracil, thymine dimers, 3-methyladenine, and 7-methylguanine. 14

Since many mutations are deleterious, DNA repair systems are vital to the survival of all organisms.

Living cells contain several DNA repair systems that can fix different types of DNA alterations.



Nucleotide excision repair (NER)

Nucleotide excision repair is a DNA repair mechanism. DNA damage occurs constantly because of chemicals (e.g. intercalating agents), radiation, and other mutagens.


Nucleotide excision repair (NER) is a highly conserved DNA repair mechanism. NER systems recognize the damaged DNA strand, cleave it on both sides of the lesion, remove and newly synthesize the fragment. UvrB is a central component of the bacterial NER system participating in damage recognition, strand excision and repair synthesis. We have solved the crystal structure of UvrB in the apo and the ATP-bound forms. UvrB contains two domains related in structure to helicases, and two additional domains unique to repair proteins. The structure contains all elements of an intact helicase, and is evidence that UvrB utilizes ATP hydrolysis to move along the DNA to probe for damage. The location of conserved residues and structural comparisons allow us to predict the path of the DNA and suggest that the tight preincision complex of UvrB and the damaged DNA is formed by insertion of a flexible β-hairpin between the two DNA strands. 3

DNA constantly requires repair due to damage that can occur to bases from a vast variety of sources including chemicals but also ultraviolet (UV) light from the sun. Nucleotide excision repair (NER) is a particularly important mechanism by which the cell can prevent unwanted mutations by removing the vast majority of UV-induced DNA damage (mostly in the form of thymine dimers and 6-4-photoproducts). The importance of this repair mechanism is evidenced by the severe human diseases that result from in-born genetic mutations of NER proteins including Xeroderma pigmentosum and Cockayne's syndrome. While the base excision repair machinery can recognize specific lesions in the DNA and can correct only damaged bases that can be removed by a specific glycosylase, the nucleotide excision repair enzymes recognize bulky distortions in the shape of the DNA double helix. Recognition of these distortions leads to the removal of a short single-stranded DNA segment that includes the lesion, creating a single-strand gap in the DNA, which is subsequently filled in by DNA polymerase, which uses the undamaged strand as a template. NER can be divided into two subpathways (Global genomic NER and Transcription coupled NER) that differ only in their recognition of helix-distorting DNA damage. 4

Nucleotide excision repair (NER) is a particularly important excision mechanism that removes DNA damage induced by ultraviolet light (UV). 2 UV DNA damage results in bulky DNA adducts; these adducts are mostly thymine dimers and 6,4-photoproducts. Recognition of the damage leads to removal of a short single-stranded DNA segment that contains the lesion. The undamaged single-stranded DNA remains, and DNA polymerase uses it as a template to synthesize a short complementary sequence. Final ligation to complete NER and form double-stranded DNA is carried out by DNA ligase. NER can be divided into two subpathways: global genomic NER (GG-NER) and transcription-coupled NER (TC-NER). The two subpathways differ in how they recognize DNA damage, but they share the same process for lesion incision, repair, and ligation.
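These steps (recognize the bulky lesion, cut on both sides of it, remove a short single-stranded patch, resynthesize from the undamaged strand, ligate) can be sketched as string operations. A toy model with invented names; '=' stands in for a helix-distorting lesion such as a thymine dimer:

```python
# Toy model of nucleotide excision repair (NER). '=' marks a bulky,
# helix-distorting lesion; flank sizes and names are invented.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def nucleotide_excision_repair(strand, template, lesion="=", flank=2):
    """Cut on both sides of the lesion, remove the short single-stranded
    segment, and fill the gap by copying the undamaged template strand."""
    i = strand.find(lesion)
    if i == -1:
        return strand                                    # no lesion found
    start = max(0, i - flank)                            # incision 5' of lesion
    end = min(len(strand), i + len(lesion) + flank)      # incision 3' of lesion
    patch = "".join(PAIR[b] for b in template[start:end])  # polymerase fill-in
    return strand[:start] + patch + strand[end:]         # ligase seals the ends

damaged  = "ACG=CGTA"     # lesion at index 3 (originally a T)
template = "TGCAGCAT"     # intact complementary strand
print(nucleotide_excision_repair(damaged, template))  # ACGTCGTA
```

Unlike the BER caricature, which swaps a single base, this sketch removes a whole segment spanning the lesion, mirroring the patch-excision character that distinguishes NER.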

The importance of NER is evidenced by the severe human diseases that result from in-born genetic mutations of NER proteins. Xeroderma pigmentosum and Cockayne's syndrome are two examples of NER associated diseases.

Maintaining genomic integrity is essential for living organisms. NER is a major pathway allowing the removal of lesions which would otherwise accumulate and endanger the health of the affected organism.  5



Nucleotide excision repair (NER) is a mechanism to recognize and repair bulky DNA damage caused by compounds, environmental carcinogens, and exposure to UV-light. In humans hereditary defects in the NER pathway are linked to at least three diseases: xeroderma pigmentosum (XP), Cockayne syndrome (CS), and trichothiodystrophy (TTD). The repair of damaged DNA involves at least 30 polypeptides within two different sub-pathways of NER known as transcription-coupled repair (TCR-NER) and global genome repair (GGR-NER). TCR refers to the expedited repair of lesions located in the actively transcribed strand of genes by RNA polymerase II (RNAP II). In GGR-NER the first step of damage recognition involves XPC-hHR23B complex together with XPE complex (in prokaryotes, uvrAB complex). The following steps of GGR-NER and TCR-NER are similar.







1) http://www.genome.jp/dbget-bin/www_bget?ko03420
2) http://en.wikipedia.org/wiki/Nucleotide_excision_repair
3) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1171753/pdf/006899.pdf
4) http://bioisolutions.blogspot.com.br/2008/04/ner-pathway.html
5) http://intelligent-sequences.blogspot.com.br/2008_06_01_archive.html
6) https://en.wikipedia.org/wiki/Molecular_lesion
8) https://en.wikipedia.org/wiki/DNA_glycosylase
9) http://www.csun.edu/~cmalone/pdf360/Ch15-2repairtanspose.pdf
10) http://www.nature.com/nature/journal/v434/n7033/full/nature03458.html
11) http://www.nature.com/nature/journal/v434/n7033/full/nature03458.html
12) http://creationsafaris.com/crev200503.htm
13) http://fire.biol.wwu.edu/trent/trent/DNAsearchrescue.pdf
14) http://www.genome.jp/kegg-bin/show_pathway?ko03410
15) https://en.wikipedia.org/wiki/DNA_damage_(naturally_occurring)
16) http://www.intechopen.com/books/new-research-directions-in-dna-repair/dna-damage-dna-repair-and-cancer
17) http://creation.com/dna-best-information-storage