Transdisciplinary, Psychology and Primary Theories of Origin

Chris Montoya, David Montoya and Graeme Mackay

Science does not exist without the scientist, just as religion does not exist without the believer. Both endeavors are human constructions, and as such both ways of knowing are inextricably bound in a quagmire of self-interest, emotion and faith. A better understanding of the psychology behind how and why humans are driven to create primary theories of existence, ostensibly apart from organized religion, is critical to future scientific progress concerning origins. In this paper, primary views concerning ontogeny, phylogeny and cosmology are examined. It is argued that building primary theories with an eye on the transdisciplinary perspective highlights distortions caused by subconscious mechanisms. Finally, in contrast to popular thought, bringing subconscious distortions into conscious appreciation demonstrates that science and religion are currently in alignment with respect to these three primary views of origin.

Until someone builds a time machine, mankind will never know how things really began. With this in mind, primary theories of everything, as creative endeavors, may be seen as symptoms of "mental illness," a modern form of ancient Greek mythology in which Sisyphus was eternally tasked by the gods with an impossible endeavor. 1 At what point does a theory become a driving force to create justification for psychopathology? Manifestations of the pathology can range from the neurotic to psychotic breaks from reality as the stricken repress, distort, deny and rationalize in a veiled attempt to make social reality intelligible from their egocentric perspectives. 2 The more intelligent the patient, the more rational the theory appears and the more the contagion spreads. 3 Many exceptionally creative scientists and artists have had these predispositions. In the early 19th century Hegel labeled all God-reverencing scientists as irrational. Real or imagined threats such as these cause the mind to respond with fear. Fear in the predisposed mind releases defense mechanisms to protect the ego, but these same mechanisms also distort reality invisibly, in light of the "creative illness." 4 In rational-emotive therapy individuals become aware of subconscious distortions. This awareness will facilitate and delimit areas of agreement between scientific and Judeo-Christian religious worldviews at three levels of transdisciplinary harmonization: a soft fusion at the level of ontogeny, a moderate fusion at the level of phylogeny and a hard fusion at the level of cosmology. Further, armed with the knowledge of "creative scientific illnesses," we will, from a different parallax, re-evaluate Darwin's theory of evolution, exploring the eclectic nature of our reality in light of our apparent human cognitive and emotional limitations.



Theories of Origin - A Soft Fusion (Ontogeny)

Theories of human ontogeny consider both the biological-genetic and the socio-cultural aspects of development. With some exceptions, religion appears neutral toward, or in agreement with, these various aspects. We will use the "fact" of when human life begins, an apparent exception between the perspectives, to highlight subconscious distortions that hinder scientific and religious harmonization. In the current worldview, pro-life vs. pro-choice is seen as a battle between religion and science. Pro-lifers are represented as irrational and religious, claiming human life begins at conception, whereas pro-choicers are represented as scientific, rational thinkers choosing not to suppose when human life begins. Is this an accurate representation of the world, or is it merely a psychological distortion? Viewing the conflict from statistics, another discipline, may bring reality into focus. The religious belief as to when human life begins is limited to a restricted number of "non-scientific cognitive frames," and although pro-lifers claim life begins at conception, the exact time is not Biblically defined. Rational scientists, on the other hand, whose data appear invariant across a number of different "cognitive frames," are also blindsided. Contrary to the current worldview, university-level Lifespan Development textbooks across North America define the human lifespan as beginning at conception and ending at death. 5 This appears, at least on the surface, to be a simple, rational and adequate definition. From conception, lifespan data are then collected, collated and tallied. This methodology appears straightforward and may seem to be a proper way for the developmental sciences to operate. Ego defense mechanisms, however, create unconscious pitfalls that cause developmental scientists from the best universities to misreport the leading cause of death in Canada. In 2001, heart disease caused 74,824 deaths, followed closely by cancer at 65,205 deaths.
Abortion occurs post-conception and claimed over 112,000 Canadian lives in 2001. 6 The subconscious distortion in this case may take the form of not wanting to be stigmatized for saying that North Americans kill more humans than are taken naturally, or of fearing the label "irrational" for merely appearing to support a religious perspective. Statistics such as "4 out of 10 women in North America have had an abortion" are assumed to come from the religious right. Isn't this correct? Or are data simply data? With some distortions revealed, a new worldview emerges. Developmental science and religion are ostensibly reporting the same findings: life begins at conception, and any other line drawn after we receive our genetic code is arbitrary. Yet the current worldview does not reflect this. Even today, if you ask well-read people about abortion and when human life begins, the prominent reply one hears is Roe vs. Wade. Yet Roe vs. Wade turned on a privacy issue, not an ontogeny issue. Once we as intellectuals can see past the ego defense distortions, a unity of conscious thought leads us to a general confluence where a soft transdisciplinary fusion is possible. Soft fusion is basic face validity: a nominal agreement, "metaphorically," that life begins at conception.

Theories of Origin - A Moderate Fusion (Phylogeny)

Theories of phylogeny span from God created the species (Creationism), to something created the species (Intelligent Design), to Darwin (Evolution). The current worldview concerning Charles Darwin is, we feel, best stated in the words of the Oxford University Press: "Darwin…argued for a material, not divine, origin of the species." 7 As recently as June 2005 the Harris Poll released information stating that 55% of North Americans believe that Creationism, Intelligent Design Theory and Darwinian Evolution should all be taught together in public schools under the rubric of science. 8 Have scientists and theologians accurately defined the various phylogenetic positions? Defense mechanisms at the personal level and groupthink at the psychosocial, transpersonal level have again caused considerable distortion.

Specifically, in the Origin of Species, in the chapter titled "Recapitulation and Conclusion," Darwin clearly writes, "Therefore I would infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed by the Creator…To my mind it accords better with what we know of the laws impressed on matter by the Creator, that the production and extinction of the past and present inhabitants of the world should have been due to secondary causes." Darwin concluded in the last paragraph of the Origin of Species, "There is grandeur in this view of life with its several powers, having been originally breathed by the Creator into a few forms or into one..." "Breathed life into," of course, given Darwin's Christian roots, echoes the God of Genesis, who reportedly did just that when creating humans; Darwin takes poetic liberties as to the style and timing. Darwin further writes in his chapter "Difficulties on Theory," "Have we any right to assume that the Creator works by intellectual powers like those of man?"

Darwin was a deist creationist at the writing of his theory of evolution. In the current worldview, evolution stands on the shoulders of Darwin. Therefore, what is currently taught around the world is intelligent design. High school, college and university instructors should feel free, then, to teach "Traditional Darwinian Evolution" not as it has been misconstrued through fear-based subconscious mechanisms and groupthink, as Godless, but as the intelligent design science it was written to be. Twelve years after the publication of the Origin of Species, Darwin published The Descent of Man. It was in this latter work that Darwin again reiterated, "the idea of a Creator and Ruler of the Universe had been answered in the affirmative." 9 Yet again the current worldview does not reflect this.

Although both Darwinian Evolution and Christianity demonstrate convergent validity, in that both narratives appear under one rubric of intelligent design, more is needed for a statistically based moderate transdisciplinary fusion. Given 15 items in any storyline, there would be over 1 trillion ways of ordering them into your creation story: fifteen ways to choose the first element, times fourteen ways to choose the second element, times thirteen ways to choose the third element, and so on. If there were only one correct order, the odds would be over a trillion to one against randomly selecting it. The Bible narrative and scientific narrative demonstrate an excellent 4th-dimensional alignment: One, before the Big Bang God set things up for explosion/expansion. 10 Two, God says let there be light, and bang! Three, light escapes from the event horizon. Four, the heavenly firmament forms, i.e., the disk of the Milky Way. 11 Five, the Earth cools and water appears. Six, life appears first in water. Seven, life appears next on land. Eight, plant life appears first. 12 Nine, animal life appears next. Ten, birds appear next. Eleven, mammals appear next. Twelve, man appears last. 13 Thirteen, man was naked first, then was clothed in animal skins. Fourteen, man was first a farmer. Fifteen, man was then a keeper of animals. 14 There is a definite and ordinal transgression of boundaries and knowledge coherence. 15 Once ego defense mechanisms and groupthink are exposed by psychological insight, scientists can envision the invisible: a general overarching confluence between phylogeny and a Creation narrative, forming a moderate transdisciplinary fusion.
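
The ordering arithmetic above is easy to check: with 15 distinct narrative elements there are 15! possible orderings. A minimal Python sketch:

```python
import math

# Number of distinct orderings of 15 narrative elements: 15!
orderings = math.factorial(15)
print(orderings)   # 1,307,674,368,000 -- roughly 1.3 trillion

# Odds of randomly picking the single "correct" order
odds = 1 / orderings
```

This confirms the text's "over 1 trillion" figure (15! is about 1.3 trillion).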

Theories of Origin (Cosmology)

Theories of cosmology ostensibly span from Einstein and the Big Bang to a steady-state universe. Hypotheses concerning quantum foam, string theory and various other theories of everything are being published. On the street, however, the worldview revolves around the Big Bang, the cosmological constant and whether there will be a big crunch or continued expansion. Since the Big Bang Theory became dominant and displaced the steady-state theory of the universe, most cosmologists believe that if there was a beginning, then there had to be a beginner, starter or Creator. The secularists apparently had lost the battle over God in cosmology because they could not get around the mathematics of Einstein. Math, like humor, it seems, is a way around unconscious defense mechanisms. Some day quantum physics may be joined mathematically to concepts in general relativity in a grand theory of everything. 16 However, even with a theory of everything, the paradox over God, cosmology, general relativity and quantum physics will continue. Cosmology and religion currently share a common narrative: because there was a start, there was a Creator or starter of the cosmos.

A Hard Transdisciplinary Fusion of Primary Theories of Origin

For an elegant fusion to occur at this level, a basic tenet must be that there is truth in the Bible and factual, probabilistic accuracy in the physical sciences. In addition, convergent validity with an exact mathematically defined interval or ratio interface between disciplines was required. 17 Specifically, what occurred was a point-by-point temporal linking of the Creation narrative with hard scientific data concerning the temporal unfolding of the universe. The authorized King James Edition (Ben Asher text) of the Bible was used. 18 Only the descriptions of days 3, 4, 5, 6 and 13 of Creation were used from the Bible, because they could be literally linked to cosmological, geologic, fossil and historical data. Older, time-tested scientific paleontology compilations were used for the correlation. Days in the Bible narrative were linked to scientific observations in the following fashion. From Genesis, the span of the 3rd day was mapped onto scientific data from the creation of the solid planet to the appearance of seed-bearing plants termed angiosperms. The span of day three was calculated as 4.46 billion years by subtracting 136 million years, the approximate arrival date of true seed-bearing plants, from 4.6 billion years, the oldest estimated age of the earth. In a similar fashion, on day 4 God said, "Let there be lights in the firmament of heaven to divide day from night; and let them be for signs and for seasons, and for days and for years." This epoch has been estimated scientifically as the time it took for our atmosphere to turn from opaque, to translucent, to transparent. This span took approximately 1 billion years, from 1.75 billion years ago to 750 million years ago. 19 In a similar fashion, on day 5 God said, "Let the waters bring forth abundant moving creatures." Day 5 then spanned all the way to birds. From a scientific perspective this span included early jellyfish-like creatures termed coelenterates to the first birds.
This span took approximately 305,000,000 years, from 500 million years ago to 195 million years ago. In a similar fashion, on day 6 God created modern mammals, and cattle were mentioned. Modern mammals came on the scene 53,000,000 years ago. (It is important to note that archaic mammals and water mammals were created earlier.) Day 6 ended with the appearance of man. The earliest hominids with tools (Homo habilis) appeared approximately 3 million years ago in Africa (although more recent evidence could place it earlier). The time span for day 6 was therefore approximately 50,000,000 years. 20
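
The span arithmetic above can be checked directly. The sketch below takes the arrival of true seed-bearing plants as roughly 136 million years ago, an assumed reading chosen because it is the value consistent with the quoted 4.46-billion-year span for day 3; the other endpoints are as quoted in the text:

```python
# Epoch lengths implied by the dates quoted in the text, in years.
# The 136-million-year figure for seed-bearing plants is an assumption
# consistent with the quoted 4.46-billion-year span for day 3.
day3 = 4.6e9 - 136e6    # formation of the solid planet -> angiosperms
day4 = 1.75e9 - 750e6   # opaque -> transparent atmosphere
day5 = 500e6 - 195e6    # early coelenterates -> first birds
day6 = 53e6 - 3e6       # modern mammals -> earliest tool-using hominids

print(day3, day4, day5, day6)   # ~4.46e9, 1.0e9, 3.05e8, 5.0e7 years
```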

Using a different methodology for day 13: around fourteen hundred years B.C., Moses wrote, "For a thousand years in Thy (God's) sight are as yesterday..." The inspired time dilation circa day 13 was therefore 1000 years of our time to one day of God's time, so the time span of day 13 was 1000 years (Psalm 90:4). In the fourth-dimensional curvilinear calculation of the appropriate day, the day selected was the one whose date falls closest to the midpoint of the span on Figure 1.

The original five data points were plotted in standard psychophysical log-linear formats. 21 The data points were then analyzed using a Pearson Product Moment Correlation. A t-test to assess the significance of the correlation coefficient was calculated. Finally, the linear regression of Y on X for un-grouped data was calculated for the original five data points.

Figure 1 graphically illustrates the temporal overlap pattern created by applying the literal translation of the Holy Bible to the extant fossil and geologic records. Note that at the end of Day 0, approximately 12.5 billion years before the present, the so-called Big Bang occurred. The time of Moses is marked by a four-pointed star on Day 13. Finally, we come into temporal synchrony with our distant hypothetical observer 6.66 epochs from the creative process that started long before the Big Bang.

The correlation from the hard fusion of the 5 plotted points was r = -.9969. The temporal correlation was significant, t(3) = -21.9, p < .001. The interpolated line is described by the linear equation Y' = -.7567(x) + 15.34. It is apparent that temporal duration, as psychophysically described from a transdisciplinary perspective correlating cosmological, geologic, fossil and historic records with the Bible narrative, is curvilinear and predictable. Figure 1 graphically illustrates that the epochs present themselves in 6.66 overarching triplets. The length of all the other, non-data-point days was extrapolated from the linear regression (see Figure 2).
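
The statistical machinery just reported (Pearson r, its t-test with 3 degrees of freedom, and the regression of Y on X) can be reproduced in outline with a few lines of Python. The durations below are approximate values read from the spans quoted earlier in the text, so the resulting r, t, slope and intercept will be close to, but not identical with, the reported figures; this is a sketch of the method, not the authors' exact computation:

```python
import math

# Day numbers vs. log10 of the (approximate) epoch lengths in years.
days      = [3, 4, 5, 6, 13]
durations = [4.46e9, 1.0e9, 3.05e8, 5.0e7, 1.0e3]
ys = [math.log10(d) for d in durations]

n  = len(days)
mx = sum(days) / n
my = sum(ys) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(days, ys))
sxx = sum((x - mx) ** 2 for x in days)
syy = sum((y - my) ** 2 for y in ys)

r = sxy / math.sqrt(sxx * syy)              # Pearson product-moment correlation
t = r * math.sqrt((n - 2) / (1 - r * r))    # t statistic with n - 2 = 3 df
slope = sxy / sxx                           # regression of Y on X, ungrouped data
intercept = my - slope * mx
print(r, t, slope, intercept)
```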



Figure 2 graphically illustrates a one-to-one mapping between the literal Biblical creation narrative and the cosmological, geological, fossil and historic records for the planet Earth. This mapping, however, is only understandable when viewed from the parallax of traditional psychophysics and Einstein's General Theory of Relativity. The days used in the correlation were days 3, 4, 5, 6 and 13. The interpolated correlation was r = -.9969, meaning that 99.38% of the variance in temporal deceleration is predicted from the number of relativistic epochs since the dawn of creation (t(3) = -21.9, p < .001).


It is also clear that our universal time, compared to that of our distant hypothetical observer, is decelerating (see Figure 2). The length of each of the 5 epochs is predictably shorter on each of the original 5 selected days following Creation. 22 Of some interdisciplinary interest is the fact that in the Brahma religion the first heartbeat of the earth (the day after its creation) was approximately 4 1/2 billion years in length, which matches our data. 23 In addition, the Mayan calendar, which by J. Eric S. Thompson's correlation began on Julian day 584283, equal to August 11th, 3114 B.C. in our Gregorian calendar, ends some 5125 years later, on December 21st, 2012 A.D., in the same decade in which our transdisciplinary data reach temporal synchrony with our distant hypothetical observer. 24


Conclusions about Origins

The books of the Bible are divinely inspired human constructions dealing in faith and truth, whereas the books of science are human constructions dealing in the senses and probabilistic fact. The authors are convinced that theologians and scientists are well-meaning, moral people who diligently search their areas of expertise for meaning and purpose. However, the narratives extrapolated from the truth of the Bible, or from the hard data of science, about our origins should not, by their very psychological nature, be considered immediately logical or consistent. On the other hand, time-tested Biblical interpretations and time-tested scientific theories should be confluent. The negative impact of ego defense mechanisms, compounded over time at the transpersonal level via the mechanism of groupthink, is best detected using a broad transdisciplinary lens. This initial nexus is but a seminal step toward a seamless transition bridging interdisciplinary research into multidisciplinary designs and transdisciplinary awareness: an awareness that both science and religion will require to function in the new millennium.

Applying Transdisciplinary Principles When Comparing Scientifically Based Primary Theories of Origin

Further, and central to our purpose, the following question is posited: should our ultimate hypothesis be that there is one single best origin narrative? In addition, are we to assume that origin issues will simply be solved by the continued collection and analysis of data? Or should we finally admit that there are many nontrivial, distinct, scientifically based yet equally sound ways of describing origin, whether speaking to the ontogeny, phylogeny or cosmology of the concept? That is to say, given the current numerous theories and narratives, can competent seekers of knowledge be anything but eclectic in their questioning, methodologies and visions? To this general purpose we will, in a transdisciplinary manner, re-evaluate Darwin's initial observations, this time remaining within the scientific arena to emphasize just one of many potential nontrivial, distinct, scientifically based alternate ways of describing origin data given the available objective scientific evidence.

Whereas many published articles cite the need for merging disparate areas of scientific investigation via transdisciplinary techniques 25, no articles to date have attempted to lay down specific, simple, statistically based methodologies for eliminating or modifying extant mono-disciplinary theories into "best fit" transdisciplinary models. Utilizing a transparent and simple transdisciplinary, statistically based method, we significantly reduced a number of divergent validity problems currently encountered with traditional Darwinian evolution (p < .001). In addition, to align the current interpretation of the fossil record with other disparate scientific communities, mutation as the driving force behind species change was de-emphasized. Instead, an emphasis on an overarching trans-species phenotypic expression proceeding from an infinite variety of similar and competing genetic ancestors was adopted.

A Transdisciplinary View of Darwin’s Grand Theory

Dennett, Dawkins and Mayr ostensibly claim that since 1859 the basic precepts of Darwin's Theory of Evolution have remained virtually untouched. Much like Freudian theory, Darwin's Theory of Evolution has existed in a safe mono-disciplinary cocoon. Beyond biology, and indeed from the instant Darwin's theory was introduced into a "transdisciplinary" public school system, the war has raged. Battles were fought in courtrooms, not in lecture theatres and journals. The male-oriented, white scientific communities of the time, still entangled in Christian traditions, saw this war as "good science." Male domination, whether sexual, colonial or scientific, seemed, given Darwin's Victorian roots, the order of the day, and the order was religiously followed: an order formed as much out of politics, religion and history as out of scientific observation and documentation. The question now remains: have we, as a species, learned anything from our past century of behavior? Freud and Darwin would argue an emphatic and resounding no! If mankind were totally controlled by unseen subconscious forces and genetic determinism, might not all man-made theories be forever distorted, twisted, and yet last for millennia? 26

If there were another lens, however, through which to view distant scientific vistas, one with a self-correcting filter allowing for a clearer synthesis and thesis, would we employ it? If this new telescopic lens highlighted subconscious, and indeed conscious, distortions, other more accurate and objective scientific explanations might restructure older so-called "special" theories via Popper's classical disconfirmation philosophy. 27 Transdisciplinarity and its emerging methodologies have the ability to create such a telescopic lens and filter. Based on the concept of meta-theoretical convergent validity, transdisciplinarity is emerging as the current scientific methodology. What this broad new lens has revealed, and will continue to reveal, is that the data was, is and always will be correct; the human process and the human interpretation, however, was, can be and will continue to be flawed. Acknowledging this very human actuality allows for a better objective separation of form from flow from flaw.

Creationist Data from a Victorian

Since Darwin, scientists have, in an interdisciplinary fashion, actively argued for evolved genetic speciation based on fossil evidence. Genetic speciation involves micro- and macro-adaptation caused by environmental pressures that change organisms genetically over immense periods of time. When once-similar organisms can no longer produce viable offspring, due to genetic differences, a new genetic creature has evolved. Genetic change, via mutation, was the underlying principle of Darwin's gradualism.

As mentioned previously, although Charles Darwin named his book the Origin of Species, his only statements about the actual arrival of species were the following. In the chapter titled "Recapitulation and Conclusion," Darwin wrote, "Therefore I would infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed by the Creator… the laws impressed on matter by the Creator, that the production and extinction of the past and present inhabitants of the world should have been due to secondary causes…There is grandeur in this view of life with its several powers, having been originally breathed by the Creator into a few forms or into one..." As reiterated previously, "breathed life into," given Darwin's Christian roots, echoes the God of Genesis, who reportedly did just that when creating humans. Darwin further writes in his chapter "Difficulties on Theory," "Have we any right to assume that the Creator works by intellectual powers like those of man?" Then, twelve years later, in The Descent of Man, Darwin stated, "… the idea of a Creator and Ruler of the Universe had been answered in the affirmative." We argue that Darwin's Christian-based theory fuels the religious ideal that we, as a species, are somehow swimming upstream against the laws of physical entropy, from simple to complex, from manhood toward godhood.

Attempts to eviscerate Darwin's original theory by taking out the Creator gave evolution a statistically untenable random base and left us with concepts like Social Darwinism, the selfish gene and man as a simple "top of the food chain omnivore." Many social scientists resisted this type of simplistic reductionism. As over 90% of the world's population believes in God, and all major world religions claim Him or Her as theirs, could it be that Darwin in his original premise was correct, and that what we see is intelligent design? This is a question science never can, and never will, answer, because it transcends the limits of our philosophy of science as argued by Popper. Many well-meaning scientists begin their theories with a non-observable intelligent designer. By starting origin theories in this fashion, they begin and finish in the metaphysical arena without ever once setting foot in a truly scientific one. In a similar fashion, Darwin's theory spun a yarn in which a simple beginning leads to a complex end. Objectively, we simply do not see non-sentient organisms moving towards Darwin's godhood.

To make his god-building theory work, Darwin believed that environmental influences were coded by somatic cells and transmitted to progeny via germ cells. Mendel's laws of heredity proved Darwin conclusively wrong on this point. Darwin's theory of change towards godhood via natural selection was then to be further supported by De Vries's unlikely theory of mutation. De Vries's theory assumed that mutations produced progressive change. When mutations, however, consistently produced retrogression, micro-mutations were proposed. Since the lack of intermediate fossils was an insurmountable objection, S. J. Gould and others proposed a "Punctuated Equilibrium Theory," and hopeful monsters emerged. 28 Modern punctuated evolution pressured Darwinian evolution, as the fossil record appeared to support punctuated evolution more closely, but the hopeful monster was still moving against the pull of universal entropy toward godhood, driven by the artificial engine of natural selection. Moving past evolution's pseudo-scientific beginnings, no current scientist or theologian denies that natural selection improves stock within existing species. We argue, however, that natural selection has never been observed to produce new species.

Some feel that Darwin actually needed God to get things started. After all, what are the odds that life could evolve from inanimate matter randomly on its own? From a multidisciplinary perspective, Hoyle, the English astronomer who coined the term "Big Bang," eloquently stated that the odds of life beginning randomly from inanimate matter anywhere in the Universe were 1 chance in 10^40,000. To put it in numbers we can appreciate: if we were to pack the known universe full of electrons, the number would only be 10^130 electrons. 29 Let's say we marked one of these electrons. If you were then blindfolded and allowed to walk randomly anywhere in our universe looking for the one marked electron, you would have a much better chance of finding it than the odds of life spontaneously generating from inorganic material. So from this physics-and-mathematics transdisciplinary perspective it is functionally impossible that life could have randomly occurred. This was not Darwin's fault in the beginning. Darwin created an intelligent design theory but was emotionally powerless to stop what was intellectually happening to his theory.
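
Exponents this large are easiest to compare on a log10 scale. A small sketch using the two exponents quoted above (both figures are the text's own, taken as given rather than independently verified):

```python
# The text's quoted exponents: odds of 1 in 10^40,000 (Hoyle) versus
# 10^130 electrons packed into the known universe.
log10_odds      = 40_000
log10_electrons = 130

# Even granting one independent "guess" per electron, the residual
# improbability is still 1 in 10^(40,000 - 130).
residual = log10_odds - log10_electrons
print(residual)   # 39870 -- the electron search barely dents the odds
```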

Darwin, in The Origin of Species, argued that all life came from possibly one primordial organism. An organism so formed, he argued, could be pre-programmed by the hand of the Creator to change given environmental pressures. The idea that one life form could mutate randomly, however, into the abundance of life seen around us is statistically inconceivable given the 4.5-billion-year time frame we are working with. Punctuated evolution, with its multiple starts, is also statistically highly improbable. Consider the following example from biology.

"A gene group, PAX-6, is a key regulator in the development of eyes in all vertebrates. Its analog has been found to control the development of the visual systems of mollusks, insects, flatworms, and nemerteans (ribbon worms). These represent 5 of the 6 phyla that have eyes. The molecular similarity is nothing less than astounding. The paired domain of the gene contains 130 amino acids. The match of the amino acids between insects and humans is 94%! Between zebra fish and humans the match is 97%!

Could five genetically separate phyla have evolved these similar genes individually by random chance? There are twenty different amino acids available to fill each of the 130 spaces on the gene. This means there are 20^130, or about 10^170, possible combinations. The number far exceeds the number of particles in the entire universe.

Any combination is possible one time. Getting the same or even similar combinations a second, third, fourth and fifth time is the statistical problem. The likelihood that random mutations would cause the same combination five times is 10^170 raised to the fifth power. There is no way this same gene could have evolved independently in each of the five phyla. The gene that controls the development of the eye must have been programmed at a level below the Cambrian. That level is either the amorphous sponge-like Ediacarans or the one-cell protozoa, but neither had eyes. Evolution is not random. In addition, morphological constraint is evident in complex body organs, when very different phyla of animals develop very similar organs to satisfy similar needs. Being different phyla, they have been genetically separated since the inception of multi-cellular life. Hence they had to develop these similar solutions independently. The astounding similarity between octopus, squid and human eyes makes it so statistically improbable as to be functionally impossible that they evolved from different organs and converged." 30
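
The quoted passage's combinatorial figure is easy to check in a couple of lines; note that 20^130 is in fact slightly closer to 10^169 than to the quoted 10^170 (a rounding the passage makes):

```python
import math

# 130 amino-acid positions, 20 possible amino acids per position.
combos = 20 ** 130                  # exact integer in Python
exponent = 130 * math.log10(20)     # log10 of the same quantity

print(round(exponent, 1))           # 169.1, i.e. 20^130 is about 10^169
```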

Since life, as perceived from a multidisciplinary perspective, could not have started from one or even five mutating phyla, what probably happened? There is indeed a more elegant, multidisciplinary explanation of the fossil evidence than either Darwinian or punctuated evolution, and an elegant mathematical procedure with which to investigate the ability of transdisciplinary theories to describe the extant data. 31

The Migratory Theory of Genetic Fitness: A Transdisciplinary Primary Theory of Origin

Environmental changes that lead to the creation of new gene pools have never been scientifically demonstrated in living organisms that leave fossil evidence. According to the philosophy of science laid down by Popper, an explanation is needed that relies neither on a genetic change over time that we cannot observe nor on religious explanations as originally proffered. Such speculative, non-observable explanations are pseudoscientific and irrational. So what can we observe? Or, more importantly, what do we observe, and what explanation can be proffered as the current best-fit scientific model given our new transdisciplinary lens and telescope?

The Migratory Theory of Genetic Fitness (MTGF) is a transdisciplinary model that re-interprets and re-evaluates the current evolutionary account of mankind’s origins. Whereas Darwin argued that all life forms originally evolved from a few or even one primordial genetic code, the Migratory Theory of Genetic Fitness predicts just the opposite. It stands to reason that, when life first appeared, our planet spawned an infinite variety of very similar, four-base-pair organisms all at once. Some organisms terminated immediately, and some patterns never lived at all. There was no single terrestrial organism from which all other life came; trillions of similar yet distinct patterns began the journey. Following the principle of survival of the fittest, these hardy primordial gene pools are still competing for survival today. Nothing has changed from the beginning of life on our planet but their number, which is ever decreasing via the process of extinction. Over the eons it has been "involution, not evolution" that shaped our biosphere. To date no one has demonstrated how life began on Earth, from the inanimate to the animate, and indeed, as there could be multiple yet-to-be-discovered trajectories, we may never know. The transdisciplinary approach, however, gives us a broader field of view from which to reach a stable, objective and testable consensus. The Migratory Theory of Genetic Fitness, based on the fossil record, is an outwardly observable phenotypic exposition rather than a non-observable, unnecessary, inward genotypic mutation. The MTGF involves simple mass migrations of many similar, genetically stable organisms as climate patterns changed. As global climates slowly oscillated, the mass extinction of millions of competing, genetically similar organisms was inevitable. The MTGF predicts, in fact, just the opposite of what the current theory of punctuated evolution predicts. Instead of new genetic organisms arising via mutations to their genetic codes in response to varied environmental pressures (genetic changes that are so statistically improbable "as to be functionally impossible"), the fossil record clearly and plausibly indicates a high degree of phenotypic plasticity coupled with total, stunning silence concerning genetic plasticity and mutation.

The current evolutionary approach cannot simply and clearly explain the apparent explosion of life at various epochs. Yet there is a straightforward and extremely parsimonious explanation. The MTGF suggests that the conditions needed to fossilize bones were not available in all areas and at all times on Earth. The reason for the "apparent and misinterpreted" sudden explosions of new species in certain temporal epochs was that large groups of similar species migrated into fossil-producing areas while adapting to climate change, and those that could not phenotypically adapt quickly enough left their fossil traces behind, layer after similar but slightly different phenotypic layer. These clear phenotypic variations observed across temporal epochs and rock strata have been misinterpreted as "unobservable" genetic mutations, brought on by environmental pressure within a common ancestor, leading to phenotypic change. The creation of an unobservable, genetically based construct is unnecessary and statistically cumbersome.

In the past, origin scientists have speculated about what they could not objectively and directly observe and have fit their speculations into a pre-existing theory/model. We feel a better methodology is to objectively and directly observe what is currently occurring, first assume that this was the past pattern, and then create a theory/model from the data. Following this methodology, we currently do not observe viable, "genetically new" (non-man-made) species that would leave fossil remains spontaneously occurring today. However, what we continue to observe is genetic erosion occurring through the bio-entropic mechanism of extinction. Using this new, more appropriate methodology, the MTGF clears up more of evolution’s external-validity and multidisciplinary problems. The MTGF clearly demonstrates that terrestrial life follows the physics-based universal entropy principle. With billions of genetically similar organisms in competition, the principle of survival of the fittest has been eliminating countless unsuccessful but similar organisms for eons. Primordial gene pools have always survived by phenotypic adaptation and by migrating into new and better territories. In addition to diversity created by reshuffling via sexual reproduction, environmental stress can also cause major phenotypic changes. For example, asexual eukaryotes (sponges) fuse two haploid cells to create a diploid zygote when stressed. Indeed, similar mechanisms may transform cell colonies into aggregates and then into multi-cellular organisms. 32 As is plainly observed, major structural phenotypic adaptation does not have to equal transformation or mutation into "a new genetic species." As similar, more fit species competed more successfully, they drove similar, less fit organisms extinct. The less fit, similar but different organisms left their fossil residue behind, layer after layer. The MTGF argues that the terminal extinction trajectories of unsuccessful primordial organisms have been misinterpreted, in evolutionary theory, as a common ancestor branching into distinct new species, and that this belief is held firmly in the face of contradictory statistical evidence.

Finally, it is also well documented that most, if not all, genetic mutations are lethal. It makes more sense, therefore, to assume that such lethal combinations were eliminated long ago and that today we are left with only the most genetically fit gene pools. The MTGF predicts that in modern times we will not observe gene pools spontaneously evolving contrary to the universal principle of entropy. All possible genetic combinations were randomly attempted at the beginning of time.

In conclusion then, and not to put too fine a point on it: Darwin, in "On the Origin of Species," argued that all life came from possibly one organism that then mutated into the abundance of life we observe around us today. One life form could not genetically mutate into that abundance given the 4.5-billion-year time frame we are working with. Punctuated evolution, with its multiple starts, is also statistically highly improbable. The current genetically based theory of evolution is cumbersome, unnecessary and insufficient to explain the bio-diversity observed on our small planet. The MTGF is a phenotypic theory. Extremes in phenotypic plasticity that result from an interaction between the internal genetic code and the external environment have been overlooked for too long.

Transdisciplinary Improvements with the MTGF

There are a number of other transdisciplinary improvements with the MTGF. 1. The MTGF is immediately and directly testable, and therefore refutable within our lifetime; the Darwinian model would take millions of years to replicate and refute. Putting into practice the "what you can currently observe" principle, the MTGF clearly predicted, and demonstrated with current observable data, that changes in gravity, pressure, CO2 levels and radiation (indeed anything that induces stress) can create impressive phenotypic structural changes in a variety of genetically stable species within short periods of time. In support of the MTGF, plants and animals raised in outer space with lower gravity change their phenotypic expression in one generation. 33 We have also seen that background radiation can have a profound effect on offspring. 34 In addition, simple pressure changes can have a dramatic effect on the phenotypic expression in plants. 35

2. By not creating unobservable constructs, the MTGF, judged as a theoretical model by Popper’s Logic of Scientific Discovery, is more parsimonious. The theory does not have to predict things we cannot see. The MTGF simply assumes that what we see today, the extinction of hundreds of species a year, has been going on from the beginning of time.

3. The MTGF is also helpful at the inter-disciplinary level. For years scientists have found discrepancies in the fossil record, i.e., supposedly older-appearing fossils occurring after newer ones. Because small pockets of biosphere can be sheltered while others undergo increased changes, the MTGF would predict different overlapping phenotypic trajectories. Based on abnormal gravitation, gas levels, radiation sources, or even sound vibrations, phenotypic expressions could have been prematurely triggered or, conversely, maintained for many generations. 36

4. The ontogeny-recapitulates-phylogeny principle is viewed differently from the MTGF parallax. First we ask the question: could a mutated species still go through these developmentally ordinal and appropriate phylogenetic stages? The MTGF model suggests that the ontogeny-recapitulates-phylogeny principle is simply a developmental record of the response of a stable gene pool to environmental stressors. Just how plastic a phenotypic response can be, given a stable gene pool and a stressful environment, is unbelievable yet all around us. Imagine identical genes that on one hand give us an aerodynamic butterfly maneuvering in three-dimensional gaseous space with more precision than a modern helicopter, while on the other hand the identical DNA creates a totally different phenotype: the slow-moving, clumsy, terrestrial caterpillar.

5. One of the most astounding pieces of evidence to proceed out of the human genome project is that we seem NOT to need approximately 95% of the genes in our DNA. What exactly do dormant junk genes do? Are our junk genes simply a defense against viral infections, or have we burned through 95% of our species' iterations? Taking the ontogeny-recapitulates-phylogeny principle to a new vista, what if the reverse is also true? Again using the "what you can currently observe" principle: since there is a limited number of divisions for our human cells at the ontogeny level, is there also a limited number of iterations for our entire species? 37 Maybe what killed off the dinosaurs was not an extraterrestrial comet but the simple fact that they came to the end of their natural generational iterations.

In conclusion, the MTGF predicts that the phenotypic expression of a unique gene pool is far more plastic than we had ever imagined. In one hundred years of non-random quasi-experimentation we have horses that range in size from 24 inches to over 8 feet, Beefalo and Ligers, midgets and giants. Instead of the multiple vague definitions of species that current evolutionary theory uses, the MTGF has only one definition of a sexually reproducing species: if organisms can produce viable offspring, they are from the same seminal gene pool that occurred in primordial times; if they cannot interbreed, they are a different primordial species. To wit, primordial species adapt in a highly plastic phenotypic fashion; they do not evolve by genetic mutation. Our purpose in proposing the MTGF was twofold: first, to honor Darwin’s original observations of macro-adaptation, natural selection and extinction; and second, to propose a more elegant transdisciplinary explanation of the fossil record with Darwin in mind. We feel we have achieved our purpose.

A Simple and Elegant Statistical Comparison between Two Competing Transdisciplinary Theories

We do not have to reinvent the mathematical wheel to consider the transdisciplinary statistical improvements that the MTGF has over Darwin’s evolutionary theory. Disciplines like paleontology, history, scientific philosophy and statistics, as well as Popper’s criteria for theory testing (parsimony, including simple definitions of things like species; direct testability; direct observability; direct manipulability; etc.), can be seen as comparable categories. The scientist/statistician can easily compare one theory to another category by category. For example: yes, theory A is more parsimonious than theory B; no, theory A is not as directly testable as theory B. A statistic that deals with categories, like chi-square (X²), can be easily adapted. Work by Camilli and Hopkins indicates that X² for both random and mixed models provides reasonably accurate estimates of Type I error, without correction, for N ≥ 8. 38 The Montoya & Mackay Transdisciplinary Method states that to adequately compare transdisciplinary theories across multi-disciplinary borders you need at least 7 degrees of freedom (N − 1 = 7); that is, eight transdisciplinary category areas are needed for a minimum transdisciplinary comparison. All theories are expected to have at least one transdisciplinary peccadillo. As science deals in probability, not truth, we can expect to eventually disconfirm or modify every theory, no matter how grand. All scientists are required to criticize their own theory in at least one area relative to the theories it is being compared with. For example, the MTGF gives little to nothing as an explanation of how life began, whereas Darwin stated that the "Creator" created everything in the beginning. Using this simple methodology, let’s categorically compare Darwin’s Theory and the MTGF. We use 9 category areas.

A Chi Square Categorical Comparison of Two Primary Theories of Origin

Category                               Darwin   MTGF
Non-Creationist Theory                 No       Yes
More Parsimonious                      No       Yes
In Line with Universal Entropy         No       Yes
Directly Manipulatable                 No       Yes
Directly Refutable                     No       Yes
Directly Observable                    No       Yes
Life Statistically Probable            Yes      No
Non-mutation Based Highly Probable     No       Yes
Simple Definition of Species           No       Yes

For each theoretical point, an advantage (A), i.e., a better transdisciplinary fit (for example, with the philosophy of science: directly refutable), or a disadvantage (D), i.e., a weaker transdisciplinary fit (for example, with statistics: life highly improbable), is awarded to each theory.

X² = Σ (O − E)² / E

                 A      D      Margin Totals
Darwin           1      8           9
MTGF             8      1           9
Margin Totals    9      9          18

Expected cell 1 value: 9 × 9 / 18 = 4.5
Expected cell 2 value: 9 × 9 / 18 = 4.5
Expected cell 3 value: 9 × 9 / 18 = 4.5
Expected cell 4 value: 9 × 9 / 18 = 4.5

Observed   Expected   O − E    (O − E)²   (O − E)² / E
1          4.5        −3.5     12.25      2.722
8          4.5         3.5     12.25      2.722
8          4.5         3.5     12.25      2.722
1          4.5        −3.5     12.25      2.722

Total X²(1) = 10.888, p < .001
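The worked comparison can be reproduced with a short script (a hedged sketch: the 2 × 2 advantage/disadvantage counts are read off the category table, with Darwin taking the single advantage on "Life Statistically Probable"):

```python
# Sketch of the paper's 2x2 chi-square comparison.
# Rows: Darwin, MTGF; columns: advantages (A), disadvantages (D).
observed = [[1, 8],   # Darwin: 1 advantage, 8 disadvantages
            [8, 1]]   # MTGF:   8 advantages, 1 disadvantage

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)   # 18 judgments in total

chi_sq = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n  # 9 * 9 / 18 = 4.5
        chi_sq += (observed[i][j] - expected) ** 2 / expected

print(round(chi_sq, 3))  # → 10.889 (the paper reports 10.888; df = 1)
```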

It is clear that the MTGF provides us with a statistically significant transdisciplinary improvement over Darwin’s Theory of Evolution. This type of straightforward statistical maneuver can be used with soft, moderate or hard fusion comparisons and should become the benchmark for objectively and straightforwardly comparing all transdisciplinary theories in the future.

Other Scientifically Based Non-Darwinian Theories of Evolution: New Directions

The current worldview is that Darwin’s theory of evolution is the only scientific theory that explains the evolution of the animal kingdom. In fact, however, several other theories have been proposed in addition to the MTGF. Professor Stuart Kauffman, a proponent of Complexity Theory, states, "Darwin’s natural selection captivates us all. But is it right? Better, is it adequate? I believe it is not." 39 Indeed, many aspects of evolution in the animal kingdom can better be explained by Complexity Theory, and this leads Kauffman and others to conclude that Darwin’s Theory is not the correct explanation for the evolution of the animal kingdom. 40 In complete contrast to Darwin’s Theory, the Neutral Theory of Molecular Evolution, championed by Professor Motoo Kimura of the National Institute of Genetics in Japan, holds that new mutated genes are neither more nor less advantageous than the genes they replace. Variability in species, according to this theory, is caused not by selection but by random drift of mutant genes. This model is as statistically viable as Darwin’s theory but suffers from the same malaise. 41 The Acquiring Genomes Theory, argued by Margulis and Sagan, highlights the fact that "Mutation accumulation does not lead to new species or even to new organs or new tissues." What does lead to new species is termed symbiogenesis, which according to this theory occurs when one organism absorbs genetic material from another species via a process of incomplete digestion. This new genetic material changes the aggressor's genetic code directly, e.g., photosynthetic animals. 42 Finally, the Impact Theory posits that some extinction-level events have nothing to do with genetics at all but can be due to non-local phenomena. 43 Professor Nathan Aviezer gives an excellent review of non-Darwinian, scientifically based theories in his book Fossils and Faith. 44 In agreement with Kauffman and Kimura, it is patently clear that large portions of Darwin’s Theory of evolution (synthetic or not) are neither necessary nor sufficient to explain the bio-diversity on our planet. Therefore we propose that in the future a transdisciplinary comparison be performed, as outlined in this paper, of all viable "scientifically based" primary theories of how life came to be on Earth. We believe it is only by clearly seeing our past mistakes that our future may be properly envisioned and prepared for. In the words of Sir Fred Hoyle, "The Darwinian Theory is wrong, and the continued adherence to it is an impediment to discovering the correct evolutionary (or involution) theory." 45 In conclusion, transdisciplinary mathematical methodology, as previously outlined, is our best, and indeed perhaps only, hope of quickly and successfully sorting through best-fit primary models of origin.

Take a close look at Special Relativity and you find that the reason it is called "special" is that it is set in a region where there is no gravity: gravity complicates the situation, and Einstein was using a simplified version to explain relativistic effects, not gravitational effects, to average people. Most people, including high-school teachers, focus on the special theory of relativity and have very little understanding of the general theory, and it is very confusing to set aside what you learned in your youth and relearn the actual effects of relativity. We have found a very clear illustration, presented by Professor Kip S. Thorne of Caltech, that explains the time-variation effects in a manner that may clear up any misconceptions generated by the control conditions introduced into special relativity. We personally believe that experimental evidence is far more realistic than any theoretical debate ever presented. The experiment performed by NASA's Gravity Probe A is illustrated to show how gravity affects time in our local space.
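The weak-field effect measured by Gravity Probe A can be sketched numerically with the standard gravitational time-dilation rate, dτ/dt ≈ 1 − GM/(rc²). This is textbook general relativity, not a formula taken from the text, and the ~10,000 km apogee figure is our rough assumption:

```python
# Hedged sketch: weak-field gravitational time dilation near Earth.
# A clock higher in Earth's gravity well runs slightly faster than
# one on the ground (the effect Gravity Probe A measured).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth mass, kg
c = 2.998e8     # speed of light, m/s
R = 6.371e6     # Earth radius, m
h = 1.0e7       # assumed probe apogee altitude (~10,000 km), m

def rate(r):
    """Clock rate relative to a distant observer, weak-field approximation."""
    return 1.0 - G * M / (r * c * c)

# Fractional rate gain of the high clock over the ground clock:
fractional_gain = rate(R + h) - rate(R)
print(f"{fractional_gain:.2e}")  # on the order of 4e-10
```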

Now, to understand how the near-infinite gravitational field of the Singularity is related to Earth's time variation, all you do is put the Earth in the place of the probe and the singularity in the place of the Earth, and multiply the results by a near-infinite factor.

The time-variance formula provided by Kip Thorne for encounters with black holes is

T = (2c/g) · ln(g) · (D/c²)

where: T = time passed for the observer near a black hole, measured in 24-hour days

           g = the gravitational strength of the black hole in solar masses

          D = the time measured by a distant observer, measured in light years

(2c/7.57511) · ln(7.57511) · (1.332/c²) ≈ 18 days

This formula, with 7.57511 solar masses and 1.332 kilometers (or twelve billion light years), gives .04928 year of time passage for the observer sitting on the event horizon of a singularity of 7.57511 solar masses. Using the 1 second per 1,000 years ratio given in the Probe A experiment, the radioactive clocks of material and the geological record would record 12 billion years on Earth, while the observer near the event horizon would record 18 days of Earth history.



This figure graphically illustrates the temporal slowdown of our universe, starting from a flare (an Einstein-Rosen bridge, or giant wormhole) created from instability around the Original Singularity. The figure ends with our universe falling back into the event horizon, i.e., coming into temporal synchrony once again with our hypothetical outside-the-system Observer. We believe this occurs because the singularity was never destroyed. The entire process took only about 2½ weeks from God’s relativistic perspective near the event horizon, but it took approximately 15-plus billion years from our perspective. From the parallax of Einstein’s General Theory of Relativity, the correlation between the geological and fossil records and the Biblical Creation story is highly significant (p < .001). The meta-theoretical data obtained allow us to predict universal temporal deceleration with 99.4% accuracy.


Rivers of Gravity

This X-ray photo, taken by the Chandra satellite in July 2002, shows a background structure that is not related to the concentration of visible mass. There appears to be an invisible gravitating source creating huge canyons in space-time.


This structure is not consistent with the classical view of space but is a strong indicator that we are just skimming along the surface of a far greater gravitational source.


Comparative analysis of UPP

The entire view of cosmology must be reviewed if what we have found in the correlations of the biblical account and the available geological and astronomical evidence is true. First and most important: why is it assumed that the Big Bang left no residual gravitating force, when all the present evidence from supernova events indicates that a percentage of material or gravitating energy is always left behind, that this volume is usually 2/3 of the original volume, and that large solar masses always leave a black hole or singularity? It may be true that our universe started out as a large quantum fluctuation, but that does not negate the gravitational effects immediately after the fluctuation. There still would have been time-variation effects, lensing effects and warped space, as predicted by classical general relativity. The present M-theories are very interesting and will play an important role in our understanding, but they do not eliminate the existence of the residual primordial singularity or its effects on our observable timeline.

In the Unified Primary Perspective we have shown the correlation of the biblical account and the archeological evidence, but the cosmological timeline is also the foundation of the evolution of biological life on Earth. Though it is virtually impossible for life to have evolved from rocks even over 14 billion years, it is even less likely if the timeline is shorter still and only appears to be 12 billion years. Recently, in his book The Universe in a Nutshell, Dr. Hawking presents the argument that the universal timeline is shaped like a pear, and the observational evidence does imply this conclusion. In the UPP this timeline agrees with our expectations.




Appendix 1

Cosmological and biological evolution?

Cosmological and biological evolution are both dependent on very long timelines that give rise to the opportunity for chance events to occur, even though it is unlikely that even these events would accomplish what the evolutionists desire. These evolutionary hoaxes continue to be the choice of reputable scientists maintaining the status quo, not to mention their funding.


Big Bang Nucleosynthesis

The Universe's light-element abundance is another important criterion by which the Big Bang hypothesis is verified. It is now known that the elements observed in the Universe were created in either of two ways. Light elements (namely deuterium, helium, and lithium) were produced in the first few minutes of the Big Bang, while elements heavier than helium are thought to have their origins in the interiors of stars which formed much later in the history of the Universe. Both theory and observation lead astronomers to believe this to be the case.
The predicted abundance of elements heavier than hydrogen, as a function of the density of baryons in the universe is expressed in terms of the fraction of critical density in baryons, Omega_B and Hubble constant, h.

In the 1950s and '60s the predominant theory regarding the formation of the chemical elements in the Universe was due to the work of G. Burbidge, M. Burbidge, Fowler, and Hoyle. The BBFH theory, as it came to be known, postulated that all the elements were produced either in stellar interiors or during supernova explosions. While this theory achieved relative success, it was discovered to be lacking in some important respects. To begin with, it was estimated that only a few percent of all matter found in the Universe should consist of helium if stellar nuclear reactions were its only source of production. In fact, it is observed that upwards of a quarter of the Universe's total matter consists of helium, much greater than predicted by theory! A similar enigma exists for deuterium. According to stellar theory, deuterium cannot be produced in stellar interiors; actually, deuterium is destroyed inside of stars. Hence, the BBFH hypothesis could not by itself adequately explain the observed abundances of helium and deuterium in the Universe.

Thanks to the pioneering efforts of George Gamow and his collaborators, there now exists a satisfactory theory as to the production of light elements in the early Universe.

In the very early Universe the temperature was so great that all matter was fully ionized and dissociated. Roughly three minutes after the Big Bang itself, the temperature of the Universe rapidly cooled from its phenomenal 10^32 Kelvin to approximately 10^9 Kelvin. At this temperature, nucleosynthesis, or the production of light elements, could take place. In a short time interval, protons and neutrons collided to produce deuterium (one proton bound to one neutron). Most of the deuterium then collided with other protons and neutrons to produce helium and a small amount of tritium (one proton and two neutrons). Lithium-7 could also arise from the coalescence of one tritium and two deuterium nuclei.

The Big Bang Nucleosynthesis theory predicts that roughly a quarter of the mass of the Universe consists of helium. It also predicts a deuterium abundance of about 0.001 relative to hydrogen, and even smaller quantities of lithium. The important point is that the prediction depends critically on the density of baryons (i.e., neutrons and protons) at the time of nucleosynthesis. Furthermore, one value of this baryon density can explain all the abundances at once. In terms of the present-day critical density of matter, the required density of baryons is a few percent (the exact value depends on the assumed value of the Hubble constant). This relatively low value means that not all of the dark matter can be baryonic; i.e., we are forced to consider more exotic particle candidates.

The fact that helium is nowhere seen to have an abundance much below a quarter of the total mass is very strong evidence that the Universe went through an early hot phase. This is one of the cornerstones of the Hot Big Bang model. Further support comes from the consistency of the other light-element abundances for one particular baryon density. It seems we are just beginning to understand the physical processes that went on in the first few minutes of the Universe.


Weak interaction freeze-out

At high temperature, the matter in the Big Bang consisted only of its most elementary constituents. When the temperature dropped below a few hundred MeV, ordinary nucleons (or baryons) could form: these are protons and neutrons, since no heavier nuclei would have survived the high temperatures. In addition there are the light particles (leptons), such as electrons, neutrinos and photons. Neutrons and neutrinos interact with electrons and protons by means of the weak nuclear interaction. This is the interaction that is responsible for radioactive decays of unstable isotopes. When the temperature of the universe drops below about 1 MeV (or 10^10 K), the weak interaction rate becomes slower than the rate of expansion of the universe. At this stage, about 1 second of cosmic time has elapsed since the Big Bang. Once the weak interactions have effectively halted, the residual number of neutrons (and neutrinos) is fixed. There is approximately one neutron remaining for every ten protons.
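The quoted equivalence between the ~1 MeV freeze-out energy and the ~10^10 K temperature can be checked with the standard conversion T = E/k_B (a sketch using textbook constants; nothing here is taken from the text itself):

```python
# Hedged sketch: converting the weak-interaction freeze-out energy
# (~1 MeV) into a temperature via T = E / k_B.
k_B = 1.381e-23           # Boltzmann constant, J/K
MeV_in_joules = 1.602e-13 # 1 MeV expressed in joules
T_freeze = MeV_in_joules / k_B
print(f"{T_freeze:.2e}")  # roughly 1.2e10 K, matching the quoted 10^10 K
```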


This is where evolutionary cosmology must include the super-inflation point to maintain its credibility.


This model postulates that the universe started from a deviation in a set of Higgs fields, otherwise called a "false vacuum," which provided the initial mass-energy of the universe. The universe expanded at a slow rate up to the point that temperatures were low enough for rest mass to form. During the short period of rest-mass formation, the universe experienced a very short period of extreme expansion and then rapidly slowed to a rate consistent with the energy equation and a critical-density universe.


This is necessary to create the time frame for evolutionary cosmology.

Primordial nucleosynthesis

The lifetime of a free neutron before it decays is about ten minutes. However, most neutrons do not have the time to decay. After only about three minutes have elapsed, something else occurs. Neutrons interact with protons to form nuclei of deuterium, or heavy hydrogen. The deuterium soon gains another neutron to form tritium, which in turn rapidly absorbs a proton to form a helium nucleus of mass 4, consisting of two protons and two neutrons. There is no stable element of mass 5, nor of mass 8, so additional nucleosynthesis via He + p or He + He is generally not possible, although trace amounts of one or two heavier elements, most notably lithium (of mass 7), do form. One finds that practically every neutron ends up in a helium nucleus. The Big Bang therefore predicts that there should be one helium nucleus for every ten protons, created in the first three minutes of the expansion. Approximately 25 percent by mass of the matter in the universe is now in the form of helium nuclei; the rest consists of protons. For the Sun, helium is about 30%, since some of the hydrogen has already been processed through stars (including the Sun itself!); i.e., the solar material is not "primordial."
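The arithmetic linking the neutron-to-proton ratio to the ~25 percent helium mass fraction can be sketched as follows. The n/p ≈ 1/7 value is the standard post-decay textbook figure and is our assumption here, not a number stated in the passage above:

```python
# Hedged sketch: if every surviving neutron is locked into helium-4,
# the helium mass fraction is Y = 2(n/p) / (1 + n/p).
# Each He-4 nucleus takes 2 neutrons and 2 protons, so the helium
# mass is twice the neutron mass in the mix.
n_over_p = 1.0 / 7.0   # assumed post-decay neutron-to-proton ratio
Y_helium = 2.0 * n_over_p / (1.0 + n_over_p)
print(round(Y_helium, 2))  # → 0.25
```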

A Helium abundance of about 25% turns out to be a robust prediction of the Big Bang theory, and depends only on the fact that the very early universe passed through a high temperature, high density phase, much like the center of a star. This abundance is in fact just what we observe when we look at material which we believe to be close to primordial. Other important predictions include small amounts of deuterium and lithium, although the final abundances of these elements, deuterium especially, depends on the precise value of Omega_b. If the density of ordinary matter (baryons) is high, the early nucleosynthesis is efficient, and one makes essentially no deuterium. If the baryon density is low, however, one makes an amount of deuterium that is comparable to what is observed by astronomers.


Helium is synthesized inside stars by thermonuclear fusion. However, most stars, like the Sun, are still burning hydrogen and so have made little helium, and have certainly dispersed none of it; the synthesized helium is deep inside the stellar interior. Yet the universe is indeed observed to contain one helium atom for every ten atoms of hydrogen: by mass, it is about 25 percent helium. This is close to the value for the Sun, and it is as observed in solar cosmic rays, in interstellar gas in HII regions, and in hot stars, where the helium emission lines are excited. Moreover, when we compare metal-rich stars with metal-poor stars, one finds essentially the same helium abundance. There are metal-deficient galaxies which contain almost the same helium abundance. This confirms that helium has mostly not been synthesized along with the heavier elements, such as the metals, but was made prior to the formation of the first stars. The coincidence between observation and prediction of the helium abundance in the universe provides one of the major pieces of evidence for the Big Bang theory.

Deuterium and the baryon density.

Unlike helium, deuterium is a very fragile element. It burns at a temperature of only 10^6 K, well below the temperature in the solar core. A considerable fraction of any primordial deuterium present at the beginning of the galaxy would have been destroyed by the present time. This is confirmed by observation: interstellar clouds contain deuterium, as do protostars, stars which have not yet developed nuclear burning cores, whereas evolved stars have essentially no deuterium. Allowing for that destruction, one infers a pregalactic deuterium abundance of 0.01 percent relative to hydrogen. Comparison with the Big Bang prediction requires one to choose a baryon density that cannot exceed about a tenth of the critical density for closure of the universe; otherwise too little primordial deuterium would have been synthesized. There is no alternative to the Big Bang for synthesizing deuterium: stars destroy it rather than produce it. The significance of this result is that if the universe is at critical density, ninety percent of the matter in the universe must be non-baryonic, consisting of weakly interacting neutral particles that did not participate in the nuclear reactions that led to deuterium production. (Joe Silk)


Cosmologists believe that the universe went through a phase called 'inflation' when it was 10^-35 seconds old, during which the decay of a new kind of field in nature caused a pressure to develop in every cubic centimeter of space. This pressure led to the energy of the universe increasing exponentially, propelling the expansion of space and matter. Although not exactly infinite, the total energy of the universe is so enormous that, for all practical purposes, our universe is infinite to one part in 10^60 or thereabouts. So, the energy of our expanding universe came from the decay of this primordial field. We would like to know whether such fields actually exist in nature today. One candidate is the Higgs field, which is why physicists are so keen on discovering it.

There is no reason at this time to say that neutral particles didn't exist. The so-called Higgs fields are not presently observable, so let's not count out the neutral particles. It all comes down to the abundance of deuterium and the critical density, but the most recent observations from MAP and COBE indicate a noticeable variation in density near the big bang, and possibly even before the big bang.

The NASA COBE/DMR data showed that the large-scale structure in the universe may not have been generated exclusively by gravitational perturbations. They found traces of clumping in the cosmic background radiation which were millions of light years across even at a time when the universe was only 300,000 years old. Clumping this large cannot be generated by any influence that travels at the speed of light, such as gravity. Only inflationary Big Bang cosmology, or preordained structure introduced by God's intervention, provides a plausible answer to the structure seen by COBE. Today, these structures would be 1000 times larger, but COBE was not sensitive to smaller features, so it is entirely reasonable that this non-gravitational clumping extends all the way down to the scale of the Abell cluster system measured by Lauer and Postman. If that is the case, the velocity irregularities they see are not strictly generated by gravitational influences, as their computations require.

These results are very important for cosmology. But it is too soon to tell whether they are in severe conflict with Big Bang cosmology. We have only explored less than 1 percent of the volume of the visible universe, and it is hard to eliminate a cosmological theory based on such limited data, especially a theory which provides the only plausible answers to many of the other equally significant questions.

The expansion of the universe could have affected time. If there is an effect, it may be observable, because any expansion-rate-dependent change in the uniformity of time would affect such things as the primordial element abundances during the first few minutes after the Big Bang, when the expansion rate was thousands of times faster than it is now. The dilation of time through the era of nucleosynthesis would probably have changed the element abundances by a measurable amount from their observed values, which do not include such a 'non-linear' time effect.





COBE 2000


Yet every possible anti-creation cosmology you can think of is entertained, while the idea of pre-creation design is entirely ignored, even when the evidence points in the direction of pre-creation design.





Map Simulation 2002

The Hubble Expansion


During the 1920's and 30's, Edwin Hubble discovered that the Universe is expanding, with galaxies moving away from each other at a velocity given by an expression known as Hubble's Law: v = H*r. Here v represents the galaxy's recessional velocity, r is its distance away from Earth, and H is a constant of proportionality called Hubble's constant.

The exact value of the Hubble constant is somewhat uncertain, but is generally believed to be between 50 and 100 kilometers per second for every mega parsec in distance (km/sec/Mpc). A mega parsec is given by 1 Mpc = 3 x 10^6 light-years. This means that a galaxy 1 mega parsec away will be moving away from us at a speed of between 50 and 100 km/sec, while another galaxy 100 mega parsecs away will be receding at 100 times this speed. So essentially, the Hubble constant sets the rate at which the Universe is expanding. Additionally, the present age of the Universe can be estimated from the Hubble constant: the inverse of the Hubble constant has units of time. By converting the Mpc in the Hubble constant into kilometers, we find that upon inverting H we get a quantity with units of seconds (kilometers canceling in numerator and denominator). For a Hubble constant of 100 kilometers per second per Mpc, we get 3 x 10^17 seconds, or about 10 billion years. For H = 50 kilometers per second per Mpc, the time scale is 20 billion years.
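The unit cancellation described above can be made concrete with a short calculation. This is an illustrative sketch; the constant values are the standard conversion factors, and the function name is ours.

```python
# Sketch: the Hubble time 1/H as a rough age estimate for the universe.

KM_PER_MPC = 3.086e19       # kilometers in one mega parsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

def hubble_time_years(h0_km_s_mpc):
    """Invert a Hubble constant (km/sec/Mpc) to get a timescale in years."""
    # km/Mpc divided by km/sec/Mpc leaves seconds; kilometers cancel.
    seconds = KM_PER_MPC / h0_km_s_mpc
    return seconds / SECONDS_PER_YEAR

print(f"{hubble_time_years(100):.2e}")  # about 1e10 years (10 billion)
print(f"{hubble_time_years(50):.2e}")   # about 2e10 years (20 billion)
```

Doubling the Hubble constant halves the inferred timescale, which is why the quoted range of 50 to 100 km/sec/Mpc spans ages of 10 to 20 billion years.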

The size of the universe is a function of time. In the case of a closed universe there is enough matter that gravity halts the expansion and the universe collapses (and perhaps bounces to repeat the process). In an open universe there is not enough matter to halt the expansion, and the universe expands forever, becoming more and more dilute as time passes. The border between collapsing and expanding forever is called a critical universe.

The standard picture of cosmology, based on Einstein's general theory of relativity, explains how to picture this expanding universe. As an example, consider a loaf of bread with raisins sprinkled evenly throughout it. As the bread expands during cooking, all the raisins move further and further apart from each other. Seen from any raisin, all the other raisins in the bread appear to be receding with some velocity.

This model also explains the linearity of the Hubble law, by which we mean the fact that the recession velocity is proportional to distance. If all the lengths in the universe double in 10 million years then something that was initially 1 mega parsec away from us will end up a further mega parsec away. Something that was 2 mega parsecs away from us will end up a further 2 mega parsecs away. In terms of the speed at which the objects appear to be receding from us, the object twice as distant has receded twice as fast!

On very large scales Einstein's theory predicts departures from a strictly linear Hubble law. The amount of departure, and the type, depends on the value of the total mass of the universe. In this way a plot of recession velocity (or red shift) vs. distance (a Hubble plot), which is a straight line at small distances, can tell us about the amount of matter in the universe and provide crucial information about dark matter.

The brightness of an object seen at various distances in the universe can be used to determine time and distance. The object has been arbitrarily chosen to have brightness 1 at red shift z = 0.01 (about 60 Mpc away). For small distances the brightness falls with the square of the distance, since we see less and less of the total light put out by the object. At larger distances the curvature of the universe, which depends on how much matter it contains, starts to play a role. In a high-Omega universe, objects at fixed red shift appear brighter than in a lower-density universe, because they are closer. This effect becomes more and more important at higher red shift. Note that astronomers often plot such a diagram in magnitudes vs. red shift, so the vertical axis will run the other way.

For more information on one promising way to measure the departures from the linear Hubble law and measure the mass density of the universe, see the Home Pages of the Supernovae Cosmology Project and the High Z Supernova Team.

This is the current view of the universal time line.

What follows is a good first approximation. First, there are two solutions in Big Bang cosmology for the 'scale factor' of the universe as a function of time, depending on whether the expansion is dominated by the pressure in the matter component, or the radiation component. At about 1 million years after the Big Bang, the expansion changed from being radiation-dominated to matter-dominated. The formula for the change in the scale factor versus time is given by:



For the matter-dominated era (more recent times), scale at time t = scale at current time x (time ratio)^(2/3):

a(t) = a0 x (t/t0)^(2/3)

For the radiation-dominated era, scale at time t = scale at current time x (time ratio)^(1/2):

a(t) = a0 x (t/t0)^(1/2)

For the earliest times, within a year of the Big Bang, this can be written with t in seconds as:

a(t) = (t/seconds)^(1/2) / 1.19 x 10^10

Now the scale factor today is defined to be 1.000 , so in the matter-dominated case which describes the universe since about 1 million years after the big bang, we have t0 = 15 billion years, a0 = 1.000. If we ask what the scale factor of the universe was when the universe was 5 billion years old we get:

a = 1.00 x (5/15)^(2/3) = 0.48

This means that if we look at the current distance between the Milky Way and a quasar located at a distance of 1 billion light years, the distance between them when the universe was 5 billion years old was just 1 billion light years times 0.48 or 480 million light years. The scale factor tells you by what factor a current distance interval has changed.

This scale factor is also related to the red shift factor Z, since 1 + Z = a0/a, so that for a = 0.48 in the example above, a0/a = 2.0 and so Z = 1.0. If we were to look at a galaxy at a red shift of 4.0, the scale factor at the time its light was sent to us would satisfy a0/a = 5.0, so a = 0.20. The universe was then 15 billion x (0.2)^(3/2) = 1.3 billion years old.
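The redshift-to-age conversion above can be packaged as two small functions. This is a minimal sketch using the text's own t0 = 15 billion years and matter-dominated scaling; the function names are ours.

```python
# Sketch of the matter-dominated relations used above:
# a = 1/(1+z), and t = t0 * a**(3/2), inverting a = (t/t0)**(2/3).

T0 = 15.0  # age of the universe in billions of years (the value used here)

def scale_from_redshift(z):
    """Scale factor a at the epoch whose light reaches us with red shift z."""
    return 1.0 / (1.0 + z)

def age_from_scale(a):
    """Age of the universe (Gyr) when the scale factor was a (matter era)."""
    return T0 * a ** 1.5

a = scale_from_redshift(4.0)
print(round(a, 2))                   # → 0.2
print(round(age_from_scale(a), 1))   # → 1.3 (billion years)
```

The z = 4 galaxy of the example emits its light when the universe is about a tenth of its present age, even though the scale factor is a fifth of today's.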

How rapidly was the universe expanding then? Hubble's constant measures the expansion rate in kilometers per second per mega parsec, and is defined as H = (da/dt) x (1/a), so that taking the derivative of a = a0 x (t/t0)^(2/3) you get:

da/dt = (2/3) x a0 x (1/t0) x (t/t0)^(-1/3)

then dividing by a = a0 x (t/t0)^(2/3) you get

H(t) = H0 x (t0/t)

where H0 = the current value of 75 kilometers/sec/Mpc. This means that when the universe was 1.3 billion years old, the Hubble constant was about 75 x (15/1.3) = 838 kilometers/sec/Mpc. This is only an approximation, because we have not included the details of the deceleration of the expansion, nor the effect of the cosmological constant and the curvature 'k' term. As for an effective velocity with which particles seem to be separating, suppose we calculate the scale factor for the radiation-dominated era before the universe was 1 million years old. At the end of this era, the red shift was about 2000, so that a0/a = 1 + 2000 and a = 0.0005. Let's begin with a quasar 1 billion light years away from us today, so that at 1 million years after the Big Bang its distance was 1 billion light years x 0.0005 = 500,000 light years, and get the following table:


Time (years)    Red shift    Scale factor    Distance (LY)

2 million        1414         0.0007          707,000

1 million        2000         0.0005          500,000

0.5 million      2828         0.00035         353,000


We see that between 1 and 2 million years, a difference in time of 1 million years, the distance to the quasar changed by 707,000 - 500,000 = 207,000 light years, for an effective speed of 207,000 light years per 1 million years, or 0.2 times the speed of light. Now let's look at an earlier epoch, when the universe was only 1 year old, using a(t) = (t/seconds)^(1/2) / 1.19 x 10^10 to describe the evolution of the scale factor during this time:


Time (years)    Red shift    Scale factor    Distance (LY)

2 years          1,500,000    0.00000071      10,700

1 year           2,100,000    0.00000050       7,500


Again taking the difference in light years 'traveled' in the 1-year interval, we get 10,700 - 7,500 = 3,200 light years in 1 year, for an effective speed of 3,200 times the speed of light. BUT no matter, radiation, or information actually moves at this speed. It just represents the general relativistic 'stretching' of space. We do not like to think of this possibility because we never see such things happen in our non-relativistic world, so we have never had the pleasure of developing the right intuition about it. In the following table, I give the times, scale factors, and the separation between the matter in the Milky Way and that in the most distant quasars we now see, along with the effective velocity in multiples of the speed of light, C:

Time                    Scale       Distance                  Speed


Now                     1.0           14 billion LY            1.0 C

1 million yrs       1/1800       7 million LY            7.0 C

1 thousand yrs   1/56000      240,000 LY            240 C

1 year                 1/1700000     78,000 LY          78,000 C

1 second             10^-10                 1.4 LY    43 million C


So, you don't have to go very far back in time before you seemingly run into problems. The speed in question is how fast the matter in the galaxy would have to travel to get as far from the Milky Way as it is at the various times. But of course, modern Big Bang cosmology says that this is not the correct perspective at all. Instead, the matter didn't do any real traveling at all, but was carried along by the expansion of the universe, which caused distances to expand faster than light speed.
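This "effective speed" bookkeeping is easy to reproduce. The sketch below uses the radiation-era scale-factor formula from the text; the assumed present distance of 15 billion light years (the most distant quasars) and the function name are ours, chosen to match the table's entries.

```python
# Sketch: "effective speed" as the change in proper distance between two
# epochs, divided by the elapsed time. D_NOW is an assumed present
# distance (15 billion LY, roughly the most distant quasars).

SECONDS_PER_YEAR = 3.156e7
D_NOW = 1.5e10  # present distance in light years (assumed)

def scale_radiation(t_seconds):
    # Radiation-era scale factor from the text: a = sqrt(t in seconds) / 1.19e10
    return t_seconds ** 0.5 / 1.19e10

d1 = D_NOW * scale_radiation(1 * SECONDS_PER_YEAR)  # separation at t = 1 year
d2 = D_NOW * scale_radiation(2 * SECONDS_PER_YEAR)  # separation at t = 2 years
speed_c = (d2 - d1) / 1.0  # light years per year, i.e. multiples of c

# About 2,900 c with unrounded scale factors; the table's rounded
# entries (10,700 - 7,500 over one year) give 3,200 c.
print(round(speed_c))
```

No matter how the rounding falls, the separation grows by thousands of light years per year: the stretching of space, not motion through it, is doing the work.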


If you were not able to follow that line of reasoning, go back and try again. Or don't bother, and go on to the next.

The universe expanded from the size of a grapefruit to the size of the solar system by 10^-10 seconds after the Big Bang, so they say the expansion traveled faster than light.

Ah, the wonders of relativity! According to general relativity, it wasn't the matter that was moving faster than light but the distance between various points in space that was increasing faster than light. No matter actually made the journey from one point to another at trans-light speed. In general relativity, space is free to do things that matter cannot do, and one of these is to cause the distances between galaxies to increase without the galaxies themselves having enough time to traverse the distance.

Space here is an invention of the mind; our understanding of space is distance (the three dimensions: height, width, and depth) over time! In 1919 observations confirmed that gravity warps space, and the general theory of relativity provides the math. I would like to see the proof that space is free to do whatever it likes.



Orders of magnitude

The timescales and temperatures indicated on this diagram span an enormous range. A cosmologist has first to get the order of magnitude (the power of ten) correct. Quantities given as 10 to some positive power, say 6, are simply 1 followed by 6 zeros; in this case 1,000,000 (one million). Quantities given as 10 to some negative power, say -6, have a 1 in the 6th place after the decimal point; that is, 0.000001 (one millionth). At extremely high temperatures we tend to use giga electron volts (GeV) instead of degrees Kelvin. One GeV is equivalent to about 10,000,000,000,000 K.
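The GeV-to-Kelvin equivalence quoted above follows from the Boltzmann constant. A minimal sketch, with the standard value of k_B in GeV per Kelvin; the function name is ours.

```python
# Sketch: converting an energy in GeV to an equivalent temperature in
# Kelvin via the Boltzmann constant, k_B ≈ 8.617e-14 GeV per Kelvin.

K_B_GEV_PER_K = 8.617e-14

def gev_to_kelvin(energy_gev):
    """Temperature T = E / k_B for an energy given in GeV."""
    return energy_gev / K_B_GEV_PER_K

print(f"{gev_to_kelvin(1.0):.2e}")  # → 1.16e+13, i.e. about 10^13 K
```

One GeV thus corresponds to roughly ten trillion Kelvin, matching the figure in the text.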

Chronology of the Universe

The following diagram illustrates the main events occurring in the history of our Universe. The vertical time axis is not linear in order to show early events on a reasonable scale. The temperature rises as we go backwards in time towards the Big Bang and physical processes happen more rapidly. Many of the transitions and events may be unfamiliar to newcomers; we shall explain these in subsequent pages.


The standard cosmology is the most reliably elucidated era, spanning the epoch from about one hundredth of a second after the Big Bang through to the present day. The standard models for the evolution of the Universe in this epoch have faced many stringent observational tests.

Particle cosmology is a picture of the universe prior to this at temperature regimes, which still lie within known physics. For example, high-energy particle accelerators at CERN and Fermilab allow us to test physical models for processes which would occur only 0.00000000001 seconds after the Big Bang. This area of cosmology is more speculative, as it involves at least some extrapolation, and often faces intractable calculational difficulties. Many cosmologists argue that reasonable extrapolations can be made to times as early as a grand unification phase transition.


Quantum cosmology considers questions about the origin of the Universe itself. This endeavors to describe quantum processes at the earliest times that we can conceive of a classical space-time, that is, the Planck epoch at about 10^-43 seconds. Given that we as yet do not have a fully self-consistent theory of quantum gravity, this area of cosmology is even more speculative.






I find this funnel shape a most interesting illustration, because if you place a singularity at the bottom you may well have a clear picture of what the universe looks like: one big solar-like flare on the surface of a singularity, and not too far from the surface either.











Cosmic spin?

As for the universe, some cosmological models predict that the universe could have a net spin, and this would be detectable in the isotropy of the cosmic background radiation as a 'quadrupole' type anisotropy. The current limits to any such spin are now so restrictive that, to the best of our measurements, the amount of spin in the universe is so small that it would have had no significant cosmological consequences. It amounts to a rotation speed of less than 10^-12 radians per year, or 10^-7 seconds of arc per year!
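The two quoted units are the same limit expressed two ways, which a one-line conversion confirms. A minimal sketch; the constant name is ours.

```python
# Sketch: checking that 1e-12 radians per year is about 1e-7 arcseconds
# per year, using the standard radian-to-arcsecond conversion.

import math

ARCSEC_PER_RADIAN = (180 / math.pi) * 3600  # ≈ 206,265 arcsec per radian

spin_limit_rad_per_yr = 1e-12
spin_limit_arcsec_per_yr = spin_limit_rad_per_yr * ARCSEC_PER_RADIAN

print(f"{spin_limit_arcsec_per_yr:.1e}")  # → 2.1e-07 arcsec per year
```

So the arcsecond figure is about 2 x 10^-7, consistent with the order-of-magnitude value given in the text.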

If you take that back to the time when singularity effects were dominant: just as a ballet dancer speeds up when she pulls her arms in, the spin of the singularity increases, and any amount of spin near zero time, combined with near-infinite gravity, amounts to a tremendous energy. This may be one explanation. But it does not account for the present acceleration, measured recently at this great distance from the source.

The evidence is mounting that indicates acceleration of the universe.


Acceleration of the universe

February 27, 1998
Web posted at: 1:01 a.m. EST (0601 GMT)


WASHINGTON (CNN) -- Scientists are scratching their heads over a finding that indicates the universe, rather than slowing down, is being expanded by a mysterious force at an accelerating rate. If true, says one astronomer, in billions of years many of the stars will be gone from the night sky. "The universe will be a very lonely place to look at," says Robert Kirshner of the Harvard-Smithsonian Center for Astrophysics. And if the finding is correct, it also supports a concept first proposed by Albert Einstein, who later dismissed it as his biggest blunder. "It is such a strange result we are still wondering if there is some other sneaky little effect climbing in there," says Adam Riess, an astronomer at the University of California, Berkeley. Riess said he, Kirshner and others in the 15-member international team that made the discovery "have looked hard for errors" but found none. The findings were discussed at a meeting of scientists in Los Angeles last month and reported in the journal Science.

Tracking the debris of exploded stars.

Using the Hubble Space Telescope and ground-based telescopes in Hawaii, Australia and Chile, the astronomers tracked and repeatedly measured the debris of 14 supernovae, or exploded stars, 7 billion to 10 billion light-years from Earth. A light-year is the distance that light travels in one year -- about 6 trillion miles. Team members measured the speed at which these distant supernovae are moving away. The rate was then compared with the motion of supernovae much closer to Earth. "How far away a supernova is, and how fast it's moving away from us, tells us how fast the universe is expanding," Riess says. They expected to find that the expansion of the universe was slowing from the effect of gravity. "People thought ... the universe was just coasting" from the force of the Big Bang, Kirshner said. "Instead, we found it is actually speeding up."

According to the Big Bang theory, the universe exploded from a tiny point of matter about 12 billion years ago and is still expanding, but at a slower and slower rate.

But Riess and the others found that it is actually expanding faster than it was 5 to 7 billion years ago.

Parallel study confirms finding.

Rocky Kolb, a University of Chicago astronomer, said in Science that the finding is so startling, "I think everyone should reserve judgment." Kirshner said the conclusion will go through an intensive review before the results are accepted, although he noted that preliminary results from a parallel study by another astronomy group are in agreement.

"We are scratching our heads to think if there could be an alternative explanation for it," says Riess, "something more mundane than a repulsive force."

It is being called a repulsive force because it seems to be working against gravity to speed up the expansion of the universe.

"If it's confirmed by other results and other approaches, it's going to tell us there is something important, another constituent to the universe," says Kirshner.

A fifth force at work?

Unlike matter, which slows down as it moves through space, the new force -- if the researchers are correct -- moves faster. "That's very weird," says Kirshner. "But it's not unprecedented that weird things might be true things."

Four forces are accepted by modern physics: the strong force, which holds the nucleus of an atom together; the weak force, which causes atomic decay; electromagnetic force, which holds electrons in orbit in an atom; and gravity. Kirshner says a fifth force could be at work. Physicists have speculated about the idea of a fifth force, he says.

"They have impossible ideas before breakfast," he said. "The interesting thing is that some of these funny-sounding ideas might turn out to be right."

If the researchers are right and the universe is, indeed, accelerating, the finding could solve a problem for astronomers. Some measurements have put the age of the universe at about 10 billion years, which is younger than the measured ages of some stars.

With the acceleration of the universe factored in, said Riess, the universe would have to be about 14 billion years old, some 2 billion years older than the oldest star.

"That would no longer make the daughter older than the mother," he said.



Einstein's 'cosmological constant'

Einstein first proposed a "cosmological constant," which Riess described as "a repulsive force that is a property of vacuum in space and time."

Riess said the constant, which Einstein dismissed, is "the only explanation we have" for the acceleration.

"Our everyday experience tells us that a vacuum is empty, that there is nothing in it. But that might not be true," Riess said. "There may be an energy, a force, associated with a vacuum."

Over short distances, said Riess, this repulsive force can't be detected, but over distances of 7 billion to 10 billion light-years, "this force becomes something to reckon with, and is strong enough to overcome gravity and cause the universe to accelerate."

Riess said he isn't surprised that the force hasn't been detected before.

"The force is very weak on a small scale and it only becomes important when you are looking back," he said. "It's like a lot of little ants -- one is weak but a lot of them can lift a big weight."


Is this fifth force really necessary?


It's true that there is a problem, but to invent an unseen force to explain it is unnecessary. Our observations of supernovae, and the math that explains the creation of neutron stars and black holes, indicate that even if the universe was created by a quantum fluctuation, a residual primordial singularity remains; this acceleration may be an indication that it is influencing our existence, drawing us back into the gravity well at an accelerating rate!




In the diagram, our time cone only allows us to see a limited amount of the universe, and what we see is in the past, through warped space. Our present position relative to the primordial singularity is indicated by the acceleration as we fall back into its gravity well, and we would be unable to detect it, because warped space obscures our view looking back and there is no space in our future looking forward.

Now for another new observation, giving the universe direction.

In the April 21, 1997 issue of Physical Review Letters, however, Borge Nodland of the University of Rochester and John P. Ralston of the University of Kansas present evidence that the principle of rotational symmetry may be violated on a cosmic scale. Measurements of light from distant galaxies, they say, vary depending on the galaxies' position in the sky, challenging the prevailing theory of cosmic structure. Other theorists doubt whether the claim will stand up to close scrutiny; at least two critical analyses have already been posted on the Internet. For the moment, however, even the critics can savor the frisson of a tremor rocking their field's foundations. "Nobody would be happier than me if they were right," says Sean M. Carroll of the University of California at Santa Barbara.

The surprising work on cosmic asymmetry began three years ago, while Nodland was working for his doctorate under Ralston's supervision. In a search for signs of unconventional large-scale nonuniformity, the two researchers decided to investigate whether polarized light from remote galaxies changes with their direction or distance. (Polarized light typically oscillates within one plane rather than in all directions, as ordinary sunlight does; it can be produced by a variety of phenomena.) Polarized light often twists as it propagates through space as a result of its encounters with electromagnetic fields; this well-understood phenomenon is called the Faraday effect. But Nodland and Ralston wondered whether additional twisting effects might be at work. To find out, they focused on studies of galaxies that emit large amounts of synchrotron radiation, a highly polarized form of electromagnetism generated by charged particles passing through a strong electromagnetic field. After scouring the published literature, Nodland and Ralston compiled polarization data for 160 galaxies.

Their investigation involved a crucial assumption, that the initial angle of polarization of the light relative to the plane of each galaxy was the same for all 160 galaxies. Given this assumption and the estimated distances to the galaxies (inferred from their red shifts), Nodland and Ralston could calculate whether the light underwent any twisting other than that caused by the Faraday effect. The researchers' calculations showed that polarized light from galaxies does indeed exhibit an extra rotation, in an amount proportional to the galaxies' distance from the earth. The fact that the effect varies with distance, Nodland says, rules out the possibility that it was local, stemming from phenomena occurring in the vicinity of our solar system. But the biggest surprise is that the amount of rotation depends on the direction of each galaxy in the sky. Nodland and Ralston define this effect in terms of the angular distance between each galaxy and the constellation Sextans. The twisting appears strongest when the direction to the galaxy is nearly parallel to the earth-Sextans "axis" and weakest when the direction is perpendicular. The effect may derive from a heretofore-undetected particle, force or field, Nodland suggests, or even a property of space itself that gives it a preferred direction. The universe, he elaborates, may not be "as perfect and symmetrical and isotropic as we think."

Other astronomers suspect that the imperfection lies in the analysis of Nodland and Ralston. Three days after their article's publication, a paper faulting their statistical methods, and their over-reliance on galaxies from specific sectors of the sky, was released on the Internet by Daniel J. Eisenstein of the Institute for Advanced Study and Emory P. Bunn of Bates College. A similar critique was posted shortly thereafter by Carroll and George B. Field of the Harvard-Smithsonian Center for Astrophysics.

Ralston notes that these papers, unlike the one he wrote with Nodland, have not yet undergone the scrutiny of peer review. He hopes his research, at the very least, will force theorists to re-examine some of their long-held beliefs about how the universe works. "That would make a good contribution," he reflects, "even if another analysis comes along and this effect goes away."

So it appears the universe is possibly moving from somewhere, going somewhere!

What is the Great Attractor?

In the 1980's, astronomers Alan Dressler, Sandra Faber, David Burstein and Gary Wegner investigated the motions of the Virgo cluster of galaxies, the Local Group (which contains the Milky Way), and the Hydra-Centaurus super cluster, and discovered that these vast collections of thousands of galaxies were also being tugged, gravitationally, by what appeared to be an even larger collection of matter. This collection of matter, which they estimated was located 3 times farther away than the Virgo cluster (which is 77 million light years distant) in the direction of the constellation Centaurus, was dubbed the Great Attractor. The team of astronomers even found that galaxies located on the other side of the Great Attractor were being pulled in by it. In other words, rather than these collections of galaxies expanding with the rest of the universe, they were being held back very slightly by the gravitational pull of the Great Attractor. Unfortunately, much of the predicted mass in the Great Attractor is hidden behind the obscuring dust and gas in the plane of our Milky Way, so it is very hard for astronomers to study this collection of objects directly, or to independently confirm that it is indeed there in the first place!

For more information, see page 475 of the May, 1990 issue of Sky and Telescope for a brief description.

The motion of all galaxies consists of a part produced by local gravitational influences and a part determined by cosmological expansion. The typical velocity of galaxies within their clusters, such as the Milky Way and the Andromeda Galaxy, is about 200 - 1000 kilometers per second. The cosmological expansion effect increases at the rate of about 75 kilometers per second per mega parsec. This means that at a distance of 2.5 to 3 mega parsecs, a galaxy's random motion within its cluster (say 200 kilometers per second) is about as large as its cosmological red shift. When you add these velocities together, you could get any speed from 0 to 400 kilometers per second. For nearer galaxies such as the Andromeda Galaxy, the random velocity is larger than the cosmological red shift, so you can get blue shifts! This is why astronomers have to look at very distant galaxies, so that they can easily see the cosmological red shift above the random speeds of the galaxies in their respective clusters.
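The velocity bookkeeping in this paragraph can be sketched in a few lines. The function name and the illustrative figures for Andromeda (roughly 0.77 Mpc away, approaching at around 110 km/s relative to the Milky Way) are our assumptions, chosen to match the distances and speeds quoted in the text.

```python
# Sketch: a galaxy's observed radial velocity as the sum of the Hubble
# flow (H0 * distance) and its random "peculiar" velocity within its
# cluster. Negative totals appear as blue shifts.

H0 = 75.0  # km/s/Mpc, the value used in the text

def observed_velocity(distance_mpc, peculiar_km_s):
    """Net line-of-sight velocity in km/s; negative means blue shift."""
    return H0 * distance_mpc + peculiar_km_s

# At 2.7 Mpc, a -200 km/s random motion nearly cancels the Hubble flow:
print(round(observed_velocity(2.7, -200), 1))   # → 2.5 km/s
# Andromeda (assumed ~0.77 Mpc, approaching at ~110 km/s): net blue shift.
print(round(observed_velocity(0.77, -110), 1))  # negative, a blue shift
```

Only at distances where H0 * distance dwarfs the few-hundred-km/s random motions does the red shift become a clean distance indicator, which is the point the paragraph makes.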

Cosmological effects only occur at cosmological distances and in the very distant past. Andromeda is so close to the Milky Way that it feels the gravitational fields of the Milky Way and the Great Attractor as a stronger factor in its dynamics than the much weaker gravitational field of the rest of the universe. This is a common misunderstanding about Big Bang theory. The theory says that space expands only at scales where the matter in a particular region of space has about the same average density as the universe. This is not the case inside clusters of galaxies, but becomes closer to the situation at scales many times the size of a cluster of galaxies.


PROF. JOHN DUBINSKI (University of Toronto): The Andromeda galaxy is actually falling towards the Milky Way which means they'll probably have some close encounter at some point in the future.

NARRATOR: At the moment Andromeda is moving towards us at 400,000 kilometers per hour and scientists think one day it will hit us. So Dubinski decided to work out what'll happen to us in 3 billion years, when the two galaxies finally collide. After a long and complex calculation the result was a vivid picture of the impending collision. A detailed prediction of how the Milky Way will end.

JOHN DUBINSKI: The clouds of gas hit each other at these huge velocities, hundreds of kilometers per second, and that basically creates great shockwaves which move through the gas and heat it to great temperature.

NARRATOR: At the heart of this maelstrom the boiling gas is hurled towards the two converging black holes. This kick-starts a violent dual feeding frenzy as the two monsters spiral towards each other.

JOHN DUBINSKI: And eventually those two independent black holes with their accretion discs will spiral together and merge themselves and form an even more massive black hole.

NARRATOR: Dubinski worked out that this violent collision would knock the Earth and its Solar System out of orbit. Two possible fates await us. If we're on one side of the galaxy when this clash happens, we could be thrown out into the emptiness of space - if we're lucky.

JOHN DUBINSKI: The second possibility is that we're on the other side of the galaxy at the time of the collision in which case we could be thrown right into the center of this chaos.

NARRATOR: In the active center of the merging galaxy the huge feeding black hole will trigger giant stellar explosions and supernovae. This is bad news for Earth.

JOHN DUBINSKI: There could be a horrible catastrophe. The wave of radiation from the blast wave of the supernova would hit the atmosphere and boil it off in an instant, so the atmosphere would be gone, the seas would boil off into space and the Earth would be toast.

And that brings us to the BIG CRUNCH

A Picture Of The BIG CRUNCH

Let me put it this way. If you'll permit, let's imagine ourselves as in an ice cave, and let's think of time as pointing upward from the floor. The floor of ice represents the Big Bang. The roof of ice represents the Big Crunch -- and some spikes hanging down, icicles, represent black holes. Think of water gradually filling the cave as it comes up, representing the advance of time. No water, and you're back at the Big Bang; a little water, and you're in the early days of the universe. More water and your time level is where we are now. As the water rises -- as time goes on -- it engulfs a few of the spikes, the icicles -- that's the moment when black holes are formed. Keep the water level going on up and you get to the point where the spikes are completely immersed and the water even reaches to the top of the cave. Then you have arrived at the Big Crunch. From this point of view, you can see that the Big Crunch or final Gate of Time is not distinct in nature from the black hole.

The Experience of Crossing an Event Horizon


The closer a person approaches the hole’s event horizon, the slower they seem to travel. You can think of the event horizon as the surface of the black hole, but it’s not solid. To the outside observer, the person would appear to stop, seemingly forever suspended at the event horizon. They would begin to turn orange, then red, then fade fairly rapidly from view. Though the person is gone, you never saw where or how they disappeared. If you yourself fell into a black hole you would not even notice the event horizon. From there on everything only goes one way: in. You cannot send out messages for help. However, you can still receive messages from outside, so, to you, everything would seem OK. You would never know when you had crossed the event horizon, except that the increasing gravity would draw your body longer and longer while squeezing you in from the sides. You wouldn’t last long, which is too bad, because time and space are so weird in a black hole that some scientists think time travel might be possible. Or you might be able to travel to a parallel universe through a wormhole: a hole in the fabric of space and time. The only problem is: how do you survive the tremendous gravity?


A 'Big Crunch' can only be forecast if astronomers can demonstrate that our universe has MORE than the critical density of gravitating 'stuff'.

The best estimates as of 2000, shown in the figure below, indicate we live in a flat universe with 'Omega' near 1.0. About 0.3 of this is contributed by dark matter and 0.7 by the cosmological constant, which is accelerating the expansion of the universe. Oscillatory universes would 'live' in the lower right hand corner of the figure, and this is far away from where the data are leading, the intersection of the red and blue curves on this diagram.

Presently, scientists rate the likelihood of finding more than the critical density as pretty remote. Even if the universe were to recollapse, an oscillatory universe is not considered a possibility.





Flatness-Oldness Problem

If Omega is greater than 1, the Universe will eventually stop expanding, and then Omega will become infinite. If Omega is less than 1, the Universe will expand forever, and the density goes down faster than the critical density, so Omega gets smaller and smaller. Thus Omega = 1 is an unstable stationary point, and it is quite remarkable that Omega is anywhere close to 1 now.
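This instability can be made concrete with a toy matter-only model: during matter domination the quantity (1/Omega - 1) grows in proportion to the scale factor a, so running the clock backward forces Omega toward 1 with extraordinary precision. A minimal sketch (the matter-only assumption and the present-day value 1.0001 are illustrative, not from the text):

```python
def omega_at(a, omega_now=1.0001):
    """Omega at scale factor a (a = 1 today) in a matter-only toy model,
    using the conserved combination (1/Omega - 1) / a = constant."""
    x = (1.0 / omega_now - 1.0) * a
    return 1.0 / (1.0 + x)

# Deviation from Omega = 1 at a scale factor one billionth of today's:
deviation = abs(omega_at(1e-9) - 1.0)
print(f"deviation from 1 at a = 1e-9: {deviation:.1e}")  # ~1e-13
```

Even a modest deviation today requires an early-universe Omega tuned to one part in ten trillion, which is the flatness problem in miniature.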




The figure above shows a(t) for three models with three different densities at a time 1 nanosecond after the Big Bang. The black curve shows the critical density case with density = 447,225,917,218,507,401,284,016 gm/cc. Adding only 1 gm/cc to this 447 sextillion gm/cc causes the Big Crunch to be right now! Taking away 1 gm/cc gives a model with an Omega that is too low for our observations. Thus the density 1 ns after the Big Bang was set to an accuracy of better than 1 part in 447 sextillion. Even earlier, it was set to an accuracy of better than 1 part in 10^59. Since a slightly high density means the Universe will die in an early Big Crunch, this is called the "oldness" problem in cosmology. And since the critical density Universe has flat spatial geometry, it is also called the "flatness" problem -- or the "flatness-oldness" problem. Whatever the mechanism for setting the density to equal the critical density, it works extremely well, and again it is a most remarkable coincidence that Omega is close to 1 but not exactly 1.
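The quoted figure can be recovered from the early-universe critical density relation rho = 3 / (32 pi G t^2), which holds for a critical-density universe at early times; this back-of-the-envelope check assumes CGS units:

```python
import math

G = 6.674e-8   # Newton's constant, cm^3 g^-1 s^-2
t = 1e-9       # one nanosecond after the Big Bang, seconds

# Critical density at cosmic time t for an early critical-density universe
rho_crit = 3.0 / (32.0 * math.pi * G * t**2)
print(f"critical density at 1 ns: {rho_crit:.3e} g/cc")  # ~4.47e23, i.e. ~447 sextillion
```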


There is a growing body of evidence that something is missing in our inventory of the contents of the universe. The big question is, what is the nature of this non-luminous (or under-luminous) stuff, and how much of it is there in the universe? In 1994, a new round of experiments by physicists suggested that neutrinos have a definite non-zero mass. Since there are as many of these as there are photons in the cosmic microwave background radiation, they may contribute as much as 30% of the total mass density needed to make the universe 'critical' with an Omega = 1.

Some of the expected 'dark matter' can be made to go away if we adopt a non-zero Cosmological Constant in the universe. This might also help resolve the current incompatibility between the age of the universe dated from the expansion rates implied by the galaxies observed by the Hubble Space Telescope, and the ages of the oldest globular cluster stars.

Since 1998, astronomers using distant supernovae have begun to detect that the universe's expansion is accelerating, just as though there is a cosmological constant force present in space. By combining the supernova data and the cosmic background data from balloon observations, in the following plot, it is becoming very clear that Omega is very close to 1.0; that dark matter and baryonic luminous matter (Omega-m) account for 30% of this, and that a cosmological constant of 70% of Omega is consistent with the two independent sets of data. In general, the astronomical community now seems pretty convinced, from over 20 years of studies using many different techniques, that there is a cosmological constant present and lots of cold dark matter. The purple intersection region on the next page shows the area of overlap between the different estimates. Clearly, a universe with no cosmological constant (lambda = 0) is excluded, just as is one in which Omega is 100% lambda with little or no matter. The best range is for Dark Matter to be about 30% and Lambda about 70%, with ranges of about 20-40% and 60-80% respectively. NASA's MAP mission will refine these numbers to 1% accuracy by 2002.






The Unified Primary Perspective must pass the cosmological tests to be considered viable, but the tests are very dependent on low Omega values; given the proposed presence of the primordial singularity, with a mass equivalence of twice the known universe, the consideration seems unfair.


Cosmological Tests


A. Density.


We can evaluate the Friedmann equation at the present era, where H = H0 and d = d0. The result is an equation for the curvature constant:


k = (H0^2 / c^2)(Omega_0 - 1)    (taking the present scale factor a0 = 1)


where dcrit = 3 H0^2 / (8 pi G) is the value of the density appropriate to the Einstein-de Sitter (k = 0) cosmology, and Omega_0 = d0/dcrit. It is immediately clear that if Omega_0 > 1, that is, if d0 exceeds dcrit, then k is positive and we have a closed universe, whereas if Omega_0 < 1, that is, if d0 is less than dcrit, then k is negative and the universe is open. Measurement of d0 would therefore provide a conclusive cosmological test. Unfortunately, we have only a lower limit on the value of d0, because of the many possible forms of non-luminous matter. This lower limit is approximately equivalent to

Omega_0 = 0.1.


In our proposal the density is increased by the unobservable primordial singularity!


B. Red shift-Magnitude Test.


Consider a distant galaxy of luminosity L, whose distance, as measured in our coordinate system, is r. We require a more convenient way of calculating the distances of galaxies than by using coordinates that we cannot easily measure. One solution is to introduce the concept of luminosity distance. The flux of radiation received at Earth from the galaxy is spread over a sphere (area = 4 pi r^2) and is diminished by the red shift. The energy of each photon is decreased by 1 + z, and the rate at which the photons arrive is diminished by the same factor. The net result is that we receive from the galaxy a flux

f = L / (4 pi r^2 (1 + z)^2)
This provides us with an effective luminosity distance (by analogy with the concept of flux = L / (4 pi r^2) from a nearby source at distance r),

r_lum = r (1 + z)
We can compute r_lum by measuring the flux, if we know L; it is therefore an experimentally determined distance.

To proceed further, we evaluate r along an actual light ray. This procedure requires use of general relativity; however, the result can be expressed fairly simply. For an Einstein-de Sitter universe,

r = (2c / H0) [1 - (1 + z)^(-1/2)]
We can now use the definition of magnitude,

m = M - 5 + 5 log r_lum
where r is the luminosity distance expressed in parsecs, to derive

m = M - 5 + 5 log { (2c / H0)(1 + z)[1 - (1 + z)^(-1/2)] }
This is the relation between magnitude and red shift for a flat universe. Curved Friedmann models yield different functions of red shift. All models, however, look alike at small red shifts, and the relation can be written as

m = M - 5 + 5 log (cz / H0) + 1.086 (1 - Omega_0 / 2) z

This explicitly shows that the relation between apparent magnitude m and log z is linear at small red shifts.

Thus, a graph of m against log cz, where we identify cz with the recession velocity of a galaxy if z is small, would result in a straight line whose intercept with either axis yields the value of H0 (assuming that we know the absolute magnitude M). Comparison with the definition formula for m shows that, at small red shifts, z is a direct measure of distance, since we have cz = H0 r. For more remote objects, a nonlinear term, 1.086(1 - Omega_0/2) z, introduces a curvature into the m - log z relation that depends on whether the universe is open or closed. If the universe is open, a galaxy at a given red shift has a greater value of m and therefore appears to be fainter than it would be in a flat universe. Intuitively, we can think of space in an open universe as being stretched: distances in curved space increase relative to those in flat space.
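The Einstein-de Sitter relations above are easy to evaluate numerically. This sketch assumes H0 = 75 km/s/Mpc (the value used earlier in this text) and an arbitrary absolute magnitude M = -21 chosen only for illustration:

```python
import math

H0 = 75.0          # km/s/Mpc (value used earlier in this text)
c = 299_792.458    # speed of light, km/s

def r_eds(z):
    """Coordinate distance (Mpc) in an Einstein-de Sitter universe."""
    return (2.0 * c / H0) * (1.0 - (1.0 + z) ** -0.5)

def apparent_magnitude(z, M=-21.0):
    """Apparent magnitude from the m - M = 5 log10(r_lum / 10 pc) relation."""
    r_lum_pc = r_eds(z) * (1.0 + z) * 1.0e6   # luminosity distance in parsecs
    return M - 5.0 + 5.0 * math.log10(r_lum_pc)

print(f"r at z = 0.1: {r_eds(0.1):.0f} Mpc")           # ~372 Mpc
print(f"m at z = 0.1: {apparent_magnitude(0.1):.1f}")  # ~17
```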

If galaxies were brighter in the past because of evolutionary effects, we would have to correct for this effect by reducing Omega_0, or making the universe more open than it appears to be.


As already pointed out, in our proposal Omega_0 is greater than 1, and the red shift data are distorted by gravitational lensing caused by the primordial singularity!


C. Angular Diameters.


Consider a galaxy of actual diameter d and apparent angular diameter A at coordinate distance r. The relation between d and A can be shown to be

d = r A / (1 + z)
Again, although this result is taken from a more sophisticated analysis, its meaning is quite clear. We must take the product of A and the distance from the source and divide by 1 + z to yield the diameter of the source (as locally determined). At small red shifts, the formula reduces to that obtained in a local region of space, where expansion is unimportant, d = rA. The angular size is obtained by substituting the formula given previously for r, valid for an Einstein-de Sitter universe. This yields:

A = (H0 d / 2c) (1 + z) / [1 - (1 + z)^(-1/2)]
This formula has an interesting property. When z is small, A varies as z^(-1) (or as the inverse of distance, just as expected for nearby sources), whereas, at large z, A must increase as 1 + z (since [1 + z]^(-1/2) becomes negligible compared with 1). In other words, the angular diameter goes through a minimum. This actually occurs in an Einstein-de Sitter universe at a red shift of 1.25. At larger red shifts, galaxies appear to grow bigger! This completely unexpected result is a consequence of the focusing of light by the gravitational field of the universe, which acts as a kind of gigantic lens.
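The z = 1.25 minimum can be checked numerically from the proportionality A(z) ∝ (1 + z) / (1 - (1 + z)^(-1/2)); this sketch simply scans a grid of red shifts:

```python
def angular_size(z):
    """Angular size of a fixed-diameter source, up to a constant factor,
    in an Einstein-de Sitter universe."""
    return (1.0 + z) / (1.0 - (1.0 + z) ** -0.5)

# Scan red shifts 0.1 .. 3.0 for the minimum angular size
grid = [i / 1000.0 for i in range(100, 3001)]
z_min = min(grid, key=angular_size)
print(f"minimum angular size at z = {z_min:.2f}")  # 1.25
```

Setting the derivative to zero analytically gives (1 + z)^(1/2) = 3/2, i.e. z = 5/4, in agreement with the scan.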

This effect occurs even in the open Friedmann model, although at a larger red shift because of the reduced pull of gravity. In principle, it offers a way of distinguishing between different cosmological models by measuring the angular diameters of very remote galaxies. In practice, it has been very difficult to perform this test, because the images of galaxies are exceedingly fuzzy and the edges are not well defined. The angular diameter of a galaxy at cosmological red shifts is typically only about 3 arc-seconds, equivalent to the angle subtended by a penny at a distance of 1 kilometer. The angular diameter-red shift relation offers greater promise as a cosmological test when applied to clusters of galaxies, the sizes of which seem to be relatively uniform. Evolutionary corrections are very uncertain, however.

Having found the angular diameter of a galaxy, it is easy to obtain its surface brightness, which determines how detectable a galaxy is relative to the sky background. To obtain the surface brightness, we simply divide the flux from the galaxy by the apparent area it subtends as measured on a sphere of unit radius. (This angular area is called a solid angle.) This leads to:

Surface brightness = flux / (pi A^2 / 4) = [L / (4 pi r^2 (1+z)^2)] x [(pi/4)(1+z)^2 d^2 / r^2]^(-1) = L / (pi^2 d^2 (1+z)^4) erg/cm^2/s


The factor L / (pi^2 d^2) is the surface brightness as determined by a nearby observer, and the red shift factor is due to the cosmological expansion. This result has not required any assumption about a cosmological model - it is generally true that surface brightness decreases as the fourth power of red shift. An earlier illustration of this result (in Note 8) was implicit in the variation of the background radiation energy density u as (1 + z)^4 (equivalent to the surface brightness divided by c).


As mentioned earlier, the gravitational lensing is far more intense due to a nearby singularity with a mass equivalence greater than that of the entire universe, so the red shift is exaggerated!



D. Number counts.


Let us first consider a static universe with a uniform distribution of sources of luminosity L. If f is the flux from a source at distance r, then


f = L / (4 pi r^2)


Hence, we can see sources brighter than f out to a distance r, where

r = (L / (4 pi f))^(1/2)


The total number N of sources brighter than f is given by


N = (4 pi / 3) r^3 n0


where n0 is the local number density of sources. In other words,


N = (4 pi / 3) n0 (L / (4 pi f))^(3/2) ∝ 1/f^(3/2)


Thus, the number of sources increases as the inverse three-halves power of the flux.

It is straightforward to adapt this result to the case of an Einstein-de Sitter universe. We must simply replace r in this derivation by the luminosity distance r_lum = r(1 + z). We now obtain


N = (4 pi / 3) n0 (L / (4 pi f))^(3/2) (1 + z)^(-3)


In other words, the number brighter than any given flux is reduced by a factor (1 + z)^3 relative to that in the nonexpanding case.

A similar but more complicated reduction is found when open or closed Friedmann models are considered, which give a shallower rise in the predicted number of sources than would be given by the inverse three-halves power of the flux. We can graph log N against log f, and the slope of the resulting curve should therefore be flatter than 3/2 because of the effects of the cosmological red shift, fewer sources being found at high red shifts than expected in a nonexpanding universe.

In the case of the radio source counts, a steeper slope of approximately 1.8 was actually found. This finding can be explained most easily as resulting from evolution; either or both of the following effects must occur: sources must be brighter (L must increase) or more frequent (n0 must increase), so as to overcompensate for the (1 + z)^3 factor and give an increase at low flux levels over that expected in a nonexpanding universe. In this way, a slope steeper than 3/2 will be obtained.
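The inverse three-halves law and the (1 + z)^3 suppression above are straightforward to verify numerically (L, n0, and the flux values here are arbitrary illustrative units, not measurements):

```python
import math

L, n0 = 1.0, 1.0   # arbitrary source luminosity and number density

def N_static(f):
    """Sources brighter than flux f in a static, uniform universe."""
    return (4.0 * math.pi / 3.0) * n0 * (L / (4.0 * math.pi * f)) ** 1.5

def N_expanding(f, z):
    """Same count with the Einstein-de Sitter (1 + z)^-3 reduction."""
    return N_static(f) * (1.0 + z) ** -3

# Slope of log N against log f (exactly -3/2 in the static case)
f1, f2 = 1.0, 1.0e-2
slope = (math.log10(N_static(f2)) - math.log10(N_static(f1))) / (math.log10(f2) - math.log10(f1))
print(f"static-universe slope: {slope:.2f}")  # -1.50
```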

It is our contention that the reason there are not as many large red shifts visible to us is that they have already begun to re-enter the event horizon of the singularity!



One of the hardest things to do is to try to visualize what the universe might look like on its largest scales. The only guide we have is Einstein's theory of general relativity, and until such time as this theory is shown to be incorrect, it is our best shot at visualizing what is going on.

The universe does not exist embedded in some larger space, but contains 3-dimensional space within itself. This isn't philosophy, but hard physics. The universe is not an expanding sphere in space, nor is it a torus or any other shape in space. Instead, it is the space itself that, for a closed universe, is bent around into a finite spherical volume. At any instant in cosmic time, the shape of space in a closed universe is that of a hypersphere with a finite 3-dimensional volume. In future instants in time, the radius of this hypersphere increases or stretches out. The larger picture is that of a 4-dimensional hypersphere, and it is this 4-dimensional shape that is a solution of Einstein's relativistic equation for gravity. Newton's equations just don't work when you talk about curved space and gravity.

When you take the 4-dimensional hypersphere and slice it like an apple, you are slicing it along some unique instant in time. The cross section is a 3-dimensional spherical volume, just as the slicing of a 3-dimensional apple gives you a 2-dimensional cross section. Visualizing what the 3-dimensional slice through our universe is, is a bit tricky, and that's where you end up with all types of misunderstandings. This slice is not a balloon-like object. It is a mathematical object resembling a balloon whose surface is not 2-dimensional, but 3-dimensional space. The radius of the balloon does not represent a direction in space, but a scale of the curvature of the 3-dimensional surface 'in time'.

No one ever said that understanding the universe would be easy for beings that live on a small planet, and who never travel more than a few thousand miles from their birth place!


According to General Relativity, the wavelength of light (or any other form of electromagnetic radiation) passing through a gravitational field will be shifted towards redder regions of the spectrum. To understand this gravitational red shift, think of a baseball hit high into the air, slowing as it climbs. Einstein's theory says that as a photon fights its way out of a gravitational field, it loses energy and its color reddens. Gravitational red shifts have been observed in diverse settings.

Earthbound Red shift

In 1960, Robert V. Pound and Glen A. Rebka demonstrated that a beam of very high energy gamma rays was ever so slightly red shifted as it climbed out of Earth's gravity and up an elevator shaft in the Jefferson Tower physics building at Harvard University. The red shift predicted by Einstein's Field Equations for the 74 ft. tall tower was but two parts in a thousand trillion. The gravitational red shift detected came within ten percent of the computed value. Quite a feat!

Solar Red shift

In the 1960s, a team at Princeton University measured the red shift of sunlight. Though small, given the Sun's mass and density, the red shift matched Einstein's prediction very closely.

If the universe were not expanding, would we still see a red shift from the gravitating matter in the universe?

The gravitational red shift depends on the mass of the object emitting the light, not its rate of motion (the Doppler effect) or the expansion of the universe (the Cosmological Red shift). If L0 is the wavelength of light at rest in our laboratory reference frame, and L is the observed wavelength of light from a very distant object such as a quasar, then

L0/L = (1 - 2GM/(R c^2))^(1/2)

where M is the mass of the body producing the red shift, and R is the distance from the body at which the light was emitted in our direction. To get appreciable red shifts from conventional galaxies that are comparable to what you see in quasars, and assuming that R = 10 kiloparsecs (typical galaxy size), you can easily see that to get a red shift of 1/2 you need a galaxy with an enormous amount of mass, about 8 x 10^16 suns, or equal to 250,000 Milky Ways crammed into a volume as big as our galaxy!
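This estimate can be checked directly: for z = 1/2 the formula above gives L0/L = 1/(1 + z) = 2/3, so 2GM/(R c^2) = 5/9 and M = (5/18) R c^2 / G. The sketch below uses round CGS constants, so it reproduces the order of magnitude rather than the exact figure quoted:

```python
G = 6.674e-8            # Newton's constant, cm^3 g^-1 s^-2
c = 3.0e10              # speed of light, cm/s
M_sun = 2.0e33          # solar mass, g
R = 1.0e4 * 3.086e18    # 10 kiloparsecs in cm

# Mass required for a gravitational red shift of z = 1/2 at radius R
M = (5.0 / 18.0) * R * c**2 / G
print(f"required mass: {M / M_sun:.1e} suns")  # ~6e16, same order as the 8e16 quoted
```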

Now, if we could shut off the cosmological red shift, then the only red shift we would expect to see would be the very weak gravitational red shifts of the light escaping from distant galaxies. Most of the light from galaxies is distributed over many thousands of parsecs worth of stars and gas clouds in interstellar space, so the gravitational red shift would not be more than about


L0/L = (1 - 2 x (6.6x10^-8) x M / (9x10^20 x R))^(1/2)

or, for M(galaxy) = 10^10 x 2x10^33 grams = 2 x 10^43 grams, and R = 10,000 x 3 x 10^18 centimeters, you get about 0.999999976, which is utterly insignificant. Now the universe itself has a lot of mass, and the farther out into space we look, the more mass we would intercept along each line of sight. However, out to the nearest quasar, this amounts to only about a few thousand or so galaxies, very approximately. We can recast the above formula into a statement about the density of mass since M = (4/3) pi R^3 x density, at least in Euclidean space, which the universe is not. This means that:

L0/L = (1 - (8 pi G / (3 c^2)) x (R^2 x density))^(1/2)

For the universe, if we take a mean density of 10^-31 grams/cc and a typical distance of 10 billion light years or so, we get a gravitational red shift of:

L0/L = (1 - (6.1 x 10^-28) x ((10 x 10^9 x 10^18)^2 x 10^-31))^(1/2)

L0/L = (1 - 1.2 x 10^-30)^(1/2)

which is vastly smaller than the intrinsic gravitational red shift of the individual galaxies themselves.

How do astronomers know that a galaxy’s gravitational field does NOT cause the cosmological red shift?

Because a gravitational red shift of the magnitude seen in distant quasars would require that the masses of distant galaxies be vastly greater than typical nearby galaxies that otherwise look identical to the distant 'high red shift' ones. This leads to an implausible and inconsistent picture of the universe, in which galaxies can look the same, but have vastly different masses. Independent indicators of the dynamics of distant galaxies show that their masses are entirely in line with those of nearby galaxies, so there cannot be enough mass within distant galaxies to produce a gravitational red shift. This leaves the cosmological red shift as the only reasonable explanation that has survived repeated tests.

The presence of a primordial singularity is not unreasonable, and it would easily replace cosmological red shifts!

How could galaxies 90 degrees apart in the sky come from the same Big Bang when their ancient light, seen now, shows them billions of light years apart?  If we follow the history of the universe back in time, the scale of the universe decreases rapidly. By 300,000 years after the Big Bang, the separations between galaxies were only (300,000 years / 15 billion years)^(2/3), or 0.0007, of what they are now. The galaxies we see 90 degrees apart at a distance of, say, 14 billion light years were only about 14,000 x 0.0007 = 19 million light years apart. At a time 1 year after the Big Bang, they were only 19 million x (1/300,000)^(1/2) = 34,000 light years apart. And by 1 second, they were 6 light years apart. The light, and the light paths taken during this journey, have been stretched and gravitationally bent by a factor of 15 billion/6, nearly 3 billion times, during the journey to us. In terms of the Big Bang, the matter in these distant galaxies was actually closer together than the distance between your eyes and the TV screen you are now staring at, at one time in the remote past. Gravitational lensing has now given us the illusion that these galaxies have always been very far apart in the universe.
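The scale factors used in this argument follow from matter-era scaling (separations grow as t^(2/3)) and radiation-era scaling (as t^(1/2)); here is a quick check of the quoted 0.0007 factor, assuming the 15-billion-year age used above:

```python
# Matter-era scale factor at 300,000 years, relative to today (t0 = 15 Gyr)
t0 = 15.0e9        # assumed age of the universe, years (value used above)
t_rec = 300_000.0  # years after the Big Bang

scale = (t_rec / t0) ** (2.0 / 3.0)
print(f"scale factor at 300,000 yr: {scale:.4f}")  # ~0.0007

# Going back from 300,000 yr to 1 yr, radiation-era scaling contributes a
# further factor of (1 / 300,000)^(1/2):
extra = (1.0 / t_rec) ** 0.5
print(f"additional factor back to 1 yr: {extra:.2e}")  # ~1.8e-3
```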




At the Big Bang, why didn't the matter turn into a swarm of black holes instead of an expanding plasma?

David Morrison at Duke University, Andrew Strominger at UC Santa Barbara and Brian Greene at Cornell discovered in 1995 that this may have happened. What they did was to investigate superstring theory and what it predicts for the very early universe around the time of the so-called Planck Era, at 10^-43 seconds 'after' the Big Bang, when the densities were 10^94 grams/cc. They found that a spectacular 'phase transition' might have occurred in which, like water turning into ice as it cools, the strings changed into quantum black holes. This suggests that quantum black holes and elementary particles are really one and the same thing, as they smoothly change from one into the other. There may have existed a state of the universe in which matter was briefly an unimaginably dynamic swarm of quantum black holes. Now, these black holes would have contained about 10 micrograms of mass each, and would have evaporated in only 10^-43 seconds according to Stephen Hawking's process, so they were not a long-lived phase in the history of the universe. By the end of the Planck Era, they would all have evaporated into the precursors of the normal particles and fields in our universe. What is exciting about this discovery, mathematically, is that it brings together the description of the universe in terms of string theory and the concept of the quantum black hole, which for decades has been predicted by various back-of-the-envelope estimates of what quantum mechanics plus general relativity ought to include.

If billions of super massive black holes existed within the first billion years after the Big Bang, would this have altered the expansion of the universe?

It could have, but apparently it didn't. We can see this in the uniformity of the cosmic fireball radiation mapped by the NASA COBE satellite. If the expansion of the universe had been affected by super massive black holes, there would be some directions in which the universe was now expanding faster than in others. The background would be very anisotropic. If the dominant source of gravity were 1 billion super massive black holes, each with the mass of a single galaxy like the Milky Way, each of these would be about the size of the solar system today. If they were now spread out uniformly in space, their typical distances from each other would be about 15 billion light years / 1000, or 15 million light years. If we were to run the clock back to the time when the universe first became transparent to its own radiation, the red shift factor is about 3000, so the scale of the universe was about 3000 times smaller. This means that the typical distances between these black holes would be about 15 million/3000 = 5000 light years. At this time, some 500,000 years after the Big Bang, the visible universe would be about 500,000 light years in radius, and this means that in this volume of space there would be about 1 million super massive black holes. The dynamics of any test particle would have been strongly perturbed by such massive bodies, and the expansion of the universe would have been completely masked. Even a few million super massive black holes would have had their effect, so the isotropy of the background radiation is a powerful clue to just how many of these objects could have been present.
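The spacing arithmetic in this argument is simple to check; all input values below are taken from the passage above:

```python
# Spacing of 1 billion supermassive black holes spread through the universe
n_bh = 1.0e9            # number of black holes
R_now = 15.0e9          # present radius scale, light years
z_factor = 3000.0       # scale reduction at the era of transparency

spacing_now = R_now / n_bh ** (1.0 / 3.0)   # ~15 million light years
spacing_then = spacing_now / z_factor       # ~5,000 light years

# Number inside the visible universe (radius ~500,000 ly) at that era
horizon = 5.0e5
n_inside = (4.0 / 3.0) * 3.14159 * (horizon / spacing_then) ** 3
print(f"spacing then: {spacing_then:.0f} ly")
```

The count of holes within the horizon comes out at a few million, consistent with the text's order-of-magnitude figure of about a million.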

This is an interview with black hole researchers.

NARRATOR: What they discovered was a very simple relationship, a relationship between the galaxy we live in and the most destructive force in the Universe. A super massive black hole. It set the world of cosmology alight.
KARL GEBHARDT: Six months ago people were not that excited about super massive black holes. The general astronomer did not care that much about super massive black holes. Now they have to and now they'd better!
NARRATOR: The ultimate aim of cosmology is to understand how the Universe was formed. One of the most important questions is how galaxies were created, because without them we wouldn't exist.
DR ANDREW FRUCHTER (Space Telescope Science Institute): Galaxies contain almost all the stars we see in the Universe and maybe the places where all stars in the Universe will be created and stars are what produce oxygen, carbon, planets, everything you need for life and without life you don't get astronomers.
NARRATOR: We see our galaxy, the Milky Way, as a band of stars in the sky. In fact it's a giant rotating disc 200,000 light years wide. It contains over 200 billion stars like our own sun, circling slowly around the center, but we are just one in 125 billion galaxies of different shapes and sizes spinning through space. Yet scientists haven't been able to explain how a single one of these galaxies was created.
ANDREW FRUCHTER: Galaxy formation is a very complicated process. It involves gravity and it involves large balls of gas colliding, it involves the dynamics of stars, it involves the chemistry of the gas coming together.
NARRATOR: All we know is that when the Universe was young there were no stars or planets, just great swirling clouds of hydrogen gas. The mystery is how each of these clouds turned into the complex galaxies of stars we see today.
ANDREW FRUCHTER: We just don't know how they do it, how galaxies formed out of the, the ionized hot gas that filled the Universe is still physics that we do not really understand yet.
NARRATOR: Exactly how galaxies were created has troubled the world's leading astronomers and physicists for decades, but 6 months ago scientists found evidence for an extraordinary answer. The Nuker team is a group of world respected astronomers, but they're not galaxy experts. They are experts in the most violent and destructive forces known to science: super massive black holes. Until recently, super massive black holes were mere theory. These are giant black holes of apocalyptic proportion.
KARL GEBHARDT: Super massive black holes are a million to a billion times the mass of a typical black hole.
PROF. SANDRA FABER (Nuker Team): They could fill a whole solar system.
NARRATOR: A super massive black hole is quite simply gravity gone mad. An object of such concentrated matter its gravitational pull is insatiable. Nothing can escape it, not even light itself. Anything that gets close - gas, stars and entire solar systems - is sucked into oblivion. It even destroys the very fabric of the Universe. If you think of the Universe as a space-time web, the gravity of ordinary stars and planets creates a dent in this web, but the immense gravity of a super massive black hole is so destructive that it distorts space-time to breaking point. At the heart of a super massive black hole is one of the most mysterious things in physics - the singularity, a point where space, time and all known laws of physics fall apart.
SANDRA FABER: What happens at the center of the singularity is a complete mystery and solving it is going to require new physics that we just don't have right now. Some people think that you can fall through the singularity and pop out in another part of the Universe. The theories for the singularity are, some of them are very, very radical. We just don't know.
NARRATOR: Super massive black holes are so bizarre that until recently many scientists doubted they existed at all. They were an extreme idea, dreamt up to explain a very rare and distant type of galaxy: active galaxies. These are amongst the brightest objects in the Universe. These galaxies have a brilliant burning core with vast jets of energy spurting out of the center. This ferocious heart of brilliant hot gas is called a quasar. Scientists thought this whirling mass might be caused by a giant black hole sucking up gas and stars, literally feeding on the center of the galaxy.
PROF. JOHN KORMENDY (Nuker Team): The idea is that the quasars that we see that look so bright are not the black hole, the super massive black hole, they are the gas that's just about to fall into the super massive black hole, that's going around it, shining very brightly just before it disappears down the black hole.
NARRATOR: A giant black hole would have a gravitational pull so overwhelming it would hurl gas and stars around it at almost the speed of light. The violent clashing would heat the gas up to over a million degrees.
JOHN KORMENDY: The gas rubs against itself essentially and gets extremely hot and extremely hot gas shines very brightly.
NARRATOR: In reality, although a quasar burns brightly, it is actually impossible to see if there's a black hole in the middle. Paradoxically the black hole is made invisible by the fact that it swallows light. So for years no-one could be certain if super massive black holes really did exist at the heart of these strange active galaxies. The Nukers have spent the last two decades hunting for these elusive monsters. The first problem they faced was to prove that super massive black holes existed at all. What they were to discover would be stranger than most people could have imagined. One of the first of the Nukers to try to find one was Alan Dressler. In 1983 he came to the Palomar Telescope in California, convinced that he'd found a way to prove that super massive black holes exist.
DR. ALAN DRESSLER (Nuker Team): You can't see a black hole directly - that's what makes it a black hole - so what you're looking for is evidence of its gravity, you're looking at how it pulls on the stars that are coming nearby.
NARRATOR: Dressler knew that although a black hole is invisible, its immense gravity would hurl stars around it at over 500,000 kilometers an hour. By measuring how fast these stars were moving, he could prove if there really was a black hole at the center of an active galaxy.
ALAN DRESSLER: I picked a galaxy nearby which is called NGC1068, an active galaxy, which meant that it probably had a super massive black hole in it, at least that's what we wanted to prove.
NARRATOR: To be certain that the stars were moving unnaturally fast in NGC1068, Dressler wanted to compare them with stars in a normal galaxy, without a black hole. Stars circling around a weak center of gravity would move at half the speed. So for this comparison he chose the very ordinary galaxy next door to us, Andromeda, with a quiet, inactive center like our own. To measure the speed of the stars in these two very different galaxies, Dressler used an instrument called a spectroscope. This looks at the changing pattern of light coming from stars as they rotate around the galaxy core. The spectroscope shows the center of the galaxy as a white band and a dark vertical line traces the movement of stars around the core. If the stars at the galaxy's center are circling slowly then the dark band would show hardly any change, but if they're traveling at great speed, whizzing towards and away from us either side of a super massive black hole then the dark band should show a sudden shift across the center of the galaxy.
ALAN DRESSLER: I would expect to see some rather rapid change in this dark line so that there'd be a very big change in the speed from one side of the galaxy to the other, very suddenly, right over the center, and that would show that the stars were moving very rapidly in the center of the galaxy because of the influence of a great mass in the center, the super massive black hole.
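The measurement Dressler describes rests on the Doppler effect: light from stars moving toward or away from us is shifted in wavelength by roughly Δλ = λv/c for speeds well below the speed of light. A minimal sketch of the arithmetic (the 500 nm line and the helper function are illustrative, not from the transcript):

```python
# Non-relativistic Doppler shift: delta_lambda = lambda * v / c.
C_KM_S = 299_792.458   # speed of light, km/s

def doppler_shift_nm(rest_nm, v_km_s):
    """Wavelength shift of a spectral line for line-of-sight speed v (v << c)."""
    return rest_nm * v_km_s / C_KM_S

# A hypothetical 500 nm line from stars moving at 150 km/s,
# the speed Dressler went on to measure in Andromeda.
shift = doppler_shift_nm(500.0, 150.0)
print(f"shift ~ {shift:.2f} nm")
```

A shift of only a quarter of a nanometre is why a precision spectroscope is needed to trace the sudden "jog" in the dark line across the galaxy's center.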
NARRATOR: Over the next few nights Dressler measured the speed of the stars in NGC1068 and in Andromeda. When the results came down from the telescope he saw something that was completely unexpected. The picture from the active galaxy, where he hoped to find a black hole, was unreadable. NGC1068 was just too far away for the telescope to get a clear picture. The surprise came from Andromeda, the quiet, normal galaxy right next to us.
ALAN DRESSLER: I was astonished when I found what I was looking for, but not where I was looking for it. This jog in this dark band shows that on one side the stars were moving very rapidly away from us at 150 kilometers a second, which is 500,000 kilometers an hour.
NARRATOR: Dressler thought there could only be one thing that would cause the stars to move this fast: a super massive black hole and he wasn't alone. Fellow Nuker John Kormendy had found exactly the same thing.
JOHN KORMENDY: The moment I could see that wiggle I knew essentially instantly that there was a very good chance that this would be a super massive black hole. When you see something like that you know you're on to something.
NARRATOR: They'd found evidence of the most terrifying force in nature, but worryingly it wasn't in some far-off active galaxy. This super massive black hole was in the very ordinary galaxy right next door to us. Andromeda seemed to have a black hole but no bright quasar.
ALAN DRESSLER: If there was a super massive black hole why wasn't it shining? That suggested that there was not stuff falling in. Maybe lots of galaxies could have a dormant phase where they had a super massive black hole but they weren't being fed so they weren't shining.
NARRATOR: A few theorists had predicted this very thing: super massive black holes could exist in two states. When it's feeding a giant black hole creates a bright burning gas disk around it and then for some reason it stops feeding, leaving a dark, deadly core lurking menacingly in the center of the galaxy and one of these dark, silent monsters had been found in our neighboring galaxy. The discovery of a massive black hole lurking so close to us made headlines around the world, but many scientists found the news impossible to believe. They didn't think the evidence was good enough for such an extreme idea. Even the Nukers themselves began to have doubts.
JOHN KORMENDY: There is always the danger that instead of being a black hole, it's a dense cluster of something else that's dark, that's not a black hole.
PROF. DOUG RICHSTONE (Nuker Team): I thought there was a fair chance that we'd made some terrible bone-headed mistake and that somebody within a year was going to write a paper and show that we were a bunch of idiots and we would feel terrible about it.
NARRATOR: To convince the skeptics, they needed to find more super massive black holes in many more galaxies. For this they needed to look further into space. So they turned to Hubble Space Telescope. From 1994 Hubble began a systematic survey of the centers of distant galaxies searching for the telltale signature of stars speeding around a super massive black hole.
NARRATOR: Astronomers started by looking at an active galaxy, M87. As expected it had a giant feeding black hole shooting a great jet of energy into space. But it was when the search broadened out to include inactive galaxies as well, that something incredible happened. In every galaxy scientists looked at they found evidence for a super massive black hole.
KARL GEBHARDT: In total there's probably 20-30 or so black holes that have been found.
NARRATOR: Super massive black holes were supposed to be rare, but Hubble was finding them everywhere, both feeding in active galaxies and lurking quietly in ordinary galaxies.
SANDRA FABER: Pretty soon we got used to the idea that everything we would look at would have a black hole in it. You know, after the first three or four cases we were beginning to wonder: does every one have a black hole?
DOUG RICHSTONE: One by one we were seeing this picture sort of emerge out of the fog that, that every galaxy, or almost every galaxy, had a super massive black hole in it. It was really quite astonishing.
NARRATOR: Far from being rare freaks of nature, the Nukers began to suspect that all galaxies could have giant black holes at their hearts. If this was true it would revolutionize ideas of what a galaxy actually is. More disturbingly, it meant there could be a super massive black hole lurking at the heart of our very own galaxy, the Milky Way. Andrea Ghez has been coming to Hawaii for the last five years, trying to find out if there's a super massive black hole in the middle of the Milky Way.
PROF. ANDREA GHEZ (University of California, Los Angeles): When I first started thinking about astronomy it never occurred to me that there might be a super massive black hole at the center of our galaxy. The idea was that galaxies rotated just around the mass of the center, which was just stars and gas and dust and nothing particularly exotic.
NARRATOR: Andrea Ghez has been using a telescope even more powerful than Hubble - the Keck Telescope, perched 14,000ft up on the sacred mountain of Mauna Kea. The Keck telescope is the biggest optical telescope in the world. It has a vast mirror, 10 meters across, made up of 36 segments of highly polished aluminized glass.
ANDREA GHEZ: The Keck telescope is a fabulous telescope to use. It's great because it's large. This is a case where bigger is better. You get to collect a lot of photons, you can see very faint things and it allows you to see very fine details.
NARRATOR: Four times a year, Ghez focuses the telescope on the stars at the very heart of our Milky Way. She's looking for the telltale high speeds that reveal the presence of a black hole. The center of the Milky Way is so near and the Keck telescope so powerful Ghez is able to see closer into the center of the galaxy than anyone has ever done before.
ANDREA GHEZ: Here's an example of one of the images we got just last night. The seeing was, it was kind of a typical night, not the best night, not the worst night. Each one of these blobs here is a star and what you see is each star is distorted - that's what the atmosphere does. It's like looking through a pond, like you want to look at a penny at the bottom of a pond and the water's moving, it looks all distorted and it looks different every time you look, so this is one exposure and the next exposure looks like this.
NARRATOR: By superimposing thousands of these pictures taken overnight, the computer can compensate for the atmosphere's distortion producing a detailed picture of the center of the galaxy.
ANDREA GHEZ: You can see the position of the stars very accurately. If we go into the center here and rescale it, we actually see that there are fainter stars towards the center of our field of view and these stars are extremely important. It's the motion of these stars that reveal the presence of the black hole.
NARRATOR: Ghez has been following the motions of these stars for the last five years. If there was no black hole they'd be moving very slowly, but she's discovered they're circling at speeds of over 1,000 kilometers a second.
ANDREA GHEZ: These stars that we've been watching are 2 light weeks from the center of our galaxy, so their motion, the fact they are going 1,000 kilometers per second tells us that within 2 light weeks there's 2 million times the mass of the sun of matter there.
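Ghez's inference can be checked with the standard circular-orbit estimate M ≈ v²r/G, the mass needed to hold stars at speed v in orbit at radius r. A rough sketch (the constants and the helper function are ours, not from the transcript):

```python
# Order-of-magnitude check of the mass quoted above: stars orbiting at
# ~1,000 km/s at ~2 light-weeks from the Galactic center imply an
# enclosed mass of roughly M = v^2 * r / G.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the sun, kg
C_M_S = 2.998e8      # speed of light, m/s

def enclosed_mass_kg(v_m_s, r_m):
    """Mass needed to hold a circular orbit of speed v at radius r."""
    return v_m_s**2 * r_m / G

v = 1.0e6                      # 1,000 km/s in m/s
r = 2 * 7 * 86400 * C_M_S      # 2 light-weeks in metres

m_solar = enclosed_mass_kg(v, r) / M_SUN
print(f"enclosed mass ~ {m_solar:.1e} solar masses")  # a few million
```

Plugging in the quoted speed and radius gives a few million solar masses, consistent with the "2 million times the mass of the sun" figure above.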
NARRATOR: There's only one thing in the Universe this dense. Lying at the center of this necklace of spinning stars is a super massive black hole. You can't see it, but it's there. The most destructive force in the Universe is lurking at the heart of our very own galaxy, the Milky Way. The puzzle for cosmologists now is what effect it has on the galaxy around it. If, as it now seems, every single galaxy has a black hole at its heart, this can't be a coincidence. Perhaps black holes are an essential part of what galaxies are and how they work.
SANDRA FABER: Well now that we know they're in every galaxy, the question is what do they do? Are they fundamental, or are they just a frill? Do they really influence the life of galaxies, or is it the other way round - do galaxies influence them? That's what we're trying to find out.
NARRATOR: So the Nuker team set out to find out if there was any relationship between super massive black holes and the galaxies around them.
JOHN KORMENDY: When you're studying an object that you know almost nothing about, the first thing you want to do is find some regular pattern of behavior, because that sort of thing can teach you new science.
NARRATOR: One of the first things they noticed was a strange link between the size of the galaxy and the size of the black hole in the middle.
DR. JOHN MAGORRIAN (Nuker Team): We found that there's a relationship between the mass of the black hole and the mass of the surrounding host galaxy in the sense that small galaxies have small black holes of around a million solar masses and big galaxies have big black holes of around a billion solar masses.
NARRATOR: Every single black hole was almost exactly in proportion to the size of its galaxy. No matter how big or small, bizarrely the galaxy always had a black hole of about half a percent of its entire mass.
JOHN MAGORRIAN: This was surprising and immediately leads to questions: why?
NARRATOR: No-one had expected that black hole size and galaxy size could possibly be related. It suggested some mysterious invisible connection between a galaxy and its black hole, but what this connection was remained a mystery scientists would have to wait three years to solve.

NARRATOR: The first breakthrough came when a new instrument was added to the Hubble Space Telescope. This dramatically accelerated the discovery of new black holes, giving scientists a wealth of new potential leads to follow. For three years the data has been coming down to Earth. Amongst those who've been sifting through it are two young competing researchers. What they were to discover this year would turn the world of cosmology on its head.
LAURA FERRARESE: Every day I go to work I don't really know what's going to happen, but I can count that it's going to be something exciting every single day.
KARL GEBHARDT: These past six months have been phenomenal in terms of black hole research. We've been extremely excited, we're finding these black holes in, in, in numbers that we had never been able to do before.
NARRATOR: Karl Gebhardt and Laura Ferrarese were trying to find the fundamental connection linking black holes and their galaxies, so they searched through all the different galaxy characteristics looking for any new links that might give a clue. But it wasn't until they looked at a property called sigma that the mystery began to unravel.
LAURA FERRARESE: Sigma is just a very, very fancy name for something that's actually very simple.
NARRATOR: Sigma is the speed at which the stars are circling in the outer reaches of the galaxy. The stars at the edge of the galaxy are so far away from the black hole that they're completely unaffected by its gravity.
JOHN KORMENDY: Those stars don't feel the black hole, they feel the rest of the stars in the galaxy, they don't know or care that the black hole is there. If you took the black hole away from the galaxy they'd be moving at exactly the same speeds.
NARRATOR: This had led scientists to believe there couldn't possibly be any connection between the size of the black hole and the speed of the stars at the edge of the galaxy. They were about to be proved wrong. As the two researchers went through the new data, they first had to calculate the mass of each black hole. Then they found out the speed at which the stars were moving at the edge of the galaxy and plotted all these figures on a graph.
KARL GEBHARDT: As they came in I would take that new black hole mass and the sigma for that galaxy and add it to my plot.
NARRATOR: There should be no relationship between the two, yet as they added each new point marking the speed of the stars against the mass of the black hole, a clear pattern started to emerge. To their amazement the points lay in an obvious band across the graph. The properties were clearly related: the bigger the black hole, the faster the speed of the stars at the edge of the galaxy.
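The plotting exercise described here - black-hole mass against sigma in log-log space - can be sketched with synthetic data. The galaxies, the assumed M ∝ σ⁴ power law and the scatter below are purely illustrative, not the Nuker team's measurements:

```python
# Illustrative sketch: a tight mass-sigma correlation appears as a
# straight line in log-log space, whose slope is the power-law exponent.
import numpy as np

# Hypothetical galaxies: velocity dispersion sigma (km/s) and black-hole
# mass (solar masses), generated from an assumed M ∝ sigma^4 relation.
rng = np.random.default_rng(0)
sigma = np.array([80.0, 120.0, 160.0, 200.0, 250.0, 320.0])   # km/s
true_mass = 1.0e8 * (sigma / 200.0) ** 4                      # assumed relation
mass = true_mass * 10 ** rng.normal(0, 0.15, sigma.size)      # add scatter

# A straight-line fit in log-log space recovers the power-law slope.
slope, intercept = np.polyfit(np.log10(sigma), np.log10(mass), 1)
print(f"fitted slope ~ {slope:.1f}")   # close to the assumed exponent of 4
```

The tightness of the real relation, as Faber notes below, is what signals underlying physics worth looking for.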
LAURA FERRARESE: What we discovered is that the super massive black holes at the center of galaxies and the galaxies themselves are really very tightly intertwined.
NARRATOR: The stars on the edge of the galaxy have no physical connection with the black hole. Yet somehow their speed is tightly bound with the size of the black hole billions of miles away. If the two things aren't physically linked now, it means they must have been at some point in the past.
KARL GEBHARDT: The fact that we see there's such a tight relationship between the speed of the stars and the black hole in the middle is a probe to what happened early on in the galaxy.
JOHN KORMENDY: It screams at you something that you don't yet understand about the connection between galaxy formation and black hole formation.
NARRATOR: The relationship points at an extraordinary idea: that galaxies and their giant black holes could be linked from birth. In fact, scientists thought that super massive black holes might even be involved in the formation of the galaxies themselves.

SANDRA FABER: This correlation's the most important thing we've learned about super massive black holes so far. Astronomers are always looking for correlations. Whenever you find one that's really tight, like this one, it's a sign that there's some basic physics there that you need to look for.
NARRATOR: As it happens, the physics that might explain what was going on had been suggested years before: by theorists Martin Rees and Jo Silk. Jo Silk has spent much of his life trying to solve the mystery of galaxy formation. Three years ago it became clear that he'd been missing a vital ingredient. If there was a black hole in every galaxy, then scientists would need to explain what it was doing there.
PROF. JOSEPH SILK (University of Oxford): We had to rethink our ideas of how galaxies were made. To understand the first light of the Universe we really have to include the role of these super massive black holes in galaxy formation.
NARRATOR: All previous ideas of galaxy formation had assumed that gas in the early Universe simply condensed to form stars and galaxies. Silk and Rees came up with a completely different idea: they proposed that the centre of each early gas cloud could have collapsed to form a giant black hole. The black hole would immediately start feeding on the gas around it, creating a brilliant quasar. Silk realized that the energy from this newly formed quasar would create intense temperature changes in the surrounding gas. This would cause the gas around the black hole and its newly formed quasar to condense into stars, which means, in effect, that the black hole could have helped to trigger the birth of the galaxy.
JOSEPH SILK: We think of black holes normally as being destructive influences on their surroundings. In this case they're creative, they're having a very positive impact on the formation of the galaxies.
NARRATOR: But there was more. This theory predicted when and why the black hole would eventually stop feeding and go quiet. They calculated that this would happen when the feeding black hole grew so large that the vast amount of energy spewing from its bright quasar would literally force the rest of the galaxy out of its reach.
JOSEPH SILK: It has the effect of pushing a wind against the surrounding gas and driving the surrounding gas away like a snow plough.
NARRATOR: With only its hot whirling quasar within its reach, the black hole would swallow that up and then stop feeding. It would be left invisible at the center of the galaxy. Silk and Rees calculated that this moment when the black hole pushed the surrounding galaxy away, would depend, bizarrely, on how fast the stars in the outer galaxy were moving. The faster these stars were circling, the harder it would be to push them away and the bigger the black hole would need to grow to produce enough energy to overcome the motion of the circling stars. Which means the size of the black hole in the end depends on how fast the stars are moving in the newly formed galaxy around it.
JOSEPH SILK: If our theory is correct there should be a simple relation between the mass of the central black hole and the speed or the sigma of the stars in the newly formed surrounding galaxy.
NARRATOR: And this is exactly what has just been found. It means that Silk and Rees's theory may be right and if it is also right that super massive black holes helped trigger star formation, then it must mean that all giant black holes and their galaxies are connected from birth. It means the answer to the mystery of galaxy formation may lie in the creation of the super massive black holes at their heart.

LAURA FERRARESE: The real implication of the relation is that whatever controlled the formation of the galaxy and whatever controlled the formation of the super massive black hole is basically the same thing, there is only one thing behind everything.
NARRATOR: The discovery has caused a sensation. Suddenly super massive black holes are big news in cosmology.
KARL GEBHARDT: I'm very excited to talk to you about some of the results that I've been working on these past couple of months.
SCIENTIST: We believe that most, and perhaps all, galaxies contain super massive black holes at the…
LAURA FERRARESE: Really till five years ago they were just considered oddities, you know, very exotic curiosities and certainly fascinating but of really no consequence and now we know that is not true. Super massive black holes are really the fundamental constituent of galaxies and they have to be taken into account.
NARRATOR: Together theory and observation are leading scientists to a new view of galaxy formation. It's still just a theory and there are many details to be worked out, but if it's true, then it all would have begun like this. In the early Universe, a time of formless gas, each swirling gas cloud would have become a galaxy with one crucial event: the creation of a voracious feeding black hole. The black hole would have immediately started churning its way through the gas cloud. This would have triggered giant bursts of star formation and the galaxy would have sparked into life. Eventually the black hole and its quasar would have pushed the rest of the galaxy away. After sucking up its quasar and with nothing left to feed on, the black hole would have been left dark and silent at the heart of the galaxy. So a super massive black hole, a force of terrible destruction, could also be fundamental in the creation of our galaxy. Nevertheless, its latent destructive power should not be underestimated. Back in Hawaii Andrea Ghez has made a new discovery. She's discovered a new source of light in the center of our galaxy. The black hole may be starting to feed again.
ANDREA GHEZ: All of a sudden we saw something that looks like a star, but maybe isn't a star, but it's definitely a new object in our map and the interesting thing is that it's located where we think the black hole is.
NARRATOR: Ghez thinks this spot of light could be something amazing.
ANDREA GHEZ: One idea that I'm particularly intrigued by at the moment is the idea that perhaps the black hole is feeding more right now.
NARRATOR: A quiet super massive black hole can start feeding again at any time. The light could be coming from hot gas as it's sucked into the black hole. If this light was just another star, it would be circling with the others. If it's the black hole itself, then it should stay still. So to see whether it was moving Andrea took two pictures of it, one in May and one in July.
ANDREA GHEZ: If we flash back and forth between the two, you can see the new source. You can also see the other stars that we said are high velocity stars are moving.
NARRATOR: Although the stars around it move, the new source stays still. This suggests the light is coming from the very center of the galaxy, the super massive black hole itself. Andrea thinks that the light she sees is coming from hot gas being sucked into the vortex of the black hole. So if our black hole has started feeding again, could this affect the Earth even though we're 24,000 light years away?
ANDREA GHEZ: We're in absolutely no danger of being eaten by the super massive black hole and in fact if we do think the black hole is going through a slightly larger feeding at the moment, it's tiny, it's tiny compared to what other galaxies are doing, so in fact this is still a very quiet black hole. In spite of the fact that there might be new emission from it, it's still extremely low.
NARRATOR: Our black hole is merely having the equivalent of a small snack, feeding on a wisp of gas that's strayed too close. The black hole stopped growing billions of years ago. Only a major catastrophe could make it fire up again, something violent enough to hurl stars from the safety of our galaxy's edge into its deadly heart and we now know that one day this catastrophe could happen. In January this year, John Dubinski set out to calculate the final fate of our galaxy, the Milky Way, and that of our nearest neighbor, Andromeda.




What are cosmic strings and superstrings?

Superstrings are what physicists call the vibrating loops of energy that are thought to make up each and every particle: photons, quarks, electrons, neutrinos and so on. They have sizes some 100 billion billion times smaller than the diameter of an atomic nucleus. They are the essential ingredients to the proposed unification scheme called String Theory which some physicists have enthusiastically dubbed the Theory of Everything.

Cosmic strings are also theoretical entities predicted by some versions of Inflationary Cosmology. They were produced when the universe was less than a trillionth of a trillionth of a second old. They are the result of the universe undergoing a 'phase transition' that produced 1-dimensional defects in what you might think of as the fabric of space-time. These defects may be millions of light years long but less than the diameter of an atom in thickness. Like spaghetti, they are jumbled together in intergalactic space, and some cosmologists have proposed that they may be responsible for the string-like structure of galaxy clusters. There are other less exotic explanations for these patterns of galaxy clustering, and currently cosmic string theory is on its way out of mainstream thinking about the large scale structure of the universe, especially after the results of the NASA/COBE investigation. This showed that the universe is a lot smoother than cosmic string theory would have predicted.



In the May 10, 1995 issue of The Astrophysical Journal volume 444, page 507, astronomers Michael Strauss at Princeton University and his colleagues Renyue Cen, Jeremiah Ostriker, Tod Lauer and Marc Postman analyzed some of the earlier data by Lauer and Postman and concluded that many 'standard' cosmological models failed to account for the so-called 'bulk flow' seen by examining the motions of very distant Abell Clusters.

Lauer and Postman had discovered that if you examined the motions of all the Abell galaxy clusters with red shifts less than 15,000 kilometers per second, corresponding to distances of 300 megaparsecs (H = 50 km/sec/Mpc) or 150 megaparsecs (h = 100 km/sec/Mpc), they appear to be moving with a speed relative to the cosmic microwave background of 689 km/sec. They tried to reproduce this anomalous 'bulk motion' by computer simulations of what the distant universe ought to look like based on 6 different, popular cosmological models. They discovered that bulk motions as large as the ones found in the Abell Clusters are not too hard to reproduce, with a mean value of 400 km/sec and a rather broad 'Maxwellian' spread around this average value. The problem they encountered was that the detailed structure of how these peculiar velocities are distributed on the sky could not be matched by any of these simulations to a statistical confidence of 95 percent. In other words, if you plotted the red shifts of the 119 Abell Clusters on the sky, you would end up with a pattern of red shifts that could not be reproduced, although the magnitude of this pattern (689 km/sec) could be reproduced.
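The distances quoted for the red-shift cut follow directly from Hubble's law, d = cz/H₀, under the two values of the Hubble constant given. A minimal sketch (the helper function name is ours):

```python
# Converting a recession velocity cz (km/s) to a distance (Mpc) via
# Hubble's law: d = cz / H0.
def hubble_distance_mpc(cz_km_s, h0_km_s_mpc):
    """Distance in megaparsecs from recession velocity and Hubble constant."""
    return cz_km_s / h0_km_s_mpc

# The 15,000 km/s cut used by Lauer and Postman, under both conventions:
for h0 in (50, 100):
    print(f"H0 = {h0:3d} km/sec/Mpc -> {hubble_distance_mpc(15000, h0):.0f} Mpc")
```

This reproduces the 300 and 150 megaparsec figures quoted in the text.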

They assumed that structure formed in the expanding universe by gravitational instability from some initially small-amplitude perturbations in the expanding gas. The present velocity field projected on the sky is then just determined by its present amplitude ( 689 km/sec) and what is called the 'Initial Power Spectrum' of the initial density field. This measures how much structure there is present in the universe at each length scale. Each cosmological model predicts what this Power Spectrum should look like, so it is a relatively easy matter to work out what the present irregularities in velocity should look like as a result of the gravitational influences of these irregularities in matter density. All of the models they considered assumed that the initial conditions were 'Gaussian' and in some cases that the density fluctuations were 'adiabatic', so the models they considered were specific versions of Big Bang cosmology. There are other versions which make different predictions.

Their results are very impressive, and it is exciting that they can use their velocity data at these scales to examine whether any of the popular versions of Big Bang cosmology are viable. The caveat is that the NASA COBE/DMR data showed that the large-scale structure in the universe may not have been generated exclusively by gravitational perturbations. They found traces of clumping in the cosmic background radiation which were millions of light years across even at a time when the universe was only 300,000 years old. Clumping this large cannot be generated by any influence that travels at the speed of light, such as gravity. Only Inflationary Big Bang cosmology, or preordained structure introduced by God's intervention, provides a plausible answer to the structure seen by COBE. Today, these structures would be 1,000 times larger, but COBE was not sensitive to smaller features, so it is entirely reasonable that this non-gravitational clumping extends all the way down to the scale of the Abell Cluster system measured by Lauer and Postman. If that is the case, the velocity irregularities they see are not strictly generated by gravitational influences, as their computations require.

Their results are very important for cosmology, but it is too soon to tell whether they are in severe conflict with Big Bang cosmology. We have explored less than 1 percent of the volume of the visible universe, and it is hard to eliminate a cosmological theory based on such limited data, especially one which provides the only plausible answer to many other equally significant questions.



[Figure: MAP, 2002]



Yet every conceivable anti-creation cosmology has been proposed, while the idea of pre-creation design is entirely ignored, even when the evidence points in its direction.

The big crunch is a future event prophesied in the Bible, in the book of Isaiah:

Isaiah 34:4 " And all the host of heaven shall be dissolved, and the heavens shall be rolled together as a scroll: and all their host shall fall down, as the leaf falleth off from the vine, and as a falling fig from the fig tree."

If there is any truth to our proposal concerning past events, then future events of prophecy found in the Bible should also line up with the evidence. In the book of Matthew, chapter 24, Jesus prophesies a series of events:

Mat 24:27 For as the lightning cometh out of the east, and shineth even unto the west; so shall also the coming of the Son of man be.

Here the Son of man appears at the speed of light, which could be the result of operating near an event horizon.

Mat 24:29 Immediately after the tribulation of those days shall the sun be darkened, and the moon shall not give her light, and the stars shall fall from heaven, and the powers of the heavens shall be shaken:

This series of events is exactly what would happen as we passed through an event horizon: first the sun would be swallowed up, then the light would stop illuminating the moon, and next the stars would all appear to be falling toward earth. Lastly, the intense gravity would start to elongate us and the space around us.

Mat 24:30 And then shall appear the sign of the Son of man in heaven: and then shall all the tribes of the earth mourn, and they shall see the Son of man coming in the clouds of heaven with power and great glory.

Looking up into the sky, one would see an intense beam of light descending straight down from the sky, with light streaming into it from the east and west, forming a cross shape in the sky. I can guarantee there will be a loud noise at this time, as is prophesied.

Mat 24:31 And he shall send his angels with a great sound of a trumpet, and they shall gather together his elect from the four winds, from one end of heaven to the other.

Mat 24:33 So likewise ye, when ye shall see all these things, know that it is near, even at the doors.

Mat 24:34 Verily I say unto you, This generation shall not pass, till all these things be fulfilled.

Mat 24:35 Heaven and earth shall pass away, but my words shall not pass away.

Mat 24:36 But of that day and hour knoweth no man, no, not the angels of heaven, but my Father only.


We have already addressed the big crunch, so let us look at signs in the heavens; one of the most ominous is gamma-ray bursts.

Three times a day our sky flashes with a powerful pulse of gamma rays, invisible to human eyes but not to astronomers' instruments. The sources of this intense radiation are likely to be emitting, within the span of seconds or minutes, more energy than the sun will in its entire 10 billion years of life. Where these bursts originate, and how they come to have such incredible energies, is a mystery that scientists have been attacking for three decades. The phenomenon has resisted study -- the flashes come from random directions in space and vanish without a trace.




MARTIN REES: The amount of power you can get from a star is limited, according to Einstein's famous formula E=mc² and if you know m, the mass, which you do know for stars, then we know the maximum amount of energy which you could get by any conceivable process.
NARRATOR: Once they knew there was a finite size to these explosions they could then work out how far away they were. When they plugged in the numbers they realized that these explosions must be happening in our very own Galaxy. Any further away and E=mc² would be broken. The explosions would be bigger than was physically possible for any star to produce and so they scoured the Galaxy to find out what kind of star could be causing these bursts of gamma rays and before long they thought they'd found the culprit. Neutron stars are amongst the most powerful objects in our Galaxy. They are so dense that they have a gravitational pull of such strength that if anything strays too close it is dragged onto the star with extreme force.
CHIP MEEGAN (NASA, Marshall Space Flight Centre): A neutron star is typically just a few miles across and will have a mass as great as the Sun, so the densities are just enormous. If you dropped a marshmallow onto a neutron star it would have the energy of an atomic bomb because the gravity is so powerful.
NARRATOR: Neutron stars seem to contain enough energy to produce these gamma ray bursts. The only question was: what was actually triggering them?
CHIP MEEGAN: There were a number of ideas relating to neutron stars specifically. The idea was you dropped something onto the neutron star and it releases a lot of energy. One idea was an asteroid falling on a neutron star.
NARRATOR: It soon became the accepted theory that neutron stars fired off these bursts of gamma rays if something collided with them. The mystery seemed to be solved. Now they had the answer everyone began to speculate about the possible impact of these bursts on Earth. It began to dawn on them that if these explosions were coming from our own Galaxy in effect they were occurring right next door to us.

CHIP MEEGAN: If a burst did go off in our own Galaxy it would be quite spectacular, it would be extremely bright anywhere in the Galaxy and if we were close enough I suppose it could do quite a bit of damage. Some people have hypothesized that major extinctions are the result of gamma ray bursts in our own Galaxy.
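The E = mc² ceiling Rees describes in the transcript can be put into rough numbers. A minimal sketch, assuming standard CGS constants (the function name and the one-solar-mass example are illustrative, not from the transcript):

```python
# Rough sketch of the E = m*c^2 energy ceiling described above: whatever
# the process, a star of mass m can never release more than m*c^2 of
# energy. Constants below are standard CGS values.

C = 2.998e10      # speed of light, cm/s
M_SUN = 1.989e33  # one solar mass, g

def max_energy_ergs(mass_in_suns: float) -> float:
    """Absolute upper bound (ergs) on energy extractable from a star."""
    return mass_in_suns * M_SUN * C ** 2

# For a one-solar-mass star the ceiling is about 1.8e54 ergs; a burst
# implying more energy than this from a stellar source would "break"
# E = m*c^2, which is the distance argument made in the transcript.
ceiling = max_energy_ergs(1.0)
```

This is the same reasoning the narrator summarizes: fixing the maximum energy fixes the maximum distance at which a stellar source could produce the observed brightness.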

Author: Ron Cowen. Issue: July 10, 1999. "Brightest lights may herald the birth of black holes."

For a few brief shining moments, a gamma-ray burst radiates more light than anything else in the universe does. Then the light vanishes, never to be seen again. For 3 decades, astronomers have puzzled over the nature of these fleeting powerhouses. A growing body of evidence now suggests that the brightest events in the cosmos signify the birth of its very darkest inhabitants. Gamma-ray bursts may herald the formation of the dense, collapsed stars known as black holes. "After 30 years, the mystery of gamma-ray bursts has been partially resolved," says Tsvi Piran of Hebrew University in Jerusalem and New York University. He described recent progress at a cosmology meeting last May at the Fermi National Accelerator Laboratory in Batavia, Ill.

A supermassive black hole--the remains of millions to billions of stars at the center of a galaxy--can produce an extensive assortment of fireworks. Astronomers similarly suspect that stellar black holes--bodies formed from a single star--may produce a wide assortment of gamma-ray bursts lasting from one-hundredth of a second to several minutes. Material falling onto a stellar black hole accelerates rapidly and releases a huge amount of energy. According to models that astronomers have been tinkering with for years, such energy is more than enough to power a gamma-ray burst.

Recent observations have made such theories more compelling. It takes a massive star to make a stellar black hole. Over the past 2 years, astronomers have seen several gamma-ray bursts in places where such behemoths, several times the mass of the sun, are likely to be common. Massive stars never stray far from their birthplace during their brief lives and typically die a catastrophic death. 
By pinpointing the location of the long-lived bursts--those that last 5 seconds or more--the X-ray satellite BeppoSAX has for the first time enabled other telescopes to home in on the embers of these fiery flashes. Lasting from hours to days, these afterglows reveal that the bursts come from distant galaxies, many of them with the bluish tinge and dusty appearance typical of regions that churn out young stars, including massive ones, at an enormous rate.

In just a few million years, a massive star exhausts its nuclear fuel, losing the fight against its own gravity. If the star ranges in mass between 8 and 30 times that of the sun, gravity abruptly crunches its core into a dense cinder known as a neutron star. At the same time, a shock wave hurls the dying star's outer layers into space. Astronomers call that explosion a supernova. Extremely massive stars, however, lack the oomph to launch a supernova shock wave. There's simply too much matter falling onto the core to allow the wave to form. Instead, gravity continues crushing the condensed core until not even light can escape its grasp. A black hole is born.

In a model developed by Stan Woosley of the University of California, Santa Cruz and his colleagues, this failed supernova, or collapsar, produces a gamma-ray burst that lasts about 20 seconds. Woosley begins with a doomed star, 35 times as massive as the sun, that loses its outer envelope of hydrogen gas. The envelope, about two-thirds of the star's original mass, may be blown off by fierce winds from the star itself or snared by the gravity of a nearby companion. According to the model, gravity promptly collapses the core to a rapidly spinning black hole, and overlying material begins to rain down upon it. Because the star had been rotating, the infalling gases do not crash directly onto the hole but form a disk around its equator. 
Within this superhot, highly magnetized disk of gas, nuclear reactions generate a rush of subatomic particles called neutrinos. In a matter of seconds, some of the energy either stored in the disk's magnetic field or carried by the neutrinos creates jets of electrons, neutrons, and protons that shoot out from the poles. Over the next 10 seconds, the jets, which carry as much energy as would be radiated by a billion billion suns, burrow through the remains of the star and blow it apart. Having failed to generate an ordinary supernova, this sequence of events now produces an explosion with more than 10 times the punch. This souped-up version of a supernova is what Woosley and Bohdan Paczynski of Princeton University variously call a collapsar or hypernova.

Each jet breaks free of the surface and travels at nearly the speed of light for a distance comparable to 100 times the length of the solar system. At this point in their journey, the jets encounter interstellar material. As clumps of electrons in a jet collide with each other or slam into the material, they emit a high-energy flash of radiation--the gamma-ray burst. Piran, a theoretical astronomer, has been independently studying gamma-ray bursts. Although he subscribes to the outlines of Woosley's scenario, he suggests that the burst comes entirely from collisions between clumps of particles within each jet. Piran and Woosley agree that the visible-light afterglow results from collisions between a jet and interstellar material that occur farther down the road, after the jet has spread out and slowed.

The gamma-ray bursts described by the model last for about 20 seconds, says Woosley. Some bursts, however, last for several minutes. For these longer flashes, Woosley and Andrew MacFadyen, also of Santa Cruz, invoke a variation on the black-hole theme. Woosley described the work at a May workshop on gamma-ray bursts and supernovas at the Space Telescope Science Institute in Baltimore. 
It's another case in which failure leads to success. In their latest simulations, Woosley and MacFadyen begin with another heavyweight, a star about 25 times as massive as the sun, that has also lost its outer envelope. At the end of this heavyweight's life, its core collapses to become a neutron star, and an outgoing shock wave plows into the outer layers of the star. For this star, however, the supernova shock is a tad too wimpy. It lacks the punch to blow the entire star to smithereens. Instead of completely exploding, after about 200 seconds some of the material, the equivalent of five suns, comes crashing down on the core. Material shot out farthest by the supernova shock takes several minutes to fall back. The weight quickly becomes more than the neutron star can bear, and it turns into a black hole.

As before, the infalling gas forms a disk around the newly minted black hole. This time, however, the matter spirals more gradually into the hole--and the energy from the neutrinos is not enough to make jets. Instead, says Woosley, magnetic fields associated with the disk may do the trick. Wound up by the spinning black hole, the magnetic field snaps like a rubber band. Some fraction of that torrent of energy generates the jet, Woosley proposes. Because the matter continues to fall onto the black hole for several minutes, this process "creates a powerful jet that can stay on for a long time--hundreds of seconds," he says.

Overtaking the shock wave from the supernova, the jets spread out and send their own shock waves hurtling through the star's outer layers, ensuring that the star will explode. These longer-lasting jets may account for gamma-ray bursts that sustain their fireworks for several minutes, Woosley says. Such models may seem contrived, but recent observations have boosted their credibility. 
For nearly a year, a group of astronomers from the California Institute of Technology in Pasadena studied the region of the sky where BeppoSAX detected a gamma-ray burst on March 26, 1998. After 3 weeks, the visible-light afterglow had faded, and Joshua S. Bloom and his colleagues figured that the faint light remaining was background light from the burst's home galaxy. But when they looked again 9 months later, the light was gone. A galaxy cannot simply wink out of existence, but a supernova can. Bloom and his colleagues propose that the light they saw had been coming from a supernova, which only became visible after the burst's glowing embers had died. Both the color of the light and its intensity several months after the burst suggest that the supernova--and the gamma-ray burst--came from a galaxy about 7 billion light-years from Earth. Unlike gamma-ray bursts, supernovas much farther away aren't bright enough to be seen. This could explain why astronomers have been able to link a supernova or hypernova explosion with only a few of the bursts they have observed.

To explain the connection between another supernova, dubbed 1998bw, and a gamma-ray burst detected on April 25, 1998, Woosley invokes yet another variation of the black hole-hypernova model. In this case, the jet is either poorly focused or loses most of its energy in exploding the star it travels through. Consequently, the gamma-ray burst it produces is an unusually weak one. This could explain why the April 25 burst was considerably fainter than several of the other gamma-ray bursts that telescopes have recorded. Astronomers caution, however, that although supernova 1998bw lies within the same general region as the burst and exploded about the same time, its position may not be identical to that of the burst.

With all this talk of jets comes an intriguing possibility. Each day, NASA's Compton Gamma Ray Observatory detects, on average, one gamma-ray burst. 
If the bursts are in fact produced by highly focused jets, we may be seeing only those shot toward Earth. "We only see a burst when we're looking straight down the jet," says Woosley. "A lot of gamma-ray bursts go unobserved because the gun isn't pointed at us." The actual rate at which bursts occur could be 100 times higher, he calculates. By the same token, if astronomers are seeing focused gamma rays, then each burst would have much less total energy than is often estimated. In the past, they have assumed that a gamma-ray burst radiates energy equally in all directions and that the amount recorded near Earth represents but a tiny fraction of the total energy.

If the new models are correct, afterglows should be visible much more often than the bursts themselves. That's because the jets widen as they slow down and create the afterglow. Indeed, says Piran, finding "orphan afterglows"--fading optical and radio-wavelength embers with no obvious parent gamma-ray burst in sight--would help to verify that jets play the key role that astronomers suggest.

Not everyone is enamored with the models. Although Paczynski invented one version of the hypernova theory, he says he's not convinced that it or any other burst model is the right one. Finding and studying afterglows "was a real breakthrough, but I don't take seriously any of the models of the gamma-ray bursts themselves," he says. "I don't want to argue about the details because really we don't understand [the theories]." Bloom's work linking a supernova to a gamma-ray burst is "likely to be correct," says Paczynski, but with only a handful of observations, "we don't have a pattern yet." Piran takes a more optimistic view. "It's very easy to say we don't know," he says, "but we've come a long way." Paczynski and Piran agree that a deeper understanding of gamma-ray bursts is on the horizon. 
Early next year, NASA will launch a spacecraft that promises to locate the position of gamma-ray bursts--both long and short ones--to unprecedented accuracy. It comes equipped with an X-ray telescope to pinpoint the bursts with 50 times the accuracy of BeppoSAX. Moreover, the craft, known as the High Energy Transient Experiment-2 (HETE-2), will relay that information to ground-based telescopes within seconds, so they can focus on the bursts before they vanish.

Several theorists have proposed that the very short bursts, those that last less than a few seconds, involve pairs of massive stars. Woosley notes that it's difficult for a black hole formed by the collapse of a single star to generate such short bursts. Instead, pairs of neutron stars that merge and make a black hole could generate these flashes. Such pairs tend to have migrated away from star-forming regions. With HETE-2, "we shall see the host galaxies [of bursts] in much more detail," says Paczynski, and then determine whether the bursts observed are actually in the star-forming regions. "The diversity of bursts from all over the sky will enormously improve the statistics of the afterglow, and you will catch more optical flashes early on," he says. "I cannot tell ahead of time what we will learn from that, but certainly it will provide many, many hints about what is going on."
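The beaming correction quoted in the article above (a true rate perhaps 100 times the observed one) follows from simple solid-angle geometry. A hedged sketch, with the jet half-angle chosen as an illustrative assumption:

```python
import math

# If each burst fires two opposite jets of half-opening angle theta, the
# fraction of randomly oriented bursts whose jet happens to point at us
# equals the fraction of the sky the two cones cover:
#   f = 2 * [2*pi*(1 - cos(theta))] / (4*pi) = 1 - cos(theta)

def visible_fraction(theta_deg: float) -> float:
    """Sky fraction covered by two opposite jets of half-angle theta."""
    return 1.0 - math.cos(math.radians(theta_deg))

# An assumed half-angle of about 8 degrees gives f ~ 0.01, so the true
# burst rate would be roughly 1/f ~ 100 times the observed rate, the
# factor Woosley quotes. Per-burst energy estimates shrink by the same f.
correction = 1.0 / visible_fraction(8.1)
```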






Are gamma ray bursts produced by neutron stars colliding with black holes?


We still do not know for certain what causes these gamma-ray bursts, but astronomers are learning more about them each year as the evidence continues to mount. As of the final end-of-mission count, the BATSE instrument aboard NASA's Compton Gamma Ray Observatory had seen 2,704 of them from all over the sky.

There are still two camps of astronomers, who describe the bursts either as local phenomena related to some unknown class of objects in the halo of the Milky Way galaxy, or as 'cosmological' phenomena produced by objects over a billion light-years from the earth. Because a number of these bursts have now been tracked to distant galaxies, it is becoming clearer that cosmologically very distant galaxies produce them. I haven't heard any other 'local' arguments, so I think that entire discussion has now ended.

The bursts can be grouped into bright and faint ones, and the distribution of the energy from the pulses shows that there is a distinct shift between the two groups, of the kind you would expect if the faint group were statistically farther away in the universe than the bright ones. Both groups are at billions of light-years' distance. Because the bursts last less than a second, one popular mechanism involves the collision of two neutron stars in a binary system. The binary system shrinks in size due to the loss of gravitational energy until the two stars come within their mutual tidal influences and literally tear each other apart. The only problem is that this is believed to be a messy process that would last longer than a few minutes -- a typical orbital time -- and leave behind lots of hot matter capable of being observed via its X-ray signature. Gamma-ray bursts, however, leave behind very little X-ray energy after the burst, so this doesn't seem to add up. Neutron stars can also collide with black holes, and this could also produce a brilliant pulse of energy.

At the present time, enough bursts have been detected to all but rule out the possibility that they originate in the vast halo of our Milky Way, and to strengthen the argument that they are 'cosmological'. One possibility is that they are produced in very distant galaxies by the black hole - neutron star collision scenario. Because there isn't much of a range in brightness for these pulses, they are believed to come from a rather limited range of distances, similar to a large sphere of space with a rather distinct outer boundary. It is very peculiar, to say the least!



Gamma-Ray Bursts

On February 28, 1997, we were lucky. One such burst hit the Italian-Dutch Beppo-SAX satellite for about 80 seconds. Its gamma-ray monitor established the position of the burst--prosaically labeled GRB 970228--to within a few arc minutes in the Orion constellation, about halfway between the stars Alpha Tauri and Gamma Orionis. Within eight hours, operators in Rome had turned the spacecraft around to look in the same region with an x-ray telescope. They found a source of x-rays (radiation of somewhat lower frequency than gamma rays) that was fading fast, and they fixed its location to within an arc minute.

Never before has a burst been pinpointed so accurately and so quickly, allowing powerful optical telescopes, which have narrow fields of view of a few arc minutes, to look for it. Astronomers on the Canary Islands, part of an international team led by Jan van Paradijs of the University of Amsterdam and the University of Alabama in Huntsville, learned of the finding by electronic mail. They had some time available on the 4.2-meter William Herschel Telescope, which they had been using to look for other bursts. They took a picture of the area 21 hours after GRB 970228. Eight days later they looked again and found that a spot of light seen in the earlier photograph had disappeared. There is more. On March 13 the New Technology Telescope in La Silla, Chile, took a long, close look at those coordinates and discerned a diffuse, uneven glow. The Hubble Space Telescope later resolved it to be a bright point surrounded by a somewhat elongated background object. Many of us believe the latter to be a galaxy, but its true identity remains unknown as of this writing. If indeed a galaxy--as current theories would have--it must be very far away, near the outer reaches of the observable universe. In that case, gamma-ray bursts must represent the most powerful explosions in the universe.


Confounding Expectations


For those of us studying gamma-ray bursts, this discovery salves two recent wounds. In November 1996 the High Energy Transient Explorer (HETE) spacecraft, equipped with very accurate instruments for locating gamma-ray bursts, failed to separate from its launch rocket. And in December the Russian Mars '96 spacecraft, with several gamma-ray detectors, fell into the Pacific Ocean after a rocket malfunction. These payloads were part of a carefully designed set for launching an attack on the origins of gamma-ray bursts. Of the newer satellites equipped with gamma-ray instruments, only Beppo-SAX--whose principal scientists include Luigi Piro, Enrico Costa and John Heise--made it into space, on April 30, 1996.

Gamma-ray bursts were first discovered by accident, in the late 1960s, by the Vela series of spacecraft of the U.S. Department of Defense. These satellites were designed to ferret out the U.S.S.R.'s clandestine nuclear detonations in outer space--perhaps hidden behind the moon. Instead they came across spasms of radiation that did not originate from near the earth. In 1973 scientists concluded that a new astronomical phenomenon had been discovered. These initial observations resulted in a flurry of speculation about the origins of gamma-ray bursts--involving black holes, supernovae or the dense, dark star remnants called neutron stars.

There were, and still are, some critical unknowns. No one knew whether the bursts were coming from a mere 100 light-years away or a few billion. As a result, the energy of the original events could only be guessed at. By the mid-1980s the consensus was that the bursts originated on nearby neutron stars in our galaxy. In particular, theorists were intrigued by dark lines in the spectra (component wavelengths spread out, as light is by a prism) of some bursts, which suggested the presence of intense magnetic fields. Electrons accelerated to relativistic speeds when magnetic-field lines from a neutron star reconnect, they postulated, emit the gamma rays. A similar phenomenon on the sun--but at far lower energies--leads to flares.

In April 1991 the space shuttle Atlantis launched the Compton Gamma Ray Observatory, a satellite that carried the Burst and Transient Source Experiment (BATSE). Within a year BATSE had confounded all expectations. The distribution of gamma-ray bursts did not trace out the Milky Way, nor were the bursts associated with nearby galaxies or clusters of galaxies. Instead they were distributed isotropically, with any direction in the sky having roughly the same number. Theorists soon refined the galactic model: the bursts were now said to come from neutron stars in an extended spherical halo surrounding the galaxy.

One problem with this scenario is that the earth lies in the suburbs of the Milky Way, about 30,000 light-years from the core. For us to find ourselves near the center of a galactic halo, the latter must be truly enormous, almost 600,000 light-years in outer radius. If so, the halo of the neighboring Andromeda galaxy should be as extended and should start to appear in the distribution of gamma-ray bursts. But it does not. (Special models in which the neutron stars beam in the same direction as their motion can, however, overcome this objection.)

This uniformity has convinced most astrophysicists that the bursts come from cosmological distances, on the order of three billion to 10 billion light-years away. At such a distance, though, the bursts should show the effects of the expansion of the universe. Galaxies that are very distant are moving away from the earth at great speeds; we know this because the light they emit shifts to lower, or redder, frequencies. Likewise, gamma-ray bursts should also show a "red shift," as well as an increase in duration.
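The expected stretching can be made concrete: expansion multiplies both a burst's duration and the wavelength of its radiation by the same factor (1 + z). A minimal sketch, with the redshift and durations as assumed example values:

```python
# Cosmological expansion stretches both a burst's duration and the
# wavelength of its photons by the factor (1 + z), where z is the
# redshift of the source.

def observed_duration(rest_seconds: float, z: float) -> float:
    """Burst duration as seen on Earth, given its rest-frame duration."""
    return rest_seconds * (1.0 + z)

def observed_wavelength(rest_nm: float, z: float) -> float:
    """Redshifted wavelength as seen on Earth."""
    return rest_nm * (1.0 + z)

# Assumed example: a 10-second burst at z = 1 appears to last 20 seconds,
# and its photons arrive at twice their emitted wavelength -- the joint
# time-dilation/red-shift signature the text says distant bursts should show.
```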

Unfortunately, BATSE does not see, in the spectrum of gamma rays, bright or dark lines characterizing specific elements whose displacements would betray a shift to the red. (Nor does it detect the dark lines found by earlier satellites.) In April astronomers using the Keck Telescope in Hawaii obtained an optical spectrum of the afterglow of GRB 970228. It is smooth and red, with no telltale lines. Still, Jay Norris of the National Aeronautics and Space Administration Goddard Space Flight Center and Robert Mallozzi of the University of Alabama in Huntsville have statistically analyzed the observed bursts and report that the weakest, and therefore the most distant, show both a time dilation and a red shift. There are, however, other (controversial) ways to interpret these findings.


A Cosmic Catastrophe


One feature that makes it difficult to explain the bursts is their great variety. A burst may last from about 30 milliseconds to almost 1,000 seconds--and in one case, 1.6 hours. Some bursts show spasms of intense radiation, with no detectable emission in between, whereas others are smooth. Also complicated are the spectra--essentially, the colors of the radiation, invisible though they are. The bulk of a burst's energy is in radiation of between 100,000 and one million electron volts, implying an exceedingly hot source. (The photons of optical light, the primary radiation from the sun, have energies of a few electron volts.) Some bursts evolve smoothly to lower frequencies such as x-rays as time passes. Although this x-ray tail has less energy, it contains many photons. If originating at cosmological distances, the bursts must have energies of perhaps 10^51 ergs. (About 1,000 ergs can lift a gram by one centimeter.) This energy must be emitted within seconds or less from a tiny region of space, a few tens of kilometers across. It would seem we are dealing with a fireball.

The first challenge is to conceive of circumstances that would create a sufficiently energetic fireball. Most theorists favor a scenario in which a binary neutron-star system collapses [see "Binary Neutron Stars," by Tsvi Piran; SCIENTIFIC AMERICAN, May 1995]. Such a pair gives off gravitational energy in the form of radiation. Consequently, the stars spiral in toward each other and may ultimately merge to form a black hole. Theoretical models estimate that one such event occurs every 10,000 to one million years in a galaxy. There are about 10 billion galaxies in the volume of space that BATSE observes; that yields up to 1,000 bursts a year in the sky, a number that fits the observations.

Variations on this scenario involve a neutron star, an ordinary star or a white dwarf colliding with a black hole. The details of such mergers are a focus of intense study. Nevertheless, theorists agree that before two neutron stars, say, collapse into a black hole, their death throes release as much as 10^53 ergs. This energy emerges in the form of neutrinos and antineutrinos, which must somehow be converted into gamma rays. That requires a chain of events: neutrinos collide with antineutrinos to yield electrons and positrons, which then annihilate one another to yield photons. Unfortunately, this process is very inefficient, and recent simulations suggest it may not yield enough photons. Worse, if too many heavy particles such as protons are in the fireball, they reduce the energy of the gamma rays. Such proton pollution is to be expected, because the collision of two neutron stars must yield a potpourri of particles. But then all the energy ends up in the kinetic energy of the protons, leaving none for radiation. As a way out of this dilemma, Peter Meszaros of Pennsylvania State University and Martin J. Rees of the University of Cambridge have suggested that when the expanding fireball--essentially hot protons--hits surrounding gases, it produces a shock wave. Electrons accelerated by the intense electromagnetic fields in this wave then emit gamma rays.

A variation of this scenario involves internal shocks, which occur when different parts of the fireball hit one another at relativistic speeds, also generating gamma rays. Both the shock models imply that long afterglows of x-rays and visible light should follow gamma-ray bursts. In particular, Mario Vietri of the Astronomical Observatory of Rome has predicted detectable x-ray afterglows lasting for a month--and also noted that such afterglows do not occur in halo models. GRB 970228 provides the strongest evidence yet for such a tail. There are some problems, however: the binary collapse does not explain some long-lasting bursts. Last year, for instance, BATSE found a burst that endured for 1,100 seconds and possibly repeated two days later.

There are other ways of generating the required gamma rays. Nir Shaviv and Arnon Dar of the Israel Institute of Technology in Haifa start with a fireball of unknown origin that is rich in heavy metals. Hot ions of iron or nickel could then interact with radiation from nearby stars to give off gamma rays. Simulations show that the time profiles of the resulting bursts are quite close to observations, but a fireball consisting entirely of heavy metals seems unrealistic. Another popular mechanism invokes immensely powerful magnetic engines, similar to the dynamos that churn in the cores of galaxies. Theorists envision that instead of a fireball, a merger of two stars--of whatever kind--could yield a black hole surrounded by a thick, rotating disk of debris. Such a disk would be very short-lived, but the magnetic fields inside it would be astounding, some 10^15 times those on the earth. Much as an ordinary dynamo does, the fields would extract rotational energy from the system, channeling it into two jets bursting out along the rotation axis.

The cores of these jets--the regions closest to the axis--would be free of proton pollution. Relativistic electrons inside them can then generate an intense, focused pulse of gamma rays. Although quite a few of the details remain to be worked out, such scenarios make mergers the leading contenders for explaining bursts.

Still, gamma-ray bursts have been the subject of more than 2,500 papers--about one publication per recorded burst. Their transience has made them difficult to observe with a variety of instruments, and the resulting paucity of data has allowed for a proliferation of theories.

If one of the satellites detects a lensed burst, astronomers would know for sure that bursts occur at cosmological distances. Such an event might occur if an intervening galaxy or other massive object serves as a gravitational lens to bend the rays from a gamma-ray burst toward the earth. When optical light from a distant star is focused in this manner, it appears as multiple images of the original star, arranged in arcs around the lens. Gamma rays cannot be pinpointed with such accuracy; they are currently detected by instruments with poor directional resolution. Moreover, bursts are not steady sources like stars. A lensed gamma-ray burst would therefore show up as two bursts coming from roughly the same direction, having identical spectra and time profiles but different intensities and arrival times. The time difference would come from the rays' traversing curved paths of different lengths through the lens.

To further nail down the origins of the underlying explosion, we need data on other kinds of radiation that might accompany a burst. Even better would be to identify the source. Until the fortuitous observation of GRB 970228--we are astonished that its afterglow lasted long enough to be seen--such "counterparts" had proved exceedingly elusive. To find others, we will need to locate the bursts very precisely.

Watching and Waiting


Since the early 1970s, Kevin Hurley of the University of California at Berkeley and Thomas Cline of the NASA Goddard Space Flight Center have worked to establish "interplanetary networks" of burst instruments. They try to put a gamma-ray detector on any spacecraft available or to send aloft dedicated devices. The motive is to derive a location to within arc minutes, by comparing the times at which a burst arrives at spacecraft separated by large distances. From year to year, the network varies greatly in efficacy, depending on the number of participating instruments and their separation. At present, there are five components: BATSE, Beppo-SAX and the military satellite DMSP, all near the earth; Ulysses, far above the plane of the solar system; and the spacecraft Wind, orbiting the sun. The data from Beppo-SAX, Ulysses and Wind were used to triangulate GRB 970228. (BATSE was in the earth's shadow at the time.) The process, unfortunately, is slow--eight hours at best.
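The timing geometry behind such triangulation is easy to sketch. In the toy calculation below (my own illustration, not any network's actual pipeline), the difference in a burst's arrival times at two spacecraft fixes the angle between the burst direction and the baseline joining them; a second, non-parallel baseline narrows the sky position to the intersection of two annuli. The numbers are hypothetical.

```python
import math

C_KM_S = 299_792.458  # speed of light, km/s

def annulus_half_angle(delay_s, baseline_km):
    """Angle between the burst direction and the spacecraft baseline,
    from the difference in gamma-ray arrival times: cos(theta) = c*dt/d.
    One spacecraft pair confines the burst to an annulus on the sky;
    additional widely separated spacecraft shrink it to an error box."""
    cos_theta = C_KM_S * delay_s / baseline_km
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("delay too large for this baseline")
    return math.degrees(math.acos(cos_theta))

# Hypothetical numbers: a 100 s delay over a ~1 AU baseline
# (roughly an Earth-orbit-to-Ulysses separation).
AU_KM = 1.495978707e8
theta = annulus_half_angle(100.0, AU_KM)
```

A zero delay puts the burst on the plane perpendicular to the baseline (90 degrees); larger baselines make a given timing precision translate into a thinner annulus, which is why the distant spacecraft Ulysses and Wind matter so much to the network.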

Time is of the essence if we are to direct diverse detectors at a burst while it is glowing. Scott Barthelmy of the Universities Space Research Association at the NASA Goddard Space Flight Center has developed a system called BACODINE (BAtse COordinates DIstribution NEtwork) to transmit within seconds BATSE data on burst locations to ground-based telescopes. BATSE consists of eight gamma-ray detectors pointing in different directions from eight corners of the Compton satellite; comparing the intensity of a burst at these detectors provides its location to roughly a few degrees but within several seconds. Often BACODINE can locate the burst even while it is in progress. The location is transmitted over the Internet to several dozen sites worldwide. In five more seconds, robotically controlled telescopes at Lawrence Livermore National Laboratory, among others, slew to the location for a look. Unfortunately, only the fast-moving smaller telescopes, which would miss a faint image, can contribute to the effort. The Livermore devices, for instance, could not have seen the afterglow of GRB 970228 (unless the optical emission immediately after the burst is many times brighter, as some theories suggest). Telescopes that are 100 times more sensitive are required. These mid-size telescopes would also need to be robotically controlled so they can slew very fast, and they must be capable of searching reasonably large regions. If they do find a transient afterglow, they will determine its location rather well, allowing much larger telescopes such as Hubble and Keck to look for a counterpart.

The long-lasting, faint afterglow following GRB 970228 gives new hope for this strategy. The HETE mission, directed by George Ricker of the Massachusetts Institute of Technology, is to be rebuilt and launched in about two years. It will survey the full sky with x-ray detectors that can localize bursts to within several arc minutes. A network of ground-based optical telescopes will receive these locations immediately and start searching for transients. Of course, we do not know what fraction of bursts actually exhibit a detectable afterglow; GRB 970228 could be a rare and fortuitous exception. Moreover, even an observation field as small as arc minutes contains too many faint objects to make a search for counterparts easy. It would be marvelous if we could derive accurate locations within fractions of a second from the gamma rays themselves. Astronomers have proposed new kinds of gamma-ray telescopes that can instantly derive the position of a burst to within arc seconds. To further constrain the models, we will need to look at radiation of both higher and lower frequency than that currently observed. The Energetic Gamma Ray Experiment Telescope (EGRET), which is also on the Compton satellite, has seen a handful of bursts that emit radiation of up to 10 billion electron volts, sometimes lasting for hours. Better data in this regime, from the Gamma Ray Large Area Space Telescope (GLAST), a satellite being developed by an international team of scientists, will greatly aid theorists. And special ground-based gamma-ray telescopes might capture photons of even higher energy--of about a trillion electron volts. At the other end of the spectrum, soft x-rays, which have energies of up to roughly one kilo electron volt (keV), are helpful for testing models of bursts and also for getting better fixes on position. 
In the range of 0.1 to 10 keV, there is a good chance of discovering absorption or emission lines that would tell volumes about the underlying fireball and its magnetic fields. Such lines might also yield a direct measurement of the red shift and, hence, the distance. Sensitive instruments for detecting soft x-rays are being built in various institutions around the world.

Even as we finish this article, we have just learned of another coup. On the night of May 8, Beppo-SAX operators located a 15-second burst. Soon after, Howard E. Bond of the Space Telescope Science Institute in Baltimore photographed the region with the 0.9-meter optical telescope at Kitt Peak; the next night a point of light in the field had actually brightened. Other telescopes confirmed that, after peaking in brilliance on May 10, the source began to fade. This is the first time that a burst has been observed reaching its optical peak--which, astonishingly, lagged its gamma-ray peak by a few days. Also for the first time, on May 13 the Very Large Array of radio telescopes in New Mexico detected radio emissions from the burst remnant. Even more exciting, the primarily blue spectrum of this burst, taken on May 11 with the Keck II telescope in Hawaii, showed a few dark lines, apparently caused by iron and magnesium in an intervening galactic cloud. Astronomers at the California Institute of Technology find that the displacement of these absorption lines indicates a distance of more than seven billion light-years. If this interpretation holds up, it will establish once and for all that bursts occur at cosmological distances.

In that case, it may not be too long before we know what catastrophic event was responsible for that burst--and for one that might be flooding the skies even as you read.



Was the Big Bang a black hole?

Einstein's theory of general relativity, and the field equation for gravity that derives from it, describe the shape of space-time subject to the influence of the matter and energy contained within it. The famous 'black hole' solutions describe what happens to space-time near gravitational singularities contained within space-time. In other words, they describe what happens to the geometry of space as a star collapses inside its own event horizon...the mathematical surface surrounding the star within which light cannot escape. The Big Bang, however, is a phenomenon in which space-time was created, so that there is no exterior space-time in which to define an event horizon! Because the universe has a finite age, surrounding every observer there is a horizon to the observable universe beyond which we may not receive information. Our horizon is at a distance of 15-20 billion light years, and any gravitational influences farther away than this will have no effect upon us today. In billions of years to come, this horizon will continue to expand outwards in space until it eventually engulfs all of the space contained in our universe. This horizon is similar to the event horizon of a black hole only in that it represents an information limit beyond which we cannot know what is going on. It does not, however, hide from us a singularity in space-time such as the one inside a black hole. Some people have noted the parallel between the 'singularity' at the Big Bang out of which the universe and space-time emerged, and the singularity within a black hole. Since matter and energy emerged from the Big Bang singularity, it bears some resemblance to the so-called 'white hole' solutions in general relativity, which are time-reversed versions of black holes in which matter/energy are continuously being emitted rather than gravitationally attracted.
In the Big Bang situation, this parallel would require that we live inside the event horizon of the cosmological singularity. The problem is, again, one of exterior spaces. Einstein's field equation describing these white hole solutions still requires the pre-existence of an exterior space-time within which this white hole singularity occurs. The Big Bang solution does not presuppose the existence of an exterior space-time in which the entire space-time that makes up our universe came into existence. The solution itself merely describes the unfolding of space-time from this incomprehensible Cosmological Singularity which is, mathematically, of a different class than any black hole or white hole singularity.


There are two different types of Big Bang models for the universe, and right now, astronomers cannot decide from their observations just what kind of a universe we live in. The first of these is the so-called 'Open and Infinite Universe'. In this universe, the Big Bang gave every scrap of matter in the universe more than enough speed that the gravitational force of all this matter acting on itself is not sufficient to slow the expansion down. The universe will expand indefinitely. In the second kind of universe, the so-called 'Closed, Finite Universe', the Big Bang gave matter quite a lot of speed, but this speed is not sufficient for matter to expand forever. Eventually, the expansion will slow down, come to a stop, and then turn around into a collapsing universe. Neither of these two models looks anything like a black hole, because matter is ultimately free to travel as far as its energy will take it. The only limit we see is in the case of the Closed, Finite Universe, where matter is ultimately destined to collapse to a 'Big Crunch'.

There is one sense, however, in which the universe does have a black hole-like property to it. All modern cosmological models predict that there is far more space in the universe than what we can see right now. If a light signal was beamed in our direction near the start of the Big Bang some 15 billion years ago, the farthest that light signal could travel since then is 15 billion light years. This means that, surrounding the Milky Way, there is a spherical horizon whose radius is 15 billion light years that contains every object and point in the universe whose light has had time to reach us since the Big Bang. BUT, if we travel to where a distant galaxy is located some 5 billion light years from us today, we can draw exactly the same 15 billion light year horizon around that galaxy. The difference is that there will be galaxies that can be seen from this new vantage point that we cannot see in the Milky Way, because the light hasn't gotten here yet. But it will get here in a few billion more years.

This information horizon is a feature of ALL cosmological models that have a starting point in time. In some ways, this horizon has some of the properties of a black hole's 'event horizon', but the physics is different. So, the answer to this question is that the Big Bang is not really similar to a black hole, even though both of these systems have an information horizon, but for different physical reasons.

Since the universe must have been inside its own Schwarzschild radius after the Big Bang, how did it get out?

Although it is correct to use general relativity to predict what you might expect to happen, you have to make certain you use the correct 'solution'. All black hole solutions require the existence of a pre-existing space-time in which their particular 'solution' for matter is embedded. In the Big Bang theory, there was no pre-existing space-time, and moreover, the space-time that did come into existence was extremely dynamic, with a constantly changing topology. At least this is what cosmologists seem to be saying these days. Any black holes that formed from local clumps of matter compressed by shock waves could have been no larger than the local information horizon defined by the light travel time at that epoch. At 10^-14 seconds after the Big Bang, the horizon size was 10^-14 light-seconds, or 3 x 10^-4 centimeters. According to Big Bang theory, however, the scale of the universe at that time was about 15 billion light years/10^17 = 5 x 10^11 centimeters, or 3 million miles. Any black holes that could have formed by any coherent gravitational process would have been vastly smaller than the scale of the universe. In other words, the expansion of the universe was able to outrun the formation of any black hole as large as itself, and so the expansion continued unabated.
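The arithmetic here can be checked directly. The sketch below uses the figures quoted in the text; note that with a standard light-year conversion the scale comes out nearer 1.4 x 10^11 cm than the quoted 5 x 10^11 cm, though the order of magnitude, and hence the conclusion, is the same.

```python
# Compare the information horizon at t = 10^-14 s with the scale of the
# universe at that epoch, taken (as in the text) as 15 billion
# light-years divided by 10^17.
C_CM_PER_S = 2.998e10   # speed of light, cm/s
LY_CM = 9.461e17        # one light-year in cm

t = 1e-14                           # seconds after the Big Bang
horizon_cm = C_CM_PER_S * t         # ~3 x 10^-4 cm
scale_cm = 15e9 * LY_CM / 1e17      # ~1.4 x 10^11 cm
ratio = scale_cm / horizon_cm       # universe dwarfs any causal patch
```

The ratio is over 10^14, so no coherent gravitational process confined to its own horizon could have swallowed the universe as a whole.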


Wheeler: You could put it another way: Let's say there's an efficiency expert who's come to look over the Lord's shoulder. He says, "Why, Lord, you're wasting a lot of money on this universe. See, you've put one hundred billion (10^11) stars in the Milky Way, and you've put one hundred billion (10^11) Milky Ways in the universe -- that's ten billion trillion (10^22) stars -- that's a mighty extravagant way to get one planet (the Earth) with life on it so there'll be somebody around to be aware of this universe. Now, Lord, we efficiency people want to cut you down, but we won't cut you down to one star. Instead of 10 billion trillion stars, we'll cut you down to one hundred billion stars -- that's enough to make one galaxy. This will be a great economy move."

The only problem is, according to general relativity, when you cut the amount of mass down by a factor of 100 billion, you also cut the size of the universe down by the same amount, just enough universe for one galaxy. You also cut down the time from the Big Bang to the Big Crunch from 100 billion years to just one year which isn't time enough to evolve even one star, let alone evolve life. Put it another way. There's no obvious extravagance of scale in the construction of the universe. The efficiency expert would have a right to complain if life had been created on several planets, in several parts of the universe, because then he could say that's more than you really need in order for somebody to be around to be aware of the universe. But, if you have life on one planet only (the Earth), then, it's not obvious that you're being extravagant.

The anthropic principle provides a new perspective on the question of life elsewhere in space. It puts in question the common view that the universe is a big machine; that man is unimportant in the scheme of things; that we're an accidental bit of dust that doesn't have anything to do with it all. From that point of view, it is not very important whether you're going to have life on a billion planets or on just one planet -- or no life at all. Life or no life still wouldn't matter in the scheme of the universe.

But, if we adopt this other perspective that Dicke suggests -- the anthropic principle -- then it's quite a different assessment that we make. Then the universe has to be such as to permit awareness of that universe; otherwise the universe has no meaning.

We are now nearer the Big Bang than the Big Crunch since the universe, as we observe it, is still expanding.

The anthropic principle looks at this universe, that universe and the other universe and rules out as mere meaningless machines all those in which awareness does not develop somewhere at some time. Stronger than the anthropic principle is what I might call the participatory principle. According to it we could not even imagine a universe that did not somewhere and for some stretch of time contain observers because the very building materials of the universe are these acts of observer-participance. You wouldn't have the stuff out of which to build the universe otherwise. This participatory principle takes for its foundation the absolutely central point of the quantum: No elementary phenomenon is a phenomenon until it is an observed (or registered) phenomenon.

"The universe starts with a Big Bang, expands to a maximum dimension, then recontracts and collapses (to the Big Crunch); no more awe-inspiring prediction was ever made." Quotation from Charles W. Misner, Kip S. Thorne and John A. Wheeler in "Gravitation", W. H. Freeman, San Francisco, 1973, page 1196.

Werner Heisenberg's Uncertainty Principle

In 1927, Werner Heisenberg introduced the indeterminacy principle, more commonly known as the uncertainty principle. It appeared in a paper showing how to interpret matrix mechanics in terms of the more familiar concepts of classical physics. Heisenberg showed that if x is the position coordinate of an electron (in a specific state), and p is the momentum of that electron, and each has been independently measured for many electrons (in that state), then Δx·Δp >= ħ/2, where Δx is the uncertainty in x, Δp is the uncertainty in the momentum, and ħ = h/2π, h being Planck's constant (6.626 x 10^-27 erg-second). In lay terms, this means that it is physically impossible to measure both the exact position and the exact momentum of a particle at the same time. The more precisely one of the quantities is measured, the less precisely the other is known. Because of the small value of h in everyday units, this principle is significant only on the atomic scale. It is important to note that the uncertainties Δx and Δp arise from the quantum structure of matter and are not due to imperfections in the measurement instruments. Serway discusses a thought experiment introduced by Heisenberg, which helps clarify this idea, with the following illustration. To see an electron, and thus determine its position, you might use a powerful light microscope. For the electron to be visible, at least one photon of light must bounce off of it and then pass through the microscope into your eye. A problem occurs here, as the photon transfers some unknown amount of its momentum to the electron. Thus, in the process of finding an accurate position for the electron (by making Δx very small), the same light that allows you to see it changes the electron's momentum to an undeterminable extent (makes Δp very large).
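To see why the small value of h confines the principle to the atomic scale, one can compute the minimum momentum uncertainty ħ/(2Δx) for two different position uncertainties. The illustrative numbers below are my own, not from the text.

```python
HBAR = 1.0545718e-34  # reduced Planck constant h/(2*pi), J*s

def min_dp(dx):
    """Smallest momentum uncertainty allowed by dx * dp >= hbar/2."""
    return HBAR / (2.0 * dx)

# Localize an electron to atomic dimensions (10^-10 m): the forced
# Delta-p (~5 x 10^-25 kg*m/s) is comparable to typical atomic momenta,
# so the uncertainty dominates the physics.
dp_atom = min_dp(1e-10)

# Localize a marble to a millimetre: Delta-p (~5 x 10^-32 kg*m/s) is
# utterly negligible for a gram-scale object, so classical physics
# works fine in everyday life.
dp_marble = min_dp(1e-3)
```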

Why the Uncertainty Principle is Good.

Heisenberg's uncertainty principle shows that Bohr's model of the atom is incorrect. Bohr's model of the hydrogen atom assumes that the electron in the ground state moves in a circular orbit of radius r = 0.529 x 10^-10 m, and that the speed of the electron in this state is 2.2 x 10^6 m/s. Given the exact radius, the uncertainty Δr in this model is zero. According to the uncertainty principle, Δp·Δr >= ħ/2, where Δp is the uncertainty in the momentum of the electron in the radial direction. Because the momentum of the electron is mv, we can take the uncertainty in its momentum to be no greater than this value. That is, Δp <= mv = (9.11 x 10^-31 kg)(2.2 x 10^6 m/s) = 2.0 x 10^-24 kg·m/s. From the uncertainty principle, the estimated minimum uncertainty in the radial position is then Δr >= ħ/(2Δp) = 0.26 x 10^-10 m. This uncertainty in position is comparable to the size of the Bohr radius itself, so the electron cannot occupy the sharply defined orbit the model assumes; the Bohr model is therefore incorrect.
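The numbers in this argument are easy to reproduce. A minimal check, using only the constants quoted above:

```python
HBAR = 1.0546e-34     # reduced Planck constant, J*s
M_E = 9.11e-31        # electron mass, kg
V = 2.2e6             # Bohr ground-state speed, m/s
R_BOHR = 0.529e-10    # Bohr radius, m

dp_max = M_E * V                 # upper bound on the momentum uncertainty
dr_min = HBAR / (2.0 * dp_max)   # minimum radial position uncertainty
ratio = dr_min / R_BOHR          # ~0.5: half the orbit radius itself
```

Since the minimum position uncertainty is about half the orbit radius, a "circular orbit of exact radius r" is not a state the electron can actually occupy.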


Heisenberg Uncertainty Principle

An odd aspect of Quantum Mechanics is contained in the Heisenberg Uncertainty Principle (HUP). The HUP can be stated in different ways; let me first talk in terms of momentum and position.

If there is a particle, such as an electron, moving through space, I can characterize its motion by telling you where it is (its position) and what its velocity is (more precisely, its momentum). Now, let me say something strange about what happens when I try to measure its position and momentum.

· Classically, i.e., in our macroscopic world, I can measure these two quantities to infinite precision (more or less). There is really no question where something is and what its momentum is.

· In the Quantum Mechanical world, the idea that we can measure things exactly breaks down. Let me state this notion more precisely. Suppose a particle has momentum p and a position x. In a Quantum Mechanical world, I would not be able to measure p and x precisely. There is an uncertainty associated with each measurement, e.g., there is some dp and dx which I can never get rid of, even in a perfect experiment. This is due to the fact that whenever I make a measurement, I must disturb the system. (In order for me to know something is there, I must bump into it.) The sizes of the uncertainties are not independent; they are related by

· dp x dx > h/(2π), where h is Planck's constant

The preceding is a statement of The Heisenberg Uncertainty Principle. So, for example, if I measure x exactly, the uncertainty in p, dp, must be infinite in order to keep the product constant.

This uncertainty leads to many strange things. For example, in a Quantum Mechanical world, I cannot predict where a particle will be with 100 % certainty. I can only speak in terms of probabilities. For example, I can say that an atom will be at some location with a 99 % probability, but there will be a 1 % probability it will be somewhere else (in fact, there will be a small but finite probability that it will be found across the Universe). This is strange.

We do not know if this indeterminism is actually the way the Universe works because the theory of Quantum Mechanics is probably incomplete. That is, we do not know if the Universe actually behaves in a probabilistic manner (there are many possible paths a particle can follow and the observed path is chosen probabilistically) or if the Universe is deterministic in the sense that I can predict the path a particle will follow with 100 % certainty.

A consequence of the Quantum Mechanical nature of the world, is that particles can appear in places where they have no right to be (from an ordinary, common sense [classical] point of view)!

This notion has interesting consequences for nuclear fusion in stars. It also affects the theories of nucleosynthesis and draws many of their conclusions into question.

Billiard balls and electrons.

Again, the central issue is the nature of reality itself. Take a billiard ball and roll it from one side of the table to the other. In order for the ball to get from one side to the other, it has to move along a certain path. It must have passed over all the possible positions on this path and moved with a certain speed and direction (the combination of speed and direction is called velocity). We can actually calculate both the position and the velocity of the ball at any moment in its journey. This is part of what makes the ball real to us: it has to be somewhere and moving at the same time. With the aid of a physicist (one who holds the usual, now accepted, view of quantum theory) and the appropriate experimental paraphernalia, let’s use an electron instead of a billiard ball.

The first thing our physicist tells us is that we will not be able to see the electron in its flight. We will be able to see where it starts and where it hits, but we will not be able to predict exactly where it will hit beforehand as we could the billiard ball.

We suppose that the experimental machinery is not refined enough to permit this.

Not just that, the physicist replies, we cannot do it in theory. We can only predict where it will most likely hit. We have to use probability, he adds.

We are tempted to make a slightly sarcastic remark about the exactness of science, but then recall all the successes of the approach. So we shut up and ask what happens next.

You have a choice, he says. Would you like to know the position of the electron or would you like to know its velocity? You have to let me know ahead of time.

We are immediately suspicious. So, we say, why not both?

No way, he says. You can have one or the other but not both at the same time.

Why not? we wonder. Is it because our equipment is so crude compared to a delicate thing like an electron that if we try to measure one we automatically mess up the other?

Oh yes, there’s that, the physicist remarks offhandedly, but that’s not the real point. The fact is that if we choose to measure the velocity, for example, the electron does not have a location.

What you mean is that it has a location but we just can’t measure it, we solicitously correct (scientists really ought to learn how to use language).

No, he replies, fixing us with an amused look, it really does not exist.

Now we are in an incredulous huff. You mean to tell me that if we choose to measure the electron’s velocity, it gets from here to there without going in between? Or that if we choose to measure where the electron is located at any particular moment, it doesn’t have any speed or direction, and it still gets over there?

Unperturbed, our physicist responds:

You might say that. Of course, your problem is that you have some antiquated notions of "here" and "there", so far as the subatomic world is concerned.

Now we’re indignant. What kind of philosophical claptrap is this? Do you actually teach this stuff to young people?

Our physicist is a veteran of many a skirmish with amateurs. He calmly replies: To be sure. It’s called Heisenberg’s Uncertainty Principle. The more certain we are of one aspect, the more uncertain the other becomes. There is some leeway. If we don’t insist on an accurate measurement of the velocity, then we get a tendency for the location to exist, and vice versa. Heisenberg envisioned a kind of intermediate reality in this situation, something somewhere between the massive reality of a billiard ball and the intellectual reality of ideas or images. It is like trying to nail down a warped board. Every time you nail down the reality of one end, the other pops up into unreality. But we could just nail each end part way. Then each would exist in this limbo of intermediate reality.

We have another name for this that comes right out of the Bible: "Now faith is the substance of things hoped for, the evidence of things not seen" (Hebrews 11:1), and again, "Through faith we understand the worlds were framed by the word of God, so that things which are seen were not made of things which do appear" (Hebrews 11:3).

Everything in this world is governed by the uncertainty principle until someone applies faith to the mix; only then do they receive whatever they believe.



The darkness of the night sky.

Olbers' paradox asks, "Why is the sky dark at night?" Olbers (and others before him) assumed that both the average number density and the luminosity of stars (and galaxies) are approximately constant throughout space and over time. Consider any large shell of matter of radius r and thickness dr. The light emitted by this shell is 4π r^2 dr n L, where n is the number of stars per unit volume and L is the luminosity of a star. Since the flux from the shell falls off as 1/(4π r^2), the radiation measured at the center of the shell is n L dr, which does not depend on the radius of the shell. As we add up the contributions of more and more distant concentric shells (each of equal thickness), the radiation measured at the center seems to increase without limit. This is not quite right, since light from a distant star may be intercepted by an intervening star, but we would still expect the sky to be about as brilliant as the surface of a star: any line of sight must sooner or later run into a star. This conclusion applies at any arbitrary point, and hence it applies everywhere.
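The shell-by-shell argument can be made concrete with a toy sum (arbitrary units, my own illustration): every concentric shell contributes the same flux n·L·dr, so the total grows linearly, and without bound, in the number of shells.

```python
def sky_brightness(n_times_L, dr, n_shells):
    """Total flux at the center from n_shells concentric shells of
    thickness dr. Each shell's emission (4*pi*r^2 * n*L*dr) divided by
    its dilution (4*pi*r^2) gives the same contribution, n*L*dr,
    regardless of radius."""
    return sum(n_times_L * dr for _ in range(n_shells))

b_10 = sky_brightness(1.0, 1.0, 10)
b_1000 = sky_brightness(1.0, 1.0, 1000)  # 100x more shells, 100x brighter
```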

We have a contradiction with the trivial observation that, apart from the Milky Way, our own galaxy, the night sky is remarkably dark. Olbers' paradox is not resolved by allowing for interstellar dust, since the dust would simply absorb the energy and re-radiate it. Possible resolutions are (A) the universe is young, so stars have only been shining for about ten billion years, or (B) the universe is of infinite age but expanding so as to avoid a state of thermodynamic equilibrium. Expansion "cools off" the universe, due to the Doppler shift (which reddens light, reducing the energy of photons received from a receding source). Of course, the universe may be both young and expanding, but only hypothesis B requires expansion.

Age of the Universe


There are at least three ways that the age of the Universe can be estimated. I will describe:

· The age of the chemical elements.

· The age of the oldest star clusters.

· The age of the oldest white dwarf stars.


The Age of the Elements

The age of the chemical elements can be estimated using radioactive decay to determine how old a given mixture of atoms is. The most definite ages that can be determined this way are ages since the solidification of rock samples. When a rock solidifies, the chemical elements often get separated into different crystalline grains in the rock. For example, sodium and calcium are both common elements, but their chemical behaviors are quite different, so one usually finds sodium and calcium in different grains in a differentiated rock. Rubidium and strontium are heavier elements that behave chemically much like sodium and calcium, so rubidium and strontium are likewise usually found in different grains in a rock. But Rb-87 decays into Sr-87 with a half-life of 47 billion years (in our present time frame). And there is another isotope of strontium, Sr-86, which is not produced by any rubidium decay. The isotope Sr-87 is called radiogenic, because it can be produced by radioactive decay, while Sr-86 is non-radiogenic. The Sr-86 is used to determine what fraction of the Sr-87 was produced by radioactive decay: one plots the Sr-87/Sr-86 ratio versus the Rb-87/Sr-86 ratio. When a rock is first formed, the different grains have a wide range of Rb-87/Sr-86 ratios, but the Sr-87/Sr-86 ratio is the same in all grains because the chemical processes leading to differentiated grains do not separate isotopes. After the rock has been solid for several billion years, a fraction of the Rb-87 will have decayed into Sr-87. Then the Sr-87/Sr-86 ratio will be larger in grains with a large Rb-87/Sr-86 ratio. One then does a linear fit

Sr-87/Sr-86 = a + b*(Rb-87/Sr-86)

and the slope term is given by

b = 2^x - 1

with x being the number of half-lives that the rock has been solid. When applied to rocks on the surface of the Earth, the oldest rocks are about 3.8 billion years old. When applied to meteorites, the oldest are 4.56 billion years old. This very well determined age is the age of the Solar System.
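The fitting procedure just described can be sketched numerically. The code below is an illustrative reconstruction, not from the paper: it builds synthetic grain ratios for a rock of known age, fits the isochron line by ordinary least squares, and inverts the slope b = 2^x - 1 to recover the age (the grain ratios and the initial Sr-87/Sr-86 value are invented for the demonstration).

```python
import math

HALF_LIFE_RB87 = 47.0  # Gyr, Rb-87 -> Sr-87

def isochron_age(rb_sr_ratios, sr_ratios):
    """Fit Sr-87/Sr-86 = a + b*(Rb-87/Sr-86) by least squares, then
    convert the slope b = 2**x - 1 into an age of x half-lives."""
    n = len(rb_sr_ratios)
    mx = sum(rb_sr_ratios) / n
    my = sum(sr_ratios) / n
    b = sum((x - mx) * (y - my) for x, y in zip(rb_sr_ratios, sr_ratios)) \
        / sum((x - mx) ** 2 for x in rb_sr_ratios)
    x_half_lives = math.log2(b + 1.0)
    return x_half_lives * HALF_LIFE_RB87

# Synthetic grains: rock solidified 4.56 Gyr ago, every grain starting
# with the same Sr-87/Sr-86 = 0.70 (illustrative numbers).
age_true = 4.56
b_true = 2 ** (age_true / HALF_LIFE_RB87) - 1
rb_sr = [0.1, 0.5, 1.0, 2.0]          # present-day Rb-87/Sr-86 per grain
sr = [0.70 + b_true * r for r in rb_sr]  # present-day Sr-87/Sr-86 per grain

print(round(isochron_age(rb_sr, sr), 2))  # recovers 4.56 Gyr
```

With noise-free synthetic data the fit recovers the input age exactly; real samples scatter about the isochron line and the slope carries an uncertainty.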



When applied to a mixed-together and evolving system like the gas in the Milky Way, no great precision is possible. One problem is that there is no chemical separation into grains of different crystals, so the absolute values of the isotope ratios have to be used instead of the slopes of a linear fit. This requires that we know precisely how much of each isotope was originally present, so an accurate model for element production is needed. One isotope pair that has been used is rhenium and osmium: in particular Re-187, which decays into Os-187 with a half-life of 40 billion years (in our present time frame). It looks like 15% of the original Re-187 has decayed, which leads to an age of 8-11 billion years. But this is just the mean formation age of the stuff in the Solar System, and no rhenium or osmium has been made for the last 4.56 billion years. Thus to use this age to determine the age of the Universe, a model of when the elements were made is needed. If all the elements were made in a burst soon after the Big Bang, then the age of the Universe would be t_o = 8-11 billion years. But if the elements are made continuously at a constant rate, then the mean age of stuff in the Solar System is (t_o + t_SS)/2 = 8-11 Gyr, which we can solve for the age of the Universe, giving

t_o = 11.5-17.5 Gyr.
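The algebra in the constant-production case is a one-liner: if the mean formation age of Solar-System material is (t_o + t_SS)/2, then t_o = 2*(mean age) - t_SS. A minimal sketch, using t_SS = 4.56 Gyr from the meteorite dating above:

```python
T_SS = 4.56  # Gyr, age of the Solar System from meteorite dating

def universe_age_from_mean(mean_formation_age):
    """Constant-rate element production: the mean formation age of
    Solar-System material is (t_o + t_SS) / 2, so t_o = 2*mean - t_SS."""
    return 2 * mean_formation_age - T_SS

lo = universe_age_from_mean(8.0)
hi = universe_age_from_mean(11.0)
print(round(lo, 2), round(hi, 2))  # 11.44 to 17.44 Gyr, i.e. ~11.5-17.5
```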

The radioactive decay time is recorded as a constant, and this assumes that 1 Earth gravity has always been the norm. We contend that gravitational conditions have changed from near-infinite g to our present 1 g, thereby altering the time frame immensely.

Radioactive Dating of an Old Star


A very interesting paper by Cowan et al. (1997, ApJ, 480, 246) discusses the thorium abundance in an old halo star. Normally it is not possible to measure the abundance of radioactive isotopes in other stars because the lines are too weak. But in CS 22892-052 the thorium lines can be seen because the iron lines are very weak. The Th/Eu (thorium/europium) ratio in this star is 0.219, compared to 0.369 in the Solar System now. Thorium decays with a half-life of 14.05 Gyr, so the Solar System formed with Th/Eu = 2^(4.56/14.05) * 0.369 = 0.463. If CS 22892-052 formed with the same Th/Eu ratio, it is then 15.2 +/- 3.5 Gyr old. It is actually probably slightly older, because some of the thorium that would have gone into the Solar System decayed before the Sun formed, and this correction depends on the nucleosynthesis history of the Milky Way. Nonetheless, this is still an interesting measure of the age of the oldest stars that is independent of the main-sequence lifetime method. A second star, CS 31082-001, shows an age of 12.5 +/- 3 Gyr based on the decay of U-238 [Cayrel, et al. 2001, Nature, 409, 691-692].

A later paper by Cowan et al. (1999, ApJ, 521, 194) gives 15.6 +/- 4.6 Gyr for the age based on two stars: CS 22892-052 and HD 115444.
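The age quoted for CS 22892-052 follows from simple exponential decay of the Th/Eu ratio: observed = initial * 2^(-t/half-life). A short sketch, using the ratios given above (0.463 initial, 0.219 observed):

```python
import math

TH_HALF_LIFE = 14.05  # Gyr, half-life of Th-232

def thorium_age(initial_th_eu, observed_th_eu):
    """Age from the radioactive decline of the Th/Eu ratio:
    observed = initial * 2**(-t / half_life), solved for t."""
    return TH_HALF_LIFE * math.log2(initial_th_eu / observed_th_eu)

# Solar System formed with Th/Eu ~ 0.463; CS 22892-052 shows 0.219.
print(round(thorium_age(0.463, 0.219), 1))  # ~15.2 Gyr
```

The +/- 3.5 Gyr error bar in the text comes from the measurement uncertainties in the two ratios, which this sketch does not propagate.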

The Age of the Oldest Star Clusters


When stars are burning hydrogen to helium in their cores, they fall on a single curve in the luminosity-temperature plot known as the H-R diagram, after its inventors, Hertzsprung and Russell. This track is known as the main sequence, since most stars are found there. Since the luminosity of a star varies like M^3 or M^4, the lifetime of a star on the main sequence varies like t = const*M/L = k/L^0.7. Thus if you measure the luminosity of the most luminous star on the main sequence, you get an upper limit for the age of the cluster:

Age < k/L(MS_max)^0.7

This is an upper limit because the absence of stars brighter than the observed L(MS_max) could be due to no stars being formed in the appropriate mass range. But for clusters with thousands of members, such a gap in the mass function is very unlikely, and the age is essentially equal to k/L(MS_max)^0.7. Chaboyer, Demarque, Kernan and Krauss (1996, Science, 271, 957) apply this technique to globular clusters and find that the age of the Universe is greater than 12.07 Gyr with 95% confidence. They say the age is proportional to one over the luminosity of the RR Lyrae stars, which are used to determine the distances to globular clusters. Chaboyer (1997) gives a best estimate of 14.6 +/- 1.7 Gyr for the age of the globular clusters. But recent Hipparcos results show that the globular clusters are further away than previously thought, so their stars are more luminous. Gratton et al. give ages between 8.5 and 13.3 Gyr with 12.1 being most likely, while Reid gives ages between 11 and 13 Gyr, and Chaboyer et al. give 11.5 +/- 1.3 Gyr for the mean age of the oldest globular clusters.
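The turnoff-luminosity limit can be sketched in a few lines. The normalization k is not given in the text; the value below assumes k = 10 Gyr, i.e. that a one-solar-luminosity star lives about 10 Gyr on the main sequence, and the turnoff luminosity used is purely illustrative:

```python
K_GYR = 10.0  # assumed normalization: a 1-solar-luminosity star lives ~10 Gyr

def cluster_age_upper_limit(l_ms_max):
    """Upper limit on cluster age from the most luminous star still on
    the main sequence: Age < k / L^0.7 (L in solar luminosities)."""
    return K_GYR / l_ms_max ** 0.7

# Illustrative main-sequence turnoff at 0.8 solar luminosities:
print(round(cluster_age_upper_limit(0.8), 1))  # ~11.7 Gyr with these assumptions
```

A brighter turnoff gives a younger limit, a fainter turnoff an older one, which is why the cluster distance scale (and hence the inferred luminosities) dominates the error budget discussed above.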

In the case of luminosity and theoretical temperature, all these theories need to factor in the primordial singularity and its gravitational effects on the information they have.


The Age of the Oldest White Dwarfs


A white dwarf star is an object that is about as heavy as the Sun but only the radius of the Earth. The average density of a white dwarf is about a million times that of water. White dwarf stars form in the centers of red giant stars, but are not visible until the envelope of the red giant is ejected into space. When this happens, the ultraviolet radiation from the very hot stellar core ionizes the gas and produces a planetary nebula. The envelope of the star continues to move away from the central core, and eventually the planetary nebula fades to invisibility, leaving just the very hot core, which is now a white dwarf. White dwarf stars glow just from residual heat. The oldest white dwarfs will be the coldest and thus the faintest. By searching for faint white dwarfs, one can estimate the length of time the oldest white dwarfs have been cooling. Oswalt, Smith, Wood and Hintzen (1996, Nature, 382, 692) have done this and get an age of 9.5 (+1.1/-0.8) Gyr for the disk of the Milky Way. They estimate an age of the Universe which is at least 2 Gyr older than the disk, so t_o > 11.5 Gyr.

The luminosity argument is also countered by the presence of the primordial singularity, for the light of all galaxies and stars is fighting its way out of the gravity well of the primordial singularity! Not only that, but presently we are accelerating away from these light sources, and that also makes them fade.

The experts would have you believe that there is something in the physics that YOU do not understand and that the experts do. They will always say that any apparent confusion stems from the novice's failure to understand the physics; from their point of view, the correction to the relativistic model is only a slight one so long as we do not look too far out in space. Because the expansion speed is linear with distance, it does not matter whether you look at a distant quasar or a moderately nearby galaxy. It does not even matter if you 'look' at galaxies you cannot even see yet because they are so far away. The linearity of the expansion speed with distance means that ALL of these distance-speed combinations scale in exactly the same way to give the same value for the expansion age of the universe. This is where we believe they are in error: they assume there is no primordial singularity present and no time dilation, and conclude that expansion speed is linear with distance. But even without the existence of a singularity there would have been severe gravitational time dilation in the first moments after the Big Bang. In the beginning of the universe the time portion of the speed calculations would not be linear; hence the warning not to look too far out in space (or into the past).

When we consider a residual primordial singularity with twice the observed mass of the universe, the linearity goes out the window. This is why we have to re-think the entire cosmology story.
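For reference, the gravitational time dilation this argument appeals to has a simple textbook form in the static Schwarzschild geometry: dτ/dt = sqrt(1 - r_s/r), with r_s = 2GM/c^2. The early universe is not a static Schwarzschild spacetime, so the sketch below only illustrates the formula itself, applied to an illustrative solar mass rather than any cosmological mass:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def schwarzschild_dilation(mass_kg, r_m):
    """Static Schwarzschild time-dilation factor dtau/dt = sqrt(1 - r_s/r),
    with r_s = 2*G*M/c^2. Only defined outside the Schwarzschild radius."""
    r_s = 2 * G * mass_kg / C ** 2
    if r_m <= r_s:
        raise ValueError("inside the Schwarzschild radius")
    return math.sqrt(1 - r_s / r_m)

# Clocks at 2x and 10x the Schwarzschild radius of a solar-mass object
# (r_s ~ 2.95 km) run slow relative to a distant observer by:
M_SUN = 1.989e30  # kg
r_s = 2 * G * M_SUN / C ** 2
print(schwarzschild_dilation(M_SUN, 2 * r_s))   # ~0.707
print(schwarzschild_dilation(M_SUN, 10 * r_s))  # ~0.949
```

The factor approaches zero as r approaches r_s, which is the sense in which clocks deep in a gravity well run arbitrarily slowly in this formula.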


The Final Word as of February 2003

In the Biblical description we see a perfect parallel to the scientific evidence:

Gen 1:1 The space-time continuum is created;

Gen 1:2 There is darkness and void, and God moved;

Gen 1:3 Photons come into being, and their laws;

Gen 1:6 Protons come into being, and their laws;

Gen 1:9 Heavier matter forms, with its laws.

Then we have the biological and geological developments that are also in agreement.

Psalms 90:4 The Psalmist states the ultimate time-equivalence factor of his day: one day with GOD is as a thousand years with us, at that particular moment from creation.

Then, in the closing of the book, we have a description of the return to the singularity and the crossing of the event horizon: the sun goes out, the moon goes out, and the stars fall toward the earth.

This is the exact sequence that would occur according to our modern science.




1 See, e.g., Andreasen, N. C. 1987, Creativity and Mental Illness: Prevalence Rates in Writers and Their 1st Degree Relatives, American Journal of Psychiatry, 144, 1288 – 1292. Jamison, K. R. 1988, Manic Depressive Illness and Accomplishment: Creativity, Leadership and Social Class. In F. K. Goodwin & K. R. Jamison (Eds.), Manic Depressive Illness, Oxford England: Oxford University Press. Prentky, R. 1989, Creativity and psychopathology: Gambling at the seat of madness, In J. A. Glover, R. R. Ronning & C. R. Reynolds (Eds.), Handbook of Creativity, New York, Plenum. Rothenberg, A. 1990, Creativity and madness: New findings and old stereotypes, Baltimore, Johns Hopkins University Press.

2 Freud, S. (1901/1960) The Psychopathology of Everyday Life, In J. Strachey (Ed.), The Standard Edition of the Complete Works of Sigmund Freud, vol. 6, London, Hogarth.

3 Alcock, J. E. 2005, A Textbook of Social Psychology, 6th edition, Pearson Prentice Hall, Toronto, p. 404 – 410.

4 Ellenberger, H. 1970, The Discovery of the Unconscious, New York, Basic Books.

5 See, e.g., Turner, J. S., and Helms, D. B., 1995, Life Span Development, 5th Edition, Harcourt Brace College Publishers, Toronto, Library of Congress # 93 - 80877, p. 3 and 378. Berger, K. S., 1998, The Developing Person Through the Lifespan, 4th edition, Worth Publishers, New York, p. 65. Santrock, J. W., Mackenzie-Rivers, A., Leung, K. H., and Malcomson, T., 2005, Lifespan Development, 2nd Canadian edition, McGraw-Hill Ryerson, Toronto, p. 535 – 536.

6 Statistics Canada 2006. Santrock, MacKenzie-Rivers, Leung & Malcomson (2003), Life-Span Development, 1st Canadian Edition, McGraw-Hill Ryerson, Toronto.

7 Darwin, C. 1996, The Origin of Species, Oxford University Press, Oxford, pages 391 – 396.

8 Harris Poll # 52, 2005, Harris Poll Explores Beliefs about Evolution, Creationism, and Intelligent Design, Skeptical Inquirer, 29, December 2005, pages 56 – 58.

9 See e.g., Darwin, C. 1996, The Descent of Man, Great Books, Encyclopedia Britannica Inc., Chicago, page 302. Charles Darwin, On the Origin of Species (Cambridge, Mass.: Harvard University Press, 1964), p. 82-87.

10 Krauss, L., M., 2001, Atom, An Odyssey from the Big Bang to Life on Earth…and Beyond, Little, Brown and Company, New York, p. 69 – 70 & 279. Silk, J., 2001, The Big Bang, 3rd Edition, W. H. Freeman and Company, New York, p. 73, 376,455 & 108.

11 Kane, G., 1995, The Particle Garden, Addison Wesley Publishing Company, p. 154 - 157.

12 Bold, Harold, C., The Plant Kingdom, 4th edition, 1977, Prentice Hall Inc. Publisher, Englewood Cliffs, New Jersey, Library of Congress, QK48.B59, pages 296 - 297.

13 Orr, Robert, T., Vertebrate Biology, 1976, Saunders Co. Publisher, Library of Congress QL605 075v, pages 10 - 11.

14 See, e.g., Macdougall, J. D., 1976, Fission-Track Dating, Scientific American, 235 (6): p. 114 – 122. Fleming, S., 1977, Dating in Archaeology: A Guide to Scientific Techniques, St. Martin's Press, New York. Johnston, F. E., and Shelby, H., 1978, Anthropology the Biocultural View, Wm. C. Brown Company Publishers, Dubuque, Iowa, Library of Congress #77 - 77077, p. 194 – 195. Harris, M., 1980, Culture, People, Nature, An Introduction to General Anthropology, 3rd Edition, Harper and Row Publishers Inc., pages 50 and 102. Staski, E. & Marks, J., 1992, Evolutionary Anthropology, An Introduction to Physical Anthropology and Archeology, Harcourt Brace Jovanovich College Publishers, New York, p. 504 - 528. Jones, S., Martin, R., and Pilbeam, 1994, The Cambridge Encyclopedia of Human Evolution, Cambridge University Press, UK, p. 383 - 385.

15 Lawrence, R. J., and Despres, C., 2004, Futures of Transdisciplinarity, doi 10.1016/j.futures, Elsevier, Futures 36, p. 397 – 405.

16 See e.g., Freedman, D., 1990, Parallel Universes: The New Reality, From Harvard's Wildest Physicist, Discover Magazine, page 52. Kaku, M., 1994, Hyperspace, Oxford University Press, Oxford, p. 245. Thorne, K. S., 1994, Black Holes and Time Warps, Einstein's Outrageous Legacy, W.W. Norton and Company, New York, p. 96 – 120 & 483 - 512. Schacter, D. L. 1995, Memory Distortion: How Minds, Brains, and Societies Reconstruct The Past, Harvard University Press, Cambridge Massachusetts. Drosnin, M., 1997, The Bible Code, Simon and Schuster, Rockefeller Center, New York, p. 76 & 123. Greene, B. R., Morrison, D. R., & Polchinski, J., 1998, String Theory, The National Academy of Sciences, Vol. 95, p. 11039 - 11040. Wilson, E. O., 1999, Consilience: The Unity of Knowledge, Vintage Edition, Random House. Lidsey, J. E., 2000, The Bigger Bang, Cambridge University Press, p. 1 – 2. Wheeler, J. C., 2000, Cosmic Catastrophes, Supernovae, Gamma-Ray Bursts, and Adventures in Hyperspace, Cambridge University Press, p. 218 – 220. Hawking, S. W., 2001, The Universe in a Nutshell, Bantam Books, p. 30 – 33 & 148 - 153. Kaku, Michio, 2001, A Theory of Everything? A net paper/presentation by a professor of Theoretical Physics, City College of New York. Strogatz, S. 2003, How Order Emerges from Chaos in SYNC The Universe, Nature and Daily Life, Hyperion Books, New York. Greene, B., 2005, The Fabric of the Cosmos: Space Time and the Texture of Reality, Vintage Books Edition, Random House. Day, M., 2005, The Story of the Great Attractor, University of Illinois (HTTP://

17 Ferguson, G. A. and Takane, Y. (1989), Statistical Analysis in Psychology and Education, 6th edition, New York, McGraw Hill.

18 The Holy Bible, 1935, Self Pronouncing Edition, Conformable to the 1611 King James Edition, The World Publishing Company, New York.

19 Schroeder, G.L., 1997, The Science of God, Broadway Books, Dell Publishing Group Inc., pages 60 & 67.

20 See e.g., Bold, Harold, C., The Plant Kingdom, 4th edition, 1977, Prentice Hall Inc. Publisher, Englewood Cliffs, New Jersey, Library of Congress, QK48.B59, pages 296 - 297. Orr, Robert, T., Vertebrate Biology, 1976, Saunders Co. Publisher, Library of Congress QL605 075v, pages 10 - 11. Harland, W., Armstrong, R., Cox, A., Craig, L., Smith, A., Smith, D. (1989) A Geologic Time Scale, New York, Cambridge University Press. Jensen, O. 2002-01-13, McGill University. Campbell, N. A., Reece, J. B., Mitchell, L. G., 1999, Biology, Addison Wesley Longman Inc. New York, Pages 446 – 450.

21 Fechner, G. T., 1860, Elemente der Psychophysik, Leipzig: Breitkopf & Hartel, English translation of Volume 1 by H. E. Adler, New York, Holt, Rinehart and Winston, 1966. Stevens, S. S., 1962, The Surprising Simplicity of Sensory Metrics, American Psychologist, 17, pages 29 – 39.


22 Montoya, C. P., and Mackay, G., 2000, Quantum Logging, a mini-lecture, sponsored by the University College of the Cariboo Cultural Events Committee. Montoya, C. P. and Mackay, G. 2003/2004, The Unified Primary Perspective, a miniseries connecting science and religion, sponsored by the Office of the Chaplain, University College of the Cariboo. Montoya, D. E., Mackay, G., & Montoya, C. P., 2005, The Migratory Theory of Genetic Fitness: Re-worked Darwinian Evolution, sponsored by the Office of the Chaplain, Thompson Rivers University.

23 Ashley-Farrand T., 2002, Ancient Power of Sanskrit Mantra and Ceremony, Volumes 1 – 3, 2nd Edition, Saraswati Publications, LLC.

24 See e.g., Mayan Calendrics. Dolphin Software. 48 Shattuck Square #147 Berkeley, CA. 94704. 1989 &1993. Meeus, Jean. Astronomical Tables of the Sun, Moon and Planets. Willmann- Bell Publishers. Richmond, VA. 1983. Michelsen, Neil F. The American Ephemeris for the 21st Century. ACS Publications. San Diego, CA. 1982, 1988. Ridpath, Ian (ed.). Norton's 2000.0: Star Atlas and Reference Handbook. Longman Group UK Limited. 1989. Schele, Linda and Freidel, David. A Forest of Kings: The Untold Story of the Ancient Maya. William Morrow and Company, Inc. New York. 1990. Schele, Linda; Freidel, David; Parker, Joy. Maya Cosmos: Three Thousand Years on the Shaman's Path. William Morrow and Company, Inc. New York. 1993. Tedlock, Dennis. The Popol Vuh: The Definitive Edition of the Mayan Book of the Dawn of Life and the Glories of Gods and Kings. Simon & Schuster. New York. 1985.

25 See e.g., Lawrence, J. L. and Despres C., Futures of Transdisciplinarity, 2004, Futures 36, Elsevier, p. 397 – 405. Manfred A. Max-Neef, Foundations of Transdisciplinarity, 2005, Ecological Economics 53, Elsevier, p. 5 – 16. Renato, M. E. Sabbatini and Cardoso S. H., 2002, Interdisciplinary Science Reviews, Vol 27, No. 4, I o M Communications Ltd., p. 303 –311. Rapport, D. J., 1998, Transdisciplinarity Education: Where, When, How? Ecosystem Health, Vol. 4, No. 2, Blackwell Science Inc., p. 79 – 80. Horlick-Jones, T. and Sime, J., 2004, Living in the Border: knowledge, risk and Transdisciplinarity, Futures 36, Elsevier, p. 441 – 456. Hamberger, E., 2004, Transdisciplinarity: A Scientific Essential, Annals New York Academy of Sciences, 1028: 487-496. Senghaas, D., 1976, Peace Research and the Analysis of the Causes of Social Violence: Transdisciplinarity, Bulletin of peace Proposals, Vol. 7, No. 1., p. 64 – 70. Valente, T. W., Gallaher, P., and Mouttapa, M., 2004, Using Social Networks to Understand and Prevent Substance Use: A Transdisciplinary Perspective, Substance Use & Misuse, Volume 39, Nos. 10–12, p. 1685-1712.

26 Wald, G., 1954, "The Origin of Life," Scientific American, August, pages 19 – 21. Folsome, C., 1979, Life: Origin and Evolution, Scientific American Special Publication, page 38. Dember, W. N. and Warm, J. S., 1979, Psychology of Perception, 2nd Edition, Holt, Rinehart and Winston, New York, page 25. Mosedale, F. E., 1979, Philosophy and Science, The Wide Range of Interaction, Prentice-Hall Inc., Englewood Cliffs, New Jersey, page 7. Kolb, B. & Whishaw, I. Q., 1980, Fundamentals of Human Neuropsychology, 1st Edition, W. H. Freeman and Co. San Francisco, page 16. Quiring, R., Kloter, U., Gehring, W., 1994, Homology of the eyeless gene in Drosophila to the small eye gene in mice and Aniridia in Humans, Science 265:785-789. Fagan, B. M., 1998, People of the Earth, An Introduction to Pre-history, Lindbar Corporation, New York, pages 40 & 115 – 129. The Undiscovered and Undiscoverable Essence: Species and Religion after Darwin, The Journal of Religion 22: p. 12 – 20, University of Chicago.

27 Popper, K. R., 1959, The Logic of Scientific Discovery, New York, Basic Books.


28 Mendis, L., 1995, Reasons for Head and Heart, Dr. Lalith Mendis Publisher, Sri Lanka.

29 Hoyle, F., Wickramasinghe, C., 1981, Evolution from Space: A Theory of Cosmic Creation, ISBN 0-671-49263-2. Hoyle, F., Wickramasinghe, C., 1986, The Case for Life as a Cosmic Phenomena, Nature, 322: p. 509-511.

30 Quiring, R., Kloter, U., Gehring, W., 1994, Homology of the eyeless gene in Drosophila to the small eye gene in mice and Aniridia in Humans, Science 265:785-789. Schroeder, G.L., 1997, The Science of God, Broadway Books, Dell Publishing Group Inc., pages 60 & 67.

31 Montoya, C. P., Transdisciplinarity, psychology and primary theories of origin, "Putting Science and Religion in its Place: New Visions of Nature," An International Symposium with the University of California Santa Barbara, Ian Ramsey Centre, St. Anne's College, University of Oxford, 13th – 16th July 2006. Montoya, D. E., and Montoya, C. P., The Migratory Theory of Genetic Fitness, "Putting Science and Religion in its Place: New Visions of Nature," Ian Ramsey Centre, St. Anne's College, University of Oxford, 13th – 16th July 2006. This original research paper was included in the oral symposium presentation, Transdisciplinarity, psychology and primary theories of origin, for contrast purposes, and the paper was distributed at the conference.

32 Johnson, G. B., 2000, The Living World, 2nd Edition, McGraw Hill, USA, p. 304 – 307.

33 Paul, A., Schuerger, A.C., Popp, M.P., Richards, J.T., Manak, M.S., and Ferl, R.J., 2004, Hypobaric Biology: Arabidopsis Gene Expression at Low Atmospheric Pressure, Plant Physiology, 134: p. 215-223. McCarthy, S., 2006, Space Farm, The Canadian Foundation for Innovation’s Electronic Magazine, Issue 16, May/June.

34 Berger, K., S., The Developing Person Through the Life Span, 1998, 4th Edition, Worth Publishers, p. 102 – 103.

35 Koichibara, H., 1987, Tomatomation, Unesco Courier, March Issue. Gilmore, E., 1988, Sunflower over Tokyo, Popula, p. 75.

36 Carlson, D., 2000, Sonic Creation Illustrated, Vol. 7, #2, p. 24-31.

37 Sykes, B., A Future Without Men: Adam’s Curse, 2004, Norton and Company, New York, p. 294 – 296.

38 Camilli, G., Hopkins, K. D., 1978, Applicability of Chi Square to 2 X 2 Contingency Tables with Small Expected Cell Frequencies, Psychological Bulletin, 88: p.163 – 167.

39 Kauffman, S. A., 1995, At Home in the Universe: The Laws of Self-Organization and Complexity, Oxford: Oxford University Press, p. vii – viii.

40 Bak, P., Tang, C., and Weisenfeld, K., 1987, Physical Review Letters, vol. 59, p. 381-384.

41 Kimura, M., 1968, Nature, Vol. 217, p. 624-626.

42 Margulis, L., & Sagan, D., 2002, Acquiring Genomes: A Theory of the Origins of Species, Basic Books, New York.

43 Alvarez, L., & Alvarez, W., 1980, Science, Vol. 208, p. 1095-1108.

44 Aviezer, N., 2001, Fossils & Faith, Understanding Torah and Science, KTAV Publishing House, Inc., Chapter 17, Hoboken, New Jersey, p. 221-238.

45 Hoyle, F., 1999, Mathematics of Evolution, Acorn Enterprises, LLC, Memphis, Tennessee.