The New Omega Point Theory

First let’s look at Tipler’s new Omega Point Theory (OPT) as modified since the 1998 discovery of cosmic acceleration, (probably) caused by a cosmological constant. A cosmological constant (dubbed “dark energy” in the media) in its simplest form is constant forever and drives the constituents of the Universe apart – eventually our Local Group of galaxies would be all the mass left this side of the cosmic horizon. This result destroyed the original OPT, which required a closed universe, so Tipler at first questioned the cosmic acceleration data, but then had an insight – what is happening to the Higgs field as mass is converted into energy by intelligence in the cosmos? Perhaps the acceleration is caused by the Higgs field not being in its ground state – which the Standard Model of particle physics seemed to require. And just how does intelligence manage to turn mass into energy?

There’s such a process available, known as reverse baryogenesis, which can turn protons and neutrons into energy – a process that liberates more than 100 times the energy released by nuclear fusion. Tipler suggests that this is how Christ’s body was turned into a “spiritual body” – he turned it into a burst of nearly massless neutrinos, then reformed it again to appear to the disciples, as recorded in the Gospel of Luke.

But why the elaborate use of high energy physics by God? Tipler believes that God, as Jesus, has demonstrated baryogenesis so we (or our computer descendants) know it exists in an accessible form, so we (or they) will one day use it to convert enough mass into energy to cause the Higgs field to achieve its ground state. That’s necessary for the Universe to then collapse into the Omega Point – which Tipler identifies with God.

Why do the laws of physics require the Omega Point? According to Tipler, left to itself the Universe will end badly. How so? When very massive stars die they explode by their cores imploding, and some imploding cores become black holes. A black hole is actually a rather simple object because all the many and varied particles that were once a star are crushed into a singularity that wipes out their individuality – a kind of super-particle forms. The singularity is trapped behind an event horizon (what we see as a “black hole”) and nothing escapes it – except there’s a catch. Energy can leak from the event horizon and so the black hole slowly decays away, ultimately erasing all memory of what fell into it. This leaves the development over time of all those particles incomplete – a property of all matter and energy, called unitarity, is violated. This might not sound like much, but in practice it means that, for example, particles of light (photons) could end up with ridiculously large energies in such trivial situations as turning on a microwave oven. Since the Earth doesn’t melt down, unitarity must be preserved somehow.

If our Universe were to expand forever and all the information of particles and their unitarity were lost down black holes, then the laws of physics would eventually break down and the world would end… badly. But what if black holes never decay away? For that to happen – and for information-destroying event horizons and singularities never to form – the future development of the entire Universe has to be directed in a certain way, and that’s what the OPT is all about. By guiding how mass becomes energy and how the Universe then collapses into the Final Singularity – which doesn’t violate unitarity because it is the End of Time and all information is preserved by it – Life takes on a very big cosmic role. And to fulfill that role, according to Tipler, Life must become Omnipotent, Omniscient and Omnipresent, i.e. it must end in God, the Omega Point.

As I’ve already said Tipler believes that God, the Omega Point, has sent information to us via Jesus Christ, but just how? According to Tipler’s mathematics of our Universe there are three Singularities – the Beginning of Time, the End of Time, and the All Time Singularity which joins the Beginning and End. Or, in Christian terms, the Spirit, the Father and the Son, respectively. Thus the Son – the All Time Singularity which parallels and bounds all moments in all Universes of the Multiverse – is joined to the Future and the Past, and thus shares in their knowledge, and provides the means to guide the Universe(s) from the Past to the Future. And one such piece of knowledge is how to reverse baryogenesis.

And how has Jesus given us this knowledge? As well as the resurrection of his old body into a new one, there may well be physical evidence of that event. Tipler believes that traces of the Resurrection might still be present on the Shroud of Turin, itself a controversial relic. There’s a lot of historical evidence for the Shroud’s authenticity and good physical reasons to doubt the 1988 radiocarbon dating was actually of the original material and not a Medieval patch. So Tipler’s endorsement of the Shroud isn’t an irrational lapse, as some have claimed, but a reasonable proposal – if the Shroud really does preserve traces of Jesus’ blood and evidence of a macroscopic sphaleron causing reverse baryogenesis, then it’s an incredibly important historical artifact.

On the other hand Jesus also makes possible ethical use of the awesome power of mass-energy conversion, through his teachings and moral example – sacrificial living and dying for other people. A world armed with total mass-energy conversion is a very dangerous place indeed, unless we can achieve a higher ethical civilisation. The real possibility of such power might just need the direct intervention of Jesus himself. Or so Tipler believes.

The Physics of Xianity Part 1

I’ve yet to read Tipler’s “The Physics of Christianity” past Chapter 1, which he offers online at his homepage, but over the past few years he has published a number of articles which give the basic outline of his argument – that Christianity’s basic claims are open to scientific verification or refutation, and that the case from the demands of physics means Christianity is true, the one true faith in fact. Just how the facts support Xian dogma requires his Omega Point Theory to interpret them – God is the Singularity at the end of the Universe, in fact at both ends, and not just of our Universe but of all the Many-Worlds required by quantum theory, as explicated by Everett and Wheeler.

Personally I have problems with that initial claim – that the Everett-Wheeler Multiverse is the ONLY valid interpretation of Quantum Mechanics – as John Cramer’s Transactional Interpretation is as good and requires just one Universe. Also Afshar’s experiments on two-slit interference of light still haven’t been explained away by Many-Worlds theorists.

That being said let’s give a summary of what his modification of his original OPT is now telling us about the Multiverse and Xian dogma.

• Universal acceleration is caused by the Higgs field being in a false vacuum. Reverse baryogenesis, employed by intelligent life, will eventually return it to its ground state allowing cosmic collapse into the Omega Point.
• Jesus performed many of his miracles through the reverse baryogenesis process, even if the miracle is strongly suppressed probabilistically speaking – how? Because the consistency of physics requires it.
• Jesus was born of a Virgin, Mary, as a parthenogenetic XX male.
• Jesus was Resurrected via the conversion of his mass-energy into neutrinos, and reformed into a new “spiritual” body.
• This process – macroscopic sphaleron tunnelling – left traces on the Shroud of Turin as an apparent chemical reaction.
• Research on the process, using the Shroud perhaps, will give us the power of reverse baryogenesis – needed to colonise the Universe, but threatening humanity’s continued existence.
• Humanity is also threatened by the rise of human-level robots in the next 50 years. The combination of reverse baryogenesis and androids is the cause of the “Great Tribulation” that Jesus prophesied would precede his direct intervention.
• ETIs exist, but beyond our Hubble horizon, and their use of the macroscopic sphaleron tunnelling process independently of us will collapse the Cosmos.
• SETI will be able to confirm the existence of ETIs by building immense telescopes to image the home-planets of aliens billions of years in the past.

Tipler’s ideas can be found in several documents available online.

Firstly, his basic physics ideas can be found at the Internet Physics PrePrint archive…

Feynman-Weinberg Quantum Gravity and the Extended Standard Model as a Theory of Everything

Intelligent Life in Cosmology

Closed Universes With Black Holes But No Event Horizons As a Solution to the Black Hole Information Problem


The Ultimate Future of the Universe, Black Hole Event Horizons, Holography, and the Value of the Cosmological Constant

Genesis: How the Universe Began According to Standard Model Particle Physics

Secondly his arguments with respect to the dogmas of Xianity can be found here…

The Omega Point and Christianity

…plus some associated comments by Wolfhart Pannenberg.

Finally, the first chapter of the new book is available online at Tipler’s webpage…

…which summarises many of his recent comments elsewhere.
I’ll go into more detail in Part 2.

Gl581c… habitable?

The big news from the ESO is the discovery of three planets around a red-dwarf star, Gliese 581, just 20.5 light years away – and one of the planets, Gl 581c, is “in” the star’s habitable zone and small enough to be (roughly) Earth-like.

Maybe not so habitable. According to this article by Barrie Jones, Nick Sleep and David Underwood, the habitable zone of Gl 581 is from 0.113 AU to 0.224 AU – BUT the new planet in the news is at a radius of just 0.073 AU i.e. it’s more like Venus and less like Earth. Albeit a probably wet Venus – a scaldingly hot ocean under a massive atmosphere of water vapour. Not nice real estate until improved by a reflective soletta sending some heat back at the star. Call it an ETP… Easily Terraformable Planet.

Its fellow star-system mate, Gl 581 d, is at 0.25 AU and somewhat more likely to be habitable, but it masses about 8 Earths, so it’s a bit big. And, of course, from my cursory survey of comments pages at different fora… well, the usual crap is being bandied around e.g.

  • So many stars, so many planets, there must be ETIs
  • Let’s not go and “pollute” this new world with our “impure” earthliness
  • Why not use the money to help the poor here on earth before we go looking for new planets?
  • The usual truisms, humanity-hating, future-cringing and hopelessly idealistic BS.

    (a) ETIs might exist, or they might not. But if they do/did then why no visible signs? Astrophysical engineering is conspicuously absent in our Galaxy, plus many other Galaxies.

    (b) Life propagates Life wherever it goes. The “pollution” of human-life is in no way alien when you look at the behaviour of all large animal species – we change our environments. We might be so stupid as to foul our own nests BUT personally I see signs we’re getting better at cleaning up after ourselves. City-dwelling modern humans have a SMALLER ecological foot-print than their predecessors, a fact that Luddites and Tree-Huggers blithely ignore.

    (c) The last thing the poor of the world need is our money because they’re poor. They need us to buy stuff from them, not tie another albatross around their necks like so much foreign “aid” has been in the past. Community development is fantastically important, but it only helps if it builds up people to trade and compete with the rest of us. Mere hand-outs kill. And in the end they want to be like us – and part of being us is looking beyond this horizon. That’s why astronomy and space is so damned important. A “simple life” impoverishes all.

    My $0.02 worth.

    Solar for a Mag-Beam

    The Mag-Beam propulsion concept was developed by Robert Winglee and his fellow researchers as a means of propelling quick interplanetary shuttles without mucking around with rockets. Basically it’s a big plasma rocket turned into a big plasma gun, that fires a fast, hard ion-beam at a magnetically ensheathed shuttle vehicle. This gives the shuttle a big shove and sends it on its way – to be slowed down at the destination by another Mag-Beam, or a bit of adroit aerobraking if it’s slow enough.

    Problem is the Mag-Beam wants a lot of juice to do the job – for example, a 20 ton shuttle being accelerated to 20 km/s needs a 300 MW Mag-Beam firing at it for about 4 hours, which is a lot of battery mass (3,000 tons at 400 W.hr/kg of battery.) Once the beam has fired the massive battery pack can be recharged via solar power over a few weeks or days before the Mag-Beam is needed again. But is the battery pack needed at all?
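As a sanity check, here’s the arithmetic behind those figures (a sketch using the 20 ton / 20 km/s / 300 MW / 400 W.hr/kg numbers quoted above, ignoring beam inefficiencies):

```python
# Back-of-envelope check on the Mag-Beam battery-mass estimate.
m_shuttle = 20e3        # kg, 20 ton shuttle
v_final = 20e3          # m/s, target speed
kinetic_energy = 0.5 * m_shuttle * v_final**2    # joules delivered to the shuttle

beam_power = 300e6      # W, Mag-Beam output
beam_hours = kinetic_energy / beam_power / 3600  # hours of firing needed

battery_wh_per_kg = 400
battery_tons = beam_power * beam_hours / battery_wh_per_kg / 1000

print(f"energy: {kinetic_energy:.1e} J")       # 4.0e12 J
print(f"beam time: {beam_hours:.1f} h")        # ~3.7 h, i.e. "about 4 hours"
print(f"battery mass: {battery_tons:.0f} t")   # ~2,800 t, i.e. roughly 3,000 tons
```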

    Geoffrey Landis designed a Solar Power Satellite that beams 1 GW to the ground at 33% efficiency, and it massed just 1,300 tons – which means its in-space power output is 3 GW, ten times the power needed by the Mag-Beam. Thus an in-space SPS power source for a 300 MW Mag-Beam need only weigh 130 tons. That’s a mass that could be launched in one piece by an Energia or Saturn V class launcher – like the new Ares launchers for NASA’s Return to the Moon.

    To do everything the Mag-Beam is required to do, though, the power has to be delivered to multiple Mag-Beam plasma-gun stations. What made Landis’ SPS so light was that it remained in an orbit perpetually pointed at the Sun, so there was no need for a rotating power-transfer collar between the array and the transmitter, and the same reflectors used for power gathering act as the transmitting antennae in the Landis design. To keep that simple design and still power Mag-Beam stations in multiple locations, Power Relay Satellites – really just microwave wave-guide horns for changing a beam’s direction – might be needed. Or we might just bite the bullet and have a rotating connection between array and station. Both add mass, but it’ll be a LOT less than 3,000 tons of batteries. Plus PRSs can point at rectennae beaming power up from the ground and transfer it back down at another location, allowing a PRS to do multiple roles and make money transferring power to areas of peak demand on the ground.
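The SPS scaling works out like this (a sketch assuming the 1,300 ton satellite’s mass scales linearly with generated power – crude, but it matches the estimate above):

```python
ground_power = 1e9                     # W delivered to the ground by Landis' SPS
in_space_power = ground_power * 3      # W generated in orbit, at ~33% efficiency

landis_sps_tons = 1300
magbeam_power = 300e6                  # W needed by the Mag-Beam
magbeam_sps_tons = landis_sps_tons * magbeam_power / in_space_power

print(f"in-space output: {in_space_power/1e9:.0f} GW")          # 3 GW
print(f"SPS mass for the Mag-Beam: {magbeam_sps_tons:.0f} t")   # 130 t
```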

    Aside from propelling shuttles to Mars and elsewhere, a Mag-Beam can also boost a sub-orbital vehicle (a modified Virgin Galactic SpaceShip, for example) to orbital speed. The power required may be higher, but for a shorter burst. If the sub-orbital vehicle can boost to a horizontal speed of ~1 km/s, then another 6.8 km/s is needed for low orbit. At an acceleration of 20 m/s^2 that’s a boost lasting 340 seconds, just under 6 minutes. In energy terms it’s the equivalent of 120 minutes of a 300 MW beam. Perhaps the higher power could be supplied from the ground?
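The boost-phase kinematics are easy to verify (a sketch using the figures above; 20 m/s^2 is about 2 g):

```python
dv = 6.8e3      # m/s still needed after the vehicle's own ~1 km/s boost
accel = 20.0    # m/s^2 of Mag-Beam push

boost_seconds = dv / accel
print(f"boost: {boost_seconds:.0f} s (~{boost_seconds/60:.1f} min)")  # 340 s, ~5.7 min
```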

    Confabulation, UFOs and Empty Tombs

    Hi All

    Kevin Randle is a long time UFO investigator and has just posted a really interesting discussion of how a witness’ description of a sighting evolved over time, apparently filling in extra details that never happened except in her memory.

    He quotes a quite famous study on how students reacted to the Challenger Space Shuttle disaster of 1986, and their recollections of the event some 3 years later. I’ll quote that segment directly…

    What happened to Erenberger [UFO witness] is not all that uncommon, even with events that are considered “traumatic”, and therefore, fixed in memory. Ulric Neisser, who was teaching at Emory University when the space shuttle Challenger exploded in 1986, realized that he had a perfect opportunity to study these sorts of “flashbulb” memories. The day after the disaster, he gave the students in his freshmen psychology class a short questionnaire about the events. He asked them where they were when they heard and how they had heard and other such questions. He then filed the questionnaires for three years.

    The students, when they were seniors, were given the same questionnaire about Challenger. He added an additional question, which was about the accuracy of their memories. According to the results provided by Neisser and graduate assistant Nicole Harsh, a quarter of the students didn’t have a single memory that was accurate. In one case, a student said that he had been at home, with his parents, when he heard, though the questionnaire, prepared the day after the explosion, revealed that he had been in class when he learned of the disaster.

    More important, however, was the reaction of the students to the proof that their memories were inaccurate. None disputed the accuracy of the statements they had made the day after the event, but one student, when confronted by the discrepancy between what she had written just hours after the explosion and with what she remembered three years later, said, “I still remember everything happening the way I told you. I can’t help it.” She was defending the memories that were clearly an invention in her own mind.

    (my parentheses)

    Human memory can be incredibly creative, though – importantly – some of the students had perfect recall. But others, as described, had invented memories that seemed utterly real and couldn’t shake the feeling of reality. But remember there was also a triggering event which created the base memory that was subsequently modified – something real happened (Challenger exploding, bright lights in the sky) and generated an intense emotional reaction, but the vagaries of memory created even more details, that never happened.

    So how is this relevant to the Empty Tomb, the cornerstone event of the Christian faith?

    A key finding of New Testament scholarship is what’s called The Priority of Mark, that the Gospel of Mark was used by both Matthew and Luke as their primary narrative source for Jesus’s life and teachings. But, interestingly, both added a lot more details to the final act of Jesus’s life, his “Passion”, or Arrest, Trial, Crucifixion, Burial, Empty Tomb, and After. But not the same details, and that’s the puzzle.

    Why are they so different? The standard excuse by Inerrantists (people who believe the Bible is 100% accurate) is that the Gospels were written from different points of view, as eye-witnesses to an event see it from different angles. There’s a lot wrong with such a simplistic hand-waving explanation, but as we have seen, memory can be quite creative. Mark is usually believed to be the first of the three, and it also tells us the least about what happened after the Empty Tomb – just a brief vision of angels and a command to go to Galilee, as Jesus had told the disciples to before his arrest.

    So, by analogy with Challenger and the UFO, the Empty Tomb might have been the trigger, utterly inexplicable, and emotionally loaded.

    Matthew tells us that not only did the women see angels, but they also saw Jesus, who reiterated the command. The disciples faithfully went to Galilee and saw Jesus from a mountain top, though some wondered/doubted. That sounds like the kind of embellishment that could follow from a misremembered event.

    Luke admits he wasn’t an eye-witness and is only gathering other people’s reports, trying to put the whole affair in order. So whose vague memories is he passing on to us? He knows nothing of appearances in Galilee and speaks of new appearances in Jerusalem itself, as well as on the road.

    Then there’s John, who has both a Jerusalem appearance (though no Emmaus Road encounter) and a Galilee appearance (which is oddly like a natural miracle, or two, in Mark.) And John is the only Gospel that claims to be from an eye-witness.

    Finally there’s a terse summary of appearances in Paul’s First Letter to the Corinthians but they don’t fit in too well with the Synoptics either.

    So what triggered all these reports of appearances of a Risen Jesus? And why could no one agree on what they remembered?

    Teacher of Righteousness

    The Dead Sea Scrolls pose many mysteries to scholars and interested lay-people alike. A persistent puzzle is the identity of the Teacher of Righteousness (ToR), who has been identified with a huge array of people, most (in)famously John-the-Baptist and Jacob/James, brother of Jesus. Contra those last theories, most scholars have thrown up their hands and settled for leaving the ToR nameless, mainly because no one in the 2nd Century BCE who retains a name in the scant records we have from that time fits the bill.

    But the Scrolls themselves present us with quite a few names, most notably Zadok and Judah, as possible names for the ToR – albeit indirectly named via titles like “sons of Zadok” and “sons of Judah” for their followers. A case has been made for Zadok as the ToR and Judah as “the Priest”, a singular term for the Scroll Community’s leader after the ToR. In that scenario both Zadok and Boethus (mentioned in Talmudic oral histories of the period) were disciples of a certain Antigonus of Socho (the Righteous “Shoot” before the ToR?) and split over religious differences – a pious way of putting the lust for power.

    Alternatively the ToR could be Judah, and “Zadok” his precursor. Michael Wise’s controversial book The First Messiah takes this view, though I fear the rest of his case (that Judah self-identified as a Messiah) has clouded the merits of the basic identification. Stephen Goranson is another Dead Sea Scrolls scholar who makes the same identification, arguing it in this essay (in pdf format), which contests a few points contra Wise on the identities of other players in the Scrolls drama.

    A good lay-person’s review of Wise’s book can be found here, albeit from a religious sceptic’s point-of-view.

    Prof. James Tabor – one of the experts consulted for “The Jesus Family Tomb” documentary – examines Wise’s case in some detail, and draws parallels between Jesus’s disciples and the disciples of David Koresh… here.

    A transcript of Wise discussing his ideas can be found online here.

    Tim Bromage replies…

    Hi All

    I emailed Tim Bromage on the new ER1470 reconstruction and just how he got the new brain volume. His reply was interesting, though he clearly has issues with the paleo-anthro “mafia”.

    My email…

    Hi Tim

    I’ve been watching various science-bloggers discuss your recent poster at IADR and several have been critical (or jealous) of the media attention given to your new reconstruction, especially the new brain volume. Interestingly no one has mentioned Alan Walker’s reservations about Leakey’s original reconstruction, which he discusses in his book “The Wisdom of Bones” – as he puts it the facial angle was crucial to the perceived affinities of the skull, and he felt it was more Australopithecus and less Homo. Bernard Wood has seemingly backed that point of view by renaming ER1470 Australopithecus rudolfensis.

    To the point, John Hawks takes issue with your reconstruction’s brain volume here… http://johnhawks.net/weblog/fossils/habilis/er/bromage_1470_2007.html

    His is the only substantial criticism I’ve seen so far, so I’m curious: how did you arrive at the new brain volume?

    sincerely

    Adam Crowl

    …and Tim’s reply, which promises a full paper…

    Hello Adam:

    Indeed, all that you say is true. I have mentioned Alan’s views to all
    interviewers that have picked up on the poster, but it is no surprise
    that we have little impact on how the stories are written.

    What I have also found to be true is that ignorance is the paramount
    factor behind most, if not all, of the criticism I have heard.

    The reconstruction of the brain volume was trivial. Of course, the
    problem of determining this in 1972 was due mainly to the lack of
    biological principles. The craniofacial growth and development folks
    had certainly little to say, even the comparative mammalian (let alone
    primate) people. Never mind, this did not stop the reconstruction from
    becoming hardened at a quite high number, thus satisfying the need.
    This perspective, like Alan had warned, also found its way into the
    pitch of the face on the skull. Oh well.

    Biologically-based criteria now exist and determine where the face must
    be in tissue space, not where I or anybody else THINK it was. Having
    established this, it transpires, for good reason, that there is a
    statistically significant relationship between brain size and facial
    prognathism, which Francis Thackeray and colleagues observed (Francis is
    a co-author on the abstract and paper in preparation). The facial
    prognathism now determinable, it was a simple matter to subject the
    numbers to the formula for obtain brain volume (526cc +or-49).

    Thanks for your reasoned remarks. The problem with the field
    responsible for making the criticisms, is that the science is lacking,
    and lacking this, the only course of action is to climb the hill and
    start shooting off the cannons. I could hardly be more disinterested.
    This field has a desperate problem that I fear it will not recover from;
    it is obsessively protectionist and closed, and I will have nothing of
    it. I don’t need controversy to satisfy my ego. I could have cared
    less if no one reported on this poster. I do care about good science
    and open dialog however. Let’s see where the published paper takes us.

    Tim

    …so as you can see he has more faith in known developmental dynamics than in subjective reconstructions by paleoanthropologists with an axe to grind. I think that’s a fair call. The ensuing debate will be important to observe because it might be an injection of objective analysis into a rather “imaginative” field.

    Personally, from what I’ve read of other H. rudolfensis finds and putative H. habilis remains, we’re looking at a transitional population between Australopiths and Ergaster/Erectus. The Dmanisi hominids bridge the latter-half of the divide quite neatly and I think they share affinities with the much more poorly preserved Habilines, as well as being obviously Ergasts. So imagine the sequence like so…

    South Apes => Habilines => Dmanisi => Ergaster/Erectus => late Homo

    …but along the way there were side-branchings, like the Asian erects, the Hobbits, the Neanderthals. And, in the other direction, it wouldn’t surprise me if we discover that Pan and Gorilla branched off from the early South Apes, since the branching time has been brought forward to a mere 4 mya.

    Doubtless more surprises await.

    Outer Planet Mining

    Jupiter was touted as the major source of He3 fusion fuel for Daedalus, some 30,000 tons of it, but Jupiter’s gravity well is HUGE and far beyond the abilities of solid-core fission rockets, stretching the capabilities of gas-core rockets in terms of thrust-to-weight ratios. So what about the other gas-giants?

    Planet    Mass      Radius   P-mag     P-hydro   Eq. velocity   Orbital vee   Delta-vee
              (Earths)  (km)     (sec)     (sec)     (km/s)         (km/s)        (km/s)
    Jupiter   317.838   71492    35727.3   35618     12.573         42.098        29.525
    Saturn     95.161   60268    38362.4   38196      9.871         25.088        15.217
    Uranus     14.536   25559    62064     61704      2.588         15.057        12.469
    Neptune    17.148   24764    57996     60120      2.683         16.614        13.931

    As you can see Uranus has the most forgiving gravity field, but Neptune and Saturn aren’t out of reach either, and Saturn has proximity to the Sun and Jupiter, for gravity assists, in its favour, as well as Titan, a moon with a dense atmosphere and decent gravity. The Daedalus study assumed floating factories at the 0.1 bar level in Jupiter’s atmosphere serviced by gas-core automated shuttles, but if there’s enough need for them, volatiles from the big planets’ atmospheres can be scooped and shipped up for processing at an off-world base.

    Scoop-ships could also allow starships to be self-fuelling. I have just received an issue of the October 1973 “Analog” – the one with a gorgeous Rick Sternbach cover of two Enzmann starships and the Cover article by G. Harry Stine, “A Program for Star Flight”. It’s quite a memorable article as Stine was arguing for a star flight program to begin c.1990, and the development of a massive in-space industrial base to support the effort. His initial phase would study the nearby stars with Lunar interferometers, then launch million-ton space-probes at 0.9c, and finally launch ten-ship fleets of Enzmann starships (roughly 12,000,000 tons each, mostly deuterium fuel.) Quite a major effort, but he optimistically costs it at $100 billion (in 1973$.)

    A few problems arise – 0.9c from Orion-style pulse drives is a touch unlikely, even with mass-ratios over 1,000 – but overall the concept is sound. Magnetic sails might change the approach, but the basic idea of attaching starships to huge masses of propellant, rather than big tanks, is a good one. However I have read that hydrogen and deuterium ice are mechanically like Jello and thus utterly useless as envisaged. Lithium-6 is a fusion fuel and pretty strong at cryogenic temperatures, so it might be the fuel of choice. Either that or carbon nanotubes might allow very, very light-weight tanks to keep the deuterium Jello in. The 12,000,000 ton starships probably mass just 120,000 tons empty (the design needs BIG mass-ratios for speed), but the size Stine quotes is all wrong. Deuterium’s density is 0.16 relative to water, yet the fuel sphere is described as 1000′ across, implying a density of ~0.8, some five times denser than deuterium Jello. A sphere 1,710′ across will do nicely.
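The sphere-size correction can be checked directly (a sketch; I’ve assumed metric tons and deuterium ice at 160 kg/m^3, i.e. 0.16 relative to water):

```python
import math

FT = 0.3048                      # metres per foot
fuel_mass = 12e6 * 1000          # kg of deuterium fuel
rho_d2 = 160.0                   # kg/m^3, deuterium ice

# Density implied by Stine's quoted 1000-foot sphere:
r_quoted = 1000 * FT / 2
rho_implied = fuel_mass / (4/3 * math.pi * r_quoted**3)

# Diameter actually needed at deuterium-ice density:
r_needed = (fuel_mass / rho_d2 / (4/3 * math.pi)) ** (1/3)
d_needed_ft = 2 * r_needed / FT

print(f"implied density: {rho_implied:.0f} kg/m^3")   # ~810, i.e. ~0.8 x water
print(f"needed diameter: {d_needed_ft:.0f} ft")       # ~1,700+ ft
```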

    Refining out 12,000,000 tons of deuterium from a gas giant will be quite a task. Since deuterium is about 1/2000th of the abundance of protium in Jupiter some 24 BILLION tons of hydrogen will be sifted through to collect the fuel. Bit of a tall order, but inevitable when you’re trying to fit 2,000 people on a starship that’s over 2,000′ long and push it through a delta-vee of 0.3 c.
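That sifting figure is just the deuterium-to-hydrogen ratio at work (a sketch assuming D/H ~ 1/2000 as quoted):

```python
fuel_tons = 12e6                  # tons of deuterium needed
hydrogen_tons = fuel_tons * 2000  # protium sifted through at D/H ~ 1/2000
print(f"{hydrogen_tons:.1e} tons of hydrogen")  # 2.4e10, i.e. 24 billion tons
```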

    Actually I don’t know that the empty mass of the Enzmann starship is 120,000 tons, which is a touch frustrating since I’m paying attention to details here. What Stine does say is that the probes will hit 0.9 c with a mass-ratio of ~1,000, and the starships will hit ~0.3 c (thus a delta-vee of ~0.6 c). A bit of non-relativistic maths, assuming the same exhaust velocity, means the starships have a mass-ratio of 100 (=1000^(0.6/0.9)). Now fusion reactions don’t make enough particles with sufficient energy to get that sort of exhaust velocity (about 0.13 c), nor are Orion pulse-drives 100% efficient (~25%?). More realistically an Enzmann starship will hit 0.08–0.15 c, which is respectable for a fusion-drive.
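Those deductions follow from the non-relativistic rocket equation (a sketch; relativistic corrections at 0.9 c, and pulse-drive inefficiency, would shift the numbers somewhat):

```python
import math

C = 3e8                                      # m/s, lightspeed

# Probes: 0.9 c from a mass-ratio of ~1,000 implies the exhaust velocity,
# via delta-v = v_e * ln(mass ratio).
v_exhaust = 0.9 * C / math.log(1000)

# Starships: delta-vee of 0.6 c (0.3 c out, 0.3 c to stop) at that same v_e.
mass_ratio_ship = math.exp(0.6 * C / v_exhaust)   # = 1000**(0.6/0.9)

print(f"exhaust velocity: {v_exhaust/C:.2f} c")       # ~0.13 c
print(f"starship mass-ratio: {mass_ratio_ship:.0f}")  # ~100
```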

    ER1470 Reconsidered…

    In 1972 Richard Leakey and his team discovered a curious skull, now catalogued as ER1470, and dated it to c. 2.6 million years ago. Leakey reconstructed it as a flat-faced, large-brained hominid and called it Homo habilis, though his co-worker Alan Walker doubted it was flat-faced, or even Homo [see The Wisdom of Bones by Walker and his wife Pat Shipman.] Now it seems Walker was on to something. Dr. Timothy Bromage has used the known developmental sequence of all mammal skeletons, especially the skull, to show that ER 1470 was reconstructed incorrectly, as reported by his University, NYU…
    Man’s Earliest Direct Ancestors Looked More Apelike Than Previously Believed… though the title is slightly misleading, as there’s some doubt about ER1470’s ancestral status.

    Bromage presented his findings at the IADR meeting (March 23-27) in a poster, available here (warning 8.8 megabyte pdf) which is very interesting. The new ER1470 is much more apelike and has a brain capacity of just 526(+/-49) cc, versus the ~ 750 cc of Leakey’s original reconstruction. The new value is much the same as the lighter-built ER1813 skull, 510 cc, compared here with the old reconstruction of ER1470.

    BTW fossil hominids have odd-sounding catalogue numbers; “ER” indicates the location of the find, East Rudolf (i.e. Lake Rudolf in Eastern Africa).

    Hiatus

    Study looms and blogging is eating into my assignment writing time, so expect very sporadic writings for a few months.

    One thing before I go is a topic Paul Gilster from Centauri Dreams mentioned – the hazards for an interstellar high-speed fly-by from interplanetary dust. Daedalus, for example, was to fly through Barnard’s Star’s system at 12.2% of lightspeed, which gives a massive energy punch to any dust impacts. So I quantified the hazard – in the Sun’s Inner System interplanetary dust masses about 10^16 kg. If every bit were a 1 mm cube, density ~2, then each speck masses 2 x 10^-6 kg, making for about 5 x 10^21 specks of dust. Sounds like a lot, but the volume of the inner 2 AU of the Solar System is 4/3*pi*(3.0E+11 m)^3 = 1.13 x 10^35 cubic metres – 22.6 trillion cubic metres per speck. That’s a cube 28.3 km on a side per speck.

    If Daedalus is 50 metres wide (that’s its dust-shield) then it punches a 2 AU long, 50 m wide hole through the dust and encounters about 26 dust specks – each one packing a punch of 1.33 gigajoules (a few hundred kilos of TNT), which isn’t fun, but blown into plasma by the precursor shielding cloud it’s a survivable dose of energy, heating the beryllium shield to about 1000 K. The Daedalus sub-probes are much smaller and probably have a 50/50 chance of an encounter, but they have precursor shield systems too.
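Here’s the whole dust-hazard estimate in one place (a sketch reproducing the figures above: 1 mm specks at density 2, a 50 m shield, 12.2% of c):

```python
import math

AU = 1.5e11                    # m
dust_total = 1e16              # kg of dust inside ~2 AU of the Sun
speck_mass = 2e-6              # kg: a 1 mm cube at 2 g/cm^3
n_specks = dust_total / speck_mass               # ~5e21 specks

volume = 4/3 * math.pi * (2 * AU)**3             # ~1.13e35 m^3
vol_per_speck = volume / n_specks                # ~2.26e13 m^3
cube_side_km = vol_per_speck ** (1/3) / 1000     # ~28.3 km cube per speck

# Daedalus' 50 m wide shield sweeps a 2 AU long tube through the cloud.
tube = math.pi * 25**2 * (2 * AU)
hits = tube / vol_per_speck                      # ~26 impacts

v = 0.122 * 3e8                                  # 12.2% of lightspeed
impact_gj = 0.5 * speck_mass * v**2 / 1e9        # ~1.34 GJ per speck

print(f"{cube_side_km:.1f} km cube per speck, ~{hits:.0f} hits, {impact_gj:.2f} GJ each")
```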

    For interest’s sake I also looked at some figures on mass in the Oort Cloud which Robert Zubrin uses in “Entering Space” – his results are seriously in error. He describes the Oort as containing 100 km cometoids about 10 AU apart – sounds spacious, but the Oort is 100,000 AU in radius, thus he’s describing 10^12 cometoids 100 km in size. Then 1000 times more (i.e. 10^15) 10 km cometoids 1 AU apart, then 1000 times that number of 1 km comets just 0.1 AU apart… and so on down to 10 cm chunks just 1,500 km apart. Seven size classes in all, giving a total mass of 7 x 10^30 kg (3.5 solar masses!) for comets of 500 kg/m^3 density.

    That’s ridiculous! Long-period comet orbits wouldn’t be anything like stable, or long-period, with that much mass out there. In actual fact the total mass is usually quoted as 30–40 Earth masses – tens of thousands of times less than Zubrin’s wild figure. Fortunately for us, or else we’d have a sky full of comets all the time and mass extinctions every ~3,000 years. The real Oort Cloud is very wide open spaces…
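Zubrin’s population can be tallied directly (a sketch; I’ve read his quoted sizes as diameters and used his 500 kg/m^3 density – read as radii the total is eight times bigger, but it lands in solar-mass territory either way, vastly more than ~30–40 Earth masses):

```python
import math

rho = 500.0                    # kg/m^3, cometary ice
count = 1e12                   # 100 km cometoids, ~10 AU apart over a 100,000 AU cloud
diameter = 100e3               # m

total = 0.0
for _ in range(7):             # seven size classes, 100 km down to 10 cm
    mass_each = rho * 4/3 * math.pi * (diameter / 2)**3
    total += count * mass_each
    count *= 1000              # each class is 1000x more numerous...
    diameter /= 10             # ...and 10x smaller, so every class adds the same mass

earth_masses = total / 6e24
print(f"total: {total:.1e} kg (~{earth_masses:.0e} Earth masses)")  # ~1.8e30 kg
```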

    Ok. Bye for now!