
Dreary mornings on Titan…

Forecasters predict morning drizzle on Titan – space – 11 October 2007 – New Scientist Space

…that lurid orange sky that Huygens showed us could really get to me after a while. A light rain would be a welcome change. When Bussard fusors are up and running, a shuttle to Titan would take 11-13 weeks – much better than Cassini’s six-plus years.
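
For what it’s worth, an 11-13 week trip is roughly what you get from a continuous “accelerate, flip at the midpoint, decelerate” trajectory at about a hundredth of a gee – a back-of-the-envelope Python sketch of my own below, not Bussard’s actual mission profile:

    import math

    AU = 1.496e11   # metres
    G0 = 9.81       # m/s^2, one gee

    def brachistochrone_weeks(distance_au, accel):
        """Accelerate to the midpoint, flip over, decelerate: t = 2*sqrt(d/a)."""
        d = distance_au * AU
        return 2.0 * math.sqrt(d / accel) / (7 * 24 * 3600)

    # Earth-Saturn distance varies from roughly 8.5 to 10.5 AU; 0.01 gee is my
    # assumed continuous acceleration for a fusion shuttle, not Bussard's figure.
    for d_au in (8.5, 9.5, 10.5):
        print(d_au, "AU:", round(brachistochrone_weeks(d_au, 0.01 * G0), 1), "weeks")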

Ultra-sized Constructions

Air-filled habitats 400,000 km across

Science fiction has explored super-sized habitats of various kinds – Dyson Spheres, Alderson Disks, Ringworlds, Orbitals, and so forth. All of these suffer from one basic, fatal flaw – they can’t exist. Solid structures that size cannot be made from any known materials, and the structures are often dynamically unstable as well. For example, both the solid Dyson Sphere and the Ringworld spin about their central star at super-orbital speeds to generate spin-gravity – unless we’re talking a Bob Shaw Orbitsville made of gravity-generating unobtainium. And it’s a basic fact of physics that a shell or ring feels no net gravitational pull from masses within it, so its position relative to the star isn’t self-correcting – small perturbations go uncorrected and eventually the structure runs into its star.

So what can be built instead? And how big can it get? Karl Schroeder’s Sun of Suns features a 5,000 km wide air-filled sphere made of carbon nanotubes – which is roughly the limit based on the strength of nanotubes. And carbon is the strongest material in great abundance in the Universe. Theoretically there could exist ridiculously strong “materials” made of higgsinos, monopoles and black holes, but all require some glossing over the difficulty of making the stuff and the unknown physics behind some of the claims. For example, the Solar Queendom stories of Wil McCarthy feature a sun-spanning ring made of collapsium – material made of stabilised mini-black holes. Essentially the black holes are arranged so they eliminate their mutual gravity and inertia and they recycle their Hawking radiation – all of which is very fringey physics. Fun, but dubious.

So I wondered just how big a gas-filled object could get before it got into trouble with gravity. A volume filled with a gas at (on average) constant pressure and temperature has a limiting size known as the Jeans radius – the point at which the magnitude of the gas’s gravitational potential energy equals twice its thermal kinetic energy. In equation form that’s:

(3/5)GM^2/R = 3NkT

where G is the gravitational constant, M is the gas mass (kg), R the volume’s radius (m), N the number of gas molecules, k Boltzmann’s constant, and T the gas temperature in Kelvin. Of course the mass, M, equals N·μ, where μ is the mean molecular mass of the gas. Combining this with the ideal gas law, PV = NkT, and rearranging gives R = (kT/μ)·sqrt(15/(4πGP)), and all sorts of interesting bits of data fall out – a gas sphere of Earth mix, pressure and temperature starts collapsing under its own gravity at a radius of about 34,761 km. If we change the gas mix – say 50:50 helium/oxygen – and lower the molecular mass, the radius goes up. If we decrease the pressure the radius also goes up. For that heliox mix at 0.4 bar pressure the limiting sphere is about 229,000 km in radius.
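
A quick Python sketch of that rearranged formula. The temperature (~288 K) and the heliox mean molecular mass (~7.1 amu, i.e. roughly 50:50 by mass) are my own assumptions, chosen to land near the figures quoted above:

    import math

    G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    k   = 1.381e-23    # Boltzmann's constant, J/K
    amu = 1.661e-27    # atomic mass unit, kg

    def jeans_radius_km(mu_amu, pressure_pa, temp_k):
        """Radius at which (3/5)GM^2/R = 3NkT for a gas obeying PV = NkT."""
        mu = mu_amu * amu
        return (k * temp_k / mu) * math.sqrt(15.0 / (4.0 * math.pi * G * pressure_pa)) / 1e3

    print(round(jeans_radius_km(29.0, 101325, 288)), "km")  # Earth mix at 1 bar: ~34,700 km
    print(round(jeans_radius_km(7.1, 40000, 288)), "km")    # heliox at 0.4 bar: ~225,000 km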

Imagine a sphere containing heliox at 0.4 bar pressure and 400,000 kilometres across – enough volume to fit almost two dozen Jupiters. The interesting thing is that the sphere doesn’t have to hold the gas in by brute strength if it’s thick enough – the self-gravity of the sphere and the gas mass provides the counter-pressure. If we make it from diamond (the strongest carbon allotrope, density 3.5 g/cm³) then it only has to be 1,345 metres thick for its gravity, plus the gas’s, to provide sufficient counter-pressure.

And the mass? Just 0.4 Earth masses for the shell and 0.655 for the gas – thus just a bit more than an Earth mass all up. Such masses are ludicrously large for us mere mortals to contemplate, but to the postulated Post-humanity of current SF such a project may well appeal. And if it can be done, out in the Cosmos there may be Someone who has already done so.
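
For the curious, here’s the thin-shell arithmetic behind those thickness and mass figures – again a sketch using my own assumed temperature and gas mix, so it lands close to (not exactly on) the numbers above. The balance condition is that the internal pressure is carried by the weight of the shell sitting in the combined gravity of the gas and (half of) the shell itself:

    import math

    G, k, amu = 6.674e-11, 1.381e-23, 1.661e-27
    M_EARTH = 5.972e24                     # kg

    # Assumed: 400,000 km diameter, 0.4 bar of heliox (mu ~ 7.1 amu) at ~288 K,
    # diamond shell density 3500 kg/m^3.
    R, P, T = 2.0e8, 4.0e4, 288.0
    mu, rho_s = 7.1 * amu, 3500.0

    rho_gas = P * mu / (k * T)             # ideal-gas density, ~0.12 kg/m^3
    M_gas = rho_gas * (4.0 / 3.0) * math.pi * R**3

    # Thin-shell pressure balance:
    #   P = G * (rho_s * t) * (M_gas + M_shell / 2) / R^2,  with M_shell = 4*pi*R^2*rho_s*t,
    # which is a quadratic in the thickness t:
    a = 2.0 * math.pi * G * rho_s**2
    b = G * rho_s * M_gas / R**2
    t = (-b + math.sqrt(b * b + 4.0 * a * P)) / (2.0 * a)

    M_shell = 4.0 * math.pi * R**2 * rho_s * t
    print("shell thickness:", round(t), "m")                      # ~1,330 m
    print("shell:", round(M_shell / M_EARTH, 2), "Earth masses")  # ~0.39
    print("gas:  ", round(M_gas / M_EARTH, 2), "Earth masses")    # ~0.67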

Such an object would be opaque – even gas and diamond at such thicknesses are opaque, though look out for interesting refraction during an eclipse – and an ideal target for a transit search. If a civilization felt the need it might totally re-engineer its star system and populate it with potentially thousands of such objects, which would be entirely stable given suitably convoluted orbital design – a Dyson Swarm, as Freeman Dyson originally meant it, rather than a solid Sphere. Such a re-engineered system would stand out like a sore thumb to distant observers, thus providing one motive for the idea – getting the Galaxy’s attention. All that habitable volume – given suitable artificial stars within – would be another motive, but beings able to disassemble planets would hopefully have tamed the urge toward runaway population growth.

The Choice

Alan Boyle posted some thoughts on the next 50 years of spaceflight at his MSNBC CosmicLog. The comments were what I’d expect from enthusiasts and cynics. A lot of doom & gloom, plus runaway enthusiasm.

I am not a moderate on the importance of spaceflight and interstellar colonisation.

Is there a middle ground? Or is there a BIGGER reason for spaceflight? How about the survival of the Universe?

Life is insignificant NOW, but in a few billion years our descendants could be shaping entire segments of the visible universe. And in a few trillion years, as the last stars die, they’ll be doing even more dramatic things to sustain Life.

One possibility is that, left to itself, the Universe will “crash”, because all the quantum information that makes the laws of physics possible will be erased by black hole decay – if the Universe expands forever. Can we stop the expansion? One theory is that the current acceleration is caused by the Higgs field not being in its true vacuum state, due to the presence of baryonic matter. If Life uses up baryons for power, via reverse baryogenesis, then the Higgs field will relax to its true vacuum, the acceleration will cancel out, and the Universe will recollapse.

Now a Big Crunch sounds bad, but, guided by Life shifting mass around on a cosmic scale, the recollapse can provide both energy for Life and a heat-sink to make that energy usable. And that infinite recollapse energy can power infinite experiential states – infinite subjective time for an infinite number of beings – between Now and the End Point.

Thus Life doesn’t have to end – if we set out and “conquer” the Universe. Don’t worry about wars with intelligent life elsewhere in the cosmos – They’re too far away for us to meet up with until a few trillion years from now; else They would be here by now, if They were closer than about 13 billion light years. And in a few trillion years we, and They, will know how to get on better than we do now.

Or else we don’t have a future.

A simple choice:

Everything – real Infinity for all of us;

or Nothing.

Bussard Fusor papers online

Oil can be made PURE GREEN by fusion power

Robert Bussard’s Polywell Fusor design is incredibly exciting for any space enthusiast because it promises REAL atomic power for spaceflight. Sure, fission rockets have been “atomic power” in our minds since the late 1940s, but any fission reactor is a cranky system and only offers thermal power generation options. Aneutronic fusion – the burning of Lithium-6, Helium-3 or protium plus Boron-11 – offers something utterly different: direct energy conversion. In the case of the p + B11 => 3 He4 fusion/fission reaction, the energy of the alpha particles produced can be turned into electricity at ~95% efficiency. This is a real breakthrough – or will be, once Bussard gets proper funding – as that electrical power can be used to heat reaction mass via relativistic electron-beam guns. An ionised exhaust, energised electrically rather than by contact with hot engine parts, means the exhaust velocity can be pushed a lot higher than in a thermal rocket, so the vehicle can use just plain water as reaction mass.
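
To see why that matters, the jet power sets the exhaust velocity for a given mass flow: P_jet = 0.5·mdot·v_e². The numbers below are purely illustrative – my own assumptions, not figures from Bussard’s QED-engine papers:

    import math

    def exhaust_velocity(power_w, mdot_kg_s, efficiency):
        """Jet power = 0.5 * mdot * v_e^2, so v_e = sqrt(2 * eta * P / mdot)."""
        return math.sqrt(2.0 * efficiency * power_w / mdot_kg_s)

    # Illustrative only: 100 MW of electrical power into 1 kg/s of water vapour,
    # at 90% conversion of electricity into jet power (assumed, not Bussard's numbers).
    v_e = exhaust_velocity(100e6, 1.0, 0.9)
    print("exhaust velocity:", round(v_e / 1e3, 1), "km/s")   # ~13 km/s
    print("thrust:", round(1.0 * v_e / 1e3, 1), "kN")         # ~13 kN

That’s roughly three times the exhaust velocity of the best chemical rockets, from plain water, and the figure keeps climbing as you put more power into less mass flow.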

Thanks to Askmar (Emerging Technology Marketing) Bussard’s papers are available online for all to see…

IEC Fusion at Askmar

…have a quick browse, but for specific breakdowns of Bussard’s estimated costings of

  • A 4,000-person colony on the Moon
  • A 1,200-person colony on Mars
  • A 400-person colony on Titan

…all for less than the NASA budget over the same time period, then check out this one:

System Technical and Economic Features of QED-Engine Drive Space Transportation

…QED being “Quiet Electric Discharge” – engines in which Monster Electron Guns blast stuff into plasma.

The transformation of the world’s power economy is a little harder to discern. Firstly, to minimise replacement costs and timelines, Bussard proposes using D-D fusion with power extracted via the neutron flux – the neutrons would heat water, which would hook up directly with the regular steam plant at a pre-existing coal or fission power station. All-new power plants could then use aneutronic fuels and minimise thermal losses through direct power conversion.

All sorts of liquid fuels could be made using a fusor at some point in the production cycle, without any greenhouse emissions being needed. Ethanol or methanol are good options, but eventually battery/ultra-capacitor technology might make liquid fuels an irrelevance. However, hydrocarbon engines are so power-dense it’s hard to see just how successful that transition will be. Fusion heating could make extracting oil from very marginal deposits highly practical, and fusion-powered atmosphere processors could draw down carbon dioxide directly from the air. In fact it’s perfectly feasible to re-form hydrocarbon fuels directly from water and carbon dioxide exhausts via fusion energy. Thus oil, as a high-density energy-storage medium, might never have to be abandoned – even if it does have to be made PURE GREEN via fusion power. After all, oil is essentially a natural product, used by virtually all living things to store energy – we know it as the “fats and oils” part of a healthy diet.

A suitably high energy density storage system would make small electric aircraft perfectly feasible, and a large aircraft could have on-board fusors and essentially infinite range. By extracting deuterium from the atmosphere an aircraft could stay aloft without refuelling, resupplied by small shuttle planes. Such a system might allow larger luxury air-vehicles to fly continuous routes, loading and unloading via shuttles from airports along the way.

Once Robert Bussard’s work is completed with a working power reactor, we’ll know whether fusion power will yet save the world.

Old Testament numbers explained

The Exodus out of Egypt of the Israelites was a pivotal event for the Jews as a people, and for Judaism/Christianity as religions. Robert Silverberg’s “Roma Aeterna” series posits an alt.history in which the Jews never left – at least until 1970 (our calendar) or 2723 by the Empire’s own calendar.

Aside from interminable difficulties correlating the stratigraphy of the event, the Bible’s numbers are rather suspect. In “Numbers” we get a count of 603,550 men aged 20 or older. That would imply about 2.4 million people left Egypt – hard to imagine them fitting in Ancient Egypt at all, as its population is usually estimated at about 1 million. Even harder to believe is the Bible’s claim that the seven nations Israel was to expel were each “greater than Israel” – 21 million people could not have survived in Bronze Age Canaan.

In a 1967 issue of the Tyndale Bulletin the Biblical scholar John Wenham posed a rather neat solution – the “numbers” used were also words, and their meaning had drifted between the composing of “Numbers” (or its sources) and the final redaction of the Torah we have from Ezra/Nehemiah’s day. Thus words meaning “commander of thousands” and “commander of hundreds” (themselves hyperbole) had been shortened to “thousands” and “hundreds”, and the later scribes wrote them as numbers. There are numerous examples and parallels in the often glaringly different figures between the parallel historical books – “Kings”/”Chronicles”, “Ezra”/”Nehemiah” – and between the various old scribal traditions, like the LXX, the Samaritan and the Masoretic texts.

Now that John Wenham paper is online…

Large Numbers in the Old Testament

…apparently it’s a condensed version of a longer paper, but by itself it’s quite worthwhile. It makes a lot of sense out of a whole lot of quite silly theorising about multi-millions crossing the Red Sea, etc. Wenham figures the adult men numbered about 18,000, and thus the total populace was about 72,000. Still a BIG population to shift across the Sinai, but we know the Hyksos moved about 200,000 on a similar exodus after Ahmose destroyed their major fortresses.

Parallel universes are for real

David Deutsch and team have shown that the probabilistic nature of Quantum Mechanics can be explained by the branching off of Worlds during quantum measurements…

Parallel universes exist

…which is good news for Frank Tipler’s theology. What that news piece gets a bit muddled is what distinguishes the Worlds – it gives the example of surviving a car crash (or not) as the quantum choice, but such macroscopic outcomes would be fairly far apart in the branching, since the quantum distinction between large objects in motion is a tiny, tiny amount and unlikely to decide the outcome.

How many Worlds are there? Lots. Here’s a quote from the Everett FAQ…

Q11 How many worlds are there?
The thermodynamic Planck-Boltzmann relationship, S = k*log(W), counts the branches of the wavefunction at each splitting, at the lowest, maximally refined level of Gell-Mann’s many-histories tree. (See “What is many-histories?”) The bottom or maximally divided level consists of microstates which can be counted by the formula W = exp (S/k), where S = entropy, k = Boltzmann’s constant (approx 10^-23 Joules/Kelvin) and W = number of worlds or macrostates. The number of coarser grained worlds is lower, but still increasing with entropy by the same ratio, i.e. the number of worlds a single world splits into at the site of an irreversible event, entropy dS, is exp(dS/k). Because k is very small a great many worlds split off at each macroscopic event.

Imagine an entropy change of just 1 J/K – divided by Boltzmann’s constant, k, that gives 72.4 × 10^21 as the exponent. Thus the number of Worlds is W = e^(72.4×10^21), an utterly mind-numbingly large number for a pretty tiny entropy difference. Yet the mathematics of quantum mechanics – and even classical mechanics, via the Hamilton-Jacobi formulation – seems to require an endless splitting of Worlds to match the sheer number of distinguishable microstates.
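
A quick check of that arithmetic in Python:

    import math

    k = 1.381e-23            # Boltzmann's constant, J/K
    dS = 1.0                 # an entropy change of just 1 J/K

    exponent = dS / k                    # ~7.24e22
    digits = exponent / math.log(10)     # how long W = e^(dS/k) is in decimal
    print("W = e^%.3g, a number with roughly %.2g decimal digits" % (exponent, digits))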

James Kasting Online

HabZone gets expanded

In 1964 there was Stephen Dole’s “Habitable Planets for Man”, the first informed guesstimate of habitable planets in the Galaxy. Dole’s planets had optically thin atmospheres and no modelling of their climate’s temporal evolution. His HabZone ran from about 0.75 AU to 1.25 AU.

In 1978 Michael Hart presented the first evolutionary models of habitable planet atmospheres and discovered they were incredibly unstable – only a very narrow band around the star allowed a long-term stable atmosphere – and even Earth’s was due to become uninhabitable within a hundred million years. The HabZone had narrowed to just 0.95-1.01 AU – any further out and Earth became locked in ice; any closer in and it became a runaway greenhouse.

Then in 1993 James Kasting and his posse shook things up by accounting for climate stabilisation via the carbonate-silicate weathering cycle – and the habitable zone widened to 0.95-1.4 AU, perhaps even further. The year before, Kasting and Ken Caldeira had extended the use-by date of Earth’s biosphere to c. 1 billion AD. The odds for life in the Galaxy went up, and the mystery of the Fermi Paradox became an even bigger mystery.

Much to my surprise and joy most of James Kasting’s classic papers are available online…

James Kasting’s papers

…spanning his early 1980s papers to the present. Thank you James!

Notable papers are “The Life Span of the Biosphere Revisited” (1992) and “Habitable Zones around Main Sequence Stars” (1993), but there’s lots of other meaty stuff for planetology nerds like myself.

Random mutation, directed search

Fortune as ‘randomness’ or ‘chance’ – how the sticks fall – is a big part of biology’s insistence on the non-directed nature of mutation. And I don’t take exception to the idea that what we see in mutation isn’t directed at a solution, in a trivial sense. Most mutations seemingly ‘just happen’. A lot of Christians are scandalised by the idea of ‘random mutation’, but biology is revealing that there’s more to the story than mere base-pair flips due to chemicals or UV photons.

What seems to occur is that genomes in a population under stress collectively perform a search for a viable mutation, and, as has been increasingly realised in molecular biology, genes seem to be organised so they can mutate and still remain viable. Parts of genes seem able to hyper-mutate in response to external stresses affecting the organism, while other sections seem to resist mutation. In microbes this means the mutations will be inherited, but in multi-cellular organisms the separation of germline cells from somatic cells is hard to cross. Yet it does seem to happen – epigenetic modifications to offspring are increasingly recognised as being caused by the environmental experiences of the mother.

Thus mutation isn’t guided to a perfect solution – instead genomes seem optimised to allow enhanced rates of variation as a means of searching for a viable solution to a stressor. The collective behaviour of the population seems to work like a “learning machine” – what gets called a perceptron or neural network in computer science – which computes, as a distributed system, a new genome in response to environmental changes. So, as Greg Bear fictionalised in “Darwin’s Radio” and “Darwin’s Children”, there is Mind-like behaviour in evolution. There is also a lot of information transfer via viruses and horizontal gene transfer, much like what Bear imagined.
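
A toy sketch of the idea in Python – not a model of any real genome, just an illustration of how undirected mutation plus selection behaves as a search, and how stress-triggered hypermutation (my own made-up rates and genome length) typically speeds that search up:

    import random

    GENOME_LEN = 50
    TARGET = [1] * GENOME_LEN          # stand-in for "viable under the new stress"

    def fitness(g):
        return sum(x == t for x, t in zip(g, TARGET)) / GENOME_LEN

    def mutate(g, rate):
        # Undirected flips - only the *rate* responds to stress, never the direction.
        return [1 - x if random.random() < rate else x for x in g]

    def generations_to_adapt(stress_responsive, max_gen=400):
        pop = [[0] * GENOME_LEN for _ in range(100)]      # everyone starts maladapted
        for gen in range(max_gen):
            if any(fitness(g) == 1.0 for g in pop):
                return gen
            pop.sort(key=fitness, reverse=True)
            parents = pop[:50]                            # selection: the fitter half breeds
            pop = []
            for p in parents:
                # Stressed (still-maladapted) lineages hypermutate; comfortable ones don't.
                rate = 0.05 if (stress_responsive and fitness(p) < 0.9) else 0.005
                pop += [mutate(p, rate), mutate(p, rate)]
        return max_gen

    random.seed(42)
    print("fixed low mutation rate:     ", generations_to_adapt(False), "generations")
    print("stress-induced hypermutation:", generations_to_adapt(True), "generations")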

That’s what our current understanding seems to be telling us – not so much an external Intelligent Designer, separate from the process, but an immanent ‘Mind’ that searches out new solutions through the success or failure of individual genotypes-expressed-as-phenotypes. And that Mind uses more than just base-pair flips and random shuffling of blocks of DNA during recombination to get things happening. Gene expression can be modified even without changing the genes, mothers can modify the expression of their children’s genes, and there are plenty of other tricks in the Mind’s toolbox.

Of course for a Christian the question arises as to just how that Mind relates to God. I propose we see the natural Minds immanent in the evolutionary process as God’s agents – perhaps even “angels” – but they may not be self-reflective, and may merely be automatons. Programs if you will. But then we are but programs in the wetware we call our brains. We imagine ourselves to be separate individuals, but our minds share a common information sea we call “society” – much like the “pool of genomes” that a species population represents. If we are such tiny sub-programs in something bigger, and yet still conscious, then maybe They are conscious too. If so, then the case for them being Angels is stronger.

If so, then we must ask: is the Angel of genus Homo a Fallen Angel? Is that why Satan is called the god of this Aeon and is able to say “all these kingdoms are mine”? Terribly Kabbalistic speculation, and perhaps totally baseless, but the more I think on it the more the idea gains credence. Howard Bloom’s two classic books on the concept of distributed intelligence in evolutionary processes are titled “The Lucifer Principle” and “The Global Brain”, both of which explore the role of hierarchies and social orders in evolution. One particular idea is that we each have our own “Inner Judge” that determines whether or not we’re connecting socially. Failure to connect leads to “condemnation” and a failure to thrive. Recent work has shown that loneliness causes changes in our immune systems – we become more vulnerable to illness. People who accept the Inner Judge’s case basically lie down and die. Or actively commit suicide. All quite irrational, but seemingly instinctive.

“Satan” means “Accuser” – the Inner Judge’s Prosecutor – and as Jesus put it, “he was a murderer from the beginning.” In that light we don’t need to imagine a Devil as something exterior to us. Instead its power comes from the evolutionary machinery that makes us NEED to be part of a bigger social system. But the individual human ego doesn’t accept the condemnation of the Devil lightly – we can either turn to a different archetype (the Self = Christ, as Jung puts it) or we can embrace the Accuser and accuse society back. People like Anton LaVey and Marilyn Manson consciously sided with the Satanic symbol and pointed out, quite truthfully, the flaws and failings of ostensibly Christian mainstream culture.

A great Christian thinker, Jacques Ellul, once pointed out that because Satan is the Accuser in God’s Court, we must listen to his case against us and answer it squarely. Anything else, especially evasion, amounts to claiming we’re without sin – and thus coming under the power of the Prince of Lies. Satan can only be defeated by truth, because everything else is from him. Knee-jerk reactions by Christians to challenges to their personal cultural idols only hurt themselves.

Deep Future for Mankind?

Doctor Who in its last three episodes for the 2007 season presents us with the ultimate fate of humanity. The Universe is collapsing into giant black holes and the last human survivors of the year 100 trillion (or so) have reverted to biological human form. They receive a call to a final refuge called “Utopia” and the Doctor helps them to escape one dying world at least. He notes in passing that the human race reverts to classical form time and time again, even after spending billennia as computer uploads and plasma clouds.

In “The Sound of Drums” and “Last of the Time Lords” we learn that the refugees on Utopia were ‘rescued’ by the Master, made into psychopathic cyborgs and brought back to 2007 to exterminate the current human species, to replace them and make an Empire ruled by the Master. So “We” morphed again.

But what might really happen?

Greg Benford does quite a good job of imagining how diverse genus Homo might become after a billion years in his “Beyond Infinity” – he has hundreds of human-derived species being revived out of gene vaults after a long period of stasis self-imposed by Homo superior. I think self-directed evolution might mean that something like the human form can be retained into an arbitrary futurity (like 100 trillion AD), but that by no means implies stasis at the molecular level. I personally can imagine lots of molecular-level machinery gradually taking over from old-style DNA/RNA – which is a bit too labile for high-radiation environments and odd chemistry. Life extension would demand something more stable, since cancer and aging are both caused by breakdowns in DNA/RNA replication fidelity after too many cell generations.

“Smart matter” cellular machines might be required in the long run, addressable by a body’s operating system and able to undergo a degree of “morphing” into useful configurations – we would become our own tool-kits, assimilate silicates and metals, replace calcium minerals with carbon nanotubes, and augment biochemistry with higher energy density power-sources. Technology and biology will merge – if we are to survive in any kind of physicality in the billennia ahead.

That’s how I see our evolution panning out. Frank Tipler, in his “The Physics of Christianity”, believes we will all become Uploads, of necessity – even after the Second Coming of Jesus. Maybe. After all, Paul wrote that “flesh and blood cannot inherit the Kingdom”, and Jesus said the children of the resurrection will be “like the angels, neither marrying nor giving in marriage” – sounds much like a non-flesh-based existence to me. But bodies will always be useful regardless of how portable the software we call “the soul” becomes. I think even our Uploaded descendants will still take classical human form as the need arises. There’s a lot of information processing potential in the dimers that make up cytoskeletal microtubules – our bodies are “stupid” because we don’t yet have an interface or an OS that can utilise that power.

My Problem with Tipler’s Thesis

My issue with Tipler is the Singularity. Not the idea that God is the Singularity – that has too many resonances with theology, both Christian and Kabbalistic, for me to object. No problems with his Trinity concept either, as I am quite happy believing in God’s Triunity.

The real problem I have is when Tipler claims that Physicists are evading God by wanting to get rid of the Singularity from GR, particle physics and quantum cosmology.

The crazy thing is that Tipler mentions a few examples of singularities appearing in physics equations and how physics avoided them – for example, the Hamilton-Jacobi equations of classical motion produce singularities when applied to wave motion, a ‘fault’ corrected when Schrödinger developed the full equations of quantum mechanics. One consequence of that successful singularity evasion is the Many-Worlds Multiverse that Tipler’s theories rely on so heavily.

Another example comes from the assumption that particles are infinitesimal points – that produces infinite field strength as a particle is approached infinitely closely. Feynman ‘solved’ that by demonstrating shielding of the naked charge by virtual particles.

Another mentioned by Tipler is the extraction of infinite energy during gravitational interactions between particles – if there were no upper speed limit on interactions. But because lightspeed is that limit, such infinite energies don’t occur.

Now if the success of modern quantum physics is based on “eliminating” singularities, then why is it an ‘evasion’ by physicists when they develop supersymmetry, String theory, Loop quantum gravity and so forth? Tipler, perhaps rightly, condemns all those theories as baseless theorising because they have no experimental backing (yet!)

But theoretical work surely precedes experimental work when we’re talking about such extremes of energy. Yet Tipler claims that physicists have religious issues with the idea of real Singularities in General Relativity and quantum cosmology – how can eliminating singularities in one area of physics be a successful method, yet be a manifestation of ‘the Gnostic heresy’ (as Tipler puts it) when it’s applied to cosmology?

Now I believe in Tipler’s Omega Point Theory and that means I have to accept a Singularity at the end of time, and the beginning, and All presents too. He could be right, but to excoriate modern physics as ‘Gnostic heresy’ because it seeks understanding beyond those cosmological singularities is a bit much.

And if the Large Hadron Collider produces supersymmetric particles? What happens then? Will Tipler abandon his faith in physics as he understands it?

I see a rather strong parallel with Aristotelian physics. Aristotle placed God, the Prime Mover, at the Cosmological boundary of the World – the outermost sphere of the heavens was propelled by the eternal, self-contemplating Spirit/Soul. Tipler’s Trinity is the cosmic boundary of the quantum Multiverse and governs its evolution, just like the Prime Mover’s perfect motion ‘governed’ the spheres below. Modern physics evolved because people were willing to think outside of the conceptual cage that Aristotelian beliefs created. Cosmological Singularities might be pointers to similar conceptual cages.

There are so many unanswered questions in the physics that Tipler wants us to adopt as proof of Christianity’s Trinity – why the particle masses of the Standard Model? What is mass? What is charge? What is information? What is consciousness?

None of these important questions gets even a remote answer. No Christian should ever say that their belief gives them all the answers, and in that light they shouldn’t condemn thinking beyond what is known with certainty as ‘Gnostic heresy’ either. That’s the ‘kiss of death’ for meaningful scientific endeavour – something Christianity, in its worst moments, has rightly been condemned for.

Tipler also discusses Darwinism and the idea of ‘randomness’ in mutation, applying similar terms of abuse. Personally I think he utterly misses the point of the word’s use in biology, which is no surprise coming from a ‘physics fundamentalist’. Biologists mean that ‘random’ variation/mutation is random because it doesn’t produce genetic changes that are perfectly adapted to the environmental pressure that is acting as the ‘selecting force’ on a population of genes. Genes change because of basic physics and chemistry NOT because an animal has to run faster, nor because a plant has to grow quicker on less light. Mutation occurs all the time in genomes and is assumed to be non-directed towards useful changes.

But that picture is inaccurate too, because modern genomics research is showing that genes themselves can be deliberately mutated by a body when an organism is facing an environmental stress. Thus there are regions of ‘hypermutation’ which produce new antibodies when we have an infection, and hypermutation in microbes when they’re facing starvation. This doesn’t mean the mutations themselves are perfectly fit – they are, as far as human science can tell, random. But a population of genomes that is hypermutating dramatically increases the odds of a viable adaptation to an environmental stress being found.

So in the end there is much I agree with Tipler on about cosmology and physics, but much I disagree with about his personal struggle with modern physics. And, as he freely admits, that struggle is the real source of his career problems and – as I see it – of the bile-spitting that mars what would otherwise be a workable theory of Christianity’s truths.