## Space Drives: Experiments & Theories

Eagleworks, the Johnson Space Center’s shoe-string “advanced propulsion lab”, is now notorious for testing the theoretically impossible “EM-Drive”, the related Cannae Q-Drive, and several other propellantless propulsion devices. What’s more, their chief scientist Harold “Sonny” White has a theory of “Quantum Vacuum Plasma Propulsion” that has raised the ire of more orthodox physicists, because it posits that the quantum vacuum – the sea of virtual particles created by the various fields composing our world – can be used for ‘jet propulsion’, or distorted to produce propulsive effects.

The latest burst of hype – and this is not the first, as a bit of Googling will tell the reader – comes from a considered, measured piece of reportage summarising the several-year-long discussion of the Q-Drive work by Paul March (aka Star-Drive) on the NASA Spaceflight Forum. Paul is the consulting engineer for Eagleworks and has long worked on speculative propulsion systems. He shares results there and discusses well-meant criticism of the Q-Drive/EM-Drive effort. Here’s the NSF News summary:

Evaluating NASA’s Futuristic EM Drive (29 April 2015)

One of the co-authors is Dr Jose Rodal, who has worked indefatigably to refine the experimental testing by the Eagleworks crew and eliminate the (many) possible false-positives that might mimic thrust from a drive. While the reportage by the NSF team and Dr Rodal is measured and cautious, sadly the Nerdi-Verse has exploded with both incautious Boosters and dogmatic Nay-Sayers shouting and yelling about the concept, while neglecting to look at the facts as presented on NSF over a considerable period of time.

Brian Wang’s Next Big Future has reported on the EM-Drive/Q-Drive effort for years and the forum arguments there have raged as well, with the well-meant sceptics led by GoatGuy, whose physics knowledge and clear writing are very welcome in an often fractious, noisy forum. This little post was the possible vanguard of the current Hype-Storm:

Magnetron powered EM-drive construction expected to take two months …in which the EM-Drive, with Sonny White’s computations, might produce 1250 newtons of thrust from 100 kW of microwave power.

The basic device which has most recently produced positive results, in vacuum chamber tests, is based on Roger Shawyer’s EM-Drive, an earlier and equally controversial propellantless propulsion system. What particularly irks orthodox physicists (and mathematical physicists, like Greg Egan) is Shawyer’s claim that his EM-Drive works in a way that obeys relativity. That it violates conservation of momentum while doing so immediately hinted that Shawyer’s mathematical treatment of his concept was incomplete – as Greg Egan was quick to point out.

Unfortunately for Shawyer’s critics, positive results from his experiments, a Chinese team, and now Eagleworks, suggest that something is missing in our current best theoretical understanding of how the world works. But what? I made this comment recently on Facebook:

It’s arguable whether it could be called a “hyper-space drive”, but it’s not a bad title. Here’s why: currently Sonny’s warp-drive concept requires the existence of the 5th dimension aka “Hyper-space” to work. If the Q-Drive/EM-Drive thingie is also confirmed, and is genetically related to the warp-drive, then it too probably works by some sort of 5-D effect. It almost certainly doesn’t work via the dubious physics that Shawyer has invoked. The recent interferometer test which has produced data *suggestive* of a space-warp being generated via the modified Q-Drive rig would not work if plain vanilla General Relativity is 100% correct. There’s just not enough energy density in the test device to warp space in an observable way. To produce a warp – as Sonny has said all along – requires the *existence* of Hyper-space. IFF the warp really is a warp, and not experimental noise, then it’s evidence of Hyper-space. In some ways that’s an even more incredible experimental outcome than some minor “violation” of action-reaction laws.

…which I will expand on in the sequel to this post. Before we go there, let’s look at the landscape of “advanced propulsion”, with some annotated links:

Roger Shawyer’s EM-Drive site: SPR Ltd

Shawyer has written numerous papers over the last decade or so, and has had the Chinese work that reported replication of his EM-Drive professionally translated. Exactly what the Chinese space establishment makes of this replication no one presently knows.

Mike McCulloch’s Physics from the Edge

Mike is a radical physicist with a theory of inertia, based on the Unruh effect, which might explain the EM-Drive’s positive results, as well as Dark Matter, Dark Energy and other cosmological mysteries. His EM-Drive paper is here: Can the EM-Drive be Explained by Quantised Inertia?

Eagleworks Lab is represented by a series of papers, some available on the NASA web-site:

Warp Field Mechanics 101 (2011) – This paper was presented at the 2011 100 Year Starship Symposium in Orlando, Florida, at which I presented as well.

Warp Field Mechanics 102: Energy Optimization – This one was presented by Harold White at the 2013 Starship Congress, organised by Icarus Interstellar. Sonny proposes (highly speculative) ways that an actual warp-drive could be created for very “low” negative energy amounts.

After this rather abstract paper I personally felt Sonny’s warp-concepts were interesting, but for the distant future. However another conference piece changed my mind. More soon. First, this now famous essay…

Anomalous Thrust Production from an RF Test Device Measured on a Low-Thrust Torsion Pendulum
An abstract for this paper, now liberated from AIAA’s pay-per-view. Confusingly, when this one first exploded into the Nerdi-Verse, only the abstract was reported on, causing many sceptics of the Drives to (incorrectly) assert that the tests had failed. What the tests had actually failed to do was run in a vacuum chamber under vacuum conditions, which gave the more attentive critics reason for scepticism.

What hadn’t hit public consciousness, but hit me, was an earlier conference paper on Sonny’s Q-Drive concept, based on old experimental work by Paul March on Jim Woodward’s Mach Effect Thruster (MET). I won’t discuss the MET here, as it’s a whole other effort with a different theoretical basis, but genetically the MET experimental effort is what brought engineer Paul March into advanced propulsion, and finally to working for Eagleworks.

Here’s the conference paper: Advanced Propulsion Physics: Harnessing the Quantum Vacuum, which exploded in my mind with the rather glorious prospect of flying to the Outer Planets in days, rather than years. Except the numbers didn’t quite work, as I mentioned to both Paul and Sonny at the time.

Some of Sonny’s earlier papers are linked at this early Crowlspace post: White Papers

At one point in time he worked with Eric Davis, uber advanced-propulsion guru, on the higher-dimensional aspects of the original warp-drive: The Alcubierre Warp Drive in Higher Dimensional Spacetime

Other theories of how the EM-Drive, or the related Cannae Q-Drive, might work have appeared since the current buzz began a couple of years ago. Fernando Minotti, an Argentinian physicist, has suggested one theoretical option using Scalar-Tensor theory, which is a mathematical alternative to General Relativity. In most physics tests, the two theories give identical results, but not all tests – the EM-Drive, and kin, might be one such example: Scalar-tensor theories and asymmetric resonant cavities

Minotti has pointed out a particular Scalar-Tensor theory of gravitation, developed by Jean Paul Mbelek, as the relevant theoretical basis. His work is represented on the arXiv here: Mbelek, Jean Paul

A more recent theoretical discussion has also appeared on the arXiv here: DEF: The Physical Basis of Electromagnetic Propulsion by Mario J. Pinheiro. Exactly what might come of his discussion is hard to judge. Only more experimental data can adequately guide us through so many theoretical choices.

## DNA oligomers via liquid crystal ordering

New study hints at spontaneous appearance of primordial DNA

The mystery of life solved?

Just the other day we had news of how the “stuff of life” was formed from basic chemical components – hydrogen cyanide and hydrogen sulfide – and now another suggestion on how nucleotides can form oligomers spontaneously. Nucleotides are the individual ‘-mer’ units of DNA or RNA polymers – polymers being long chain molecules made of many identical or similar sub-units. In the case of DNA each -mer has a nitrogenous base – Adenine, Thymine, Guanine and Cytosine, while RNA uses Uracil instead of Thymine – to which is attached a ribose sugar and a phosphate group or two.

RNA has long been known to act as an enzyme, a molecular machine, and forms most of that key molecular machine in all cells, the ribosome. When RNA is performing enzymatic processes it’s called a ribozyme – and the key enzymatic activity of ribosomes is carried out by RNA. Even DNA can act as an enzyme, though so far only in the lab: deoxyribozyme

The interesting question, for astrobiologically minded folks such as readers of this blog, is whether the newly observed spontaneous polymerising of RNA/DNA can lead to ribozymes able to catalyze their own formation. If such can be observed to form spontaneously, then Life-as-We-Know-it can happen anywhere with the right conditions. But – and this is a more subtle question – how different can it be from our exact version? For example, there are 20 amino acids used by our kind of life, but potentially hundreds or thousands that *could* be used. Then there’s the Genetic Code – the specific way that sequences of the DNA nucleotides are translated into the strings of amino acids that make up proteins. The specific three-nucleotide code which “spells out” the amino acids in our proteins is just one of many possible codes – there are ~1.5 × 10^84 possibilities to choose from. Why this specific set? And why are deviations from this specific Genetic Code so rare in living things?

The fact that the Code is seemingly arbitrary, as we have successfully changed it in organisms, also poses the question: Was our Genetic Code chosen? If so, by Who?

## Smallest Life as We Know It

Diverse uncultivated ultra-small bacterial cells in groundwater

Nature has taken the praise-worthy step of making content available to a wider readership, via active links to ReadCube versions of the papers in media partners, like “The Huffington Post” in this case.

This particular paper reports on the characterisation of ultra-small organisms, which otherwise have proven impossible to culture. To make them available for study, samples were put into a rapid cryogenic freezer “snap freezing” them for electron microscopy. The results are fascinating, as the organisms are abundant and close to the smallest theoretical size for viable lifeforms.

A bare minimum for any living thing, based on DNA, is a cell-wall, a means of making proteins (i.e. a ribosome, which translates the DNA chain into amino-acid chains that then fold into proteins) and the raw materials to make them from. That minimum size is around 200 nanometres – 0.2 millionths of a metre. These new organisms are as follows in this table:

They’re so small that their intra-cellular space and their cell-walls take up comparable volumes. A cubic micrometre is a volume of just 1E-18 cubic metres, or a water mass of just 1E-15 kilograms. Thus the smallest cell in the table masses ~4E-18 kg – in atomic masses that’s 2.4 billion amu. As the average atomic mass of living things is ~8 amu, that means a cell composed of just ~300 million atoms. The cell walls look crinkled because we’re seeing their molecular structure up-close.
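The arithmetic above can be checked in a couple of lines of Python (water density and the ~8 amu average atomic mass are the rough assumptions quoted in the text):

```python
# Back-of-envelope check of the cell-size arithmetic above.
AMU_KG = 1.66054e-27              # kilograms per atomic mass unit
cell_mass_kg = 4e-18              # ~4E-18 kg, the smallest cell in the table
mass_amu = cell_mass_kg / AMU_KG  # ~2.4 billion amu
atoms = mass_amu / 8              # ~8 amu mean atomic mass of living matter
print(f"{mass_amu:.2e} amu, ~{atoms:.1e} atoms")
```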

Regular bacteria can contain thousands of ribosomes, but these newly characterised micro-bacteria contain ~30-50. This indicates that they reproduce very slowly. Their genomes are also very short. Escherichia coli, a common bacterium in the human enteric system, has over 4.6 million base-pairs in its genome (humans have 3.2 billion) – but these new species range from just under 700,000 to about a million base-pairs.

Genomes under a million base-pairs usually mean the organisms are obligate “parasites”, living around or inside larger organisms to source genes and metabolites that they can’t make themselves. Larger genomes mean they’re able to make all the required proteins, although they may live very slowly since they have so few ribosomes.

A key question, in molecular biology and astrobiology alike, is how the simple biomolecules produced by chemistry [as reviewed in this recent blog-post] came together to produce the “simplest” organisms. If we imagine a ‘soup’ of simple biomolecules, massing ~300 amu each, then about 8 million of them are required to produce the smallest organism described above. Merely throwing them together will not produce a living organism – the probability is something like 10^-8,000,000. This is the chief puzzle faced by “Origins of Life” researchers.
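As a sanity check on the 8 million figure, using the smallest-cell mass from the previous section (both numbers are the rough values quoted above):

```python
# Monomer count implied above: smallest-cell mass over a ~300 amu monomer.
cell_mass_amu = 2.4e9   # from the previous section's smallest cell
monomer_amu = 300       # a typical simple biomolecule
print(f"~{cell_mass_amu / monomer_amu:.0e} monomers")  # ~8e+06
```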

There have been many clever proposals, with simplified metabolic cycles controlled by ribozymes (RNA enzymes) using an RNA based genetic code being a contender for an even smaller, simpler life-form. From the other direction, we know that plausible chemistry can concentrate and link up monomers of RNA nucleotides, forming quite long oligomers of RNA. Surprisingly short sequences show ability to work as ribozymes, thus a ready pool of ribozymes could conceivably form. From such a pool, some self-replicating set could form, which evolution could amplify into ever greater degrees of complexity. Just which particular ribozymes started down that road in the case of terrestrial life we don’t yet know. We might never know, since we have no clear molecular fossils from so deep in our past. Or do we? There’s a hint of that RNA past in the structure of ribosomes themselves. As we learn more about those “molecular assemblers” that produce all living things from raw amino acids, we will gain more insights into the rise of Life from molecules.

A thing to remember is that living things, at a biomolecular level, are continually converting simple molecules into incredibly complex ones, via ‘simple’ chemistry all the time. And those complex molecules show a particular ‘intelligence’ of their own. A good example is protein folding – to work, proteins fold up into specific 3-D configurations. Just how they do so has puzzled physical biochemists for years. It can’t be via simply twisting around at random. To see why, consider a short protein just 300 amino acids long. If, in each connection between amino acids, there are just 2 possible positions they can take, then the total number of possible positions is 2^300 ≈ 2E+90 – a ridiculously large number of options to search through one-by-one. Thus it is impossible to jiggle an amino acid string randomly into the correct configuration of a protein in the mere minutes that protein folding is observed to take. The process is non-random and the way the sequence folds up is driven by the electromagnetic properties of each amino acid, so that they’re driven to form a protein’s structure through their mutual attractions and repulsions. Proteins are also very resilient to random changes in the amino acid sequence – only very rarely do single amino acid mutations produce a totally malfunctioning protein. Fortunately for us, as we depend on properly functioning proteins!
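The conformation count above can be made concrete, with an (assumed, very generous) sampling rate of one conformation per picosecond:

```python
# The folding-search arithmetic above: two options per link in a
# 300-residue chain, sampled at a generous one conformation per picosecond.
conformations = 2 ** 300
sample_rate = 1e12                              # conformations per second
years = conformations / sample_rate / 3.156e7   # seconds in a year
print(f"{conformations:.1e} conformations, {years:.1e} years to search")
```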

## Making the Stuff of Life in One Batch

Life is built on chemistry, but the chemistry required to jump from the basic amino acids and small organic molecules expected on the Early Earth to the components of Life-As-We-Know-It – lipids, proteins and nucleic acids – has been obscure, until now. The scenario outlined in Nature Chemistry [doi:10.1038/nchem.2202] also explains why the various chemical components of our kind of life are so very similar. Even though they perform quite different roles, the building blocks are similar, produced originally by very similar chemical processes.

The key-point is the relatedness and the step-by-step creation of one component or another, by short chemical processes, from the basic materials. To have produced such serial chemistry would have required means of isolating the raw materials and products, then mixing them. The next image from the paper provides a hint of what would’ve been required, on some sun-drenched landscape, swept by occasional rains, in an atmosphere of (probably) H2, N2, CO2 and H2O…

## Deep Future Real Estate

Ibrahim Semiz & Salim Ogur, a pair of researchers in Turkey, have posed the possibility of extraterrestrial civilizations building Dyson Spheres around white dwarfs. It’s an astroengineering option for the long-lived civilization that wants a fixer-upper or is looking to renovate their own system after the sun has gone Red Giant.

Consider a white dwarf that has chilled to 1/2500th of the Sun’s luminosity. Its habitable zone is at 0.02 AU, but to sustain a clement environment the Shell has to be a bit further out, at 0.04 AU, at the desirable thermal equilibrium temperature of ~280 K. More or less. That’s a radius of 6 million kilometres and a surface gravity of 3.75 m/s2 for a 1 solar mass white-dwarf. The habitable surface is on the outside. The habitat’s total area would be ~887,000 times Earth’s, so it’s a substantial piece of real estate. To sustain a breathable atmosphere at 1 bar pressure a gas mass of 1.2E+25 kg is required – about twice Earth’s mass, and in the right mix of gases. While oxygen and carbon are fairly easy to source, the nitrogen might be more difficult, thus a heliox mixture might be required. Some nitrogen is still needed to make protein, but most of the atmosphere would be helium, for fire suppression and to reduce the risk of oxygen toxicity.
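A quick check of these shell numbers in Python, using standard constants (the small differences from the figures above come from rounding):

```python
import math

G, M_SUN = 6.674e-11, 1.989e30   # SI units
AU, R_EARTH = 1.496e11, 6.371e6

r = 0.04 * AU                            # shell radius, metres
g = G * M_SUN / r**2                     # gravity on the shell surface
area_ratio = (r / R_EARTH) ** 2          # shell area vs Earth's surface area
atm_mass = 1e5 * 4 * math.pi * r**2 / g  # 1 bar of gas held down at g

print(f"radius {r/1e9:.2f} million km, gravity {g:.2f} m/s^2")
print(f"area ~{area_ratio:,.0f} Earths, atmosphere ~{atm_mass:.1e} kg")
```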

Due to the intense gravity of a 1 solar mass white-dwarf star, mass falling onto it would release ~27 TJ/kg from gravitational energy alone. By trickling mass onto the star, very carefully, its luminosity could be sustained at 0.0004 solar for aeons before it ran into the Chandrasekhar Limit at 1.44 solar masses. Exactly how close one could get to that Limit, without triggering a C/O fusion conflagration and a Type Ia Supernova, is an important bit of astrophysics to learn before building the Sphere.
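The ~27 TJ/kg figure follows from the dwarf’s gravitational potential at its surface; a sketch assuming an Earth-like ~5,000 km radius (my assumption, not from the text):

```python
# Accretion energy per kilogram dropped onto the white dwarf.
G, M_SUN = 6.674e-11, 1.989e30
R_WD = 5.0e6                       # metres, assumed Earth-like radius
e_per_kg = G * M_SUN / R_WD        # gravitational energy released per kg
print(f"~{e_per_kg/1e12:.0f} TJ/kg")
```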

Such an object would “glow” in the infrared at ~280 K, 9 times the physical size of the Sun, and have a mostly helium spectrum. It’d look like a very odd infrared protostar from afar, compact and opaque to other frequencies.

## Extreme Relativistic Rocketry

In Stephen Baxter’s “Xeelee” tales the early days of human starflight (c.3600 AD), before the Squeem Invasion, FTL travel and the Qax Occupation, starships used “GUT-drives”. These presumably use “Grand Unification Theory” physics to ‘create’ energy from the void, which allows a starship drive to bypass the need to carry its own kinetic energy in its fuel. Charles Sheffield did something similar in his “MacAndrews” yarns (“All the Colors of the Vacuum”) and Arthur C. Clarke dubbed it the “quantum ramjet” in his 1985 novel-length reboot of his novella “The Songs of Distant Earth”.

Granting this possibility, what does this enable a starship to do? First, we need to look at the limitations of a standard rocket.

In Newton’s Universe, energy is ‘massless’ and doesn’t add to the mass carried by a rocket. Thanks to Einstein that changes – the energy of the propellant has a mass too, as spelled out by that famous equation:

$$E=mc^2$$

For chemical propellants the energy comes from chemical potentials and is an almost immeasurably tiny fraction of their mass-energy. Even for nuclear fuels, like uranium or hydrogen, the fraction that can be converted into energy is less than 1%. Such rockets have particle speeds that max out at less than 12% of lightspeed – 36,000 km/s in everyday units. Once we start throwing antimatter into the propellant, then the fraction converted into energy goes up, all the way to 100%.
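That 12% figure follows directly from the mass-energy fraction: if a fraction of the propellant’s rest-mass becomes kinetic energy of the remainder, the exhaust speed is fixed. A sketch (the function and the ~0.7% hydrogen-fusion yield are illustrative assumptions):

```python
import math

C = 299_792_458.0  # m/s

def particle_speed(eps):
    """Exhaust particle speed when a fraction eps of the propellant's
    rest-mass becomes kinetic energy of the remaining (1 - eps)."""
    gamma = 1.0 / (1.0 - eps)
    return C * math.sqrt(1.0 - 1.0 / gamma**2)

print(particle_speed(0.007) / C)   # hydrogen fusion, ~0.118 c
print(particle_speed(0.001) / C)   # fission-like yields, ~0.045 c
```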

But… that means the fraction of reaction mass, propellant, that is just inert mass must go down, reaching zero at 100% conversion of mass into energy. The ‘particle velocity’ is lightspeed and a ‘perfect’ matter-antimatter starship is pushing itself with pure ‘light’ (uber energetic gamma-rays.)

For real rockets the particle velocity is always greater than the ‘effective exhaust velocity’ – the equivalent average velocity of the exhaust pushing the rocket forward. If a rocket converts mass into energy with 100% efficiency, but 99% of that energy radiates away evenly in all directions, then the effective exhaust velocity is much less than lightspeed. Most matter-antimatter rockets are almost that ineffectual, with only the charged-pion fraction of the annihilation-reaction’s products producing useful thrust, and then with an efficiency of ~80% or so. Their effective exhaust velocity drops to ~0.33 c or so.

Friedwardt Winterberg has suggested that a gamma-ray laser can be created from a matter-antimatter reaction, with an almost perfect effective exhaust velocity of lightspeed. If so, we then bump up against the ultimate limit – when the energy’s mass is the mass doing all the pushing. Being a rocket, the burn-out speed is limited by the Tsiolkovsky Equation:

$$V_f=V_e.ln\left(\frac {M_o}{M_i}\right)$$

However we have to understand, in Einstein’s Relativity, that we’re looking at the rocket’s accelerating reference frame. From the perspective of the wider Universe the rocket’s clocks are moving slower and slower as it approaches lightspeed, c. Thus, in the rocket frame, a constant acceleration is, in the Universe frame, declining as the rocket approaches c.

To convert from one frame to the other also requires a different measurement for speed. On board a rocket an integrating accelerometer adds up measured increments of acceleration per unit time and it’s perfectly fine in the rocket’s frame for such a device to meter a speed faster-than-light. However, in the Universe frame, the speed is always less than c. If we designate the ship’s self-measured speed as $$V’_f$$ and the Universe measured version of the same, $$V_f$$, then we get the following:

$$V’_f=V_e.ln\left(\frac {M_o}{M_i}\right)$$

[Note: the exhaust velocity, $$V_e$$, is measured the same in both frames]

and…

$$V_f=c.\left(\frac {\left(\frac {M_o}{M_i}\right)^{\frac {2V_e}{c}}-1}{\left(\frac {M_o}{M_i}\right)^{\frac {2V_e}{c}}+1}\right)$$

To give the above equations some meaning, let’s throw some numbers in. For a mass-ratio, $$\left(\frac {M_o}{M_i}\right)$$ of 10, exhaust velocity of c, the final velocities are $$V’_f$$ = 2.3 c and $$V_f$$ = 0.98 c. What that means for a rocket with a constant acceleration, in its reference frame, is that it starts with a thrust 10 times higher than what it finishes with. To slow down again, the mass-ratio must be squared – thus it becomes $$10^2=100$$. Clearly the numbers rapidly go up as lightspeed is approached ever closer.
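The worked example can be reproduced with a short Python sketch (function names are mine; all speeds are in units of c):

```python
import math

def ship_frame_speed(mass_ratio, ve=1.0):
    """Accelerometer-integrated speed in the rocket frame (can exceed 1)."""
    return ve * math.log(mass_ratio)

def universe_frame_speed(mass_ratio, ve=1.0):
    """Speed seen by outside observers, always below 1 (i.e. below c)."""
    x = mass_ratio ** (2 * ve)
    return (x - 1) / (x + 1)

print(ship_frame_speed(10))       # ~2.3 c
print(universe_frame_speed(10))   # ~0.98 c
```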

A related question is how this translates into time and distances. In Newtonian mechanics constant acceleration (g) over a given displacement (motion from A to B, denoted as S) is related to the total travel time as follows, assuming no periods of coasting at a constant speed, while starting and finishing at zero velocity:

$$S=\frac14 gt^2$$

this can be solved for time quite simply as:

$$t^2=\frac {4S}{g}$$

In the relativistic version of this equation we have to include the ‘time dimension’ of the displacement as well:

$$t^2=\frac {4S}{g}+\left(\frac {S}{c}\right)^2$$

This is from the reference frame of the wider Universe. From the rocket-frame, we’ll use the convention that the total time is $$\tau$$, and we get the following:

$$\tau=\left(\frac {2c}{g}\right).arcosh\left(\frac {gS}{2c^2}+1\right)$$

where arcosh(…) is the so-called inverse hyperbolic cosine.

Converting between the two differing time-frames is done by the Lorentz-factor, or gamma, which relates the two time-flows – primed because they’re not the total trip-times used in the equation above, but the ‘instantaneous’ flow of time in the two frames – like so:

$$\gamma=\left(\frac {t’}{\tau’}\right)=\frac {1}{\sqrt {1-\left(\frac {V}{c}\right)^2}}$$

For a constant acceleration rocket, its $$\gamma$$ is related to displacement by:

$$\gamma=\left(\frac {gS}{2c^2}+1\right)$$

For very large $$\gamma$$ factors, the rocket-frame total-time $$\tau$$ simplifies to:

$$\tau\approx\left(\frac {2c}{g}\right).ln\left(2\gamma\right)$$

The relationship between the Lorentz factor and distance has the interesting approximation that $$\gamma$$ increases by ~1 for every light-year travelled at 1 gee. To see why, look at the factors involved – gee = 9.80665 m/s2, a light-year = (c) x 31,557,600 seconds (= 1 year), and c = 299,792,458 m/s. If we divide c by a year we get an ‘acceleration’ of ~9.5 m/s2, which is very close to 1 gee.
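That near-coincidence is easy to verify:

```python
C = 299_792_458.0       # m/s
YEAR = 31_557_600.0     # seconds in one (Julian) year
print(C / YEAR)         # ~9.5 m/s^2, within ~3% of 1 gee (9.80665 m/s^2)
```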

This also highlights the dilemma faced by travellers wanting to decrease their apparent travel time by using relativistic time-contraction – they have to accelerate at bone-crushing gee-levels to do so. For example, if we travel to Alpha Centauri at 1 gee the apparent travel-time in the rocket-frame is 3.5 years. Increasing that acceleration to a punishing 10 gee means a travel-time of 0.75 years, or 39 weeks. Pushing to 20 gee means a 23 week trip, while 50 gee gets it down to 11 weeks. Being crushed by 50 times your own body-weight works for ants, but in humans it breaks bones, tears internal organs loose and is generally a health-hazard. Yet theoretically much higher accelerations can be endured by equalising the body’s internal environment with an incompressible external environment. Gas is too compressible – instead the body needs to be filled with liquid at high pressure, inside and out, “stiffening” it against its own weight.
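These trip times all drop out of the $$\tau$$ equation above. A sketch in Python (the 1.032 ly/yr² conversion for 1 gee, and the distance to Alpha Centauri, are rounded assumptions of mine):

```python
import math

G_LY = 1.032  # 1 gee in light-years per year^2 (assumed conversion)

def ship_years(distance_ly, gees):
    """Rocket-frame trip time: accelerate to midpoint, then brake."""
    g = gees * G_LY
    return (2 / g) * math.acosh(g * distance_ly / 2 + 1)

for gees in (1, 10, 20, 50):
    t = ship_years(4.37, gees)          # Alpha Centauri, ~4.37 light-years
    print(f"{gees:2d} gee: {t:.2f} yr ({t * 52.18:.0f} weeks)")
```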

Once that biomedical wonder is achieved – and it has been for axolotls bred in centrifuges – we run up against the propulsion issue. A perfect matter-antimatter rocket flying to Alpha Centauri at 1 gee starts with a mass-ratio of 41.

How does a GUT-drive change that picture? As the energy of the propellant is no longer coming from the propellant mass itself, the propellant can provide much more “specific impulse”, $$I_{sp}$$, which can be greater than c. Specific Impulse is a rocketry concept – it’s the impulse (thrust × time, i.e. the momentum change) that a unit mass of propellant can produce. The units can be seconds or metres per second, depending on the choice of conversion factors. For rockets carrying their own energy it’s equivalent to the effective exhaust velocity, but when the energy is piped in or ‘made fresh’ via GUT-physics, the Specific Impulse can be significantly different. For example, if we expel the propellant at 0.995 c relative to the rocket, then the Specific Impulse is ~10 c.

$$I_{sp}=\gamma_e.V_e=c.\sqrt{\gamma_e^2-1}$$

…where $$\gamma_e$$ and $$V_e$$ are the propellant gamma-factor and its effective exhaust velocity respectively.
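For example, the ~10 c figure for 0.995 c exhaust follows directly from that equation:

```python
import math

def isp_in_c(v_over_c):
    """Relativistic specific impulse, gamma_e * V_e, in units of c."""
    gamma_e = 1.0 / math.sqrt(1.0 - v_over_c**2)
    return gamma_e * v_over_c

print(isp_in_c(0.995))   # ~9.96 c, i.e. roughly 10 c
```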

This modifies the Rocket Equation to:

$$V’_f=I_{sp}.ln\left(\frac {M_o}{M_i}\right)$$

Remember this is in the rocket’s frame of reference, where the speed can be measured, by internal integrating accelerometers, as greater than c. Stationary observers will see neither the rocket nor its exhaust exceeding the speed of light.

To see what this means for a high-gee flight to Alpha Centauri, we need a way of converting between the displacement and the ship’s self-measured speed. We already have that in the equation:

$$\tau=\left(\frac {2c}{g}\right).arcosh\left(\frac {gS}{2c^2}+1\right)$$

which becomes:

$$\frac {g\tau}{2c}=arcosh\left(\frac {gS}{2c^2}+1\right)$$

As $$V’_f=\left(\frac {g\tau}{2c}\right)$$ and $$\left(\frac {gS}{2c^2}+1\right)=\gamma$$, then we have

$$V’_f=I_{sp}.ln\left(\frac {M_o}{M_i}\right)=arcosh\left(\gamma\right)$$

For the 4.37 light year trip to Alpha Centauri at 50 gee and an Isp of 10 c, then the mass-ratio is ~3. To travel the 2.5 million light years to Andromeda’s M31 Galaxy, the mass-ratio is just 42 for an Isp of 10c.
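Those mass-ratios follow from the last equation, squared for the braking phase. A sketch under the same assumptions as before (1 gee ≈ 1.032 ly/yr², with my own function names):

```python
import math

G_LY = 1.032  # 1 gee in light-years per year^2 (assumed conversion)

def mass_ratio(distance_ly, gees, isp_c):
    """Accelerate-and-brake mass-ratio for a GUT-drive, Isp in units of c."""
    gamma = gees * G_LY * distance_ly / 2 + 1   # peak Lorentz factor
    accelerate = math.exp(math.acosh(gamma) / isp_c)
    return accelerate ** 2                      # square it to brake as well

print(mass_ratio(4.37, 50, 10))     # Alpha Centauri: ~3
print(mass_ratio(2.5e6, 50, 10))    # M31: ~42
```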

Of course the trick is creating energy via GUT physics…

## More MathJax Testing

This one uses a Javascript to load MathJax directly. It seems easier than the plug-in.

In equation 1, we find the value of an interesting integral:

$$\int_0^\infty \frac{x^3}{e^x-1}\,dx = \frac{\pi^4}{15}$$

or this:

$$\int_0^\infty \frac{x^3}{e^x-1}\,dx = \frac{\pi^4}{15}$$

or this:

$$\mathcal{\int_0^\infty \frac{x^3}{e^x-1}\,dx = \frac{\pi^4}{15}}$$

## Mission to Ceres

Ceres is in the news, thanks to the marvellous “Dawn” mission, which has seen a plucky little solar-powered ion-drive spacecraft achieve orbit around two heavenly bodies on one tank of propellant. However the low power-to-mass ratio of the ion-drive means a multi-year journey, which is punishing for human crew and would-be colonists. A more reasonable design was proposed by James Longuski and his team at Purdue:

Abstract

A low-thrust trajectory design study is performed for a mission to send humans to Ceres and back. The flight times are constrained to 270 days for each leg, and a grid search is performed over propulsion system power, ranging from 6 to 14 MW, and departure V∞, ranging from 0 to 3 km/s. A propulsion system specific mass of 5 kg/kW is assumed. Each mission delivers a 75 Mg payload to Ceres, not including propulsion system mass. An elliptical spiral method for transferring from low Earth orbit to an interplanetary trajectory is described and used for the mission design. A mission with a power of 11.7 MW and departure V∞ of 3 km/s is found to offer a minimum initial mass in low Earth orbit of 289 Mg. A preliminary supply mission delivering 80 Mg of supplies to Ceres is also designed with an initial mass in low Earth orbit of 127 Mg. Based on these results, it appears that a human mission to Ceres is not significantly more difficult than current plans to send humans to Mars.

I believe the basis for the above paper is the 2011 Student Project Vision here:

Project Vision

…which has this rather elaborate Crew Transfer Vehicle doing the heavy-lifting of carrying a crew to Ceres:

…which requires a bit of explanation:

Getting to Ceres is not easy. The major delta-vee budget is due to the plane change (Ceres is inclined to the ecliptic by 10.6 degrees) and the lack of high-energy capture orbits, aerocapture or aerobraking at such a small object. Yet it’s not much more difficult than getting to Mars in some respects – if you include the landing delta-vee budget. The major enticement is the chance of abundant water ice and, perhaps, some sort of easy access to liquid water from cryovolcanic vents. “Dawn” has given us the mysterious White Spot, which rises at least a kilometre above the crater floor it sits in the middle of. Could it be a protrusion of water ice from below the asphalt-black crust? Or something more exotic – an icy fumarole? There’s water vapour around Ceres, which hopefully “Dawn” will study in more detail.

The real crying need for such missions is multi-megawatt space-power supplies. Until that’s developed, such missions will remain paper studies.