Journey to Planet 9

Planet-9-Art-NEWS-WEB

Power, Distance and Time are inextricably linked in rocketry. When leaving the Earth’s surface this is not so obvious, since all the sound and fury happens in a few minutes, and silence descends once the rocket enters orbit, free-falling indefinitely, at least until drag brings it back down. For slow journeys to the Moon, Near Earth Asteroids, Mars, Venus and so on, the coasting Hohmann Transfer orbits, and similar low-energy orbits, are typically “sudden impulse” trajectories, where the engines fire for a few minutes to put a spacecraft on a months-long trajectory.

For trips further afield – or faster journeys to the nearer planets – the acceleration time expands to a significant fraction of the total journey time. Ion-drives and solar-sails accelerate slowly for months on end, allowing missions like “Dawn”, which has successfully orbited two Main Belt objects, Vesta and Ceres, all on one tank of propellant. Given more power, an electrical propulsion system can propel vehicles to Mars in 2-3 months, Jupiter in a year and Saturn in under two. Exactly how good the performance has to be is the subject of this post.

Firstly, an important concept is the Power-to-Mass ratio, or specific power – units being kilowatts per kilogram (kW/kg). Any power source produces raw energy, which is then transformed into the work performed by the rocket jet. Between the two sit several efficiency factors – the efficiency of converting raw heat into electricity, then electricity into jet-power, which folds in the ionization efficiency, the nozzle efficiency, the magnetic field efficiency and so on. A solar array converts raw sunlight into electricity with an efficiency of 20-25%, but advanced cells exist which might push this towards 40-50%.

Let’s assume a perfect power source and a perfect rocket engine. What’s the minimum performance required for a given mission? The basic minimum is:

Power/Mass is proportional to (S^2/T^3)

That is the Power-to-Mass ratio required is proportional to the displacement (distance) squared, and inversely proportional to the mission time cubed. For example, a 1 year mission to Jupiter requires 1,000 times the specific power of a 10 year mission.
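As a quick sanity check, the scaling can be sketched in a few lines of Python – arbitrary units, just the proportionality:

```python
# Sketch of the specific-power scaling law P/M ∝ S^2 / T^3.
# Doubling the distance quadruples the required specific power;
# doubling the trip time cuts it by a factor of 8.

def relative_specific_power(distance, time):
    """Specific power in arbitrary units, up to a constant factor."""
    return distance**2 / time**3

# A 1-year Jupiter mission vs. a 10-year one (same distance):
ratio = relative_specific_power(1.0, 1.0) / relative_specific_power(1.0, 10.0)
print(ratio)  # → 1000.0
```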

The minimum acceleration case is when acceleration/deceleration is sustained over the whole mission time. When acceleration is constant, the maximum speed (i.e. the actual speed of the vehicle at mid-mission) is 2 times the average speed (defined as total displacement divided by total mission time).

Another result, from a mathematical analysis I won’t go into here, is that the minimum specific power mission requires a cruise speed that is 1.5 times the average speed and an acceleration+deceleration time, t, that is 2/3 the total mission time T.

Remember that kinetic energy is (1/2)·M·V^2, thus the specific kinetic energy – per unit mass – is (1/2)·V^2.

The power required – work done per unit time – is a trade-off between acceleration time and mission time. Say the mission time is 10 years. If all the acceleration is done in 1 year, then the cruise speed required is 1/0.95 times the average speed, and power is proportional to the speed squared divided by the acceleration time: P = (1/2)·V^2/t = (1/2)·(1/0.95)^2/1 ≈ 0.55, whereas in the case of constant acceleration the average specific power is (1/2)·(2)^2/10 = 0.2. For the case of minimum power it’s (1/2)·(3/2)^2/((2/3)·10) = 0.16875 – just 84.375% of the constant acceleration case and ~31% of the 1 year thrust time case.
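Those three cases can be checked with a short Python sketch, working in units where the average speed is 1 and times are in years, assuming (as above) a symmetric accelerate-coast-decelerate profile:

```python
T = 10.0  # mission time in years

def cruise_speed(t, T=T):
    # Thrust for t/2, coast, then thrust for t/2: the distance covered
    # equals V*(T - t/2), so V = T/(T - t/2) in units of the average speed.
    return T / (T - t / 2.0)

def specific_power(t, T=T):
    # Average work per unit mass per unit thrust time (arbitrary units).
    return 0.5 * cruise_speed(t, T)**2 / t

print(specific_power(1.0))          # ≈ 0.554 – all thrusting in 1 year
print(specific_power(T))            # 0.2     – constant acceleration
print(specific_power(2.0 * T / 3))  # 0.16875 – minimum-power case
```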

So what does it take to get to Planet 9? If we use the distance of 700 AU to Planet 9, and a total trip time of 10 years, that means an average speed of 70 AU per year. To convert AU/yr to km/s, just multiply by 4.74 km/s, thus 331.8 km/s is needed. Cruise speed is then 497.7 km/s and the specific jet-power is 1.177 kW/kg, if we’re slowing down to go into orbit. Presently there are only conceptual designs for power sources that can achieve that sort of specific power. If we take 20 years to get there, the specific power is 0.147 kW/kg, which is a bit closer to possible.
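Here is the same arithmetic as a back-of-envelope Python sketch, using the minimum-power profile from above (cruise speed 1.5 times average, thrust for 2/3 of the mission, decelerating into orbit at the end); the year length and the AU/yr conversion are rounded values:

```python
AU_PER_YR_IN_KM_S = 4.74  # 1 AU/yr ≈ 4.74 km/s
YEAR_S = 3.156e7          # seconds per year (rounded)

def mission_specific_power(distance_au, years):
    v_avg = distance_au / years * AU_PER_YR_IN_KM_S * 1e3   # m/s
    v_cruise = 1.5 * v_avg
    thrust_time = (2.0 / 3.0) * years * YEAR_S
    # Accelerate AND decelerate: two lots of (1/2)·v^2 per unit mass.
    return v_cruise**2 / thrust_time  # W/kg

print(mission_specific_power(700, 10) / 1e3)  # ≈ 1.18 kW/kg
print(mission_specific_power(700, 20) / 1e3)  # ≈ 0.147 kW/kg
```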

Vapor Core Reactor Schematic

Space reactor designs typically boast a specific electrical power output of 50 W/kg to 100 W/kg. Gas-core nuclear reactors could go higher, putting out 500 – 2,000 W/kg, but our applied knowledge of gas-core reactors is limited. Designs exist, but no working prototypes have ever flown. In theory such a reactor would use uranium tetrafluoride (UF4) gas as the reacting core, which would run at ~4000 K or so and convert heat to electricity via a magnetohydrodynamic (MHD) generator. Huge radiators would be required and the overall efficiency of the power source would be ~22%. In fact there’s a theorem that any thermal power source in space has its highest specific power when the Carnot efficiency is just 25%, thanks to the need to minimise radiator area by maximising radiator temperature.
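The 25% theorem is easy to verify numerically. Under the standard assumptions – a Carnot-limited converter between a fixed source temperature and a radiator whose mass scales with area, radiating as T^4 – the power per unit radiator area works out proportional to (1 − x)·x^3, where x is the ratio of radiator to source temperature:

```python
# Power per unit radiator area ∝ (1 - x) * x**3, with x = T_rad / T_source.
# (Carnot efficiency is 1 - x; rejected heat goes as radiator area times T_rad^4.)

best_x, best_p = 0.0, 0.0
for i in range(1, 1000):
    x = i / 1000.0
    p = (1.0 - x) * x**3
    if p > best_p:
        best_x, best_p = x, p

print(best_x)        # → 0.75 : radiator at 3/4 of the source temperature
print(1.0 - best_x)  # → 0.25 : a Carnot efficiency of just 25%
```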

More exotic options would be the Fusion-Driven Rocket or a space-going stellarator or some such fusion reactor design with a high specific power. In that case it’d be operated more as a pure rocket than powering an electrical rocket. Of course there’s the old Orion option – the External Nuclear Pulse Rocket – but no one wants to put *potential* nuclear warheads into orbit, just yet.

Planet IX?


https://www.caltech.edu/news/caltech-researchers-find-evidence-real-ninth-planet-49523

Presently the details are sketchy. A Neptune-ish size orb out past Neptune – semi-major axis about 20 times Neptune’s, perihelion about 200 AU, and a period of roughly 10,000 years. The discovery paper is here: EVIDENCE FOR A DISTANT GIANT PLANET IN THE SOLAR SYSTEM

What would it be like? Odds are, if it’s one of Uranus or Neptune’s kin, then it’s not a ‘Super-Earth’. Instead it’ll be whatever concoction they are – several theoretical options are available, one of which is that they formed from mostly carbon monoxide ice. The CO then reacted with primordial H2 to make H2O and CH4 – the observed ‘ices’ in both. This could explain their depleted D/H ratios as compared to their supposed cometary building blocks. Some planet formation simulations do throw a fifth ‘Gas Giant’ into the Outer Dark, so it’s a live option.

Alternatively, it is a Super-Earth. If it was formed further out than the other Terrestrials, then it might’ve retained its primordial H2/He atmosphere. Too much of that and there’s no chance of liquid water, but if the surface pressure is under ~200 bar, then the hydrogen greenhouse effect will allow *liquid* water. An Ocean Planet is a real possibility. Perhaps the name ‘Poseidon’ should be considered. The ocean would be Stygian in its darkness, so maybe ‘Tartarus’ would be more apt.

Happy Birthday, Roy Batty

from here http://au.ign.com/articles/2016/01/08/happy-birthday-roy-batty


“Blade Runner” was Ridley Scott’s reworking of Phil Dick’s “Do Androids Dream of Electric Sheep”. The original tale, published in 1968, was set in the futuristic date of 1992. By 1981 this seemed much too close in time, so Scott pushed it back to 2019…

Now 2019 seems not so futuristic – though we might have working “Spinners” by then. Just no “Off-World Colonies” nor “Replicants”. Not sure that’s a bad thing.

Stuhlinger Mars Ship Paper

Disney’s 1957 TV program “Mars and Beyond” introduced the world to a spacecraft design like nothing ever before seen – the “Umbrella Ship”.

Disney Tomorrowland episode “Mars and Beyond”

Disney Mars Fleet - Nyrath Redux

The ion-drive Atomic Umbrella Spaceship, so called for obvious reasons. The umbrella is a vast radiator surface for dissipating the heat from the reactor at the end of the long boom.

Disney Mars Fleet - Ignition

The original source was this paper: ELECTRICAL PROPULSION SYSTEM FOR SPACE SHIPS WITH NUCLEAR POWER SOURCE by Ernst Stuhlinger.

Most of the details are available elsewhere, largely due to Ron Miller’s “Dream Machines” compendium of fictional spacecraft. From the paper itself we get the following data:

Stuhlinger Mars Ship Specs

The asterisk denotes quantities I’ve derived. The payload, which includes the landing vehicle and crew habitat, is 20.5% of the launch mass, which is quite impressive. However the acceleration is very low, albeit optimized for the trajectory chosen. These days we wouldn’t want a crewed vehicle spending weeks crawling through the Van Allen Belts, but back when Stuhlinger computed his trajectory and even when the design aired, the Belts were utterly unknown. Now we’d have to throw in a solar radiation “storm shelter” and I’d feel rather uncomfortable making astronauts spend two years soaking up cosmic-rays in interplanetary space. Even so, the elegance of the design, as compared with the gargantuan Von Braun “Der Mars Projekt” for example, is a testament to Stuhlinger’s advocacy of electric propulsion.

Unpacking Media EM-Drive Reportage

Warp One

NASA tested an ‘impossible’ engine that travels faster than the speed of light | Daily Mail Online.

NASA has trialled an engine that would take us to Mars in 10 weeks

Notice how both have the obligatory “USS Enterprise” image – one from ST:TMP and one from J.J.Abrams’ reboot of the same.

The Daily Mail’s truncated headline is misleading – though the full headline not so much – as the EM-Drive is not an FTL-drive. Even if it might hint at how to create a warp-drive, thanks to the positive space-warp results, that doesn’t make it a warp-drive yet. Only a specific configuration, at much higher energy densities, is likely to create an Alcubierre Warp, in 5-D, proper. What the Mail does get right is to feature a video by Guido Fetta, inventor of the related Q-Drive. Dr Fetta has taken down his Cannae web-site, but the Internet Archive has kindly preserved it. The Mail has confused Dr Fetta’s efforts with Eagleworks’s own. While Eagleworks tested his Drive, and others, the testing invalidated the specific hypothesis that Dr Fetta was assuming in his design. As the Eagleworks paper from last year noted, the Cannae-design Q-Drive had notches in the dielectric disk, while the ‘null’ test article did not. Both produced positive results, thus providing experimental data against Fetta’s specific hypothesis of how the drive creates thrust.

The specific performance figures referenced by the Mail – 4 hours to the Moon and 100 years to Alpha Centauri – come from two quite different presentations and assumed performance levels.

The first is based on Paul March’s presentations from 2007 which looked at ~newtons per watt performance levels, which would allow fuel-cells to power a shuttle for a continuous acceleration transfer to the Moon. The acceleration would be about 1 gee. If that could be sustained, then the trip to Alpha Centauri would take 6 years, not 100.

The second figure is from the more conservative presentation in 2012, which assumed ~milligee acceleration could be sustained. While that acceleration is not as exciting as zooming to the Moon, it opens up the Solar System – that’s where the “70 days to Mars” figure quoted in the second article comes from. Even far-off Pluto can be reached in the same time it takes present day rockets to plod their way to Mars.

Warp-1

The Science Alert piece almost matches the Daily Mail for hype. And uses the same source for the performance figures. What sets the essay apart is this conclusion:

Of course, all of this [EM-Drive & Warp-Drive] requires a lot of gaps to be filled before we can even verify that results like these are possible. But it seems that we’re now in a position where the engine warrants further investigation.

“After consistent reports of thrust measurements from EM Drive experiments in the US, UK, and China – at thrust levels several thousand times in excess of a photon rocket, and now under hard vacuum conditions – the question of where the thrust is coming from deserves serious inquiry,” the NASASpaceflight authors conclude.

We couldn’t agree more.

Warp 1 - 3

More work required before we can hit Warp One, like the “Enterprise”.

Space Drives: Experiments & Theories

Eagleworks, the Johnson Space Center shoe-string “advanced propulsion lab”, is now notorious for testing the theoretically impossible “EM-Drive”, the related Cannae Q-Drive, and several other propellantless propulsion devices. What’s more, their chief scientist Harold “Sonny” White has a theory of “Quantum Vacuum Plasma Propulsion” that has raised the ire of more orthodox physicists, because it posits that the quantum vacuum – the sea of virtual particles created by the various fields composing our world – can be used for ‘jet propulsion’, or distorted to produce propulsive effects.

The latest burst of hype – and this is not the first, as a bit of Googling will tell the reader – comes from a considered, measured bit of reportage that summarises the several-year-long discussion of the Q-Drive work by Paul March (aka Star-Drive) on the NASA Spaceflight Forum. Paul is the consulting engineer for Eagleworks and has long worked on speculative propulsion systems. He shares results and discusses well-meant criticism of the Q-Drive/EM-Drive effort. Here’s the NSF News Summary:

Evaluating NASA’s Futuristic EM Drive (29 April 2015)

One of the co-authors is Dr Jose Rodal, who has worked indefatigably to refine the experimental testing by the Eagleworks crew and eliminate the (many) possible false-positives that might mimic thrust from a drive. While the reportage by the NSF team and Dr Rodal is measured and cautious, sadly the Nerdi-Verse has exploded with both incautious Boosters and dogmatic Nay-Sayers shouting and yelling about the concept, while neglecting to look at the facts, as presented on NSF over a considerable period of time.

Brian Wang’s Next Big Future has reported on the EM-Drive/Q-Drive effort for years and the forum arguments there have raged as well, with the well-meaning sceptics led by GoatGuy, whose physics knowledge and clear writing are very welcome in an often fractious, noisy forum. This little post was the possible vanguard of the current Hype-Storm:

Magnetron powered EM-drive construction expected to take two months – in which the EM-Drive, by Sonny White’s computations, might produce 1,250 newtons of thrust from 100 kW of microwave power.

The basic device which has most recently produced positive results, in vacuum chamber tests, is based on Roger Shawyer’s EM-Drive, an earlier and equally controversial propellantless propulsion system. What particularly irks orthodox physicists (and mathematical physicists, like Greg Egan) is Shawyer’s claim that his EM-Drive works in a way that obeys relativity. That it violates conservation of momentum while doing so immediately hinted that Shawyer’s mathematical treatment of his concept was incomplete – as Greg Egan was quick to point out.

Unfortunately for Shawyer’s critics, positive results from his own experiments, from a Chinese team, and now from Eagleworks, suggest that something is missing in our current best theoretical understanding of how the world works. But what? I made this comment recently on Facebook:

It’s arguable whether it could be called a “hyper-space drive”, but it’s not a bad title. Here’s why: currently Sonny’s warp-drive concept requires the existence of the 5th dimension aka “Hyper-space” to work. If the Q-Drive/EM-Drive thingie is also confirmed, and is genetically related to the warp-drive, then it too probably works by some sort of 5-D effect. It almost certainly doesn’t work via the dubious physics that Shawyer has invoked. The recent interferometer test which has produced data *suggestive* of a space-warp being generated via the modified Q-Drive rig would not work if plain vanilla General Relativity is 100% correct. There’s just not enough energy density in the test device to warp space in an observable way. To produce a warp – as Sonny has said all along – requires the *existence* of Hyper-space. IFF the warp really is a warp, and not experimental noise, then it’s evidence of Hyper-space. In some ways that’s an even more incredible experimental outcome than some minor “violation” of action-reaction laws.

…which I will expand on in the sequel to this post. Before we go there, let’s look at the landscape of “advanced propulsion”, with some annotated links:

Roger Shawyer’s EM-Drive site: SPR Ltd

    Shawyer has written numerous papers over the last decade or so, and has had the Chinese work reporting replication of his EM-Drive professionally translated. Exactly what the Chinese space establishment make of this replication no one presently knows.

Mike McCulloch’s Physics from the Edge

Mike is a radical physicist with a theory of inertia, based on the Unruh effect, which might explain the EM-Drive’s positive results, as well as Dark Matter, Dark Energy and other cosmological mysteries. His EM-Drive paper is here: Can the EM-Drive be Explained by Quantised Inertia?

Eagleworks Lab is represented by a series of papers, some available on the NASA web-site:

Eagleworks Laboratories: Advanced Propulsion Physics Research (2011)

Warp Field Mechanics 101 (2011) – This paper was presented at the 2011 100 Year Starship Symposium in Orlando, Florida, which I presented at as well.

Warp Field Mechanics 102: Energy Optimization – This one was presented by Harold White at the 2013 Starship Congress, organised by Icarus Interstellar. Sonny proposes (highly speculative) ways that an actual warp-drive could be created for very “low” negative energy amounts.

After this rather abstract paper I personally felt Sonny’s warp-concepts were interesting, but for the distant future. However another conference piece changed my mind. More soon. First, this now famous essay…

Anomalous Thrust Production from an RF Test Device Measured on a Low-Thrust Torsion Pendulum
An abstract for this paper, now liberated from an AIAA pay-per-view. Confusingly, when this one first exploded into the Nerdi-Verse, only the abstract was reported on, causing many sceptics of the Drives to (incorrectly) assert that the tests had failed. What the tests had failed to do was run under vacuum conditions, which gave the more attentive critics reason for scepticism.

What hadn’t hit public consciousness, but hit me was an earlier conference paper on Sonny’s Q-Drive concept, based on old experimental work by Paul March on Jim Woodward’s Mach Effect Thruster (MET). I won’t discuss the MET here, as it’s a whole other effort with a different theoretical basis, but genetically the MET experimental effort is what brought engineer Paul March into advanced propulsion, and finally working for Eagleworks.

Here’s the conference paper: Advanced Propulsion Physics: Harnessing the Quantum Vacuum, which exploded in my mind with the rather glorious prospect of flying to the Outer Planets in days, rather than years. Except the numbers didn’t quite work, as I mentioned to both Paul and Sonny at the time.

Some of Sonny’s earlier papers are linked at this early Crowlspace post: White Papers

At one point in time he worked with Eric Davis, uber advanced-propulsion guru, on the higher-dimensional aspects of the original warp-drive: The Alcubierre Warp Drive in Higher Dimensional Spacetime

Other theories of how the EM-Drive, or the related Cannae Q-Drive, might work have appeared since the current buzz began a couple of years ago. Fernando Minotti, an Argentinian physicist, has suggested one theoretical option using Scalar-Tensor theory, a mathematical alternative to General Relativity. In most physics tests the two theories give identical results, but not in all – the EM-Drive, and kin, might be one such example: Scalar-tensor theories and asymmetric resonant cavities

Minotti has pointed out a particular Scalar-Tensor theory of gravitation, developed by Jean Paul Mbelek, as the relevant theoretical basis. His work is represented on the arXiv here: Mbelek, Jean Paul

A more recent theoretical discussion has also appeared on the arXiv here: DEF: The Physical Basis of Electromagnetic Propulsion by Mario J. Pinheiro. Exactly what might come of his discussion is hard to judge. Only more experimental data can adequately guide us through so many theoretical choices.

DNA oligomers via liquid crystal ordering

Abiotic ligation of DNA oligomers templated by their liquid crystal ordering : Nature Communications : Nature Publishing Group.

New study hints at spontaneous appearance of primordial DNA

The mystery of life solved?

Just the other day we had news of how the “stuff of life” was formed from basic chemical components – hydrogen cyanide and hydrogen sulfide – and now another suggestion on how nucleotides can form oligomers spontaneously. Nucleotides are the individual ‘-mer’ units of DNA or RNA polymers – polymers being long chain molecules made of many identical or similar sub-units. In the case of DNA each -mer has a nitrogenous base – Adenine, Thymine, Guanine and Cytosine, while RNA uses Uracil instead of Thymine – to which is attached a ribose sugar and a phosphate group or two.

Anatomy & Physiology, Connexions Web site. http://cnx.org/content/col11496/1.6/, Jun 19, 2013.

RNA has long been known to act as an enzyme, a molecular machine, and forms most of that key molecular machine in all cells, the ribosome. When RNA is performing enzymatic processes it’s called a ribozyme – and the key enzymatic activity of ribosomes is carried out by RNA. Even DNA can act as an enzyme, though so far only in the lab: deoxyribozyme

The interesting question, for astrobiologically minded folks such as readers of this blog, is whether the newly observed spontaneous polymerising of RNA/DNA can lead to ribozymes able to catalyze their own formation. If such can be observed to form spontaneously, then Life-as-We-Know-it can happen anywhere with the right conditions. But – and this is a more subtle question – how different can it be from our exact version? For example, there are 20 amino acids used by our kind of life, but potentially hundreds or thousands that *could* be used. Then there’s the Genetic Code – the specific way that sequences of the DNA nucleotides are translated into the strings of amino acids that make up proteins. The specific three nucleotide code which “spells out” the amino acids in our proteins is just one of many possible codes, of which there are ~1.5 x 10^84 possibilities to choose from. Why this specific set? And why are deviations from this specific Genetic Code so rare in living things?
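The scale of that number is easy to illustrate with a naive count – assigning each of the 64 codons independently to one of the 20 amino acids or ‘stop’. This is not the exact combinatorial argument behind the ~1.5 x 10^84 figure, but it lands in the same territory:

```python
# Naive upper bound on the number of genetic codes: each of the 64 codons
# gets one of 21 meanings (20 amino acids + stop), ignoring the constraint
# that every amino acid must be assigned at least once.
codes = 21 ** 64
print(len(str(codes)))  # 85 digits, i.e. a number of order 10^84
```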

The fact that the Code is seemingly arbitrary, as we have successfully changed it in organisms, also poses the question: Was our Genetic Code chosen? If so, by Who?

Smallest Life as We Know It

Diverse uncultivated ultra-small bacterial cells in groundwater

Nature has taken the praise-worthy step of making content available to a wider readership, via active links to ReadCube versions of the papers in media partners, like “The Huffington Post” in this case.

This particular paper reports on the characterisation of ultra-small organisms, which otherwise have proven impossible to culture. To make them available for study, samples were put into a rapid cryogenic freezer “snap freezing” them for electron microscopy. The results are fascinating, as the organisms are abundant and close to the smallest theoretical size for viable lifeforms.

Ultra-small bacteria

A bare minimum for any living thing, based on DNA, is a cell-wall, a means of making proteins (i.e. a ribosome, which translates the DNA chain into amino-acid chains that then fold into proteins) and the raw materials to make them from. That minimum size is around 200 nanometres – 0.2 millionths of a metre. These new organisms are as follows in this table:

Table 1

They’re so small that the volumes taken up by their intra-cellular space and their cell-walls are comparable. A cubic micrometre is a volume of just 1E-18 cubic metres, or a water mass of just 1E-15 kilograms. Thus the smallest cell in the table masses ~4E-18 kg – in atomic mass units that’s 2.4 billion amu. As the average atomic mass of living things is ~8 amu, that means a cell composed of just ~300 million atoms. The cell walls look crinkled because we’re seeing their molecular structure up-close.
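The arithmetic in that paragraph, spelled out:

```python
cell_mass_kg = 4e-18       # smallest cell in the table
amu_kg = 1.66e-27          # one atomic mass unit in kilograms
mean_atomic_mass = 8.0     # rough average for living matter, per the text

mass_amu = cell_mass_kg / amu_kg
atoms = mass_amu / mean_atomic_mass
print(f"{mass_amu:.1e}")   # → 2.4e+09 amu
print(f"{atoms:.1e}")      # → 3.0e+08 atoms
```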

Table 2

Regular bacteria can contain thousands of ribosomes, but these newly characterised micro-bacteria contain ~30-50. This indicates that they reproduce very slowly. Their genomes are also very short. Escherichia coli, a common bacterium in the human enteric system, has over 4.6 million base-pairs in its genome (humans have 3.2 billion) – but these new species range from just under 700,000 to about a million base-pairs.

Genome Data

Genomes under a million base-pairs usually mean the organisms are obligate “parasites”, living around or inside larger organisms to source genes and metabolites that they can’t make themselves. Larger genomes mean they’re able to make all the required proteins, although they may live very slowly since they have so few ribosomes.

A key question, in molecular biology and astrobiology alike, is how the simple biomolecules produced by chemistry [as reviewed in this recent blog-post] came together to produce the “simplest” organisms. If we imagine a ‘soup’ of simple biomolecules, massing ~300 amu each, then about 8 million of them are required to produce the smallest organism described above. Merely throwing them together will not produce a living organism – the probability is something like 10^-8,000,000. This is the chief puzzle faced by “Origins of Life” researchers.
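The ~8 million figure is simply the smallest cell’s mass, from the previous section, divided by the mass of one building block:

```python
cell_mass_amu = 2.4e9      # smallest cell, from the earlier estimate
monomer_mass_amu = 300.0   # a typical simple biomolecule
monomers = cell_mass_amu / monomer_mass_amu
print(monomers)  # → 8000000.0
```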

There have been many clever proposals, with simplified metabolic cycles controlled by ribozymes (RNA enzymes) using an RNA based genetic code being a contender for an even smaller, simpler life-form. From the other direction, we know that plausible chemistry can concentrate and link up monomers of RNA nucleotides, forming quite long oligomers of RNA. Surprisingly short sequences show ability to work as ribozymes, thus a ready pool of ribozymes could conceivably form. From such a pool, some self-replicating set could form, which evolution could amplify into ever greater degrees of complexity. Just which particular ribozymes started down that road in the case of terrestrial life we don’t yet know. We might never know, since we have no clear molecular fossils from so deep in our past. Or do we? There’s a hint of that RNA past in the structure of ribosomes themselves. As we learn more about those “molecular assemblers” that produce all living things from raw amino acids, we will gain more insights into the rise of Life from molecules.

A thing to remember is that living things, at a biomolecular level, are continually converting simple molecules into incredibly complex ones via ‘simple’ chemistry. And those complex molecules show a particular ‘intelligence’ of their own. A good example is protein folding – to work, proteins fold up into specific 3-D configurations. Just how they do so has puzzled physical biochemists for years. It can’t be via simply twisting around at random. To see why, consider a short protein just 300 amino acids long. If, in each connection between amino acids, there are just 2 possible positions they can take, then the total number of possible configurations is 2^300 ≈ 2E+90 – a ridiculously large number of options to search through one-by-one. Thus it is impossible to jiggle an amino acid string randomly into the correct configuration of a protein in the mere minutes that protein folding is observed to take. The process is non-random and the way the sequence folds up is driven by the electromagnetic properties of each amino acid, so that they’re driven to form a protein’s structure through their mutual attractions and repulsions. Proteins are also very resilient to random changes in the amino acid sequence – only very rarely do single amino acid mutations produce a totally malfunctioning protein. Fortunately for us, as we depend on properly functioning proteins!
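The combinatorial explosion in that folding example is easy to demonstrate; the 10^12 tries-per-second rate below is an illustrative assumption, not a measured figure:

```python
# Levinthal-style estimate: a 300-residue chain with just 2 conformations
# per peptide bond has 2^300 configurations. Even sampling an (assumed)
# 10^12 conformations per second, a blind random search takes vastly
# longer than the age of the universe (~4e17 seconds).
conformations = 2 ** 300
rate = 1e12                       # tries per second (illustrative)
seconds = conformations / rate
print(len(str(conformations)))    # 91 digits, i.e. ~2 x 10^90
print(seconds > 4e17)             # → True
```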

Making the Stuff of Life in One Batch

Common origins of RNA, protein and lipid precursors in a cyanosulfidic protometabolism : Nature Chemistry : Nature Publishing Group.

Life is built on chemistry, but the chemistry required to jump from the basic amino acids and small organic molecules expected on the Early Earth to the components of Life-As-We-Know-It – lipids, proteins and nucleic acids – has been obscure, until now. The scenario outlined in Nature Chemistry [doi:10.1038/nchem.2202] also explains why the various chemical components of our kind of life are so very similar. Even though they perform quite different roles, the building blocks are similar, produced originally by very similar chemical processes.

Original caption: The degree to which the syntheses of ribonucleotides, amino acids and lipid precursors are interconnected is apparent in this ‘big picture’. The network does not produce a plethora of other compounds, however, which suggests that biology did not select all of its building blocks, but was simply presented with a specific set as a consequence of the (photo)chemistry of hydrogen cyanide (11) and hydrogen sulfide (12), and that set turned out to work. To facilitate the description of the chemistry in the text, the picture is divided into four parts:

(a) Reductive homologation of hydrogen cyanide (11) (bold green arrows) provides the C2 and C3 sugars — glycolaldehyde (1) and glyceraldehyde (4)—needed for subsequent ribonucleotide assembly (bold blue arrows), but also leads to precursors of Glycine, Alanine, Serine and Threonine.

(b) Reduction of dihydroxyacetone (17) (the more stable isomer of glyceraldehyde (4)) gives two major products, acetone (18) and glycerol (19). Reductive homologation of acetone (18) leads to precursors of Valine and Leucine, whereas phosphorylation of glycerol (19) leads to the lipid precursor glycerol-1-phosphate (21).

(c) Copper(I)-catalysed cross-coupling of hydrogen cyanide (11) and acetylene (32) gives acrylonitrile (33), reductive homologation of which gives precursors of Proline and Arginine.

(d) Copper(II)-driven oxidative cross-coupling of hydrogen cyanide (11) and acetylene (32) gives cyanoacetylene (6), which serves as a precursor to Asparagine, Aspartic acid, Glutamine and Glutamic acid. Pi, inorganic phosphate.

The key-point is the relatedness and the step-by-step creation of one component or another, by short chemical processes, from the basic materials. To have produced such serial chemistry would have required means of isolating the raw materials and products, then mixing them. The next image from the paper provides a hint of what would’ve been required, on some sun-drenched landscape, swept by occasional rains, in an atmosphere of (probably) H2, N2, CO2 and H2O…

Chemosynthesis Sequence
Original caption: A series of post-impact environmental events are shown along with the chemistry (boxed) proposed to occur as a consequence of these events.

(a) Dissolution of atmospherically produced hydrogen cyanide results in the conversion of vivianite (the anoxic corrosion product of the meteoritic inclusion schreibersite) into mixed ferrocyanide salts and phosphate salts, with counter cations being provided through neutralization and ion-exchange reactions with bedrock and other meteoritic oxides and salts.

(b) Partial evaporation results in the deposition of the least-soluble salts over a wide area, and further evaporation deposits the most-soluble salts in smaller, lower-lying areas.

(c) After complete evaporation, impact or geothermal heating results in thermal metamorphosis of the evaporite layer, and the generation of feedstock precursor salts (in bold).

(d) Rainfall on higher ground (left) leads to rivulets or streams that flow downhill, sequentially leaching feedstocks from the thermally metamorphosed evaporite layer.

Solar irradiation drives photoredox chemistry in the streams. Convergent synthesis can result when streams with different reaction histories merge (right), as illustrated here for the potential synthesis of arabinose aminooxazoline (5) at the confluence of two streams that contained glycolaldehyde (1), and leached different feedstocks before merging.