Chemistry, Colors, Images and Reality

I did a Twitter thread the other day that combined some chemistry with thoughts on the JWST images that have been released recently, and several people asked if I was going to expand it into a blog post. So here we are! This is going to be partly about chemistry (particularly about spectroscopy), partly about astronomy, and partly about human color perception. That’s just to let anyone know that if this isn’t quite the blend you’re looking for today, it’s not going to get much better past this paragraph! For chemists, it’ll be a blend of stuff you know well and (I hope) some things that you haven’t thought about much before.

OK, let’s start with infrared/visible/ultraviolet spectroscopy. These are optical spectroscopic techniques that chemists start to learn about in their undergraduate classes, and many decades ago they were the cutting edge techniques for identifying and studying molecules and compositions. And they’re still valuable, not least because you don’t have to have the sample right in front of you to get them to work. Nuclear magnetic resonance (NMR) and mass spec are tremendous workhorses here in these days of modern times, as are high-performance liquid chromatography, X-ray crystallography, cryo-electron microscopy and others, but none of those can be performed from a distance. You have to have the sample in hand (or in the grip of a robotic sample handler) at the very least, and you may well have to do some specific prep work on it to get any useful data at all.

Those three spectroscopy techniques I mentioned above are often run in “absorbance mode”, where the appropriate light is passed through a sample. Different molecules will absorb particular parts of the light spectrum, and you can tell an awful lot about things by detecting what’s missing compared to the original light source. In the infrared region, molecules absorb light at frequencies that correspond to actual physical motions of the atoms and bonds – the IR absorptions have names like “stretching” and “wagging”, and that’s exactly what’s going on. Carbonyl groups (carbon double-bonded to oxygen) of all types have a famously strong absorption region, a stretch, that moves around a bit according to whether you’re looking at (or through!) an aldehyde, a ketone, or an amide. Nitriles (carbon triple-bonded to nitrogen) have a weirdly sharp spike of a stretching absorption in a part of the IR spectrum where not much else ever shows anything, so that’s diagnostic, and so on.
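
If you want to see what “detecting what’s missing” means in numbers, here’s a minimal Python sketch. The intensities and the absorbance() helper are invented for illustration; the relation A = −log10(I/I0), comparing transmitted light to incident light, is just the standard definition of absorbance.

```python
import numpy as np

def absorbance(incident, transmitted):
    """Absorbance A = -log10(I / I0): how much light the sample removed."""
    return -np.log10(np.asarray(transmitted) / np.asarray(incident))

# Toy IR example: a strong carbonyl-type stretch near 1700 cm^-1 soaks up
# most of the light at that frequency, with little absorption elsewhere.
wavenumbers = np.array([1600, 1650, 1700, 1750, 1800])   # cm^-1
incident    = np.array([100., 100., 100., 100., 100.])   # source intensity
transmitted = np.array([ 98.,  90.,  12.,  85.,  97.])   # what got through

for wn, a in zip(wavenumbers, absorbance(incident, transmitted)):
    print(f"{wn} cm^-1: A = {a:.2f}")
```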

Visible light and ultraviolet (“you-vee vizz” spectroscopy, in the lingo) are more energetic wavelengths, and up there you stop seeing absorptions for bond motions. What comes on instead are absorptions of the electron “clouds” involved in chemical bonding. It’s a topic with complexities I’m not even going to try to address here, but a good example is the way that some organic compounds are highly colored to our eyes while others are just white/transparent. All those compounds I was using the other day as examples for calculating combustion analysis, for example (hexane, cyclohexane, benzene, pyridine) are clear liquids to the human eye, or at least had darn well better be if you’re going to use them for anything important. Nothing in their structures really absorbs light at visible wavelengths, so they’re transparent. These things absorb (if at all) out in the ultraviolet wavelengths, and we can’t see that stuff at all with the naked eye. But if you start making structures with more extended conjugated double-bond and aromatic structures, scattering some heteroatoms in for seasoning, you can end up with large “delocalized” electron clouds that can start absorbing visible light.
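
The “bigger delocalized cloud, longer absorption wavelength” trend can be roughed out with the old particle-in-a-box (free-electron) model from undergraduate quantum chemistry. Here’s a toy Python version – the chain-length convention and the 0.14 nm bond spacing are my own rough choices, and the absolute wavelengths from a model this crude run long, so take only the trend seriously: more conjugation, smaller HOMO–LUMO gap, and absorption sliding out of the deep UV toward the visible.

```python
h = 6.626e-34    # Planck constant, J*s
m_e = 9.109e-31  # electron mass, kg
c = 2.998e8      # speed of light, m/s
BOND = 0.14e-9   # ~1.4 Angstrom per carbon-carbon bond (rough)

def lowest_absorption_nm(n_double_bonds):
    """HOMO -> LUMO wavelength for a linear polyene treated as a 1-D box."""
    n_carbons = 2 * n_double_bonds
    n_pi = n_carbons                  # one pi electron per carbon
    box = n_carbons * BOND            # chain length plus a little slack
    homo = n_pi // 2
    gap = h**2 * (2 * homo + 1) / (8 * m_e * box**2)
    return h * c / gap * 1e9

for k in range(2, 7):
    print(f"{k} conjugated double bonds -> ~{lowest_absorption_nm(k):.0f} nm")
```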

You start edging into this from the blue end of the spectrum as the absorption bands start dropping down out of the ultraviolet. And that, folks, is why so many gunky organic impurities in the chemistry labs are various shades of red, orange, and yellow: they are absorbing the blue end of the spectrum and passing the red end through, and that’s what our eyes get to see. If you have a thick mixture of stuff absorbing the whole bluish end of things, you have now made what our eyes perceive as red-brown, and we have plenty of that on offer, too. As that stuff goes down a chromatography column it tends to separate out into various red, orange, and yellow gorp bands. The colors seen in the clouds of Jupiter are surely related. Similarly, that’s why so few organic chemistry compounds are any sort of true blue color. For that, you’d need some electronic structure that passes the blue light but absorbs the whole red end of the spectrum, the lower-energy end, and you need an odd kind of carbon-chain-based molecule to give you that profile. The famous example is azulene, a small hydrocarbon that does indeed have a very weird arrangement of double bonds and is inarguably blue. You won’t see many of ‘em – indeed, good blue dyes and pigments have historically been rather rare and valuable.
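
That “absorb the blue end, look red/orange/yellow” logic is just the complementary-color relationship from the textbooks. A rough Python lookup makes the point (the band edges and wording are approximate and my own; real absorption bands are broad and overlapping):

```python
# Approximate "absorbed band -> perceived color" pairs, textbook-style.
ABSORBED_TO_PERCEIVED = [
    ((400, 430), "violet absorbed -> looks yellow-green"),
    ((430, 490), "blue absorbed   -> looks yellow/orange"),
    ((490, 560), "green absorbed  -> looks red/purple"),
    ((560, 600), "yellow absorbed -> looks blue/violet"),
    ((600, 700), "red absorbed    -> looks blue-green"),
]

def perceived_color(lambda_max_nm):
    for (lo, hi), description in ABSORBED_TO_PERCEIVED:
        if lo <= lambda_max_nm < hi:
            return description
    return "absorbs outside the visible -> looks colorless to us"

for lam in (280, 450, 620):   # a UV-only absorber, a blue absorber, a red absorber
    print(f"absorption max at {lam} nm: {perceived_color(lam)}")
```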

So as you use your eyeballs and note the colors of things, you are in fact doing visible-light spectroscopy on the fly. Much of that is absorption-based, as detailed above, but there’s also emission spectroscopy, where things actually emit their own light. That brings up the question of what happens when a compound absorbs light, anyway: where does that energy go? In many cases, it just dissipates as heat (molecular motion), but if things are lined up right, that energy can be spit back out as. . .more light. For organic chemistry compounds, that absorption/emission process I’ve just described is fluorescence. There’s also luminescence, where particular sorts of structures are in high-energy states due to strained or energetically unfavorable bonds and relieve that by emitting light as well.

Bright glowing colors on clothing, paint, and posters are fluorescence. Light is hitting the dyes and pigments in these goods and is being emitted again at different wavelengths (the impression that they’re glowing is not an illusion!). Meanwhile, fireflies, glowsticks, and things like jellyfish are luminescent: they’re producing higher-energy chemical intermediates that break down and give off light, and it takes chemical energy (some oxidizing reagent and a substrate in a glowstick, and food or more proximally ATP in a cell) to keep doing that. Both fluorescence and luminescence emit light at specific wavelengths, which takes us into the wonderful world of quantum mechanics (never very far away, to be honest). Fluorescent compounds, for their part, also absorb light at specific wavelengths in order to emit it again at different ones, which is why they tend to be brightly colored on their own.
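
The energy bookkeeping in that absorb-then-emit cycle is simple to write down: the emitted photon comes out at a longer wavelength (lower energy) than the absorbed one, and the difference – the Stokes shift – goes off as heat. The sketch below uses rough, textbook-ish numbers for fluorescein (around 490 nm in, 520 nm out); treat them as illustrative rather than gospel.

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_eV(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

absorbed_nm, emitted_nm = 490.0, 520.0   # rough fluorescein values
print(f"absorbed: {absorbed_nm} nm = {photon_energy_eV(absorbed_nm):.2f} eV")
print(f"emitted : {emitted_nm} nm = {photon_energy_eV(emitted_nm):.2f} eV")
print(f"Stokes shift: {emitted_nm - absorbed_nm:.0f} nm "
      f"({photon_energy_eV(absorbed_nm) - photon_energy_eV(emitted_nm):.2f} eV lost as heat)")
```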

There are all sorts of variations and subtleties – for example, energy can be transferred over really short distances between luminescent and fluorescent molecules in a funky radiationless process called FRET that we use a lot in assay setups, because it’s a great marker of when two (bio)molecules get close enough to each other. But in that case and all other assays built on fluorescence and/or luminescence, we’re looking at specific wavelengths/colors and adjusting our setups to get the best sensitivity and selectivity. These days you can get a really wide spectrum of dyes and engineered proteins that absorb and emit all up and down the range, which lets you produce (among other things) those spectacular multi-labeled fluorescent images of living cell structures.
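
What makes FRET such a good proximity reporter is the brutal distance dependence of the transfer efficiency, E = 1/(1 + (r/R0)^6). Here’s a quick sketch – the 5 nm Förster radius is just a representative value, since the real R0 depends on the particular donor/acceptor pair.

```python
def fret_efficiency(r_nm, r0_nm=5.0):
    """Forster transfer efficiency: E = 1 / (1 + (r/R0)^6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Efficiency collapses over just a few nanometers around R0 -- which is why
# FRET lights up when two labeled (bio)molecules actually come together.
for r in (2, 4, 5, 6, 8, 10):
    print(f"donor-acceptor separation {r:>2} nm -> efficiency {fret_efficiency(r):.2f}")
```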

But whole molecules aren’t the only things that emit light. Single atoms of individual elements can do it, too, under rather more energetic conditions than you’d ever find in a cell. In this case (atomic emission spectroscopy) it’s the electrons around individual atoms that are getting energized and sent up to higher levels, and they also emit specific wavelengths when they drop back down. You can do absorption spectroscopy on these things, too, because as above, the wavelengths that individual elements can absorb are also specific to their electronic structures: quantum mechanics at work right in front of you. That is what provides the colors in a fireworks display, quantum flippin’ mechanics. It’s the same thing that happens in a flame test demonstration: lithium atoms emit a bright pink/scarlet light when energized, copper and boron emit blue/green light, and sodium sends out a pair of very bright and absolutely diagnostic yellow lines. As a side note, that’s part of the problem that fireworks makers experience in trying to get a true, bright, brilliant blue color: getting a mixture that emits just that high-energy light and no other wavelengths to muddy things up is not so easy.
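
For hydrogen, at least, you can calculate those emission wavelengths directly from the allowed energy levels with the Rydberg formula – the same quantum mechanics that’s running the fireworks. This little sketch reproduces the visible (Balmer) lines, including the 656 nm red one that will come up again below.

```python
R_H = 1.0968e7   # Rydberg constant for hydrogen, per meter

def balmer_wavelength_nm(n_upper):
    """Wavelength of the n_upper -> 2 transition in atomic hydrogen."""
    inv_wavelength = R_H * (1.0 / 2**2 - 1.0 / n_upper**2)
    return 1e9 / inv_wavelength

for n in (3, 4, 5, 6):
    print(f"n = {n} -> 2 : {balmer_wavelength_nm(n):.1f} nm")
# n=3 gives ~656 nm (H-alpha, red); n=4 gives ~486 nm (H-beta, blue-green).
```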

This all started to become apparent after Newton demonstrated spreading white light out into its color spectrum with a prism. Fraunhofer’s improvements on that process showed that sunlight (and other sources of light) had weird dark lines in the bands of color, but no one knew what these “Fraunhofer lines” could be. In the 1850s, Kirchhoff and Bunsen (yeah, the burner guy) demonstrated that individual heated gases emitted light at specific wavelengths (corresponding to the elements within) and that the dark lines in sunlight were absorption bands corresponding to these “bright line” emissions. It became apparent, to widespread scientific amazement, that we could sit here on Earth and determine the elements present in the Sun by analyzing its light. And not just the Sun – other stars as well.

It had been clear since humans first looked up at the night sky that some stars were different colors than others, but now we could start assigning these to different temperatures and elemental compositions. All stars are mostly hydrogen – to a good approximation, the whole universe is mostly hydrogen, which is why astronomers tend to call everything past helium a “metal”. But there are a lot of tiny variations that build up from subtle differences in the gas and dust clouds that formed those stars, and show up as they age and produce heavier elements in turn. As an example, we first found the element tellurium in stars (very distant ancient ones) only ten years ago. Emission spectroscopy has revolutionized our understanding of the universe, since we can take detailed spectra of bright objects from ridiculous distances – handy because everything out there is a ridiculous distance away from most everything else, and definitely from us. And it’s all a powerful, inarguable demonstration that the laws of physics and chemistry as we understand them seem to extend to the ends of the visible universe.
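
The simplest piece of that color-to-temperature assignment is Wien’s displacement law: a star’s thermal emission peaks at a wavelength of roughly b/T, so hotter stars peak toward the blue and cooler ones toward the red, and the spectral lines then fill in the composition. The temperatures below are just representative round numbers.

```python
WIEN_B = 2.898e-3   # Wien's displacement constant, meter-kelvin

def peak_wavelength_nm(temperature_K):
    return WIEN_B / temperature_K * 1e9

for label, T in [("cool red dwarf", 3000), ("the Sun", 5800), ("hot blue star", 20000)]:
    print(f"{label:>15} ({T:>5} K): emission peaks near {peak_wavelength_nm(T):.0f} nm")
```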

The JWST has tremendous spectroscopic instruments on board and has already started producing a monstrous flood of data that I hope continues for many years to come. The NIRSpec is a fearsome instrument, and the spectroscopes in the MIRI and FGS/NIRISS add even more capabilities. NIRSpec has an impressive array of microshutters and slits that allow it to take spectra of up to 100 objects in a given field simultaneously, for example, and remember that all of these instruments are collecting at wavelengths that are difficult-to-flat-out-impossible to observe here on Earth. So believe me, I love the imaging capabilities of the telescope, but the less-immediately-sexy spectroscopic ones are going to shake us all up eventually as well.

That was already demonstrated in the first imaging set, with the spectrum of the atmosphere of WASP-96b. That’s an exoplanet of a type that we’d never even dreamed existed before we started being able to spot these things – it has a smaller mass than Jupiter but a noticeably larger diameter, because the thing is orbiting way up close to its star, far inside the orbit of Mercury in our own system. So it’s basically puffed up by the heat and radiation, and sits at a temperature of just over 1000 °C. You won’t find a single science-fiction story (to my knowledge) that ever postulated such a thing; conditions in such a planet’s atmosphere are really difficult to imagine or model.

The NIRISS instrument (a slitless spectrometer that’s really good for isolated point sources) took several hours of data on this beast as it passed in front of its star: and that, folks, is IR absorbance spectroscopy done from over a thousand light years away and using a star as the light source behind the sample, which in this case is an entire planet’s atmosphere. The fact that we can carry this off gives me the shivers. A truly jealous God would probably smack us down for having the presumption even to try such a thing with His creation, while a more loving one might well be cheering us on. Measuring the absorbance changes as the planet moved across the face of its star identified water bands in its spectrum, which is pretty surprising considering that ridiculous temperature and how long it’s been baking next to the hearth. Our best observatories had already detected sodium in the atmosphere of the same planet, but had also concluded that it must be cloudless – the JWST data argue otherwise. We have never had an instrument that could collect these data in this way, and we’re going to be collecting huge amounts of it. Just in the next year’s worth of observations we’ll have spectroscopy of the surfaces/atmospheres of dozens of different types of exoplanets: we get to see these places start to fill in, chemically, meteorologically, and geologically right in front of our eyes. It’s a great time to be alive to watch it happen.
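
The measurement itself boils down to very careful bookkeeping of transit depths: the star dims by roughly (R_planet/R_star)², and at wavelengths where the atmosphere absorbs, the planet’s effective radius is a hair larger, so the dip is a hair deeper. That wavelength-dependent difference is the spectrum. The sketch below is purely schematic – the radii and the “extra atmosphere” numbers are made up, not the actual WASP-96b values.

```python
R_STAR = 1.0            # stellar radius, arbitrary units
R_PLANET_BASE = 0.12    # planet radius where the atmosphere is transparent

def transit_depth(extra_opaque_height):
    """Fractional dimming when the effective planet radius grows a little."""
    return ((R_PLANET_BASE + extra_opaque_height) / R_STAR) ** 2

continuum = transit_depth(0.000)   # outside an absorption band
in_band   = transit_depth(0.002)   # inside (say) a water band: opaque a bit higher up
print(f"depth outside the band: {continuum:.5f}")
print(f"depth inside the band : {in_band:.5f}")
print(f"difference            : {in_band - continuum:.5f}  <- the spectroscopic signal")
```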

Now, about those images. A constant question with any astronomical image is “Is that what it really looks like?”, and there are several answers to that one. Sometimes the answer is “Yeah, if only it were closer/brighter or if our eyes were more sensitive than they are”. That’s the case with those photos of the Milky Way stretching across the desert sky that you see. The night skies in such places are indeed a lot more impressive than anything you see in more populated areas, and I strongly recommend taking the opportunity to see them. But you shouldn’t expect them to look the way that they do in a long-exposure photograph, which is what all of those are. You’re seeing the real light in such shots, but you’re seeing more of it than your eyes could ever collect, because our retinas (and our visual cortices) are not optimized for long exposures.

The same goes for all astronomical photographs of any deep-sky objects. The Moon and the planets (and the Sun, naturally!) are bright enough for short exposures, but nothing else out there is, that’s for sure. You need a sensitive detector, a long exposure (or a whole series of longish ones to stack on top of each other) and a really good motorized mount to track these things as they slowly move across the sky with the Earth’s rotation. That’s why “astrophotography” is a pretty expensive word. You also need plenty of time and skill to process the resulting data. Over my time in amateur astronomy I’ve watched film die and be replaced by what is now far more capable digital imaging technology, but it comes with quite the learning curve when it’s time to deal with all those pixels you’ve collected.

So you’ll see a lot of photos of nebulae (the Great Nebula in Orion, the Trifid, the Lagoon and many, many others) where the dominant colors are a sort of pink/red and an electric blue. Those are, in fact, the real colors of these things. The red is the hydrogen-alpha (H-alpha) emission line from ionized hydrogen (HII) regions, 656 nanometers and change, and it’s a nice rich red color. Unfortunately, our retinas are not very sensitive to nice rich red colors! You can see the H-alpha color in theory, but it has to be pretty darn bright, and deep-sky objects are not pretty darn bright (see below). The absolute spectral response of any detector is not going to be even across all wavelengths, and our eyeballs are no different. Add to that the processing done on the back end in the human brain, and you start to realize what a weird thing “color” is in the first place.

The familiar color spectrum that Newton clarified for us is not, for example, an even rainbow. Look at a real spectrum rather than a schematic of one: there’s a huge long red section, because our eyes and brain can’t tell much difference between everything from maybe 640nm wavelength to out past 700. It all looks red. Then there’s a brief slide through orange peaking around 590 or 600, an even briefer trip through pure yellow centered at maybe 560 to 570, followed by a much wider band of various sorts of green. Then you hit a relatively narrower zone of light blue, which pretty quickly starts to get purple mixed into it, and the shortest wavelengths all trail off into that purple/violet color and fade out (just as the longer ones disappear into fainter and fainter red). There’s nothing intrinsically “yellow” about 565nm light or “green” about 510nm, and there’s nothing that makes yellow relatively scarce in the spectrum, either: that’s just what our retina/brain combination assigns (nonlinearly) to the wavelengths. Different creatures map it all differently, and what’s more, they go off into wavelengths on each end that our eyes aren’t sensitive to at all. A famous example is how flowers look to us, compared to how they look to the more ultraviolet-sensitive systems that insects use. The JWST is an example at the infrared end. It can see objects that aren’t emitting any light our eyes can detect at all, so the answer (for some of its images) to the question “What does it really look like?” is “Nothing, it’s totally invisible”.
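
If you wrote that uneven carve-up of the spectrum down as a lookup table, it would look something like the sketch below. The band edges are approximate (sources disagree by ten or twenty nanometers) and the names are purely a human convention – which is rather the point: the red band is enormous and the yellow band is a sliver.

```python
# Rough, conventional wavelength bands for human color names (nanometers).
BANDS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 630, "orange"),
    (630, 750, "red"),
]

def rough_color_name(wavelength_nm):
    for lo, hi, name in BANDS:
        if lo <= wavelength_nm < hi:
            return name
    return "invisible to us"

for lam in (486, 532, 589, 656, 900):
    print(f"{lam} nm -> {rough_color_name(lam)}")
```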

But the question is a fair one even for images that are mainly or completely in the visible spectrum, because many of them are done in one type or another of “false color”. The true colors of nebulae, for example, are spectacular but are produced from a pretty limited palette. You have (as mentioned) the red of the H-alpha line, and right next to it there are a couple of equally red nitrogen lines. Past those, there’s an even deeper red (so deep that our eyes barely register it) from ionized sulfur atoms, the SII line. The OIII line from ionized oxygen atoms is in the blue-green region, and since that one’s in the visible range and we’re reasonably sensitive to it, you can buy visual filters for your telescope eyepieces that pass it and block the rest of the spectrum. It does indeed turn all the stars in the field of view into dimmer blue/green versions of themselves, but it also strongly improves the contrast of nebulosity, to the point where things that are just barely visible without the filter (or indistinguishable from dim stars, if they’re small) jump out at you with it attached. There are certainly H-alpha filters available as well, but they’re useless for visual observation alone because nothing in the sky (except the Sun) is bright enough for our eyes to respond to H-alpha light. There’s also an H-beta line out in the light blue/aqua part of the spectrum at about 486 nm, and that’s a contributor, too. You can buy H-beta filters for visual observing, but there are many objects that don’t emit enough of it to make it worthwhile – on the other hand, there are a few others for which it gives the best view, so it’s your call on whether to buy one. Finally, some of the blue colors you see in many nebulae are the result not of emission, but of nearby starlight reflecting off of dust clouds. It’s blue for the same reason the sky is blue to us – the blue light is scattered much more strongly by the dust and fine particles, so that’s what gets redirected to our line of sight. There are whole nebulae that are nothing but blue reflected light, like the ghostly faint streaky clouds among the Pleiades stars.

But through the telescope eyepiece, unfortunately, you’re not getting as colorful a spectacle. Stars themselves are naturally bright enough to give colors, and there are some great ones – deep red “carbon stars” that look like Christmas tree lights compared to their brighter surroundings, and great color-contrast double stars like Albireo, getting high up in the summer sky right now with its startling blue-and-gold. The deep-sky objects, though, simply aren’t bright enough (for the most part) to trigger much color vision. The big (and bright) Orion Nebula looks greenish to me, surely the result of OIII light, and so do some of the smaller brighter “planetary nebulae” from dying stars. But most of them are various shades of gray and off-white. Galaxies are invariably pale grey/white as well; no human eye can see colors from another galaxy’s stars and nebulae without photographic help.

Photographic help, that’s where the colors come in. When an astrophotographer (amateur or professional, backyard or Hubble or JWST) takes a deep-sky image, it’s always a composite of several different exposures. That lets you stack the images computationally and thus average away the inherent noise (and bring out more details), and it also gives you the chance to take different images through different filters. Often you’ll collect a lot of light through an H-alpha filter if you’re imaging a nebula region – after all, that’s where most of the light is coming from. But you’ll collect at other wavelengths as well, like OIII, and often take some more or less unfiltered exposures for overall brightness/luminance data.
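
The reason stacking works is plain statistics: averaging N aligned frames leaves the signal where it is but knocks the random noise down by roughly the square root of N. Here’s a simulated version (one flat patch of “sky,” a made-up noise level, and no frame alignment, which real stacking also has to handle).

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_SIGNAL = 100.0   # the brightness that's really there
FRAME_NOISE = 20.0    # random noise per frame, arbitrary units

for n_frames in (1, 4, 16, 64):
    frames = TRUE_SIGNAL + rng.normal(0.0, FRAME_NOISE, size=(n_frames, 10_000))
    stacked = frames.mean(axis=0)   # average the frames pixel by pixel
    print(f"{n_frames:>2} frames: residual noise ~{stacked.std():.1f} "
          f"(expected ~{FRAME_NOISE / np.sqrt(n_frames):.1f})")
```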

How you assemble all these is up to you! Many will remember the famous Hubble “Pillars of Creation” image from the Eagle Nebula. That, though, is not the color balance that the naked eye would see, even allowing for brightness and long exposure. That one uses the “Hubble Palette” to assign the Red, Green, and Blue channels (RGB) of a color image. Specifically, the Red channel is mostly SII light, with some H-alpha mixed in – so far, so good, since those are both red to start with, although the SII is so deep a red that our eyes barely register it. But the Blue channel is nothing but OIII light, which is really more of a green to us. What about the Green channel, then? Well, for this palette that one’s mostly red H-alpha light, with some OIII and SII as lesser components. Completely artificial! So what’s the “real” image?
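
In code, that channel assignment is nothing more exotic than stuffing the narrowband exposures into the red, green, and blue planes of an ordinary image. The mixing weights below are invented for illustration – the classic “SHO” shorthand is simply R = SII, G = H-alpha, B = OIII, and every imager blends to taste.

```python
import numpy as np

# Stand-ins for calibrated, aligned narrowband exposures (tiny 4x4 "images").
h_alpha = np.random.default_rng(1).random((4, 4))
s_ii    = np.random.default_rng(2).random((4, 4))
o_iii   = np.random.default_rng(3).random((4, 4))

red   = 0.8 * s_ii + 0.2 * h_alpha                  # mostly SII, a little H-alpha
green = 0.7 * h_alpha + 0.2 * o_iii + 0.1 * s_ii    # mostly H-alpha
blue  = 1.0 * o_iii                                 # OIII alone

false_color = np.clip(np.dstack([red, green, blue]), 0.0, 1.0)
print(false_color.shape)   # (4, 4, 3): a (very small) Hubble-palette-style image
```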

If you mean “real” as in “what our eyes would see if they could”, then the answer is “Mostly shades of red/orange/pink”. Problem is, our visual systems are not good at picking up contrast details in that region, which is why we see the sunlight spectrum carrying a big ol’ smear of hard-to-distinguish red shades on one side of it. If you really want to be able to pick out the edges and curls and tendrils in nebulosity, you don’t want to use “true color” as much. The Hubble Palette is just one choice, arrived at by experimentation, for showing off what’s really in a deep-sky image. Here’s an example of a different sort of object in the Hubble colors as opposed to a good shot at “real eyeball” ones, and you can see that the latter image, while still impressive, is much less detailed to the eye.

The JWST images are even further from “real color”, of course, because they have to be: most of their spectral range is outside what the human eye can see, anyway! So if these were rendered in nothing but “What you’d see if somehow your eyeballs were re-engineered for long exposure but we left everything else the same” mode, everything would be sort of deep red on black, hard to pick out, and most of the image wouldn’t show up at all. The same goes for ultraviolet astronomy, radio astronomy, X-ray astronomy, and so on: what “colors” are these images, really? It’s a meaningless question; our retinas and our brains don’t assign colors to these wavelengths because they can’t even detect them. If you want to see what these images have to show you, you have to pick colors of your own to show them to their best advantage.

Honestly, it was the same with film cameras and it’s the same with your iPhone. I used to shoot a lot of Fuji Velvia slide film back in the 1990s, taking closeup shots of nature subjects (flowers, insects, and so on), and the colors were great. But if you were foolish enough to shoot a portrait of your friends with Velvia, you had to be sure not to show it to them, because (among other things) it could bring out red and pink gradations in Caucasian skin tones like crazy. Some folks looked like they had sunburn or impetigo. iPhone shots now have trouble dealing with the hue and saturation differences that we can see in (for example) a red rose – the sensor and the algorithms in the background tend to just blow things out and make it more of an undifferentiated red (which can often be rescued, more or less, with tweaking later on). Pictures of snowy scenes, whether taken with a film camera in 1985 or an iPhone or digital SLR earlier this year, often will come out looking kind of dim and gray (and that went double for the film-camera era when you had prints made, because that’s another set of exposure choices laid on top of the ones that produced the negative). If you let the algorithms pick your exposures or your displayed color balances, they will tend to take the main parts of your image towards the middle. Middle green for a forest or lawn, middle blue for a shot with a lot of sky in it, middle white for a scene full of snow. And middle white is. . .gray. If you want that to actually be stark snowy white, you have to fix that up yourself.
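
A toy version of that “middle gray” behavior, just to make it concrete: a naive auto-exposure scales the image so its average lands at the classic 18%-gray metering target, which is exactly what turns a snowy scene dim and gray, and deliberate positive exposure compensation is the usual fix. All the numbers here are invented, and real metering is far cleverer than a straight mean.

```python
import numpy as np

rng = np.random.default_rng(0)
snow_scene = np.clip(rng.normal(0.9, 0.05, size=100_000), 0.0, 1.0)  # mostly near-white pixels

MIDDLE_GRAY = 0.18   # the classic 18%-reflectance metering target

auto_exposed = snow_scene * (MIDDLE_GRAY / snow_scene.mean())   # naive mean-based metering
compensated  = np.clip(auto_exposed * 4.0, 0.0, 1.0)            # ~ +2 stops, chosen by eye

print(f"scene mean          : {snow_scene.mean():.2f}  (bright, the way snow should be)")
print(f"naive auto-exposure : {auto_exposed.mean():.2f}  (pulled down toward gray)")
print(f"+2 stops of comp.   : {compensated.mean():.2f}  (white again)")
```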

And astrophotographers likewise have to fix their own images up themselves. I’m all for letting them pick the color palettes that bring out what there is to see in their images – especially for the things that start with no colors at all!