Which reminds me of a time a dozen years ago now that my boss was leaving to accept a faculty job elsewhere. Our trusty group of three walked into his office one day and asked him what he did all day, so that we could split those tasks between us and carry on.
"Um," he said. "Gulp."
As I recall, nothing much that was useful came out of that meeting.
Anyway…
I'm a scientist. I did a PhD in Wisconsin in 1985, with a thesis title of "The Soft X-ray Background as a Blast Wave Viewed from Inside". Basically I do astrophysics, mostly in the x-ray band. I started out as a theoretician; did a 2-year post-doc at the University of Virginia, and decided that space flight hardware is where the money is, and convinced a couple folks in Wisconsin to hire me on to projects like that: building stuff, and analyzing data.
I did a few papers using data from IUE, the International Ultraviolet Explorer, a very small telescope with an ultraviolet spectrometer aboard; basically a forerunner or pathfinder or something for the Hubble (some of its instruments, at least). That was half of my job, more or less, until Hubble was launched. The guy I worked for was part of the team for the Goddard High Resolution Spectrograph (GHRS), and had some observing time because of that. We had to trim the planned program several times when it became apparent that the telescope couldn't be focused because of spherical aberration in the mirror. It was both exciting and sad. But not as sad as another guy in the department there, who'd been Principal Investigator for the High Speed Photometer (HSP), which wasn't very usable with the degraded mirror, and was removed to make room for the corrective optics. Basically, they got nothing.
The other half of my job in those days was working with the hardware team on an attached Space Shuttle payload, the Diffuse X-ray Spectrometer. This was developed by the lab I'd worked in as a grad student (my advisor was their pet theoretician), as a sounding rocket payload for proof-of-concept.
A diversion:
The aforementioned Soft X-ray Background is a diffuse glow of x-ray emission that covers the sky and fills in the space between the stars and galaxies and such that are visible in x-rays. The team in Wisconsin (and several other teams) had mapped the emission using a series of sounding rocket payloads that basically took images of the sky in various x-ray energy (or wavelength) bands. Think of it as a color picture with R, G, and B broad-band colors.
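To make the color-picture analogy concrete, here's a minimal sketch of stacking three broad-band count maps into a false-color RGB image. The maps here are random placeholders, not real survey data:

```python
import numpy as np

# Three broad-band all-sky count maps on a crude grid; the Poisson
# noise stands in for real rocket-survey data.
ny, nx = 180, 360
band_low  = np.random.poisson(10.0, (ny, nx))   # softest band -> red
band_mid  = np.random.poisson(10.0, (ny, nx))   # middle band  -> green
band_high = np.random.poisson(10.0, (ny, nx))   # hardest band -> blue

def stretch(img):
    """Scale a count map to the 0..1 range for display."""
    img = img.astype(float)
    return img / img.max()

# Stack the bands into an (ny, nx, 3) false-color image; this can be
# handed to e.g. matplotlib's imshow() for display.
rgb = np.dstack([stretch(band_low), stretch(band_mid), stretch(band_high)])
```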
We think the x-rays come from highly excited atoms in space. There are nowadays two competing ideas: one is that there's a volume of hot gas (about a million Kelvin) more or less centered on the solar system with a radius of about 100 parsecs (300 light-years) and about one supernova's worth of thermal energy in it. You can see the cavity in other wavelength bands because of an absence of absorption towards stars out to roughly 100 parsecs in most directions.
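That "one supernova's worth" is easy to check on the back of an envelope. Here's a sketch, with assumed round numbers (a 100-parsec bubble of fully ionized gas at a million Kelvin, and an electron density of about 0.005 per cubic centimeter, a typical textbook value rather than anything measured here):

```python
import math

k_B = 1.38e-16   # Boltzmann constant, erg/K
pc  = 3.086e18   # parsec in cm

R   = 100 * pc   # bubble radius
T   = 1.0e6      # temperature, K
n_e = 0.005      # electron density, cm^-3 (assumed value)

V = (4.0 / 3.0) * math.pi * R**3
# Fully ionized hydrogen, electrons plus protons:
#   E = (3/2) * (n_e + n_p) * k_B * T * V  ~  3 * n_e * k_B * T * V
E_thermal = 3.0 * n_e * k_B * T * V

print(f"Thermal energy ~ {E_thermal:.1e} erg")
# Comes out to a few times 1e50 erg. A canonical supernova releases
# ~1e51 erg, so "about one supernova's worth" checks out.
```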
The other idea is that it's coming from the solar wind. The sun spits out gas, mostly H and He, but the heavier elements get ionized 5 to 10 times (if they have that many electrons to give). Neutral atoms from interstellar space (oh, yeah, there's a mostly neutral cloud filling a space about 10 parsecs across right in our vicinity… weird, eh?) stream into the solar system, ignoring all the magnetohydrodynamic shocks and stuff at the edges of the heliosphere. Then you can get reactions like this:
H^0 + O^8+ --> H^+ + O^7+*,
where the asterisk denotes the fact that the final ion ends up in a highly excited state. The electron can then cascade down to the ground state, emitting x-rays along the way.
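You can get a feel for where those x-rays land with the simple hydrogenic (Rydberg) formula, since O^7+ with its one captured electron is hydrogen-like. A small sketch:

```python
# Transition energies of a hydrogen-like ion with nuclear charge Z:
#   E = 13.606 eV * Z^2 * (1/n_lower^2 - 1/n_upper^2)
RYDBERG_EV = 13.606   # hydrogen ground-state binding energy, eV
Z = 8                 # oxygen

def transition_energy_ev(n_upper, n_lower, Z):
    """Photon energy for a hydrogen-like ion, in eV."""
    return RYDBERG_EV * Z**2 * (1.0 / n_lower**2 - 1.0 / n_upper**2)

for n in (2, 3, 4):
    e = transition_energy_ev(n, 1, Z)
    print(f"O^7+  {n} -> 1 : {e:6.1f} eV")
# The 2 -> 1 transition comes out near 653 eV, the O VIII
# Lyman-alpha line familiar from soft x-ray spectra.
```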
The idea of DXS was to obtain a spectrum of the x-rays in the 0.15 to 0.28 keV energy band (85 down to about 44 angstrom wavelengths), where elements like iron, magnesium, silicon, and sulfur should be emitting, so we could figure out what the temperature is, whether it's thermal or charge-exchange, or what.
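The energy band and the wavelength band are the same statement in different units, via E[keV] = 12.398 / lambda[angstroms]. A quick sanity check of those band edges:

```python
HC_KEV_ANGSTROM = 12.398   # h*c in keV * angstrom

for wavelength in (85.0, 44.0):
    print(f"{wavelength:5.1f} A  ->  {HC_KEV_ANGSTROM / wavelength:.3f} keV")
# 85 A is about 0.15 keV and 44 A is about 0.28 keV.
```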
Anyway. Building space-flight hardware is much like other engineering projects, except everything has to be highly reliable and work far from help in a hostile environment. We had a team (same guys, give or take, who built the HSP for Hubble) of mechanical, electrical (both digital and analog), software, and systems engineers, basically to take the sounding rocket payload and turn it into an attached shuttle instrument.
Since mechanical and electrical engineers can only just barely communicate, and since there are gadgets like pressure transducers and strain gauges that are mechanical on one end and have wires hanging out the other, there's a problem. The PI's solution to that problem was me: throw a wise guy with a physics degree into the mix. Granted, I don't know shit about engineering, but I do kinda know about the physics behind it, and I'm pretty good at thinking on my feet, and translating. That skill also turned out to be handy for negotiating between the science folks and the software people.
The instrument flew on the STS-54 mission of the Space Shuttle Endeavour in January, 1993, just prior to Bill Clinton's inauguration. We controlled it from a windowless room at the Goddard Space Flight Center (GSFC) in Greenbelt, MD.
The mission was a qualified success. There was a roughly 24-hour period (of the six-day mission) when the instrument wasn't working, so we reshuffled all the shiftwork, leaving me running the 6pm to 6am shift while the PI consulted with the brain trust about how to resuscitate the instrument. We managed that, got it running, and obtained sufficient data to show that, in fact, the diffuse soft x-ray background has emission lines in its spectrum (not a surprise, but it was the first time anybody'd actually seen them). Here is a little article I wrote about it shortly after the mission. (Credit for web design goes to reel_life.)
Figuring out what that means, beyond that it probably comes from one of the two ideas outlined above, turns out to be a really hard problem, exacerbated by the fact that nobody has either measured or calculated in detail what highly ionized atoms of silicon, sulfur, magnesium, iron, and so forth should actually *do*, in terms of emitting x-rays.
So by 1995 that job was mostly done except for the data analysis and interpretation. It turned out that there was a project to build a high-spatial-resolution, general-purpose x-ray telescope and fly it, and they were hiring.
That telescope became the Chandra X-ray Observatory. I went to work for them at the Smithsonian Astrophysical Observatory (SAO), which shares a building, a director, a library, and some other stuff with the Harvard astronomy department, forming the Harvard-Smithsonian Center for Astrophysics. In some senses this is The Place To Be for astrophysics in the US. It's often the case that I can walk down the hall and knock on the door of the world's expert on some obscure bit of astronomy or astrophysics.
My part of the action before launch was to participate in the ground calibration, where we put the mirror assembly on its side in a big vacuum tank at the Marshall Space Flight Center, shone real x-rays through it from a source 600 meters away, and measured the results at the focal point. This was to verify a very fancy ray-trace code that several of my colleagues had built to predict the performance of the telescope. We had an array of x-ray detectors to use for the calibration, including a microchannel plate camera, some proportional counters (basically Geiger counters that were tuned differently), and some solid-state detectors (germanium in our case). Trying to calibrate anything to 1% accuracy is seriously a bitch. The approach was to build detectors as nearly identical as possible, put one (or more) in the beam beside the telescope and one at the focal point of the telescope, and divide the signals, hoping the instrumental details would all cancel.
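The divide-the-signals idea is simple enough to sketch. If the beam-side and focal-plane detectors really have identical efficiencies, those efficiencies cancel in the ratio, leaving (for instance) the effective area of the optic. All numbers below are invented for illustration:

```python
# Writing the count rates in terms of the beam flux and detector
# efficiencies:
#   beam_rate  = flux * eff_beam  * beam_aperture
#   focal_rate = flux * eff_focal * effective_area
# If eff_beam == eff_focal, the efficiencies cancel in the ratio.
beam_rate = 1000.0          # counts/s, beam-normalization detector
focal_rate = 400.0          # counts/s, focal-plane detector
beam_aperture_cm2 = 1.0     # aperture of the normalization detector, cm^2

effective_area_cm2 = (focal_rate / beam_rate) * beam_aperture_cm2
print(f"effective area ~ {effective_area_cm2:.2f} cm^2")
```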
Ten years later (no, 13 now), we're still trying to get that cancellation to happen.
Getting the absolute sensitivity of an astronomical instrument is tricky. In the optical, a lot of stuff is tied to the brightness of the star Vega, which by definition is apparent magnitude zero in whatever optical band you pick. But how bright is Vega, in real units? We set out to create an absolute standard in x-rays. I think we accomplished that, to an accuracy certainly better than 10%, and maybe as good as 3%. The details are ugly and still controversial.
Chandra was launched from the Space Shuttle on STS-93 in July 1999. I got to go to Cape Canaveral and watch. A night shuttle launch is well worth seeing; lights up the whole county. The VIP stand is 3 miles from the launch pad, near a little museum which houses one of the few remaining Saturn V rockets.
Around the time of launch, the guy on the calibration team who was tasked with organizing the calibration of the prime camera (a CCD camera called ACIS, the Advanced CCD Imaging Spectrometer) decided he'd rather be in academia. So my boss looked at the dozen or so scientists in her group and asked the fateful question: "Who in the Calibration Group knows about instruments?" You might think a properly staffed program, especially a 2 Gigabuck program, would never have to ask that question, but hey. My number came up.
So as a day job, I calibrate ACIS.
What does that mean, exactly? I like to summarize it by saying that we make numerical models of the telescope, so that people can tell the difference between astrophysics and instrumental hallucinations.
Another aside might be in order.
ACIS is a CCD camera, not unlike the ones in digital cameras nowadays. When used as an optical detector, a CCD accumulates charge in each pixel in proportion to the amount of light it sees during what are often very long exposures in astronomical use, and then the whole image is read out at once and digitized. For x-ray use, the array is read out often (every 3.2 seconds by default); the idea is to have no more than one x-ray photon in any small area of the chip per frame. The amount of charge deposited is proportional to the energy of the x-ray photon. The onboard software then looks at the image, snips out a 3x3 pixel island around each local maximum, and sends the information to the ground. There's not enough bandwidth to send the whole images, so just sending snippets near what might be x-ray events has to suffice.
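Here's a toy version of that event-finding step, just to show the shape of the algorithm. The threshold and frame contents are made up, and the real flight code is rather more careful:

```python
import numpy as np

def find_events(frame, threshold=50):
    """Return (row, col, island) for each candidate x-ray event."""
    events = []
    nrows, ncols = frame.shape
    for r in range(1, nrows - 1):
        for c in range(1, ncols - 1):
            pix = frame[r, c]
            island = frame[r-1:r+2, c-1:c+2]
            # A local maximum above threshold is a candidate event.
            if pix >= threshold and pix == island.max():
                events.append((r, c, island.copy()))
    return events

frame = np.random.poisson(5, (64, 64))   # fake frame: mostly noise
frame[10, 20] += 200                     # one injected "photon"
for r, c, island in find_events(frame):
    print(r, c, island.sum())            # summed pulse height of the island
```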
The basic data product is a list of "events" (which may be x-rays, cosmic rays, or some other kind of "background"). Each has a time of arrival (when it was read out on the satellite), a bunch of coordinates ranging from raw chip coordinates to computed sky coordinates, the raw pulse heights in the nine-pixel island, their sum, and the energy derived from that. Probably some other stuff as well; I forget. In addition, the user gets lots of "engineering data" about the state of the satellite: where it was pointed, the temperatures, voltages, currents, etc. etc. etc., most of which is used by the data "reduction" software (it makes the data set larger) but can otherwise be ignored.
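For concreteness, one row of such an event list might be sketched like this. The field names are invented for illustration; the real files are FITS tables with their own column conventions:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    time: float           # readout time, spacecraft clock seconds
    chipx: int            # raw chip coordinates...
    chipy: int
    x: float              # ...through to computed sky coordinates
    y: float
    phas: np.ndarray      # the nine raw pulse heights of the island
    pha: int              # their sum
    energy: float         # energy (eV) derived from the summed pulse height
```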
The camera has 10 CCD chips, roughly a square inch each. These are arranged in two arrays, one of 2x2 chips for imaging, and one of 1x6 chips for use as a readout for the grating spectrometers (which disperse x-rays in proportion to their wavelengths).
So many pixels (10 million), so little time. We have an onboard calibration source; it's a small amount of radioactive iron-55, which fluoresces lines of manganese, titanium, and aluminum at known energies. We use those data to calibrate the energy response of the CCD chips.
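The gain calibration itself boils down to a line fit: plot the measured pulse heights of those lines against their known energies (the Al, Ti, and Mn K-alpha lines at 1.487, 4.511, and 5.899 keV) and fit the conversion. A sketch, with invented pulse-height measurements:

```python
import numpy as np

line_energy_kev = np.array([1.487, 4.511, 5.899])   # Al, Ti, Mn K-alpha
pulse_height_adu = np.array([ 405., 1232., 1612.])  # made-up measurements

# Linear gain model: energy = gain * pulse_height + offset
gain, offset = np.polyfit(pulse_height_adu, line_energy_kev, 1)
print(f"gain = {gain*1000:.3f} eV/ADU, offset = {offset*1000:.1f} eV")

def pha_to_energy_kev(pha):
    """Convert a summed pulse height to energy with the fitted gain."""
    return gain * pha + offset
```

In practice this has to be repeated per chip, per region, and per epoch, which is where the automation comes in.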
We also observe various sky sources periodically, for various reasons. One of my personal favorites is the supernova remnant 1E0102-72.3 (E0102 for short) in the Small Magellanic Cloud (a companion galaxy to the Milky Way). It's been the Astronomy Picture of the Day poster child a few times. It's bright, and it has a simple x-ray spectrum: the x-rays come from gas that's almost exclusively oxygen and neon that was ejected during the supernova explosion and is now being shocked to temperatures of about two million Kelvin.
As you can imagine, with 10 megapixels to characterize, there needs to be a lot of automation. The full-fledged data reduction software is written by professionals, most of them on contract with us. (You want a job?) But there's a fair amount of fiddly little scripting (to get tool A to talk to tool B) and minor programming work (algorithm development, proof of concept, etc.) that we do ourselves, in whatever language comes to hand. In my case that's usually perl, C, Fortran, IDL (the Interactive Data Language; see also GDL. Not to be confused with the Interface Design Language), or S-Lang. Or sometimes several different languages at once (C-shell scripts to run a bunch of tools end to end, for example).
The ultimate product of the calibration is a so-called Calibration Database, which contains products such as a bad pixel list; the quantum efficiency of the CCD camera (the probability of a photon being detected, given that it hits the front surface of the filter) as a function of time, position, and photon energy; and the response function, which is the distribution of pulse heights you get for photons of a given energy (it's not a perfect spectrometer, so this has a finite width, which again depends on the position and photon energy, and changes slowly over the years).
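Those products are what let you compare a model with data: you fold the model through the instrument rather than trying to invert the data. A toy sketch, with stand-ins for the real calibration files:

```python
import numpy as np

# predicted_counts = RMF @ (QE_and_area * model_flux)
n_energy, n_channels = 200, 150
energies = np.linspace(0.3, 10.0, n_energy)          # keV

model_flux = energies**-1.7                          # toy power-law source
qe_and_area = np.full(n_energy, 300.0)               # toy effective area, cm^2

# Toy redistribution matrix: each energy spills into a few channels
# around its nominal one (the finite-width response mentioned above).
rmf = np.zeros((n_channels, n_energy))
for j, e in enumerate(energies):
    center = int(e / 10.0 * (n_channels - 1))
    for i in range(max(0, center - 2), min(n_channels, center + 3)):
        rmf[i, j] = 1.0
    rmf[:, j] /= rmf[:, j].sum()                     # conserve counts

predicted_counts = rmf @ (qe_and_area * model_flux)  # counts/s per channel
```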
Lately I've been working on "trap maps" which characterize the charge transfer inefficiency (CTI), i.e. the amount of charge that's lost clocking the signal across the chip to the readout. We have some software to correct some of the effects of CTI, and it feeds on one of these trap map files.
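The simplest possible picture of a CTI correction: if a small fraction of the charge is lost at every row transfer toward the readout, an event far from the readout is attenuated by that fraction compounded over the number of transfers, and a first-order correction divides it back out. The real trap maps vary across the chip; the sketch below uses a single assumed number:

```python
cti = 2.0e-4            # charge lost per row transfer (assumed, uniform)

def correct_pulse_height(measured, row):
    """Undo the mean charge loss for an event detected at `row`."""
    return measured / (1.0 - cti) ** row

print(correct_pulse_height(1500.0, 800))   # event far from the readout
```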
Supposedly, I have 30% of my time to do my own science, either with Chandra data or on any other subject in astrophysics I might choose (and can get funded). It doesn't work out that way, but in the summer of 2008 a couple of us split the time and attention of an undergraduate intern from Michigan, who did some interesting work setting limits on how much of the energy in the Cygnus Loop (a.k.a. Veil Nebula) supernova remnant blast wave can be going into accelerating charged particles to cosmic-ray energies. We published this paper on that work.