(The following editorial first appeared in the June 1998 issue of Analog Science Fiction and Fact. Thirty-five others, on a wide range of topics, are collected in the 2002 Tor book Which Way to the Future? A different one (usually one not available in the book) will be posted here a few times a year. And, of course, brand-new ones appear in each issue of Analog.)
Not everything a person learns in school is in the official curriculum. I've often remarked, for example, that one of the most disappointing things I learned as a graduate student in physics was just how much research is done not because it's particularly interesting or important, but simply because somebody needs a publication and a search of the existing literature turns up something that could be done and hasn't been.
That's a true observation, and an important lesson for anyone contemplating a scientific career. Such a career will not, in general, be one of continuous excitement and glamor and groundbreaking discoveries. Much of it, even for most of those lucky enough to achieve those things on a few occasions in their careers, will consist of long periods of tedious drudgery, frustration, and work that does nothing more than fill in a few gaps in an incomplete picture of the universe. That realization might lead some to the conclusion that a research career is not for them, and others to look harder for really good research topics.
But there's another side to all this. Deciding that much research is routine or even trivial, motivated by nothing nobler than the desire to publish rather than perish, might do more than lead some potential scientists to choose their topics more carefully or to go into a different kind of work. It might lead those who support research to decide that much of it isn't worth supporting. There is already evidence of this tendency in the current fashion among industrial labs and funding agencies to drop support for basic research and put money only into those things that seem to have a clear and large potential for producing "useful results."
That, in the long run, could be one of the worst mistakes ever made. You can't always tell in advance which research will be particularly interesting or important, and the kinds that eventually prove to be the most significant may appear among the least promising until they're actually done. Fundamental breakthroughs, by their very nature, cannot be predicted or extrapolated. The experiments that you can confidently predict will have "useful results" are those in areas that you already understand pretty thoroughly.
Except that sometimes it turns out that you don't understand them as thoroughly as you thought you did. My own thesis research made extensive use of the Mössbauer effect, a peculiar phenomenon involving gamma-ray absorption by atomic nuclei embedded in solids. Discovered in 1957, it quickly became an important research technique in a wide range of studies in nuclear physics, solid-state physics, and relativity. But it wasn't discovered because anybody predicted that it ought to happen and went looking for it.
It was discovered because Rudolf Mössbauer, then a graduate student, did what seemed a fairly routine experiment with nuclear resonance scattering. He expected the effect he was observing to decrease when he lowered the temperature of his samples, but instead it increased. At this point most of us would have spent a prolonged period kicking our equipment and trying to figure out why it wouldn't give the results it was supposed to. I suspect Mössbauer also did his share of that, but eventually went beyond it to figure out what was really happening, which turned out to be explainable in terms of existing theory, but only if he considered parts of the theory that had not at first seemed pertinent. His work eventually led to his doctorate, a Nobel Prize, and a vast body of applications by other physicists and chemists.
Another example is familiar to almost anyone who's been following the highlights of recent science; that is, one of the eventual results is familiar, though the path leading to that result is full of surprises. I refer to the recent fairly wide consensus that the extinction of the dinosaurs was attributable, at least in large part, to the impact of a huge asteroid hitting Earth some 65 million years ago. "Almost everybody" now knows that. A good many people also know that the research leading to that conclusion grew out of the discovery that there was too much iridium in a thin layer of clay laid down between the Cretaceous and Tertiary periods. Very few know that that research almost wasn't done because it seemed unlikely to be worthwhile.
The man who did it was physicist Frank Asaro (who also happens to be the father of science fiction writer Catherine Asaro), and I'm grateful to him for spending some of his valuable time telling me the story. The iridium measurements were originally suggested by physicist Luis Alvarez and his geologist son, Walter, for reasons having nothing to do with either dinosaurs or asteroids.
Walter, in the late 1970s, had been studying sediments in the Apennines, trying to read the strata for information about paleomagnetism: the history of past reversals and other changes in Earth's magnetic field. He needed to determine how long it took to lay down that one-centimeter layer of clay, and Luis suggested that certain techniques of physics could be used to do so. In particular, he thought that a good clue would be the amount of iridium in the layer, since there isn't much natural iridium on Earth but it's deposited at a slow but steady rate from "cosmic" sources.
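(To make the logic of that clock concrete, here is my own back-of-envelope sketch; the symbols are mine, not anything from the original papers. If cosmic iridium settles out at a roughly constant rate, the time the clay took to accumulate is simply the iridium it contains divided by that rate. Writing $\sigma_{\mathrm{Ir}}$ for the measured iridium per unit area of the layer and $\phi_{\mathrm{Ir}}$ for the assumed steady cosmic deposition rate per unit area per year,

$$t \approx \frac{\sigma_{\mathrm{Ir}}}{\phi_{\mathrm{Ir}}} \ \text{years}.$$

At a fixed $\phi_{\mathrm{Ir}}$, too much iridium would read as an implausibly long deposition time; the alternative, as events proved, was an extra source of iridium.)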
A good technique for measuring the iridium would be neutron activation analysis, and the senior Alvarez knew that Frank Asaro had done some notable work in that field. So he asked Asaro if he would do some neutron activation measurements on a sample of the K-T clay layer.
Dr. Asaro initially declined. He had already done some looking for iridium in what seemed similar contexts, without success. He didn't expect to find anything, and he was backlogged on "the work that paid the bills," so there didn't seem to be much point in spending time and resources on what would probably be a fruitless experiment.
However, the Alvarezes were very persuasive, and Asaro had recently acquired a new and more sensitive detector, and eventually he did catch up on the other work enough to take a look at that clay. And, to everybody's surprise, he found iridium: lots of iridium, relatively speaking. Not a lot by everyday standards, of course, but far more than any of the people involved could explain by the mechanisms they expected to be acting. So the initial impression was that "the experiment failed."
Failed, that is, in the sense that it didn't give the expected result. But as we now know, it succeeded in the far more important sense of giving an unexpected result that was reproducible, in a widespread way. There ensued a whole series of experiments, each designed to check the results of the one before it and/or answer a new question raised by the earlier results. There are many fascinating details along the way that nicely illuminate some of the quirks of how science really happens, but this is not the time or place to describe them. Suffice it for now to say that that chain of experiments, and the hypotheses formulated and tested to account for each set of results, eventually led to the hypotheses, now widely accepted, that a major asteroid hit the Earth 65 million years ago, and that the consequences of that impact included the extinction of the dinosaurs.
In telling me the story (and any errors in the retelling here are mine, not his), Frank Asaro modestly gave credit wherever it was due, even to the point of saying at one point that another research group "should have made this discovery. We just got lucky and asked the right questions."
That is a very telling remark for the point I'm trying to make here. The really big discoveries are commonly made by somebody who "got lucky and asked the right questions," but nobody's sure which questions are the right ones until they're answered. The common pattern, so well illustrated by the Alvarez-Asaro research, is that the later experiments were ones that the researchers knew were interesting because they were specifically intended to solve puzzles created by earlier results. But the seminal experiment, from which that whole line of inquiry grew, was one that didn't look particularly interesting or rewarding to the man who did it. It turned out to be extraordinarily interesting only after the fact, because it gave an unexpected answer to a question the experimenters didn't realize they were asking.
That's why we need even the kinds of research that I found so disillusioning in grad school. Since you can't know in advance which questions will have especially interesting or important answers, it's important to ask, and answer, as many of them as you can. Anywhere you can find an unexplored nook or cranny, and someone willing and able to look into it, doing so is a worthwhile and commendable pursuit. True, most of the contributions made that way will be incremental: more like a brick laid in the edifice of knowledge than the foundation of a new wing. But buildings don't get built if too many bricklayers decide their bricks aren't worth the trouble of laying. Few scientists will ever have the opportunity to revolutionize their fields or have effects named after them. But anyone who formulates an intelligent question that hasn't been answered, and a way to answer it, is doing something with the potential for that kind of significance.
Yes, both researchers and funding agencies must give careful thought to which research is worth doing or supporting, and to making sure that experiments are designed in an intelligent way to get maximum information from the available resources. But the promise of immediately useful results should not become the sole criterion for deciding that. A ruthless insistence on doing only "useful" research may guarantee us a higher percentage of small payoffs. But in so doing, it may make it less likely than ever that we'll hit any really big ones.