Feature
THE TEN THINGS EVERYONE SHOULD KNOW ABOUT SCIENCE
By Clive Cookson
Financial Times (UK)
November 24, 2007
http://www.ft.com/cms/s/0/116f20f8-9a2f-11dc-ad70-0000779fd2ac.html
You may be able to quote Shakespeare, but what are you like on Big Bang theory? Clive Cookson gives non-scientifically minded readers a leg up the tree of knowledge.
Can you distinguish molecules from atoms? Genes from genomes? Do you know what makes an experiment statistically significant? If not, do you care? Are you embarrassed by your scientific ignorance -- or almost proud of it?
Scientists have been complaining for decades that, while they would be ashamed to admit knowing nothing about Jane Austen's novels, literary colleagues can get away with total ignorance of relativity and quantum theory. As Larry Summers noted on his installation as Harvard University president in 2001, students rarely admit to never having read a Shakespeare play but find it “acceptable not to know a gene from a chromosome or the meaning of exponential growth.”
In Britain, it is socially acceptable for an arts graduate to say with a certain insouciance: “I failed chemistry GCSE” or “I scraped a C grade in maths.” But a scientist would be brave indeed to reply: “Well, I only got a D in English literature.”
In this science issue of the FT Magazine we do our bit to fill the ignorance gap, by explaining the ten key concepts that everyone needs to understand if they are not to feel an ignoramus when science comes up in conversation -- and if they are to have a handle on important developments reported in the news. This selection is intended to encompass the mainstream fields of biology, physics, and chemistry.
Inevitably, such a list is somewhat arbitrary. Concepts that just missed the top ten include risk, plate tectonics, and the laws of thermodynamics. But compiling this list was simpler and less contentious than, say, choosing the best 20th-century novels -- let alone the most important concepts in literature -- because fewer candidates are available for selection, and there is more agreement among scientists than literary critics about what really matters.
Ruthless simplification is required to squeeze complex subjects, on which libraries of textbooks have been written, into chunks of about 300 words. Non-scientific readers will, I hope, find the explanations reassuring rather than perplexing. A good reaction would be: “Ahh, I see . . . that's not as difficult as I thought.” As well as providing a basic description of each concept, we have indicated why it matters, and how the field is likely to move forward. We also assess the level of fear it may induce in the uninitiated.
The important thing is not to be too earnest about science. The main reason that children turn off the subject is that it stops being fun, argues Natalie Angier, a leading U.S. science writer whose guide to scientific essentials *The Canon* will appear in the U.K. next year.
“Science should not be seen as a boring body of facts but as an exciting series of ideas,” says Angier.
“Science is not just one thing, one line of reasoning, or a boxable body of scholarship like, say, the history of the Ottoman empire. Science is a great ocean of human experience.”
In Britain, experts in science communication also say there is too much concern about knowledge for its own sake. Clare Matterson, who runs the extensive public engagement activities of the Wellcome Trust, the research charity, says: “It is not important for everyone to know everything about science, but we should all be able to ask the question: ‘How do you know?’ The more relevant we can make science, the more people have a place to become interested and engaged.”
Beyond the intrinsic intellectual interest, there are myriad practical reasons why as many people as possible should have a basic knowledge of science. An obvious one is that a scientifically savvy population is less likely to fall victim to fraud and superstition, from astrology to quack cures. And when so many contemporary political issues (from global warming to embryo research) have a big scientific component, voters and politicians need to understand what is really at stake.
The icon of transformation from scientific ignorance to wisdom is the travel writer Bill Bryson. Shame about not knowing a proton from a protein, or a quasar from a quark, prompted him to spend three years researching and learning what he was missing. The result was A Short History of Nearly Everything, the best science book of the 21st century so far. If you do not have three years to spare, reading his book is an excellent short cut. And if you don't even have time for the book, this magazine is your quick fix.
EVOLUTION
Evolution through natural selection remains as valid today as it was 150 years ago when expressed with great elegance by Charles Darwin in The Origin of Species. The mechanism of evolution depends on the fact that tiny heritable changes take place all the time in all organisms, from microbes to people.
As a result of these random changes, each member of every new generation differs slightly from its predecessors. Most of the variations will have a neutral or negative effect on the organism's ability to live and reproduce, but occasionally a change enhances its ability to thrive in the environmental niche in which it finds itself. Such beneficial mutations tend to propagate through the population.
An important feature of Darwinian evolution is that it operates at the level of the individual. There is no mechanism for natural selection to change the species as a whole, other than through the accumulation of changes that lead to the survival of the fittest individuals.
The rate of evolution varies enormously between different types of organism and different environmental circumstances. It can proceed very quickly when the pressure is great, as, for example, with bacteria exposed to antibiotics, when drug-resistant mutations may arise and spread through the bacterial population within months.
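For readers who like to see the machinery laid bare, the logic of natural selection is simple enough to run on a computer. Here is a minimal sketch in Python (the population size, mutation rate, and 5 per cent fitness advantage are invented for illustration, not taken from any real organism):

```python
import random

POP_SIZE = 200         # individuals in the population
ADVANTAGE = 1.05       # carriers leave 5% more offspring on average
MUTATION_RATE = 0.001  # chance per birth of acquiring the beneficial variant

# Start with a population in which nobody carries the mutation.
population = [False] * POP_SIZE  # True = carries the beneficial mutation

for generation in range(301):
    # Selection: each parent's chance of reproducing is weighted by fitness.
    weights = [ADVANTAGE if carrier else 1.0 for carrier in population]
    offspring = random.choices(population, weights=weights, k=POP_SIZE)
    # Variation: occasionally a copying error introduces the variant.
    population = [c or (random.random() < MUTATION_RATE) for c in offspring]
    if generation % 50 == 0:
        share = sum(population) / POP_SIZE
        print(f"generation {generation:3d}: {share:5.0%} carry the mutation")
```

Run it a few times: the variant typically lingers at low frequency for many generations, then sweeps through the whole population once chance and selection combine in its favour -- the same pattern seen when drug resistance spreads through bacteria.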
Why does it matter?
Evolution is coming under renewed assault, particularly in the U.S., from fundamentalist Christians who want creationism to be taught in schools. Although evolution has had virtually unanimous support from professional scientists for at least a century, polls show that American public opinion still favours creationism.
What next?
Biologists still have to do a vast amount of work to pin down the history of evolution. Big questions to be answered include how life started, why evolution accelerated rapidly during some geological periods, and which factors gave rise to human intelligence.
Fear factor: sticky palms
GENES AND DNA
Darwin could not understand the biochemical mechanism of evolution, but 20th-century genetics has shown that the basic unit of heredity is the gene, which is made out of DNA. We have two copies of each of the 20,000 or so human genes, one copy inherited from each parent; if one is defective, the other can usually fill in for it.
As Francis Crick and James Watson famously discovered in 1953, DNA has a “double helix” structure: two interlinked spirals of biochemical units called nucleotides. There are four nucleotides, known by their initial letters G, A, C and T. In a molecular model of DNA, the two strands look like a twisted stepladder, with paired nucleotides forming the rungs.
The genetic code is the same in all living creatures. It translates the sequence of DNA nucleotides into amino acids, the corresponding building blocks of proteins. (Proteins are the biological molecules that do most of the work in our bodies.) Random mutations in DNA, together with the genetic mixing that takes place through sexual reproduction, make possible the variations that drive evolution.
The nucleus of every human cell contains six billion DNA nucleotides packaged into 46 chromosomes, which together make up the genome.
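For the curious, the genetic code can even be written down as a lookup table. The cell reads nucleotides three at a time -- each triplet, or “codon,” specifies one amino acid -- and a few lines of Python capture the idea (only five of the 64 codons are shown, the sample sequence is invented, and in the cell the DNA is first transcribed into RNA, a step skipped here):

```python
# A small slice of the universal genetic code: DNA codon -> amino acid.
CODON_TABLE = {
    "ATG": "Methionine",  # also serves as the usual "start" signal
    "GCA": "Alanine",
    "AAA": "Lysine",
    "TGG": "Tryptophan",
    "TAA": "STOP",        # marks the end of the protein
}

def translate(dna: str) -> list[str]:
    """Read the DNA three letters at a time and look up each codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGCAAAATGGTAA"))
# -> ['Methionine', 'Alanine', 'Lysine', 'Tryptophan']
```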
Why does it matter?
Now that the DNA sequence of the human genome is known, scientists are beginning to interpret the endless string of Gs, As, Cs and Ts -- and discover how our genes interact with our environment to make us the people we are.
What next?
The medical benefits of knowing the human genome are arriving more slowly than the enthusiasts led us to believe when the sequencing project was completed five years ago, but they are on their way. The destination is often called "personalized" or "individualized" medicine -- tailoring our lifestyle and treatments to our genes.
Fear factor: mild tremors
BIG BANG
For half a century the Big Bang has been the standard cosmological model of our universe. It holds that all matter and energy originated in a “singularity” -- a point of infinite density and temperature. Ever since the Big Bang, the universe has been expanding and cooling down.
Three main strands of evidence support Big Bang theory. First, galaxies are moving away from us at speeds proportional to their distance, suggesting expansion from a single point. Second, the universe is pervaded with “cosmic microwave background radiation,” presumed to be a faint afterglow of Big Bang energy. Third, the amounts of the most common chemical elements that astronomers observe in space correspond closely to the extrapolations of Big Bang theory.
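That first strand of evidence is usually written as Hubble's law: a galaxy's recession velocity v is proportional to its distance d. A back-of-envelope example, using the commonly quoted figure of about 70 km per second per megaparsec for the Hubble constant:

```latex
v = H_0\, d, \qquad H_0 \approx 70\ \mathrm{km\,s^{-1}}\ \text{per megaparsec}
```

So a galaxy 100 megaparsecs away -- roughly 330 million light years -- recedes at about 70 × 100 = 7,000 km per second. Double the distance and the speed doubles, exactly as expansion from a single point requires.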
What came before the Big Bang? There is no scientific way to find out but this has not stopped cosmologists, as well as philosophers and theologians, speculating. According to a popular hypothesis, there might be an infinite number of universes, each with slightly different laws of physics; a new universe could start from a singularity in an existing one.
And what does the future hold? One possibility is that everything will come together again in a Big Crunch, after countless billions of years.
But at the moment cosmologists believe it is more likely that our universe will expand for ever into a cold, desolate nothingness.
Why does it matter?
The history, current structure, and future prospects of our universe have little impact on daily life on earth, but, intellectually, cosmology is one of the most exciting fields of science today. Recent discoveries in astronomy suggest that ordinary matter -- in the form of visible planets, stars, and galaxies -- makes up just 2 or 3 per cent of the universe. The rest, known as dark matter and dark energy, is a mystery.
What next?
Cosmology is one of the least predictable scientific pursuits. A new generation of telescopes, on the ground and in space, will provide the theorists with much more data over the next decade. How far this will enhance our understanding of the universe remains to be seen.
Fear factor: queasiness
RELATIVITY
If Charles Darwin and his theory of evolution have become the great symbols of 19th-century science, Albert Einstein and relativity play a similar role for the 20th century. Einstein's theory of relativity was published in two parts, both of which have had an immense influence on the subsequent development of physics and cosmology.
“Special relativity” (1905) showed that time and distance are not absolute but depend on the observer's motion. Key to special relativity is the famous formula E=mc², where E is energy, m mass and c the speed of light. The formula means that mass and energy can be converted into one another; the theory also holds that the speed of light in a vacuum is the same for all observers under all circumstances, and that nothing can travel faster than light (300,000 km per second).
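A rough calculation shows why the formula matters (figures rounded). Converting just one gram of matter entirely into energy would release:

```latex
E = mc^2 = (10^{-3}\ \mathrm{kg}) \times (3 \times 10^{8}\ \mathrm{m\,s^{-1}})^2 = 9 \times 10^{13}\ \mathrm{J}
```

That is about 90 terajoules -- comparable to the energy of 20,000 tonnes of TNT -- which is why nuclear reactions, converting only a tiny fraction of their fuel's mass, are so powerful.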
“General relativity” (1915) brought gravity into the theory, showing that heavy objects distort the fabric of space and time through their gravitational fields. General relativity passed its first public test during a solar eclipse in 1919, when telescopes showed light from distant stars "bent" by the sun's gravity, exactly as the theory predicted. Another prediction, confirmed much more recently, is the existence within galaxies of "black holes" from which no matter or light can escape because the force of gravity is so strong.
Why does it matter?
Like cosmology and the Big Bang, relativity underpins the intellectual framework of science.
But it has practical applications in space technology; for example, satellite navigation works because the Global Positioning System (GPS) takes account of relativity. And science-fiction writers need to invoke relativistic effects to make time travel possible.
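The GPS example can be made concrete with a back-of-envelope calculation. The Python sketch below uses rounded constants and standard first-order approximations (an illustration of the physics, not an engineering model) to estimate how far a satellite's clock would drift in a day if relativity were ignored:

```python
# Rough, first-order estimate of relativistic clock drift on a GPS satellite.
# All constants are rounded; this is a back-of-envelope sketch.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
c = 2.998e8          # speed of light, m/s
R_EARTH = 6.371e6    # radius of the Earth, m
R_SAT = R_EARTH + 2.02e7          # orbital radius (~20,200 km altitude), m
V_SAT = (G * M / R_SAT) ** 0.5    # speed, assuming a circular orbit, m/s

DAY = 86_400  # seconds

# Special relativity: the fast-moving clock runs SLOW by ~v^2 / (2c^2).
slow = V_SAT**2 / (2 * c**2) * DAY

# General relativity: the clock higher in Earth's gravity runs FAST
# by ~(GM/c^2) * (1/R_earth - 1/R_sat).
fast = (G * M / c**2) * (1 / R_EARTH - 1 / R_SAT) * DAY

net = fast - slow
print(f"speed effect:   -{slow * 1e6:.1f} microseconds per day")
print(f"gravity effect: +{fast * 1e6:.1f} microseconds per day")
print(f"net drift:      +{net * 1e6:.1f} microseconds per day")
print(f"ranging error:  ~{net * c / 1000:.0f} km per day if uncorrected")
```

The two effects pull in opposite directions, but gravity wins: left uncorrected, the drift of a few dozen microseconds a day would throw positions out by more than ten kilometres every day.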
What next?
No one knows. The great unmet challenge of theoretical physics is to combine relativity with quantum mechanics. The two theories still co-exist uneasily without a common base. One day another Einstein will come up with a grand theory to unite them.
Fear factor: palpitations
QUANTUM MECHANICS
Quantum mechanics grew up alongside relativity in the early 20th century.
If anything, quantum mechanics is even more far-reaching than relativity -- and even harder to explain. Two mutually contradictory quotes from famous physicists sum up its weirdness and complexity. Niels Bohr: “If quantum mechanics hasn't profoundly shocked you, you haven't understood it.”
Richard Feynman: “I think I can safely say that nobody understands quantum mechanics.”
Whereas the effects of relativity are felt mainly on the grand scale studied by astronomers and cosmologists, quantum mechanics is most important when things are extremely small. The first key idea in quantum theory is that energy and matter are not continuous but come in small, discrete packets: quanta.
The second is “wave-particle duality”: all subatomic particles can be regarded as waves as well as particles. Light itself is both a stream of particles -- photons -- and a series of waves.
The most famous consequence of wave-particle duality is the “uncertainty principle” originally formulated by Werner Heisenberg in 1927, which puts a limit on how much we can know about a quantum object. It is impossible to measure precisely and simultaneously a particle's position and momentum; the best we can do is define the statistical probability of where a particle such as an electron is likely to be.
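In symbols, the principle sets a floor on the product of the two uncertainties, where ħ is Planck's constant divided by 2π. As a rough illustration, pin an electron down to the size of an atom (about 10⁻¹⁰ metres) and its momentum becomes uncertain by at least:

```latex
\Delta x\, \Delta p \ \ge\ \frac{\hbar}{2}, \qquad
\Delta p \ \ge\ \frac{1.05 \times 10^{-34}\ \mathrm{J\,s}}{2 \times 10^{-10}\ \mathrm{m}}
\approx 5 \times 10^{-25}\ \mathrm{kg\,m\,s^{-1}}
```

For a particle as light as the electron, that corresponds to a velocity uncertainty of several hundred kilometres per second: an electron confined in an atom can never be said to sit still.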
Why does it matter?
Quantum effects are important in electronics and nanotechnology -- and they will become far more important as miniaturization proceeds.
What next?
The most important application in the medium-term future -- say, 30 years from now -- may be quantum computing; this would use quantum effects to produce computers far more powerful than today's silicon-based systems.
A much more distant practical prospect is teleportation -- transferring the complete quantum state of an object from one place to another without it travelling through the space in between.
Fear factor: sweat and tears
RADIATION
Radiation has become one of the more frightening words in science because it is associated with dangers such as radioactive materials, nuclear accidents, and futuristic weapons. Though radiation can be deadly, there is nothing new about it; radiation is ubiquitous and life depends on it.
All of the many types of radiation consist of energy travelling through space. Electromagnetic radiation is essentially light waves, which range in frequency along the “spectrum” from radio waves, through visible light, up to gamma rays. Particle radiation is made of neutrons, protons, or electrons.
An important distinction is based on the radiation's energy level. The strongest radiation is known as “ionizing,” because it can create ions by removing electrons from atoms. This includes X-rays, gamma-rays, and the subatomic particles emitted by radioactive isotopes as they decay. Less powerful radiation is “non-ionizing.” Although ionizing radiation is in principle more dangerous to health than non-ionizing radiation, energy level is not the only factor to take into account. Intensity or brightness matters, too. An intense source of non-ionizing radiation, such as a powerful laser light source, may be far more hazardous than a lump of radioactive mineral occasionally emitting ionizing particles.
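The dividing line can be made quantitative. Radiation delivers its energy in photon-sized packets of E = hf, where h is Planck's constant and f the frequency, and ionizing a typical atom takes a few electronvolts (eV) or more:

```latex
E = hf, \qquad h \approx 6.6 \times 10^{-34}\ \mathrm{J\,s}
```

Two rounded examples: an FM radio photon (f ≈ 10⁸ Hz) carries about 4 × 10⁻⁷ eV, ten million times too little to ionize anything, while an X-ray photon (f ≈ 10¹⁸ Hz) carries about 4,000 eV, easily enough to knock electrons out of atoms.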
Why does it matter?
Technology that uses radiation pervades modern industrial society, from broadcasting to X-ray machines. But fear of radiation is an important reason why some governments are finding it difficult to build new nuclear power stations.
What next?
Medical technology will benefit enormously over the next few years from new ways of using radiation to “see” into the human body.
Fear factor: knocking knees
ATOMS AND NUCLEAR REACTIONS
The atom is the basic building block of chemistry.
The name comes from the Greek *atomos*, meaning indivisible, though an atom can be split into even smaller particles. It has a nucleus made up of positively charged protons and electrically neutral neutrons, surrounded by a cloud of negatively charged electrons. (The fact that protons and neutrons are made up of smaller subatomic particles, called quarks, matters little in the real world.) The chemical character of an atom depends above all on the number of protons in its nucleus -- its atomic number -- which defines it as a chemical element. The best-known representation of the elements, arranged by atomic number and denoted by a one- or two-letter symbol, is the periodic table originally drawn up by Dmitri Mendeleev in the 19th century.
Each element can exist as different isotopes, depending on how many neutrons there are. The nucleus can only remain stable up to a certain size. If it is too big, or if the balance between protons and neutrons is wrong, the atom will undergo radioactive decay and split into smaller pieces.
The simplest example is element number one, hydrogen. It has two stable isotopes, in which the nucleus contains either a proton on its own or a proton and a neutron; the third isotope (tritium) is an unstable and therefore radioactive combination of a proton and two neutrons.
Most elements up to number 83 (bismuth) have at least one stable isotope. Heavier elements such as uranium (92) and plutonium (94) exist only in radioactive forms. Nuclear reactions, which either join together light atoms (fusion) or split heavy ones (fission), can release vast amounts of energy -- either suddenly, in nuclear weapons, or more gradually, in power stations.
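The energy comes from a tiny loss of mass, converted via Einstein's E=mc². As a worked example, using standard atomic masses in atomic mass units (u, where one unit of lost mass yields about 931 MeV of energy), take the deuterium-tritium reaction at the heart of current fusion research:

```latex
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \ \rightarrow\ {}^{4}\mathrm{He} + n
```

Mass in: 2.0141 u + 3.0160 u = 5.0301 u. Mass out: 4.0026 u + 1.0087 u = 5.0113 u. The missing 0.0188 u becomes about 17.6 MeV of energy per reaction -- less than half a per cent of the mass, yet, gram for gram, millions of times more energy than any chemical reaction can deliver.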
Why does it matter?
Nuclear energy has not lived up to its initial promise, half a century ago, when the first atomic power stations were opening and enthusiasts had a vision of nuclear-generated electricity “too cheap to meter.” But nuclear power is a key ingredient in the world's energy balance -- and, unfortunately, it seems that nuclear weapons are here to stay.
What next?
All nuclear power currently depends on fission. But the great hope is nuclear fusion, the subject of a $10bn experiment, the International Thermonuclear Experimental Reactor (Iter), which is under construction in France.
Fear factor: dry mouth
MOLECULES AND CHEMICAL REACTIONS
On earth, most atoms do not exist on their own but are joined together with others as molecules. Or, using different terminology, most elements combine to form compounds. Chemistry is all about the reactions that make and rearrange the bonds between atoms.
Organic chemistry concentrates on carbon, which can form a greater variety of compounds than any other element. The most important molecules of life -- proteins and DNA -- are based on long chains of carbon atoms linked to other elements, particularly hydrogen, oxygen, and nitrogen.
All chemical reactions involve a change in energy. Most release energy, usually as heat; our bodies are warmed by organic reactions based ultimately on the oxidation of the food we eat. (A few reactions give off energy as light rather than heat -- a property exploited by fireflies and glow-worms.) On the other hand, “endothermic” reactions absorb energy from the environment (which is why commercial cold-packs can chill a drink within a few minutes).
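To put a number on it (a standard textbook figure, rounded): the net oxidation of glucose, the reaction by which our cells ultimately extract energy from food, is

```latex
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \ \rightarrow\ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O},
\qquad \Delta H \approx -2{,}800\ \mathrm{kJ\ per\ mole}
```

The minus sign is the chemist's convention for energy given out; an endothermic reaction would carry a plus.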
Many reactions need a chemical push to get started. This is provided by a “catalyst,” a substance that speeds up a reaction without being consumed by it. Enzymes are the biological catalysts on which life depends.
Why does it matter?
We are all made of chemical compounds, and every aspect of biology runs on chemical reactions. Chemistry-based industries include oil and petrochemicals, pharmaceuticals and biotechnology, food processing and paints.
What next?
Although chemistry is a relatively mature science, chemists continue to discover different and more efficient ways to carry out reactions. These will synthesize new materials, from plastics to pharmaceuticals, while producing less pollution than today's processes.
Fear factor: chattering teeth
DIGITAL DATA
The world of computing, telecommunications, and electronics is going digital. Information, whether it is the human voice, a television picture, or a computer program, is stored and processed as strings of binary digits or "bits" (0s and 1s). The real world, in contrast, works on analogue signals. Its sights and sounds are not a series of numbers but vary continuously in space and time.
Converting the analogue world to digital signals involves some loss of information, because digitization means taking a sample of the original rather than transmitting the whole thing. But this loss is a price worth paying, because digital data is so much easier to transmit, store, and process electronically.
Think of a top-quality analogue recording of music on vinyl. It can provide an aural experience unmatched by a digital CD, as long as the record is new and unscratched. But frequent playing distorts and degrades the analogue signals on vinyl, whereas a CD with digital bits hardly loses any sound quality.
In broadcasting and telecommunications, the greater resistance of digital signals to fading, static, and distortion is even more important -- and so is the fact that digital transmissions take up less bandwidth than analogue. In practice all modern computing is digital and therefore any information fed into a computer must be digital too.
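Digitization is easy to demonstrate in a few lines of Python. The sketch below (a toy example -- the tone, sampling rate, and eight-bit depth are chosen for illustration and follow no particular audio standard) measures a smoothly varying wave at fixed instants and rounds each measurement to one of 256 levels:

```python
import math

FREQ = 440.0         # 'analogue' signal: a pure 440 Hz tone (concert A)
SAMPLE_RATE = 8_000  # samples per second: how often we measure the wave
BITS = 8             # each sample rounded to one of 2**8 = 256 levels
LEVELS = 2 ** BITS

def analogue(t: float) -> float:
    """The continuous real-world signal: varies smoothly in time."""
    return math.sin(2 * math.pi * FREQ * t)

# Sample: measure the wave at discrete instants.
# Quantize: snap each measurement (between -1 and 1) to the nearest level.
samples = []
for n in range(10):
    t = n / SAMPLE_RATE
    level = round((analogue(t) + 1) / 2 * (LEVELS - 1))
    samples.append(level)

print(samples)                              # ten coarse snapshots of the wave
print([format(s, "08b") for s in samples])  # the same data as raw bits
```

The smooth curve can never be perfectly rebuilt from these coarse snapshots -- that is the information lost in digitization -- but the resulting bit strings can be copied, stored, and transmitted indefinitely without further degradation.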
Why does it matter?
Consumer electronics are going or have gone digital. In the U.K., all television broadcasts will be digital by 2012 -- and old TV sets will be useless without an electronic box to convert the digital signals to analogue.
What next?
Millions of scientists, electronic engineers, and information technology specialists around the world are developing new ways to use and process digital data -- from super-fast computers to do-everything mobile devices.
Fear factor: dilated pupils
STATISTICAL SIGNIFICANCE
Researchers need a statistical method to tell whether apparent relationships are real or the result of chance. Does a new drug treat a disease better than a placebo? Does pre-school education enhance later academic performance? Is global warming increasing rainfall?
Mathematical techniques have been available since the 1920s for working out the probability that the outcome of an experiment is the result of a statistical accident rather than a real effect. This is denoted by the symbol p. The cut-off for accepting an outcome as genuine -- or “statistically significant” -- varies across the sciences but in biomedical studies the upper threshold of p is usually set at 5 per cent or 0.05; in other words, the probability that the result occurred by chance alone must be less than 1 in 20.
Of course lower p values increase confidence that the study has detected a real effect; p less than 0.001 is sometimes called highly significant. But it is important to remember that in this context “significant” is a statistical term. It does not necessarily mean that the result is significant in a more fundamental sense or indeed that the study was well designed and properly conducted.
There are many ways in which a statistically significant result can be misinterpreted. One is failure to take account of hidden factors not included in the statistical analysis, which bias the outcome. For example, an investigation into the effect of religion on health found that church attendance was associated with a significant reduction in mortality; but a potential biasing factor, not considered in the study, was the fact that people at greatest risk of dying were too ill to attend services.
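For a simple case, the arithmetic behind p is easy to lay out. Suppose a coin lands heads 60 times in 100 tosses: should we call it biased? The Python sketch below computes, exactly, the probability that a fair coin would produce a result at least that lopsided in either direction:

```python
from math import comb

N, HEADS = 100, 60  # 100 tosses of a suspect coin: 60 heads

def prob_exactly(k: int, n: int) -> float:
    """Chance of exactly k heads in n tosses of a FAIR coin."""
    return comb(n, k) * 0.5 ** n

# p-value: the probability, under pure chance, of a result at least
# as lopsided as the one observed (in either direction).
p = sum(prob_exactly(k, N) for k in range(N + 1)
        if abs(k - N / 2) >= abs(HEADS - N / 2))

print(f"p = {p:.3f}")  # ~0.057: just ABOVE 0.05, so not significant
```

Sixty heads looks suspicious, but with p of about 0.057 it just fails the conventional 0.05 test; 61 heads would pass. The cut-off is arbitrary -- which is exactly why “significant” should be read as a technical term, not a verdict.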
Why does it matter?
Statistical analysis -- if carried out well -- is the most rigorous and objective way to assess how well evidence fits theory.
What next?
Some critics claim that contemporary science places statistical significance on a pedestal that it does not deserve. But no one has come up with an alternative way of assessing experimental outcomes that is as simple or as generally applicable.
Fear factor: nervous twitching