Saturday, March 5, 2011

Top 20 Most Beautiful Actresses

So I already compiled a list of my top 10 favourite beautiful male actors; it is now time to count down my list of beautiful actresses.

In no particular order:

Elizabeth Taylor

Best Performance(s): 'A Place In the Sun' and 'Cat On a Hot Tin Roof'

Audrey Hepburn


Best Performance(s) - 'Breakfast at Tiffany's' and 'Roman Holiday'

Ava Gardner

Best performance - 'The Barefoot Contessa'

Sophia Loren


Best performance - 'Two Women'

Natalie Wood


Best performance(s) - 'Rebel Without A Cause' and 'Splendor in the Grass'

Lauren Bacall

Best Performance - 'Dark Passage'

Eliza Dushku

Best performance(s) - 'Buffy the Vampire Slayer' and 'Wrong Turn'

Alexis Bledel

Best performance(s) - 'Gilmore Girls' and 'The Sisterhood of the Traveling Pants'

Kristen Bell

Best performance(s) - 'Veronica Mars', 'Forgetting Sarah Marshall' and 'Gracie's Choice'

Evan Rachel Wood

Best performance(s) - hands down 'Thirteen', plus 'Running with Scissors' and 'Across the Universe'.

Kristen Stewart

Best performance(s) - 'Speak' and 'Into the Wild' and of course 'Twilight'

Angelina Jolie

Best performance(s) - 'Beyond Borders' and 'Girl Interrupted'

Sarah Michelle Gellar

Best performance(s) - 'Buffy the Vampire Slayer' and 'The Air I Breathe'

Natalie Portman

Best Performance(s) - 'Closer' and 'The Professional'

Rachel McAdams

Best performance - 'The Notebook'

Charlize Theron

Best performance - 'Monster'

Kirsten Dunst

Best performance(s) - 'The Virgin Suicides', 'Marie Antoinette' and 'Eternal Sunshine of the Spotless Mind'.

Mila Kunis

Best performance - 'Forgetting Sarah Marshall'

Summer Glau

Best Performance(s) - 'Firefly' and 'Serenity'

Michelle Rodriguez

Best performance(s) - 'Girlfight' and 'Lost'


Jeez, it was a tad longer than I thought...







Isabelle Adjani is the most beautiful in the world. Period.



 
Diane Kruger
Modern Audrey Hepburn



Highest-Paid Hollywood Actresses in 2010

1. Sandra Bullock

Estimated Net Worth: $85,000,000
Recent Earnings: $56 million
Awards Won: Oscar, MTV Movie Awards
Hits: The Proposal and The Blind Side
Date of Birth: July 26, 1964
Title: The highest-earning and one of the most popular actresses in Hollywood.

2. Reese Witherspoon

Recent Earnings: $32 million
Awards Won: Oscar, Golden Globe, MTV Movie Awards, etc.
Hits: Walk the Line
Date of Birth: March 22, 1976
Title: One of the most popular actresses in Hollywood.

3. Cameron Diaz

Recent Earnings: $32 million
Awards Won: Golden Globe nomination, MTV Movie Awards
Hits: Charlie's Angels, Knight and Day, the Shrek film series
Date of Birth: August 30, 1972

4. Jennifer Aniston

Recent Earnings: $27 million
Awards Won: Emmy Award, Golden Globe Award and Screen Actors Guild Award
Hits: The comedy series Friends, Love Happens and The Bounty Hunter
Date of Birth: February 11, 1969
Title: The most popular overseas star.

5. Sarah Jessica Parker

Recent Earnings: $25 million
Awards Won: Four Golden Globe Awards, three Screen Actors Guild Awards and two Emmy Awards
Hits: Sex and the City
Date of Birth: March 25, 1965

6. Julia Roberts

Recent Earnings: $20 million
Awards Won: Academy Award for Best Actress
Hits: Erin Brockovich, The Pelican Brief, Ocean's Eleven and Ocean's Twelve
Date of Birth: October 28, 1967

7. Angelina Jolie

Recent Earnings: $20 million
Awards Won: Academy Award, two Screen Actors Guild Awards and three Golden Globe Awards
Hits: Lara Croft: Tomb Raider, Mr. & Mrs. Smith, Kung Fu Panda
Date of Birth: June 4, 1975
Title: One of the most popular, best-known and highest-paid actresses in Hollywood.

8. Drew Barrymore

Recent Earnings: $15 million
Awards Won: Screen Actors Guild Award and Golden Globe Award
Hits: Charlie's Angels, 50 First Dates
Date of Birth: February 22, 1975
Title: Appeared in People magazine's 100 Most Beautiful.

9. Meryl Streep

Recent Earnings: $13 million
Awards Won: Two Academy Awards, two Screen Actors Guild Awards, seven Golden Globe Awards, two Emmy Awards, etc.
Hits: Sophie's Choice and Kramer vs. Kramer; holds a record 16 Oscar nominations
Date of Birth: June 22, 1949
Title: American Film Institute's Lifetime Achievement Award recipient.

10. Kristen Stewart

Recent Earnings: $12 million
Awards Won: MTV Movie Award, Teen Choice Award, Young Artist Award
Hits: The Twilight series, The Runaways
Date of Birth: April 9, 1990
Title: Ranked #17 on Entertainment Weekly's "30 Under 30".
 

Friday, February 25, 2011

Discovery Of The Blood's Circulation

William Harvey Lays the Basis of Modern Medicine

Galen
Blood, said the Greek physician Galen in the second century A.D., is manufactured in the liver from the food we eat, and is of two quite different and separate kinds. The two kinds are sucked up from the liver to the heart and sent out to the limbs and organs of the body from the heart's two sides: the bright red blood through the arteries, far beneath the skin, and the other, darker variety through the veins which lie nearer the surface. The two varieties provide different elements for the nourishment of limbs and organs, and both pass through the lungs on their way, being cooled in the process. Hot blood, straight from the liver, is too fiery and must first be cooled by the air we breathe.
Much illness is caused by this blood being insufficiently cooled, and in cases of this sort the only treatment is to drain a little off with leeches, or by actually opening a vein and letting the fiery fluid run away. The blood when it reaches the limbs and organs of the body is used up entirely, and a new supply must constantly be produced in the liver from the food eaten.
William Harvey
This theory, incorrect, and verifiably so, in almost every detail, was subscribed to by the medical profession for fourteen hundred years. It remained for William Harvey, with his essay printed in 1628 under the imposing Latin title Exercitatio Anatomica de Motu Cordis et Sanguinis, to prove otherwise. His discovery, that the blood does not merely travel centrifugally to the extremities, get used up and disappear, but circulates continuously through heart, lungs, arteries and veins, an utterly novel, startling idea, is the basis of modern physiology, modern medicine. With its publication, medicine leapt from the ancient world to the modern, skipping a millennium and a half in the time it took to read the words, if a physician could read them calmly enough, in the seventeenth century, to get to the end: "What is now to be said on the quantity and source of the blood is so novel and unheard of that I tremble lest I have mankind at large for my opponents. So much doth wont and custom become a second nature..."
Harvey, after years of study in Padua, the anatomical workshop, men said, of the world, had estimated the quantity of blood sent out by the heart in each one of its pulses. He had been studying hearts of animals, birds, reptiles and men, noting their four chambers, measuring their dimensions, and he estimated that the amount of blood pumped by that organ was, in the case of an adult man, two fluid ounces with each beat. On average the human heart beat seventy-two times a minute, a fact with which even Galen would not have quarrelled. At that rate, a quantity of 72 x 60 x 2, or 8,640 ounces of blood, would be pumped out by it, in the course of every hour. This, Harvey saw, was three times the weight of the adult body. If all this vast quantity of blood were dissipated, as Galen had stated, if it had to be replaced by food, a man would be eating three times his own weight in every hour of the twenty-four.
This was obviously nonsense; and so it was, in Harvey's mind, a logical step to a theory in which blood circulated from the heart to the extremities and back again, before being re-used. The construction of the organ, with its four little chambers, two on each side of a central wall, and its valves which now, to Harvey's eye, showed the direction in which the blood must flow, made his theory not only tenable but irrefutable. He went on, by a simple class-room experiment, to show that there were valves not only in the heart but in the veins. They were valves which allowed the blood to flow away from the extremities, toward the heart, and only in that direction, so that Galen's theory of blood travelling outwards in the veins was doubly shattered.
Petit tourniquet engraving from 1798
Harvey’s experiment was to tie a bandage very tightly round the arm, midway between elbow and armpit, and twist the bandage with a small stick (a “tourniquet”) so that all flow of blood stopped. After two minutes the hand became blue and cold. Now if he released his bandage by a turn on the stick, leaving only a “medium tightness”, the hand would become engorged with blood and start to swell, at the same time displaying the valves in its surface veins as hard knots. Harvey’s explanation was: “The tight bandage not only obstructs the veins, but the arteries; whereby it comes to pass that the blood neither comes nor goes to the members. The medium bandage again obstructs the veins, while the arteries, lying deeper, being firmer in their coats and forcibly injected by the heart, are not obstructed but continue conveying blood to the limb. Wherefore follows the unusual fullness of the veins and the necessary inference that the blood flows incessantly outwards from the heart by the arteries, and ceaselessly returns to it by the veins. . . .”
(For any reader who cares to try Harvey’s experiment, it is worth noting that both “tight” and “medium” bandages on a limb are dangerous if kept on for more than a few minutes.)
Another experiment was to grasp a staff tightly, so that the veins in the forearm were displayed. Then, by pressing a finger-tip on them at various points, it was possible, observing closely, to see which way the blood flowed. There was no question about it: blood in the veins flowed to the heart, never away from it.
The circulation of the blood had been discovered, proved. From the left side of the heart, the lower of the two chambers on that side, or “left ventricle”, the blood was pumped out through the arteries to every part of the body. Then, through some “leakage” which Harvey failed to understand and at which he wisely made no effort to guess, it found its way into the veins and returned, this time to the right side of the heart, its upper chamber or “right auricle”. From here it passed through a one-way valve to the lower chamber on that side, the “right ventricle”, and was pumped out again, to the lungs, returning from them to the left side of the heart, an upper chamber or “left auricle”. A descent through the one-way valve on that side to “left ventricle”, and it set off again, through the arteries, to each part of the body.
Image of veins from Harvey's Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus
So much could be proved and Harvey was content to leave it at that. In one way, this was a measure of his greatness. He had made a discovery which could be proved, and by several different observations. He was satisfied; he would not try to extrapolate a complete physiology of circulation and respiration, because he had no means of proving it. That could wait. Others had been eager to manufacture nonsensical and involved theories from a few shreds of misinterpreted evidence, and the medical profession, which was then (and still, some say, remains) a religion, like English Law, unquestioned, unquestionable, sacrosanct, were anxious to go on believing them. One can only conjecture how history might be rewritten if the rulers, tyrants, statesmen and scientists who died as a result of “blood-letting” and other treatments, based on a misunderstanding of the function of the blood, had lived.
Harvey’s “wicked and heretical” theory was gradually accepted, simply because it was impossible to refute. His successors were able to establish, with microscopes that Harvey did not possess, that the arterial blood crossed to the veins at the end of its outward journey, through tiny, hair-fine pipes or “capillaries” (from the Latin word for “hair”) just under the skin. At this point it did its “work”, returning through the veins to be pumped to the lungs, where it underwent a chemical change, releasing the carbon-dioxide which it had brought back from the extremities, a waste-product of its work, and replacing it with oxygen.
All arteries, then, carry blood away from the heart and all, with the exception of one, carry bright red, oxygenated blood. All veins carry blood back to the heart and all, with the exception of one, carry dull red blood, full of carbon-dioxide, on its way to the lungs. The two exceptions are the pulmonary artery, which carries “used” blood from heart to lungs, and the pulmonary vein which brings oxygenated blood back from lungs to heart.
William Harvey was the man who made of “anatomy” a new science of the living, moving body, the science of physiology. For generations men had been content to deal with matters of quality, of “humours” and vital forces in the body, a “spiritus” which was somehow injected into the blood by the brain, and so forth. No one had ever used a yardstick, a scale, a unit of measurement; but Harvey, watching the exposed, beating heart of an animal, watching the organ draw itself together, contract into a small hard ball, push blood into the aorta, was the first man to ask, “how much?” “how often?” “which way?” and “where?”; to treat the body as a machine, find out what made it work. The consequences of his discovery were immense, the most important step that had ever been taken in medicine: one which has probably only been approached in magnitude by the twentieth-century discovery of antibiotics.
William Harvey was born, on 1 April, 1578, in Folkestone. His father was a well-to-do merchant and was able to send the boy to Kings School, Canterbury, and to Cambridge University. As soon as he had taken his B.A. degree he travelled to Padua, which was felt, in the seventeenth century, to be the finest medical school in the world, particularly for anatomical studies. Here he studied for several years under the great anatomist Fabricius, becoming a Doctor of Medicine in 1602, when he was twenty-four.
He returned to England, obsessed by what he had learnt, and although he was appointed physician at St Bartholomew’s Hospital in 1609, although he had plenty of patients and much to do, he remained principally concerned with anatomical problems. Unlike the doctors of the sixteenth century, who had been delighted by the beautiful proportions, the harmony, of the human body, Harvey was concerned with its movement, the only attribute, he pointed out, which made it different from a marble statue. And the two movements which were ceaseless from birth to death, the pulse and breathing, these were what Harvey was most interested in.
But his practice was growing, his professional reputation, despite this private obsession, was increasing with each month, and he had the greatest difficulty finding time for research. In 1616, after he had been appointed professor at the College of Physicians in London, he gave his first lecture, and from the manuscript notes that survive we can see that he had solved, in his own mind, the problem of the circulation of the blood. How his audience reacted we do not know, but Harvey knew enough of his colleagues in the medical profession to realize that they would be not only sceptical but outraged, even abusive, unless he made his theory fool-proof, physician-proof. He went on with his experiments, dissecting men, animals, birds, snakes, anything, and measuring what he found.
Two years after this lecture he was appointed Physician-in-Ordinary to King James I and when that monarch died he went on to serve Charles I in the same capacity. Still he checked, double-checked, his theory, expanding it slowly and carefully, only as far as his observations justified.
It was not until he had been in royal service for ten years that, as we have seen, he published his famous monograph in 1628. Immediately there was a storm of anger. What right had this “heretic”, this “devil”, to toss aside the teachings of a thousand years? There were angry meetings of medical men, demonstrations in the street. To the general public he became a “crackbrain”.
Charles I holding a council of war at Edgecote on the day before the Battle
The Civil War added to Harvey’s problems. He had little interest in politics, but he was the king’s physician and as such he was present at the Battle of Edgehill, where his task was to take care of the young Prince of Wales and even younger Duke of York. While he was away his house was ransacked and many of his papers stolen or destroyed. History relates that he brought his young charges to a quiet hedgerow and had begun to read to them when a cannon-ball landed a few yards off. The king’s physician and the king’s two sons were then seen tearing across the field in search of shelter. Fortunately for history and mankind, they found it, and Harvey was allowed to go on for another decade working on embryology, the origin of human beings, their development in the maternal womb, a study to which he contributed greatly, but which is outside the scope of this article.
When the king moved his court to Oxford, Harvey was delighted ; he was able to continue his studies there. He was received with civility at first, then warmth, as his theory became gradually accepted, and at last he was made Warden of Merton College. He retired into private life in 1646, “much troubled with gout”.
During the eleven years of life that remained, his theory became accepted all over the world. In 1654 the College of Physicians wanted to confer upon him the highest possible honour, that of President, but Harvey declined, saying he was too old. He returned the compliment by erecting a new building for the college and equipping it with a library and a museum. He died, on 3 June, 1657, a widower and childless, bequeathing his estate at Burwash in Kent to the College of Physicians, and with it a fund for an annual lecture. The Harvey Oration is still given, each year.

Faraday Discovers Electricity

 

The Transformation of Everyday Life

“Prometheus, they say, brought fire to the service of mankind; electricity we owe to Faraday.”
The remark has been made, in our time, by a man who ought to know, Sir William Bragg, winner of the Nobel Prize for physics, holder of a great many other distinctions in the world of science.
And yet this is an astonishing claim, for without electricity the world we live in would be unrecognizable, a world lit by gas lamps and candles, without telephones, radio, television. A horizontal world, for without the electric lift no architect would dare design a building more than three or four floors high. Our spring-driven, needle-powered gramophones would squeak at us through trumpets. Our motor-cars, if, so deprived, we had ever got round to inventing them, would be clumsy, diesel-powered things, with acetylene headlamps, and when we refilled their tanks with diesel oil we would do it, laboriously, by hand pump.
One could go on for ever, for almost everything in our twentieth-century world needs electricity for its functions or its manufacture. How accurate, then, is the claim that we owe all this to Michael Faraday, the nineteenth-century London book-binder who was fired by a lecture on “natural philosophy” and never rested until he had invented half the things we use to-day? What sort of a man was this Faraday, who could do such things?
He was born in London, in 1791, son of a blacksmith and his wife who had walked down from the Yorkshire moors in search of work. They were only partly successful, and from an early age the young Faraday had been forced to go out and earn what he could to augment the meagre family income. When he was nineteen and working as a book-binder, his father died, leaving him to support a widowed mother and a young sister. He might well have remained a book-binder if his employer, a kindly man, had not encouraged him to use his off-duty hours to the best advantage and urged him to attend lectures on the "natural philosophy" we now call science. It was at one of these lectures, at the Royal Institution, that he heard Sir Humphry Davy. He was so overcome with the wonder of what he heard that he sat down and wrote the great man a letter, enclosing a copy of the notes he had made of the lecture which so enthralled him.
To his delight, Sir Humphry wrote back, on Christmas Eve, 1812. "I am far from displeased with the proof you have given me of your confidence, which displays great zeal, power of memory, and attention. It would gratify me to be of service to you; I wish it may be in my power."
It was within his power, sooner than Sir Humphry had anticipated. His assistant at the Royal Institution was dismissed for assaulting the instrument-maker; the astonished Faraday was offered the post, at a salary of twenty-five shillings a week and two rooms at the top of the house.
His life and work with Sir Humphry and Lady Davy was stimulating and at times exasperating, but he served his master well, not only in London, but on a long and eventful trip about the continent, where Davy lectured in many capital cities and young Michael Faraday was able to meet and talk with many of the great men of science, including Monsieur Ampere and Signor Volta, whose names were becoming, as they have remained, household words in the science of electricity. Faraday had served Davy, whom he worshipped, extremely well as "philosophical assistant" in experiments with chemistry and physics, but more and more he was being drawn to the study of this strange electric force which seemed to exist everywhere, to be conjured out of almost anything, like rabbits from a hat, and which, unlike the rabbits, seemed to promise a new strange magic, once man learnt to control it.
On their return from the continent, Sir Humphry arranged his promotion to a salary of thirty shillings a week, which would now enable him to send his mother enough to afford good schooling for his sister. He settled down, in a mixture of enthusiasm for the work in hand and relief that he no longer had to suffer Lady Davy, who throughout their travels had treated him as the most menial of servants, to work as he had never worked before. He was torn between chemistry (he had already, without bothering to explore its commercial possibilities, invented stainless steel) and the study of electricity. In his heart he knew he could abandon neither, but the fact that Sir Humphry was now switching his efforts to the latter made his choice for him; he had to help his master.
In the autumn of 1820 the Danish Professor Oersted had experimented with compass needles, pieces of magnetized steel, held near to wires carrying an electric current. The needles, Oersted found, were deflected by the current and when this was switched off they fell back into their normal, north-south orientation. Others, including Davy, had proved that steel needles, which had always previously been magnetized by rubbing with a lodestone or natural magnet, could be made magnetic if held long enough beside a wire carrying an electric current.
It now became clear to Faraday that there must be a measurable relationship between the current and the magnetism. At the same time he was forming in his own mind the theory, soon to be proved, that “electricity, whatever may be its source, is identical in its nature”. This included the electricity which Benjamin Franklin had enticed down his kite-wire from a flash of lightning, the current from Signor Volta’s battery, the “static electricity” produced by rubbing amber. None of this, as yet, had a use; Faraday was soon to change all that.
One day he balanced a small bar magnet upright in a bowl and poured in mercury, so that only the top of the magnet protruded above the surface. He then led a wire from one terminal of his electric battery to the mercury, a liquid which conducts electricity, bent it over the edge of the bowl and let it stay there, submerged. From the other battery terminal he took another wire which ended in a straight piece which he suspended from above the bar magnet and allowed to dangle in the mercury. There would thus be an electric current flowing from one battery terminal to the other, through the mercury.
He had left one terminal unconnected and now he joined it up.
The end of the straight wire, dangling in the mercury, began to spin around the bar magnet, in a neat circle.
He disconnected the terminal and the movement stopped; reconnected, and it began again; the first electric motor had been made, had been running. Not, as Faraday was the first to admit, a motor which had an immediate practical use, unless one wanted to stir mercury, but a motor, a device, of unlimited possibilities. But then, instead of developing it, Faraday went straight on to investigate more fully the behaviour of electric currents near magnets, of wires near wires, of wires in the earth’s magnetic field. He found that he could produce a movement similar to that of his experimental motor by using terrestrial magnetism instead of the bar magnet.
By now, having proved to himself that an electric current in a "magnetic field" could produce a mechanical movement, he was anxious to prove the converse, that the movement of a piece of wire in such a field would generate an electric current along that wire. He tried, many times, connecting the two ends of his wire to a sensitive, current-indicating galvanometer, placing the wire near a strong magnet, but nothing registered. Yet he was becoming convinced that not only was this possible, if one worked out the correct positioning, but that the same process of "induction" would be able to make a current flow along one wire, when it was brought near another along which current from a battery was already flowing.
The results were negative and exasperating, but he refused to give up. As he wrote, he "could not in any way render any induction evident", yet he was convinced it was there.
On 29 August, 1831, he succeeded. He had taken an iron ring, six inches in diameter and an inch thick, a large, hollow iron doughnut, had wound a few turns of insulated wire round one half of the ring, a few round the other, had connected one lot to a battery, the other to his galvanometer. When he joined up the battery, when he disconnected it again, there were sharp flicks of the galvanometer needle. When the current from the battery was flowing steadily or not at all, there was no deflection of the needle; only its interruption or resumption (and, as he soon found, an increase or decrease in its strength) had an effect. There was, because of the insulation, no electrical connexion between the two coils, only an “induction”. To Faraday, it was all quite clear; the current from the battery had given rise to magnetism, concentrated by the heavy iron ring, and this magnetism, in the process of changing, had generated an electric current in the second coil.
A discovery of major importance, upon which the whole principle of tunable radio, separating one station from another, is based: yet at first of no practical use. Faraday raced on. At the end of October, by passing a bar magnet inside a tightly wound coil of many turns of wire, not unlike a large reel of cotton, he found he had generated a current. A galvanometer, joined to the two ends of the wire, flicked each time he moved the magnet, but remained undeflected when the movement stopped. He had achieved, at last, “evolution of electricity from magnetism”, the first dynamo.
He next asked permission to experiment with a great permanent magnet which the Royal Society housed in Woolwich. He placed a large copper disc on an axle between the two poles with a rubbing contact, what we now call a “brush”, at the centre, and another at the circumference; he rotated it. Current flowed; and now there was no doubt that this new dynamo had a real and important future.
From 1831 he improved his motor and his dynamo, though his mind was on the theoretical aspects of his science, and he was prepared to leave others to get on with the practical details, and he embarked on an almost endless series of discoveries in the field of magnetism and electricity which he formulated into rules that still apply to-day. At the same time he was able to continue experiments in chemistry and to become Fullerian Professor of that science at the Royal Institution. He had been, for some years, much in demand with commercial firms, which paid him handsomely to serve them, from time to time, in almost any capacity he cared to choose, from inventor to expert witness, but in 1831 he resolved finally to devote his life to pure research. He announced that, among other things, he would cease making the high-grade optical glass for which he had become famous; he turned his back on commerce and a very large income.
It has been argued that by cutting himself off from commerce Faraday held up the techniques of electrical engineering by fifty years; after all, men said, he had invented the motor and the dynamo by 1831, yet it was years before they became more than scientific curiosities. Yet, without this zeal of Faraday's for pure research, for finding out the answer, he might never have discovered the host of new things he did, like the science of "electrolysis", the behaviour of electricity in liquids, which enabled men to measure with extreme accuracy an actual quantity of electricity by the amount of metal it deposited on an "electrode" in a liquid "electrolyte", words in common use to-day, but which Faraday himself invented, with "cathode", "anode", "anion", "cation", "dielectric" and a lexicon of others, without which the modern science of electronics would be struck dumb. His own name has been immortalized in the "farad", the unit of electrical capacitance; and electromagnetic induction, the transfer of energy from one electric circuit to another, which he first demonstrated with his iron ring, is what makes radio communication, radar, television and X-rays possible.
So indeed it is true to say, “electricity we owe to Faraday”. Men knew of its existence, dreamed that it might some day have a use, but it was Faraday who handled it, measured it, made it work.

Man's First Powered Flights

First Step in the Conquest of the Air

“I would hardly think to-day of making my first flight in a strange machine in a twenty-seven-mile wind, even if I knew that the machine had already flown and was safe, yet faith in our calculations and the design of the first machine, based on our tables of air pressures, had convinced me…”
So wrote Orville Wright in 1913, ten years after the first powered flight at Kitty Hawk. It had been a tense and tiring morning, full of frustrations, and bitterly cold, but at last the brothers' "Flyer" was in position. "After running the motor a few minutes to heat it up, I released the wire that held the machine to the track and the machine started forward into the wind. Wilbur ran at the side of the machine, holding the wing to balance it on the track, and was able to stay with it until it lifted from the track after a forty-foot run. This flight lasted only twelve seconds, but it was nevertheless the first in which a machine carrying a man had raised itself by its own power into the air in full flight, had sailed forward without reduction of speed, and had finally landed at a point as high as that from which it started."
It was the morning of 17 December, 1903, and a stiff wind was kicking up sand from the dunes. Orville made the first and third flights; Wilbur the second and fourth. There were five witnesses of the four short hops, astonished local residents who had watched the brothers come back each year, 1900, 1901, 1902, with wood-and-cloth gliders in which they made trips of up to six hundred feet. The one they had brought from their home in Dayton for the 1903 season was a great improvement, the Wrights felt, over its predecessors: its details, its refinements, had been tried again and again in their wind tunnel, an open-ended wooden box sixteen inches square by six feet long. Back home in Dayton they had tested, in the intervals of running their cycle shop, no less than two hundred kinds of wing: by September, 1903, when they set off again for Kitty Hawk (a sandy stretch on the coast of North Carolina, the only place, according to meteorological experts, which had ideal winds and plenty of room) they had not only perfected the design but had made themselves an engine to power it. A number of men had experimented with gliders, but no one had tried attaching an engine and an “airscrew” to see if the machine could take off by itself.
On this fourth visit to Kitty Hawk they were beset by difficulties: a backfire from the motor twisted a propeller shaft, a sudden storm nearly removed the camp in which they were living. It was not until 12 December that the machine, with new, reinforced, propeller shafts (it had one engine, but two propellers, chain driven), was ready to fly. Then the wind vanished and the test was postponed. On the 14th, the machine stalled after three seconds in the air and damaged itself on hitting the ground. As this “flight” had demonstrated that their new method of take-off, from a wood-and-metal track on the sand, really worked, they were, according to Orville, “much pleased”. They spent two days on repairs and on the morning of 17 December the four flights were made, as we have seen, in wind velocities of up to twenty-seven miles per hour.
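Those figures invite a quick sanity check. Taking the commonly quoted ground distance of about 120 feet for the first twelve-second hop (a figure not stated in the text above, so treat it as an assumption), the machine’s speed over the ground was modest, but against a twenty-seven-mile headwind its speed through the air, which is what keeps a wing flying, was several times greater:

```python
# Back-of-envelope check of the first flight's speeds.
# Assumed: ~120 ft covered over the ground in 12 seconds (commonly
# quoted, not given in the text); headwind 27 mph as stated.

FT_PER_MILE = 5280

distance_ft = 120.0
duration_s = 12.0
headwind_mph = 27.0

ground_speed_mph = (distance_ft / duration_s) * 3600 / FT_PER_MILE
airspeed_mph = ground_speed_mph + headwind_mph  # speed through the air

print(f"ground speed ~{ground_speed_mph:.1f} mph")  # ~6.8 mph
print(f"airspeed     ~{airspeed_mph:.1f} mph")      # ~33.8 mph
```

Which explains why the brothers wanted that stiff wind: it let the “Flyer” reach flying speed through the air after only a forty-foot run along the track.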
And so the first powered flight took place. Such, however, was the hostility, hostility and disbelief, with which the reports were received that the United States Army, to whom the invention was offered, refused even to see a demonstration until 1908. At last, on 3 September of that year, while Wilbur was in France demonstrating to the French government (both French and British governments had shown interest after the first “incredible” reports, but had done little else), Orville took off from a field near Washington before a small, apathetic crowd of officers and civilians. From an eye-witness account we learn that when the plane left the ground, “the crowd’s gasp of astonishment was not only at the wonder of it, but because it was unexpected, a sound of complete surprise”.
When Orville descended, a minute and eleven seconds later, he was met by reporters with tears pouring down their cheeks. Yet, even now, there was suspicion, disbelief: no one who had not actually been present would believe in an “aeroplane” that actually left the ground under its own power. On 12 September, Orville, still demonstrating outside Washington, circled the field seventy-one times in an hour and fifteen minutes, reaching a height of three hundred feet, but still the Press ignored it, it was a freak, a phoney: even though reporters on the spot wired ecstatic stories, these were edited down to small paragraphs for the back page.
Then, on 17 September, Orville and his army passenger had an accident. The passenger died of a fractured skull and Orville went to hospital with broken leg, hip and ribs. Now, at last, the Press took notice, an accident was news, flying was not, and the Wrights became front-page material in their own country: thanks to Wilbur’s demonstration flights on the Continent, they were already well known in Europe. Now companies began to be formed, to manufacture Wright Aeroplanes under licence.
The idea of powered flight had exercised men’s minds for years. The Royal Aeronautical Society in Britain had been established years before the Wright brothers’ flights, had been formed in fact in 1866, but the chief stumbling block in the production of a practical “aeronautical machine” had been the absence of a suitable engine. Nothing was light enough. As Lord Brabazon, holder of British Pilot Licence Number 1, was to reminisce years later, “I remember talking to Wilbur Wright as to the possibility of building an aeroplane that would do a hundred miles an hour. His answer was simple, ‘Get me the engine.’ So it has been all the time, engine power.”
Aeroplanes had been designed and even built for centuries, but they never flew. Leonardo da Vinci in the fifteenth century sketched a flying machine which, if suitably powered, might conceivably have flown, but no suitable power was available and he never made it. At the end of the eighteenth century, Sir George Cayley in England devised and published the principles on which the modern aeroplane is based. To this extent he may be considered its inventor. The basic requirements were and have remained over the years, a light fuselage, cambered wings and a tail unit consisting of rudder and elevator. In 1804 Cayley built and flew his first model glider; then, late in life (half a century later, in fact), he built the world’s first man-carrying glider and made two successful flights with it.
Ten years before this, W. S. Henson, a young engineer in the lace trade in England, had published designs for an “Aerial Steam Carriage”. Because of its impossibly heavy and cumbersome steam engine it never worked, but its design aroused a great deal of interest and argument in Europe. One of those most involved in the orgy of experiment which followed on Henson’s design was the German, Otto Lilienthal. By the time Lilienthal was killed flying in 1896 he had proved that a man-carrying glider could be successfully and continuously flown and controlled in the air. Following in his footsteps, and using Lilienthal’s designs, the Englishman Percy Pilcher had actually begun constructing his first powered machine when he was killed in a glider accident in 1899.
It was left to the Wright brothers in America to take up the challenge. So great was the prejudice against ideas of human flight that no manufacturer would design an engine for them, or build one to their own specification, and they were forced to make it themselves. No doubt there was embarrassment among the manufacturers of the internal combustion engine, the first car-makers, when the Wright “Flyer” flew with a home-made engine. At any rate, this flight triggered off a burst of activity all over the United States and Europe: if the Wrights could fly with an engine put together in their bicycle shop, think what others might do with a powerful one, designed and built by professionals!
The French, the English, the Belgians, immediately produced “aero engines”, one of the most remarkable and long-lived being the French “Gnome”, an air-cooled, seven-cylinder mechanism that rotated about its stationary crankshaft and thus cooled itself. As the “Gnome” grew bigger, and the whirling, gyroscopic effect of its cylinders threatened to become uncontrollable, they were anchored to the fuselage and the crankshaft allowed to rotate, as in the surviving piston engines of today.
In 1916 the French Hispano-Suiza came into service, an improvement on everything before, and immediately afterwards Rolls-Royce produced their remarkable twelve-cylinder “Falcon” and “Eagle” engines of 250 and 360 horsepower and established a lead in aero engines which they have maintained ever since. In 1919 a Vickers Vimy aircraft powered by two Rolls-Royce “Eagles” crossed the Atlantic from Newfoundland to Ireland with Alcock and Brown in sixteen hours. In these early days, an aero engine only had to keep the aircraft airborne, but within a few years it had become a complete power-plant, providing electricity from its generators for lighting, radio, cabin pressure pumps, hydraulic undercarriage pumps, and a host of other essential equipment. It also, as altitudes increased, had to have an automatic arrangement for restricting fuel with height, in step with the thinning air. This was followed by the supercharger, for maintaining atmospheric pressure, rather than restricting fuel. The next step was a variable-pitch propeller which allowed the engine to revolve faster for the same forward speed, in order to obtain more power for climbing, exactly as with a car’s gear-box.
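The “thinning air” the engine had to be tuned for falls off faster than one might guess. Under the standard-atmosphere model used in aviation (a modern sketch with the usual ISA constants, not figures from the text), density at 20,000 feet is only a little over half its sea-level value, which is why mixture had to be leaned automatically with height and why the supercharger was eventually needed:

```python
import math

# Air density in the troposphere under the International Standard
# Atmosphere: sea level 15 C, 1.225 kg/m^3, lapse rate 6.5 K per km.
T0, LAPSE, RHO0 = 288.15, 0.0065, 1.225   # K, K/m, kg/m^3
G, R_AIR = 9.80665, 287.053               # m/s^2, J/(kg K)

def density(alt_m):
    """Air density (kg/m^3) at a given altitude within the troposphere."""
    t_ratio = (T0 - LAPSE * alt_m) / T0
    return RHO0 * t_ratio ** (G / (R_AIR * LAPSE) - 1)

for feet in (0, 10_000, 20_000, 30_000):
    rho = density(feet * 0.3048)
    print(f"{feet:6d} ft: {rho:.3f} kg/m^3 ({rho / RHO0:.0%} of sea level)")
```

An unsupercharged engine loses power in roughly the same proportion as the density, so restoring sea-level pressure in the intake was worth a great deal of altitude performance.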
Slowly the various components were improved, with intense development taking place during the Second World War, as, in a more elementary way, had happened in the First. The famous Spitfire began the 1939 War with a speed of 367 miles per hour and ended it almost a hundred miles faster.
But the greatest step forward during the war, a development which came too late to have much effect on the war’s outcome, was the invention by Flight Lieutenant (later Sir Frank) Whittle of the jet engine. A gas turbine, consisting of a compressor for incoming air, a combustion chamber in which the air was mixed with fuel and ignited, and a turbine, similar to the larger steam turbines in ships, to drive both the compressor and an airscrew, had been discussed for years, but it was Whittle who suggested that the aircraft could be more conveniently driven, not by an airscrew but directly by the high-speed jet of hot gas coming from the engine exhaust. The idea of using some sort of jet for propulsion had been considered from the days of da Vinci (hot air, men working a bellows, steam), but it was not until Whittle recognized the gas turbine as the ideal system that the idea could be developed.
The Gloster Whittle, a single-engined jet plane, was built and flown: from then on, the jet in its several forms gradually superseded the piston engine for almost all sorts of aircraft. The earlier idea of a turbine-driven propeller, the “turbo-prop”, was developed for short-range operations, as it had the advantage over a pure jet of being economical at low speeds and heights, and requiring a far shorter runway. Its chief advantage over the piston engine, apart from a startling absence of vibration, was its lightness: it became possible to hang four engines on an aircraft’s wing where only two had been before, with a consequent increase in power of considerably more than one hundred per cent.
Pure jet and turbo-prop engines are now highly developed. Although improvements still take place, the main development in aircraft propulsion is likely to be the use of rockets, which, because they require nothing from the atmosphere, can fly in a vacuum and are the only means yet devised of travelling in outer space. Speeded by a “space race” between the Soviet Union and the United States, an all-out, fantastically expensive contest to have the first man on the moon, and beyond, development in this field has been extremely rapid.
And yet probably no foreseeable development is likely to have the significance of that first powered flight in December of 1903, the first time an aircraft with a man on board had left the earth, like a bird, and “sailed forward, without reduction of speed and finally landed at a point as high as that from which it started”.

Penicillin

A Victory Over Death

It was in France that the idea came to him, came during the noise and stench and dying of the trenches. The young Alexander Fleming, a trained and unusually promising bacteriologist, and therefore “reserved”, had surprised his friends and colleagues by volunteering for the trenches in 1914. He was shipped over, as a lieutenant in the R.A.M.C., and soon the wounded were coming into his hospital, hundred upon hundred of them, their wounds crawling with bacteria, and he realized, to his dismay, that there was little to aid them.
“Surrounded by all those infected wounds,” he wrote later, “by men who were suffering, dying, without our being able to do anything to help them, I was consumed by a desire to discover, after all this watching and waiting, something which would kill those microbes.”
And in his heart he knew that “something” would have to be a something which would help the body’s natural defences. The antiseptics then in use were worthless. Not only did they do nothing to prevent, for example, gangrene, they actually seemed to promote its development. For a surface wound (though there were few of these) they had some slight value: they destroyed the bacteria and, with them, because the wound was on the surface, only surface cells, which could be replaced. For deeper wounds they were worse than useless, destroying irreplaceable tissue and at the same time seeming to destroy the body’s natural power to resist infection.
But the war ended in November, 1918, and two months later Alexander Fleming was demobilized, without having found an answer to the problem.
But he was thinking, he never stopped thinking, along the right lines. He knew that his substance would have to be something, perhaps from the body itself, which would encourage it to kill invaders itself, and three years later, in 1921, he took the first step forward. He had tried various human secretions and now he found that human tears, dropped into a culture of bacteria, dissolved them with startling speed. The substance in the tear-drop which had this effect he named “lysozyme” and he soon discovered that it was contained in nail-parings, hair and skin and even in certain leaves and stalks of plants.
Unfortunately, the lysozyme, so powerful against some bacteria, had practically no effect on the dangerous ones. Its immediate use was therefore limited; but it was, as we now realize, a tremendous step forward in bacteriology. Fleming read a paper on it to the august Medical Research Club in December of 1921 and was distressed when it got a chilly reception. Eight years later, he was to get exactly the same frigid reception for his first discovery of penicillin.
During those eight years, Fleming never stopped his research into that pet theory: that something from the human body, something living, was the answer to bacteria. Lysozyme, though it never became a practical proposition, seemed to prove him right.
Could lysozyme be improved, treated in some way to make it attack dangerous bacteria with the force it launched against harmless ones? Or would some other substance be the answer?
The answer came, quite suddenly, in Fleming’s London laboratory. The lab was bursting with bits of equipment, bunsen burners, crucibles, pipettes, test tubes, Petri dishes full of colonies of bacteria ripening for examination under the microscope, rubber tubes, glass jars. During the day, the cover had been taken off some of the Petri dishes to enable them to be studied under the microscope and now, as the Scots bacteriologist chatted to a young English visitor, he lifted the lids again, one by one and looked in. Several of the cultures, he noticed, had been contaminated by mould, but this was a common occurrence: the air was full of “spores” and when the tiny reproductive organs settled in a damp place they would proliferate, put out shoots in every direction, like a strawberry plant, become a fungus. It was tiresome, Fleming admitted, but that was all. “As soon as I uncover one of these dishes,” he said, “something just drops out of the air. Right into it.”
Suddenly he stopped talking. He bent over, looked carefully into one of the dishes.
On the surface of the culture of staphylococci which he had been breeding there was a growth of mould. It seemed exactly like the mould on practically all the other dishes, but on this one, round the edges of the fungus, the colonies of staphylococci had disappeared, vanished. If he looked carefully, he could see them, but, instead of being an opaque mass, they were simply drops of dew.
He picked up a small piece of the mould with his scalpel, put it in a test tube. To the younger doctor with him there was nothing at all surprising about the fungus and its effect on bacteria: the same thing would have happened, the bacteria would have been killed, if someone had dropped strong acid into the dish. Probably the fungus was exuding some acid. After all, it was easy enough to kill bacteria in a dish. The problem was to kill them in the human body, without killing the body in the process.
“This,” said Fleming, “is really quite interesting.” He scooped out the rest of the fungus, put it carefully into another test tube, corked it. Then he turned round, resumed the conversation.
“What struck me”, the young man was to write later, “was that he didn’t confine himself to observing, he took action at once. Lots of people observe a phenomenon, feeling it may be important, but they don’t get beyond being surprised.”
The next day, Fleming began to cultivate his mould. He took it from the two test tubes, spread it on a larger bowl of the nutritive broth which the laboratory used for breeding bacteria. The fungus grew, incredibly slowly, pushing out tentacles across the surface of the broth, becoming, centimetre by centimetre, a thick, soft, pockmarked mass of green and white and black. Fleming watched it for several days. Then, quite suddenly, the broth itself, which had been a clear liquid, went a vivid yellow. Now he took a drop of this yellow liquid and placed it at the centre of the dish on which he had arranged, star-fashion, half a dozen different colonies of bacteria, each arm of the star being a different bacterium, streptococci, gonococci, staphylococci, and waited.
Breathless, he watched. Then slowly the colonies of bacteria, all of them, began to dissolve. Soon there was only the dew he had noticed before.
And now he knew, for these were serious, dangerous and even deadly bacteria, that he had found the answer to his problems. Lysozyme, his great hope of a few years back, had been almost useless against them. Ordinary antiseptics and disinfectants killed them, and killed the patient as well. This, and he was so sure of it he drank half a glassful, was a harmless substance. While he waited for any reaction he busied himself diluting the liquid and testing each dilution, from half-and-half to one part in five hundred. Still, though more slowly, it went on, killing bacteria.
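Diluting “from half-and-half to one part in five hundred” is the classic dilution series still used to titrate an antibacterial substance. A minimal sketch of such a two-fold series (the exact steps Fleming used are not recorded in the text, so this is only illustrative):

```python
# A simple two-fold dilution series: each step mixes one part of the
# previous dilution with an equal part of diluent, halving the strength.

def dilution_series(steps, factor=2):
    """Concentrations (as fractions of full strength) after each
    successive dilution by `factor`."""
    return [1 / factor ** n for n in range(1, steps + 1)]

# Half-and-half, then 1/4, 1/8 ... on past one part in five hundred:
for conc in dilution_series(9):
    print(f"one part in {round(1 / conc)}")
# the ninth two-fold step, 1/512, is the first weaker than 1 in 500
```

The weakest dilution that still kills the bacteria measures the potency of the original liquid, which is why Fleming tested the whole range rather than a single strength.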
It was important now to find out what the mould was, if he were ever to get any more of it, rather than having to rely on the slow breeding of the original spore which had landed on his bench. He knew almost nothing of mycology, the science of fungi, but he studied it, enlisted the help of experts, and soon was able to establish it as “penicillium notatum”, a penicillium, or mould, of the “notatum” variety.
The problem, though, was to get more of it. A second problem, and more intractable, was getting the yellow liquid into a stable enough form to be stored and used when necessary: it lasted only a short time before degenerating into an inert, useless, liquid. These two problems held up the development of what Fleming knew was a wonder drug for over ten years. In the meantime, because he was unable to produce enough of it to demonstrate anything worth while, men scoffed at him.
From the day the spore blew in through Fleming’s Paddington window, he and others who believed in him never stopped working on the extraction and stabilization of the drug. Fleming was able to perform several minor but miraculous cures with the small quantities he was able to prepare, but there just was not sufficient to embark on a major test.
By the outbreak of war in 1939, ten years after the initial discovery, penicillin still could not be produced in adequate quantity or made stable. It had to be prepared from the mould—the pitifully small quantity of mould, for each treatment. Then, with government backing, a small team under the Australian Howard Florey got together in Oxford and determined to solve the problem. Gradually they found they were able to purify small quantities of the mould by a complicated method of evaporation, and the time came to try this new drug on a patient with something more seriously wrong with him than the boils and surface infections which Fleming had managed to cure.
News came that an Oxford policeman was dying of septicaemia from a small scratch at the corner of his mouth which had infected the blood stream. On 20 February, 1941, an intravenous injection of the purified penicillin was given to the dying man and thereafter every three hours. At the end of twenty-four hours the improvement was almost incredible.
Then, as Florey and his team had feared, the penicillin which they had laboured so long to produce ran out. The patient hung on for a few more days, but the microbes, no longer attacked by penicillin, seized the upper hand, and the man died.
The drug just had to be made faster. To this end, Florey made inquiries in America, and at last the Northern Regional Research Laboratory of Peoria, Illinois, agreed to help. They had been working on uses for the organic by-products of agriculture and now, when they started work on the new drug from England, they discovered that corn-steep liquor, a by-product of maize, was an ideal medium for the growth of the penicillin fungus. They became enthusiastic, and within months Peoria was producing twenty times as much as Oxford. At the same time, they were on the look out for moulds which might give a larger yield of penicillin. Up to now every gramme of the drug that had been made had descended from the spore which landed on Fleming’s bench in 1929. Many experiments were made with moulds, but it was not until 1943 that the young woman the lab employed in Peoria to go round the markets cornering rotten fruit (they called her “Mouldy Mary”) brought back a melon. The mould from this, a “penicillium chrysogenum”, proved successful and remarkably productive.
Nowadays, almost all the penicillin in use is descended from one rotten melon bought in the market at Peoria, Illinois.
At last the real value of Fleming’s discovery was clear to everyone. Production, both in England and in America, mounted by leaps and bounds, and at first all of it was earmarked for the services. Thousands of dying soldiers, sailors and airmen were saved by the new Wonder Drug, and it was not until the end of 1944 that the military authorities felt themselves able to spare any of it for civilian patients. By this time the quiet, shy, sensitive man who had discovered it had been honoured by his king and was now Sir Alexander Fleming. In the years before he died in 1955 he was showered with honours from every nation in the world.
Penicillin, the first of the “antibiotics”, substances produced by fungi or bacteria which inhibit the growth of other micro-organisms, was the biggest medical breakthrough in the first half of the twentieth century. It was and is startlingly effective against a wide range of diseases like pneumonia which had so often been fatal, and though it has no effect on the bacteria of, say, influenza and tuberculosis, other antibiotics, developed in the manner Fleming introduced, have been effective against these. The number of antibiotics, streptomycin, aureomycin, terramycin and numerous others, is increasing yearly. There is a danger that with too widespread a use of them, bacteria may become resistant; that some day the population may be exposed to an epidemic from a new and resistant strain of, say, tuberculosis. So far this danger has been kept at bay by the proliferation of new antibiotics which have ensured that most of the resistant strains of bacteria can be dealt with by another antibiotic to which they are not, yet, resistant.
The discovery of penicillin completely revolutionized the treatment of disease. The young doctor of today can hardly realize how helpless his predecessors felt against so many deadly infections. The average expectation of life has increased so greatly that the whole structure of society is altering. All this simply because a research worker believed in the possibility of a certain drug, and trained himself to recognize it the moment it appeared.

Teaching Of Jesus

Christianity is Born in the Middle East

In the year 4 B.C., Rome was flourishing in the golden age of Caesar Augustus. Though not yet expanded to its fullest extent, the Empire included all the Mediterranean lands, including Palestine. Indeed, Palestine had been benefiting from the Pax Romana for more than half a century, and though the Jews had once or twice tried to throw off the Roman yoke, they had not succeeded.
The Prophet Isaiah, by Ugolino di Nerio
A proud and active people, they chafed under the rule of Roman Governors, but they were not unaware of their limitations, and at this time, probably more than any other, they were looking forward to the appearance among them of a miraculous king, a Messiah, who would free them from the bondage of Rome.
Such a king had been a part of Jewish religious belief for several centuries, for the Romans had not been the first foreign power to subjugate them. They held that one of their great prophets, Isaiah, had foretold the coming of such a king.
Isaiah had lived in the eighth century B.C. at the very time that the city of Rome was being founded, and he had foretold the subjugation of the Jews by Babylon, which occurred some two hundred years after his death. From the time of the Babylonian captivity (586 B.C.) the Jews suffered from succeeding foreign conquerors, and during all this time they had consoled themselves with the hope of the Messiah.
In religion, the Jews had long been distinguished from most of their ancient contemporaries by believing in and worshipping one God, as compared with the many gods of the Greeks, the Romans and the Assyrians, for example. This God, Jehovah, was all-powerful, a jealous God who punished if his commands were not implicitly obeyed. The Jewish prophets presented world history as the moral judgment of God on mankind.
This conception of God naturally regulated the Jewish attitude to life. Jehovah demanded that Man should live in righteousness. Goodness is the road to God, and by the same road God sends happiness in exchange.
From this they developed the view that the exchange is not Man’s right, but comes to him by the favour of Jehovah, and that this favour can only be obtained by Man obeying God’s commands implicitly; that is, by serving God.
Jewish national life was controlled by these beliefs, and it is interesting to notice that throughout their long history of subjugation by foreign powers they struggled not for political freedom but for the right to worship in their own way. This right was almost always accorded to them.
In practising their religion they had gradually built up a strict code of religious observances, in which ritual and ceremonial played a great part. The central point, the focus, of the religion was the Temple in Jerusalem. The destruction of the Temple which happened several times in Jewish history was always regarded as the most severe of all punishments which God could inflict.
The Temple and the local synagogues were administered by the priests. The priests constituted a special class in the community, and for many centuries they were drawn from one clan only, the Levites, the office being passed down from father to son. About 500 B.C., when certain reforms were undertaken, a higher order of priests was introduced, with a high priest at the head of them. This higher order of priests administered the Temple, and had a far more powerful influence in the lives of the people than their political leaders.
Jews Praying in the Synagogue on Yom Kippur, by Maurycy Gottlieb (1878)
To maintain this influence, they insisted on the strict observance of religious rites and festivals—the Law and the Prophets, as laid down in the Scriptures. The festivals punctuated the Jewish year to mark historical events, such as the Passover, which celebrated the exodus from bondage in Egypt, and so on. With the reforms of 500 B.C., however, a new festival was introduced. Called the Day of Atonement (the seeking of divine forgiveness for sins) and now familiarly known as Yom Kippur, it is thought to commemorate the day on which Moses came down from Mount Sinai with the Tables of the Law and proclaimed forgiveness for worshipping the Golden Calf. (The story of this can be found in the Old Testament, the Book of Exodus chapter 32 and chapter 34.)
At the time of the great festivals, the priests required as many Jews as possible to make a pilgrimage to the Temple. Those who made such a pilgrimage and performed certain sacrifices when they reached the Temple could have a greater hope of forgiveness than those who did not.
These rules and regulations, like every other, were designed to give to the priests a greater power over the people than they might otherwise have achieved. It must be stressed that in their belief that Jehovah was the only One True God, the Jews held that all who did not worship Him could not hope for salvation; and that salvation could only come to the Jews if they obeyed the Law and the priests.
This, then, was the religious situation when in 4 B.C. there was born in the village of Bethlehem, about five miles south-west of Jerusalem, a boy who was given the common Jewish name, Jesus.
According to the accounts of the birth, life and teaching of Jesus contained in four short documents known as the Gospels (“good news”), the birth of the boy was accompanied by certain miraculous events.
His mother was Mary, wife of a carpenter called Joseph, who lived at Nazareth. Shortly before they were married, Mary had been visited by an angel who had told her that the Holy Ghost would come to her and she would conceive; and though she came to Joseph a virgin, she was actually pregnant when they were married.
Shortly before the birth of Mary’s baby was due, the Emperor Augustus decreed that a census of all the inhabitants of his Empire should be taken. For this purpose, every man was to return to his birthplace to be counted.
Joseph’s birthplace was Bethlehem, and he set out from Nazareth with his wife. When they arrived at Bethlehem, they found that all the public accommodation had already been taken and that the only place that could be offered them was a stable at the inn. Here the baby was born.
The birth was accompanied by a number of supernatural happenings: the appearance, to a party of local shepherds, of a choir of angels, and the arrival of wise men from the East who had been led to Bethlehem by a moving star. The latter had been told in dreams that a king was to be born in Bethlehem who would lead the Jews out of their present bondage, a declaration which they interpreted literally, though the actual meaning was symbolic: that He would lead the Jews out of their rigid religious bondage to a state of spiritual salvation.
According to the author of Matthew’s gospel, Herod, the King of Judaea, also heard this news of the birth of a King. To avoid trouble in the future he first tried by a ruse to have the baby brought to him. But when Joseph heard that Herod was searching for the boy, he fled with his wife and the child to Egypt, and remained there until Herod died; while Herod, determined to rid himself of this threat to his throne, ordered the massacre of all the male children in Bethlehem who were two years and under, hoping thereby to include Jesus.
The next we hear of Jesus is on His achieving the status of manhood at the age of twelve. Following religious custom, Joseph took his family up to Jerusalem at feast-time to worship in the Temple. On the journey home, they found the boy missing, and on hurrying back to Jerusalem discovered Him in the Temple arguing with the theologians there.
When Joseph rebuked the boy for not staying with the family, Jesus replied, “Did you not realize that I must be about my Father’s business?” thereby demonstrating that from infancy He was conscious of having been sent into the world from God to accomplish some specific task.
For the next eighteen years, however, He lived in Nazareth in obscurity, working as a carpenter. After the death of Joseph it is probable that as head of the family He supported His mother and brothers and sisters.
When He was not quite thirty, His cousin John began to make a name for himself in Judaea as a prophet. John’s preaching foretold the coming of a saviour, the Messiah whom Isaiah had preached five hundred years earlier.
It seems that Jesus realized now that John was referring to Him, and that He must begin the special work for which He had been born. So He went to John, and was baptized by His cousin in the river Jordan.
Gathering round Him a few young disciples, He began at Capernaum, on the Lake of Tiberias, in Galilee, a ministry of teaching and healing.
The main theme of His preaching was the coming of the Kingdom of God on earth. In parables which attracted both attention and curiosity, He described the nature of this kingdom or rule of God which He had come to initiate. At the same time, by restoring the sick to health, by feeding the hungry and by raising the dead to life, He demonstrated the divine mercy which was so different from the jealous and awful judgments which Jehovah passed on those who did not obey His commands.
The true God was a God of mercy and forgiveness; and His own role was that of the Saviour of mankind from the results of their sins.
The essence of His teaching is to be found in what we now call the Sermon on the Mount. Beginning with the Beatitudes (Blessed are the meek, the poor in spirit, they that mourn, they that seek righteousness, the pure in heart, the peacemakers, the merciful and those who are persecuted for the faith) and including the Lord’s Prayer, the Sermon sets out clearly Christ’s moral code, which may be summed up as love of one’s enemies, tolerance, honesty, simplicity and meekness.
This teaching, if not in direct conflict with the teaching of the priests, was so different from it, and so appealing in its fresh conception of God as Love, that people were drawn to Him and collected in great crowds wherever He went. This naturally brought Him into collision with the religious authorities, who recognized that if His influence spread it could mean the end of their own doctrinaire teaching and destroy the privileges which the ancient system granted them; in other words, it threatened their authority over the people.
From the early days of His ministry, therefore, the religious leaders determined to put Jesus to death.
For His part, Jesus recognized that only through death could He accomplish His task: the seed must fall into the ground and die in order to live.
He had always made a practice of going to Jerusalem for all the festivals, and when He visited the Temple for the Passover in the third year of His ministry He was conscious that the end was very near. By raising Lazarus from the dead and by cleansing the defiled Temple, He deliberately provoked the priests to action against Him.
Through the treachery of one of His disciples, Judas Iscariot, He was quietly arrested after praying in the Garden of Gethsemane; an illegal trial was hurriedly held during the night; and on the morning of the Feast the religious authorities demanded that the Roman Procurator, Pontius Pilate, should authorize the crucifixion of their victim.
The Roman sense of justice at first rebelled against this application to have murder judicially approved and permitted, for Pilate had seen behind the arguments put forward by the religious authorities and had observed that they were not valid. However, when the High Priests threatened to denounce Pilate to his jealous and suspicious Emperor, Tiberius, Pilate agreed, though he made a show of refusing responsibility for the judgment. So Jesus was crucified on Mount Calvary probably in A.D. 29 or 30.
This is the full extent of the historical life of Jesus as we know it. His disciples claimed, however, that after His body had been three days in the tomb, He rose again, and the Gospels give accounts of several appearances which He made to several of His followers. The last of these appearances occurred forty days after the crucifixion.
On this occasion He appeared to the eleven disciples as they were meeting together in Jerusalem. After talking with them for a time, He asked them to go out with Him; and He went with them as far as Bethany, two miles east of Jerusalem, to the Mount of Olives. While He blessed them there, “he was taken up to heaven”.
Though the religious leaders imagined that with the death of Jesus they would remove the threat to their own authority, they quickly discovered that they were wrong.
Obeying Jesus’ command, His followers, after a short period of frightened disorganization, began to preach the faith which He had preached. It was the beginning of a mission which has lasted throughout the intervening two thousand years and which has spread to every corner of the world. No other religion has had so great an influence on the personal lives of so many people. Christians outnumber Hindus three to one, Muslims more than three to one and Buddhists four to one.
Christianity recognizes Man’s claim to a highest good, and promises blessings which constitute a full salvation for the individual. By his very nature, Man is conscious of his imperfections, especially in the realm of moral rectitude. Christianity offers release from the guilt and the penalties of sin, and holds out hope of final salvation for the individual. It is, in fact, this stress upon the individual’s relationship with God that distinguishes Christianity from other religions.
The principles of the Good Life as laid down by Jesus are without doubt the best rules which a man can follow to achieve the greatest spiritual fulfilment, whether he accepts Christianity as a faith, or whether he has no faith at all.