
Category Archives: This stuff actually interests me

We all know that the Ancient Greeks thought everything in the universe was made up of four indivisible elements: Earth, Air, Water and Fire. And we all snigger superciliously at their ignorance, because we all also know that Earth and Air are mixtures, pure Water is a compound, and Fire is – wait, what is Fire?

If you were ever curious enough to ask that question, there is a significant chance that you received one of the following incorrect answers:

1. Fire is pure energy. I always found it pretty hard to wrap my head around this idea. I mean, what exactly is “pure energy”? Mass times the speed of light squared?

Well, okay, I suppose light and heat could plausibly be suggested as forms of “pure energy”, but both can be understood in terms that go a bit further than just “pure energy”. Light is often well described as an electromagnetic wave, and heat can be described as the stored kinetic and potential energy of atoms and molecules. In contrast, if we called fire “pure energy” and just left it at that, we would really just be saying that we had no idea what it was.

2. Fire is a plasma. This answer isn’t actually necessarily wrong – fire can create a plasma. However, the fires most of us think of when we ask the question (candle flames, forest fires, burning buildings, Molotov cocktails, etc) almost never do create a plasma. You can be pretty sure of this because these ordinary fires are not affected by electricity or magnetism. A plasma – an ionized gas – would be affected by both.

Now that we’ve got those out of the way, let’s look at the correct answer to the question What is Fire?

Fire is a mixture of incandescent matter. 

Nice, simple, one-line answer, isn’t it? I wanted to give you that right up front, so you don’t get distracted by some of the complicating details we’ll go into next.

THE INGREDIENTS OF FIRE

Here’s the first of them. A fire can exist only in the presence of these four ingredients: heat, fuel, oxygen and a chain reaction. The fires you see around you are nearly all created during a combustion reaction between an organic compound (the fuel – an example would be the butane in your lighter) and oxygen. However, these reactions don’t usually start spontaneously – you need to provide heat to the fuel-oxygen mixture to get them started. Once the reaction gets started though, it often releases enough heat to keep itself going until all the fuel/oxygen is used up. Thus, a chain reaction keeps the fire going.
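To make the fuel-plus-oxygen part concrete, here’s the balanced overall equation for the butane example (assuming complete combustion; real flames burn incompletely, which is where the soot and carbon monoxide come from):

2 C4H10 + 13 O2 → 8 CO2 + 10 H2O + heat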

The four key ingredients in any fire are eloquently summed up by the following diagram, called the fire tetrahedron:

We now know much more than we did when we first asked the question. Here’s a quick interim summary: Under special conditions, a fuel/oxygen mixture reacts in a self-sustaining way to release incandescent matter (both gases and un-combusted solids like soot) that we perceive as fire. There’s just one last thing we need to clear up: what exactly does incandescent mean?

INCANDESCENCE

Once again, let’s keep things simple. We’ll start with the fact that anything with a temperature above absolute zero (0 Kelvin, or -273 degrees Celsius) emits electromagnetic radiation in a process called thermal radiation. Why? Because all matter consists of charged particles (e.g. electrons and protons), and when you accelerate a charged particle, it gives off electromagnetic radiation (see the Larmor formula). So what’s accelerating the charges? Temperature is – when a body gains heat, its atoms and molecules begin to move about randomly (in fact, this is part of the definition of temperature), bumping into one another and thus accelerating their charges.
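If you like to see numbers attached to ideas like this, here’s a minimal Python sketch of the Larmor formula. The acceleration plugged in at the end is an illustrative made-up figure, not a measured one:

```python
import math

# Physical constants (standard CODATA values)
EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m
C = 2.99792458e8              # speed of light in vacuum, m/s
E_CHARGE = 1.602176634e-19    # elementary charge, C

def larmor_power(charge: float, acceleration: float) -> float:
    """Power (watts) radiated by a point charge: P = q^2 a^2 / (6 pi e0 c^3)."""
    return (charge ** 2) * (acceleration ** 2) / (6 * math.pi * EPSILON_0 * C ** 3)

# An electron jolted at an (illustrative) acceleration of 1e20 m/s^2:
print(larmor_power(E_CHARGE, 1e20))  # ~5.7e-14 W
```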

Right, so everything around you is emitting thermal radiation in some part of the electromagnetic spectrum. The point I’m trying to get to is that some of that thermal radiation falls in the visible part of the spectrum – and that is called incandescence. Tungsten filament bulbs work by heating tungsten to the point where its thermal radiation is in the form of visible light – i.e., by incandescence. (By the way, fluorescent lights work in a very different, and fascinating, way, but let’s save that story for later.) The flames in ordinary fires also give off light through incandescence.
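Here’s a rough way to see why temperature matters so much, using Wien’s displacement law. This treats everything as an ideal blackbody, so take the numbers as ballpark figures only:

```python
WIEN_B = 2.897771955e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temperature_k: float) -> float:
    """Wavelength (nm) at which an ideal blackbody's emission peaks."""
    return WIEN_B / temperature_k * 1e9

print(peak_wavelength_nm(300))   # room temperature: ~9659 nm (far infrared)
print(peak_wavelength_nm(1500))  # a cool flame: ~1932 nm (still infrared)
print(peak_wavelength_nm(3000))  # tungsten filament: ~966 nm, with a healthy
                                 # tail spilling into the visible range
```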

Well, there you have it. Fire is a mixture of incandescent matter. You may now go back to mocking Aristotle.

Oh, one last thing, though: a little treat for having stuck around this long. Below is an image of a candle in space. As you’ll notice, its flame is pretty different from the ones we’re used to. It’s perfectly spherical, because in microgravity, there’s no “up” for the hot gases to go to, and they spread equally in all directions.

In a microgravity environment, a flame is spherical in shape

Remember how there’s a spell in the Harry Potter novels that allows you to erase someone else’s memories (I think you’re supposed to say “obliviate” when you cast it)? Well, it turns out that, once again, boring old Science has proved itself capable of replicating the effects of Magic. To some extent, anyway.

You may have heard of electroshock therapy, and you probably pictured something like this when you did:

Electroconvulsive Therapy - Frankenstein's legacy?

But the truth is, electroconvulsive therapy (ECT), as it’s known today, isn’t really that dramatic, and usually doesn’t require a murderously deranged doctor.

ECT is a psychiatric treatment that’s used to relieve the effects of some kinds of mental illnesses (including, prominently, severe depression). As far as I can tell, ECT is almost as simple as it looks: you hook up a few wires and induce a few seizures by passing electric currents through the patient’s brain. But before you let your imagination run away with you, note that the patient is under general anesthesia, is given muscle relaxants to prevent major convulsions, and the currents are usually tiny.

Although it’s great that people often feel a lot better after ECT, it’s a bit scary to think that no one knows why or how it works. We do know, of course, that electrical currents passing along neurons are part of the foundation of how the brain works; but that doesn’t explain why a sudden shock to a generalized area of the brain should relieve a wide variety of symptoms.

Anyway, here’s the interesting thing: experiments with patients who’ve received ECT have found that they often incur amnesia after the therapy, sometimes forgetting things as far back as a few years. This would be horrible if not for the fact that this kind of severe amnesia following ECT is almost always temporary – the lost memories usually do return in a few days or weeks (again, how this might happen is a mystery).

However, there are a few memories that usually do not return: those of events immediately before and after the administering of the ECT. And that’s what you could use as a Memory Charm. This could, of course, be of immense practical value. For instance, so long as you’re quick about it, you could actually get that certain someone to forget something extremely embarrassing you’ve just done in front of him/her.

Now if only we could fit ECT apparatus into a little wand-shaped thingy…

About two posts back (“What Am I” – a short play) I mentioned the question “Does the evidence of our eyes tell us about the true nature of the world?” as one that philosophers have been mulling over for more than 2500 years. I think the question itself might require a bit of explanation.

Let’s say you’re looking at an orange (there’s one right here for your convenience). I know that it’s perfectly natural to believe that your eyes aren’t lying to you about the fact that its colour is, well, orange. It feels as though the evidence of our eyes is enough for us to say something completely non-subjective about the orange – that it is, in fact, orange.

An orange.

But that’s not quite true. What is non-subjective (or at least a little more so) is that the orange is reflecting light mainly in the 590–635 nm wavelength range. Your brain perceives that light as being orange in colour, but that doesn’t necessarily mean that “orange-ness” is in any way a fundamental property of that light.

Consider this: visual information is not carried from the orange in front of you to your brain through fibre-optic cables in your head; it’s converted to electrical charges in your eye and then sent along nerves (in a process called phototransduction; and by the way, isn’t it astounding that we actually know this stuff? Three cheers for medical science!) to the brain, where it’s processed to form visual imagery.

In other words, not a single ray of light has ever reached your brain (well, unless you’ve had a lobotomy, but let’s ignore that possibility for now). Light is something that we cannot experience directly. It’s as though we’ll never be able to listen to an opera, but we can make some sense of its sheet music. And more than that, it isn’t even true that the notation in our sheet music is somehow the “right” one and that there’s no other way to record the opera. Many animals see colour very differently from how we do; some can even see ultraviolet light, which we can’t.

As a matter of fact, colour as a whole is an illusion that our brains create in order to help us deal with the world around us. Look at the table below, and consider the “border” between orange and red light – the wavelength of 635 nm. Truth is, it doesn’t really exist. It’s not as though light with a wavelength of 634 nm has a property of orange-ness while light at 636 nm acquires a property of red-ness. That’s just how our brains perceive things, for some evolutionary reason.

This is how human beings perceive light of adequate intensity between 450 and 700 nm in wavelength
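To make the arbitrariness of those category borders concrete, here’s a toy Python sketch that buckets wavelengths into named colours. The band edges are the conventional approximate ones (matching the 590–635 nm orange range mentioned above); nothing physical happens at any of them:

```python
# Approximate colour bands for light between 450 and 700 nm
BANDS = [
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 635, "orange"),
    (635, 700, "red"),
]

def perceived_colour(wavelength_nm: float) -> str:
    """Name the colour bucket a wavelength falls into (450-700 nm only)."""
    for low, high, name in BANDS:
        if low <= wavelength_nm < high:
            return name
    return "outside the 450-700 nm range"

print(perceived_colour(634))  # 'orange'
print(perceived_colour(636))  # 'red' -- yet the light itself barely changed
```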

Finally, here’s a treat for having stuck around for all this: a black-and-white photograph of a rainbow. Notice that there’s no way to tell that there are different colours in there – black and white cameras don’t have brains like ours, which lump different wavelengths into fixed categories of colour. A rainbow spans a continuous spectrum of colours; the distinct bands that we normally see are an artefact of human colour vision, and no banding of any type is seen in this black-and-white photo (only a smooth gradation of intensity to a maximum, then fading to a minimum at the other side of the arc).

A black-and-white image of a rainbow

“Philosophy isn’t practical, like growing food or building houses, so people say it is useless. In one sense those basic tasks are much more important, it’s true, because we couldn’t live without them… We eat to live, but we live to think. And also to fall in love, play games and listen to music, of course. Remember that the practical things are only about freeing us up to live: they’re not the point of life.”

That’s from If Minds Had Toes, by Lucy Eyre. It’s a wonderful book that works as a sort of advertising campaign for the subject of philosophy (which, let’s face it, is dismissed as a waste of time by a whole lot of people) while also being funny, imaginative and intensely thought-provoking. I wouldn’t quite rate it RREHR, but perhaps just a notch lower, as EM-BGIB: Educated Minds should have a Basic Grasp of the Ideas in this Book… Yeah, that’s not catching on, is it?

'If Minds Had Toes' – a simply written but intriguing tale of a 15-year-old's introduction to the lofty ideas of philosophy

Anyway, quick summary: Socrates and Ludwig Wittgenstein – two giants of philosophy – are having an argument about whether knowing something about philosophy can actually make people happier. They decide to settle the matter with an experiment; they spend a few weeks guiding a fifteen-year-old boy, Ben, through the world of philosophy. Ben is introduced to timeless philosophical questions such as “Does the evidence of our eyes really tell us about the true nature of the world?” and “Is free will an illusion?”. And even though at first he would rather think about girls and football than that sort of thing, in the end, he does come away a changed person, with a new perspective on life. Socrates wins the bet and the “bad guy” Wittgenstein goes back to sulk in a corner somewhere.

Ludwig Wittgenstein (1889-1951), the bad boy of philosophy

In order to summarize one of the most interesting ideas expounded in the book, I decided to write a (short and inelegant) play. Here it is:

“What Am I”

A Short Play

Individual A: Hi there! I just wanted to ask you a simple question. What are you?

Individual B: Um, hey. You again. Well, since I know you’re not gonna leave me alone before I answer your stupid question – I’m a human being.

Individual A: But that doesn’t answer my question. That just places you in a category. It doesn’t tell me what you are. If I were an alien from outer space, who’d never even heard of Earth, do you think “I’m a human being” would tell me anything useful?

Individual B: Ugh. Fine. [Gestures towards himself/herself] This body, and everything in it, is me.

Individual A: [grabs B’s finger] What about this? Is this you?

Individual B: Sure, why not?

Individual A: So if I cut this off and sent it to France, would you say you had gone to France?

Individual B: Damn it, no! Only my whole body is me!

Individual A: So amputees aren’t complete human beings?

Individual B: Er, no, that’s not what I meant to say.

Individual A: Of course not. Let’s say you meant to say that your body is something very closely associated with you, but it’s not you. The most fundamental part of what you are is the part that you would recognize as you even in complete isolation. That’s why you wouldn’t consider your own recently dead corpse to be you. So what about the stuff that’s in the particular part of your body that you call the brain? Your memories and experiences – are they what makes you you?

Individual B: [a little more interested now] Yeah, I guess that could be it. That must be it, right?

Individual A: Sorry, no, I don’t think so. We’re in murkier territory now, but I really don’t think it would be impossible to upload all your sensory and emotional data onto a computer’s hard disk (if not now, then in a decade or two). But that wouldn’t mean that you had become the computer, or that there’d be two yous.

Individual B: But, then… What the hell am I?… All that’s left is… I must have a non-physical, supernatural soul…

Individual A: Thankfully, there’s a way to avoid that. But it’s not easy to grasp. You are not simply your past and your present, but also your future. You are a continuous event that spans a certain time period. This continuous event is made up of lots of little experiences. Even at the moment you are born, the experiences just before your death are a part of you. You are a pan-dimensional being, because your self-awareness transcends space and time.

Individual B: Whoaa. That is so cool. Philosophy rocks!

[Hugs and high-fives ensue…]

–The End–

What is it with German megalomaniacs and moustaches? Observe:

Left: Kaiser Wilhelm II; Right: Adolf Hitler

On the left we have Kaiser Wilhelm II (1859-1941), ruler of the German Empire during World War I (“the most evil German to ever live” – according to the Simpsons, anyway). And on the right is good old Adolf (1889-1945). You’ll notice from the dates that it’s quite possible that they met at some point to discuss their respective moustaches – oh, and World Wars; I guess that was a common interest too.

A friend recently brought to my attention the LHC@Home project, a distributed computing project dedicated to analyzing the data generated by CERN’s Large Hadron Collider (data totaling 15 million gigabytes a year!), and possibly even finding the Higgs boson. A distributed computing project is basically one that attempts to solve a problem using a large number of autonomous computers that communicate via a network (in the case of LHC@Home and other “citizen science” projects, this network is, of course, the Internet). The overall problem is divided into several small tasks, each of which is solved by one or more computers.
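In case a picture in code helps, here’s a toy Python sketch of that core idea – one big job cut into independent work units whose results are combined at the end. This is an illustration only; real platforms like BOINC add scheduling, validation and redundancy on top, and the “work” here is just summing squares, not simulating protons:

```python
from concurrent.futures import ProcessPoolExecutor

def work_unit(chunk: range) -> int:
    # Stand-in for a real task (e.g. simulating one batch of proton paths):
    # here it's just a sum of squares.
    return sum(n * n for n in chunk)

def run_distributed(n: int, workers: int = 4) -> int:
    step = n // workers
    chunks = [range(i * step, (i + 1) * step) for i in range(workers)]
    # Each "volunteer computer" is played here by a local worker process.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(work_unit, chunks))  # combine partial results

if __name__ == "__main__":
    print(run_distributed(1_000_000))  # sum of squares of 0..999999
```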

Modern PCs are powerful enough to be useful in solving extraordinarily complex problems, such as modelling the paths of beams of protons. And it’s not even like you’ll notice that your computer seems a bit sluggish and distracted (as human beings often get when thinking about things like the origins of the universe and the Higgs boson), because distributed computing projects use software platforms that only allow your PC’s resources to be shared when your system is idle – i.e. when you’re not doing anything. So you only really notice anything when your PC’s been idle for long enough for a screen saver to start up.

I had the Rosetta@Home project (more on that later) installed on my old PC, and I can tell you this: the visualizations that the software creates as a screensaver while working on the distributed project are actually quite mesmerizing. I expect the same will be true of the LHC@Home project.

Rosetta@Home screen saver

If you’re interested in joining a distributed computing network, I’d recommend first installing a piece of software called BOINC – the Berkeley Open Infrastructure for Network Computing. BOINC was developed by the University of California, Berkeley, and was first used as a platform for the SETI@Home project, but now it’s used in nearly all distributed computing projects.

The BOINC platform is used by many distributed computing projects

Finally, here’s a list of a few other notable distributed computing projects that you might consider joining:

1. Rosetta@home: Geared mainly towards basic research in protein structure prediction, but the results of this research have vast potential in curing dozens of diseases. Implemented by the University of Washington.

2. Folding@home: Created by Stanford University to simulate protein folding. Note that this is one of the few distributed computing projects that does not use the BOINC framework.

3. SETI@home: The SETI Project does exactly what its name suggests – Search for Extra-Terrestrial Intelligence – by analyzing radio telescope data. Like BOINC itself, it was created by the University of California, Berkeley.

4. Einstein@home: Analyzes data from the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the USA and GEO 600 (another laser interferometer observatory, in Germany) in order to confirm direct observations of cosmic gravitational waves, which Einstein predicted but which have never been directly observed.

5. MilkyWay@home: MilkyWay@home uses the BOINC platform to create a highly accurate three-dimensional model of the Milky Way galaxy using data gathered by the Sloan Digital Sky Survey. This project enables research in both astroinformatics and computer science.

So there you have it: you can help cure cancer, discover alien life or radically change our view of the physical universe. What’re you waiting for? Screen Savers of the world, unite!

UPDATE: Here’s a more complete list of BOINC projects: http://lhcathome.web.cern.ch/LHCathome/Physics/

UPDATE 2: I now have Rosetta@Home installed again. Yay, I’m making an actual (although tiny) contribution to Science! I’ll just go put a tick on my List of Things To Do in Life next to that.

Firstly, let me just say that it’s getting a lot harder to write the kind of full-length posts that I actually prefer writing; so from now on you might find that The Folly of Human Conceits will have a lot of mini-posts about whatever the latest thing to catch my fancy is. And here’s what it is as of right now: The Postmodernism Generator! (available at the following site: http://www.elsewhere.org/pomo/)

So, really quickly, here’s the thing: the word “postmodernism”, outside the limited context of architecture, doesn’t actually mean anything. Yes, you might hear it dropped quite often in certain intellectual circles, but these are generally the kind of intellectual circles in which the intellectuals have quotation marks around them, if you know what I mean. They’re the kind of intellectual circles in which passages of writing such as the following might actually be considered impressive, rather than ludicrous:

“We can clearly see that there is no bi-univocal correspondence between linear signifying links or archi-writing, depending on the author, and this multi-referential, multi-dimensional machinic catalysts.”

You might be tempted to think that postmodernist thought such as that is just a bunch of complete nonsense – and you would pretty much be right! The physicist Alan Sokal concocted gibberish like the above and sent it to the American journal Social Text in 1996. And despite the fact that it was nothing more than a carefully crafted parody of postmodernist writing, the editors of the journal accepted the article and published it!

In the words of Richard Dawkins [in his essay “Postmodernism Disrobed”]:

“Sokal’s paper must have seemed a gift to the editors because this was a physicist saying all the right-on things they wanted to hear… They didn’t know that Sokal had also crammed his paper with egregious scientific howlers, of a kind that any referee with an undergraduate degree in physics would instantly have detected.”

Sokal later expressed regret at having failed to jam even more syntactically correct, but completely meaningless, sentences into the parody article he sent to Social Text. Apparently, he just didn’t have the knack for creating them. And now, finally, enter The Postmodernism Generator – a program that generates an entirely new postmodernist essay every time you visit the site! Every essay is grammatically correct, but complete nonsense! You have to try it to get it. Enjoy!

N.B.: You might also consider submitting one of these to a college professor whom you suspect never reads any of the essays you submit!

An illustration from the second edition of The Canterbury Tales, printed in 1483

Sometimes the wide range of things that I manage to find interesting surprises even me. Regular readers of this blog – including Batman, Dracula, Cinderella, and several other seriously non-imaginary people – may even realize that the name of the blog was chosen because of my interest in humanity as a whole, and all of the inconsequential little things that humans get up to (cf. “About this Blog”).

For no good reason, I’m currently reading The Norton Anthology of English Literature (“Revised Edition”; but seriously, has that ever induced anyone to buy a second copy of the same book? Would anyone notice if they branded the first printed edition of the book that way? ). I’m barely two hundred pages into the 1900+ page tome, but already I’m having a lot of fun.

Right now, I’m reading The Canterbury Tales, by Geoffrey Chaucer (1343 – 1400). Chaucer’s lively narration shines especially bright in comparison to the grave, dignified prose of Beowulf, which preceded it in the book. Don’t get me wrong, if you’re willing to take Beowulf on its own terms – a mental exercise that mainly entails picturing very big, hairy, well-armed and belligerent Vikings (it was written in the 8th century AD) gathered around a roaring fire, listening as one of their elders intones the lines of the poem – it’s a marvelous work. But there’s something much more human about Chaucer’s wit, his impudence, and his use of believable characters.

As an example, I’ve included a few lines in italics below. They’re all from “The Wife of Bath’s Tale”, which might be considered one of the “chapters” of The Canterbury Tales. And in case you didn’t know, the language that Chaucer wrote in wasn’t quite the one modern English speakers use; it’s now known as Middle English. It came into use after the Norman Conquest of England (1066 AD) and was replaced by Early Modern English mainly in the Elizabethan Era (1558 – 1603), the time of Shakespeare. The name “Old English” – the language in which Beowulf was written – is a bit of a misnomer, because, trust me, it’s nothing like English.

I think you don’t really get the feel of Chaucer’s work unless you read it in his language, so I’ve given the lines first in Middle English, then included a Modern English translation as well. The Canterbury Tales were composed in rhyming couplets and in iambic pentameter; what that means is that generally, each line is supposed to have five stressed syllables, and the endings always rhyme. And without going into any technical details, here’s a quick tip on how to pronounce the words: with a heavily exaggerated Scottish accent. Anyway, give it a shot:

“For hadde God commanded maidenhede,

Thanne hadde he dampned wedding with the deede;

And certes, if there were no seed ysowe,

Virginitee, thane whereof sholde it growe?”

And in Modern English:

“For if God had commanded maidenhood [it means ‘virginity’ here]

Then with that same word had he condemned marrying.

And certainly, if no seed were sown,

From where then should virgins spring?”

And to further translate into vernacular, here’s what those last two lines mean: “God can’t have been that fond of virgins, because where are virgins supposed to come from unless – well, unless people lose their virginity once in a while?” The irony is delicious.

There’s one vital fact that you must keep in mind when considering those lines: they were written at the height of the Catholic Church’s dominion over the Western world. And yet Chaucer had the audacity to use a female character – the eponymous “Wife of Bath” – to poke fun at Christian doctrine, making her unapologetically challenge several commonly held preconceptions about women.

And here’s another pair of lines that would have gained the approval of some Medieval Barney Stinson:

“In womman vinolent is no defence –

This knowen lechours [lechers] by experience”

Here’s a (rough) translation of that: “As every player knows, a drunk chick can’t say no.”

And finally, here’s another part that I really liked:

“Of alle men yblessed mote [may] he be

The wise astrologen [astrologer] daun [master] Ptolemy,

That saith this proverb in his Almageste:

‘Of alle men his wisdom is the hyeste [highest]

That rekketh [cares] nat who hath the world in honde [hand].’”

Those lines translate to:

“May he be blessed of all men,

That wise astrologer, Sir Ptolemy,

Who says this proverb in his book Almagest,

‘Of all men, he who never cares who has the world in hand

Has the greatest wisdom.’”

I can’t help but feel that there’s Deep Truth in those last two lines. In fact, they’re going on The Folly of Human Conceits’ “Memorable Words” page.

If you have any doubts as to the awesomeness of what is to follow, go back and reread the title of this post – Dinosaur Comics! Created by Canadian Ryan North in 2003, these comics deal with everything from love to linguistics, history and science fiction, the nature of Good and Evil… and lots more.

And what makes them even more interesting is their format – Dinosaur Comics is what’s known as a constrained comic. You’ll notice that nearly every comic has the same six panels, and that it’s only the dialog that changes from one to the next. This constraint makes it challenging to continue writing such a comic in a way that retains originality and interestingness (by the way, I just found out that that is a real word – interestingness).

One might say that constrained comics aren’t exactly a new idea, but are descended from some very illustrious forms of constrained art. In poetry, for instance, sonnets and Japanese haiku impose rigid constraints upon writers. And then there are those weirdos who attempt to write novels in palindromic form, or without the letter “e”. A heads up for the regular readers of this site (i.e. no one): that last constrained form sounds like so much fun to me that my next post is going to be an attempt at such a story!

Anyways, enjoy the three comics below, and for more, go to http://www.qwantz.com. I recommend going to the archives and reading them from the very beginning – 1st February, 2003. Note that Ryan North gives full permission to share his comics in any way you like, but if you do publish them publicly, just let him know by email.

EATR – The Energetically Autonomous Tactical Robot (artist's impression :P)

The Victorian Era (1837-1901) is a period of human history that I’ve always felt had an inimitable charm all its own, what with its delightful gas-lit city streets, its tailcoats and top hats, and its neo-Gothic architecture (an example of which is the U.K.’s Houses of Parliament, which were rebuilt between 1840 and 1870 after the original palace was destroyed in a fire). But without doubt, one of the most enduring icons of the Victorian Era, and the concomitant Industrial Revolution, was the ubiquitous steam engine.

Coal-fired steam engines drove everything from the great railways to the mines, the textile mills and other factories, the pumping of the domestic water supply, and the irrigation of farmland. By the 20th century, though, advances in internal combustion engines (the kind that’s in your car) and electric motors (like in the ceiling fan above you), and the adoption of oil as a fuel, spelled doom for the once-mighty steam engine.

Or did it? Maybe steam was just waiting for a comeback born of the drug-induced hallucinations of some crazy scientist at a government laboratory. I presume that that must be what happened because, seriously, no one in a normal state of mind could come up with something like EATR (Energetically Autonomous Tactical Robot): a steam-powered, vegetarian robot. It’s still primarily a concept, but a working prototype is being built.

The motive for creating something like this (apart from the obvious “Because we can!”) is that such a robot could theoretically operate indefinitely in environments where conventional fuel sources are hard to find. It’s perfect for the American Army, for instance, because it would allow them to dispatch teams of EATRs to perform reconnaissance missions in environments like forests. It could also allow human soldiers to rest while it forages for biofuels, recharges electrical devices, or even transports heavy machinery. Civilian versions of the EATR could be used for forestry patrol and for agricultural applications.

The EATR uses image-recognition software linked to a laser and a camera to recognize plants, leaves and wood. Its builders estimate that 68 kg of vegetation would provide enough electricity to travel around 160 km. Once it identifies appropriate fuel, a robotic arm gathers and prepares the vegetation before feeding it through a shredder into a combustion chamber. The heat from combustion turns water into steam, which drives a six-piston steam engine, which turns a generator, which creates electric power to be stored in batteries and delivered to EATR’s electric motors when needed. A little circuitous? You bet.

As well as using biomass, EATR’s engine can also run on petrol, diesel, kerosene, cooking oil or anything similar that could be scavenged. The ability to consume a wide range of fuels would be important if the vehicle found itself in areas like deserts, where vegetation may not be available and alternative fuel would be needed.

The robot is actually being developed by a private firm, Robotic Technology, Inc., but has received funding from DARPA – the Defense Advanced Research Projects Agency, a US government agency. DARPA is, of course, no stranger to outlandish research projects. If the ARPANET that they had created by 1970 to link government communications networks could turn into the behemoth that is the Internet today, who’s to say that we won’t soon be letting our cars out to graze at night, instead of taking them to petrol stations?

[For a hilarious press release from Robotic Technology Inc, countering media claims that EATR feeds on the dead, go to http://www.wired.com/dangerroom/2009/07/company-denies-its-robots-feed-on-the-dead/]

[Most of the information in this article is from Technology Quarterly (June 12, 2010), a publication of The Economist]

http://www.youtube.com/watch?v=_OBlgSz8sSM&feature=player_embedded

You have got to see this. After all, 190 million other people found it worth watching. According to TIME magazine, it’s the most-watched YouTube video of all time (as of March 29, 2010). It’s fascinating not only because it’s the most popular video on YouTube, but also because it’s really not the kind of thing one would think of when asked to picture what YouTube’s most popular video would look like.

[This was written in July, 2009]

In the parallel universe that most Hollywood movies are set in, global catastrophes are often masterminded by evil geniuses with sinister motives. The implicit rules of this nigh-incomprehensible world usually ensure that the audience is afforded a fleeting glimpse of the principal villain before the true nature and extent of the impending disaster is revealed. We find him in a darkened, opulent office, skulking in a high-backed leather chair; and apparently taking great care to make sure that nothing but the top of his head is visible over the back of his chair. In the portentous silence, a slowly curling wisp of smoke rises ponderously upwards from an expensive cigar held in a bejewelled hand.

We only really get to meet this villain once some hero has braved one unimaginable danger after another to arrive at his doorstep and have a word with him about what he’s doing to the world. The imposing leather chair now swings around smoothly, and we find ourselves face to face with a man whose appearance alone may have forced him into a career of extravagant criminal activities. Our villain sports slicked-back hair and a permanent sneer at the stupidity of the world around him; more often than not, a disfiguring scar of some sort graces his facial features. In Hollywood-land, if you’re the man behind the destruction of the world, you must look evil enough to be the man behind the destruction of the world.

But that’s just how Hollywood sees things.

In real life, global catastrophes sometimes just happen, in the complete absence of scheming criminal masterminds. Case in point: the global financial crisis that had begun to manifest itself in the world’s most powerful economies by around mid-2007. No one person caused the global financial crisis; rather, it was the complex interplay of the actions of several key individuals and institutions that led to the conditions of the crisis. Nevertheless, an examination of the facts and of the sequence of events could allow one to guess at which people were most culpable in bringing about the crisis. It’s a bit like a game of Clue, really.

Well. Let’s play, then.

Bubbles in the Economic Ocean

We begin our manhunt by contemplating the strange and (largely) inexplicable events that have come to be known as bubbles. For those completely uninitiated in the technical jargon that is usually used in discussing the global financial crisis, it should be pointed out right away that economic bubbles have about as much to do with the kind of bubbles you’d find in your bathtub as the physicist’s notion of work has to do with what you handed in last week as your homework.

An economic bubble has been defined as a condition where “trade in high volumes occurs at prices that are considerably at variance with intrinsic values”. What this basically means is that when an economic bubble is formed in the market for a particular commodity, a disproportionately large volume of that commodity is being produced and sold; and furthermore, the price at which the commodity is sold is considerably higher than its equilibrium value on the market. The equilibrium price of a commodity can be simply defined as a stable price at which the supply of the commodity consistently equals the demand for the commodity.
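If a worked example helps, here’s a tiny Python sketch with entirely made-up linear supply and demand curves; the equilibrium price is simply the one at which the quantity supplied matches the quantity demanded:

```python
def demand(price: float) -> float:
    return 100 - 2 * price   # made-up demand curve: Q_d = 100 - 2P

def supply(price: float) -> float:
    return -20 + 4 * price   # made-up supply curve: Q_s = -20 + 4P

# Scan prices from 0 to 100 in steps of 0.01 and pick the one where
# supply most nearly equals demand:
equilibrium = min((p / 100 for p in range(0, 10001)),
                  key=lambda p: abs(demand(p) - supply(p)))
print(equilibrium, demand(equilibrium))  # 20.0 60.0 -- price 20, quantity 60
```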

Bubbles are said to burst (or crash) when the market comes to its senses and “realizes” that too much of a commodity is being traded at inflated values. When this happens, both the price and the quantity of trade in the market fall drastically. Economic bubbles are notoriously difficult to identify (usually because the actual intrinsic values of assets in real-world markets are almost impossible to calculate). It is often only after a bubble has burst that economists are able to be absolutely certain that a bubble existed in the market in the first place.

Further adding to their mystique is the fact that no one really knows what causes economic bubbles. And interestingly enough, some economists even deny their existence. Nonetheless, an examination of the recent history of a few major markets in the developed world in terms of bubbles goes a long way in explaining what caused the global financial crisis. Of particular interest are the Dot-Com Bubble that burst in 2000, and the U.S. Housing Bubble that’s arguably still “bursting”. Let’s take a look at each one of these in turn.

The Internet Age Hits Adolescence

Chaos in the Stock Markets

Starting around the year 1998, the meteoric rise of a myriad of IT-related companies (collectively referred to as dot-coms) boosted the economies of nations throughout the developed world. Unfortunately, much of the economic value that these companies represented turned out to be – well, worthless. The rapid growth of many dot-coms was subsequently matched only by their sudden and spectacular failures. And while the dot-com collapses had widespread repercussions, it was in the stock markets that the blow to the economy was especially obvious. That’s where the Dot-Com Bubble had been residing, quietly biding its time, waiting for the opportunity to snatch the Internet Age out of its carefree childhood years.

According to the NASDAQ Composite index (which measures the performance of stocks on the NASDAQ Stock Exchange), the bubble burst on March 10th, 2000. Hundreds of dot-coms collapsed after burning through their venture capital, the majority of them never having made any net profit. “Get large or get lost” – the business model backed by the belief that internet companies’ survival depended primarily upon expanding their customer bases as rapidly as possible, even if doing so produced large annual losses – was revealed to be dangerously unsound advice. The crash of the Dot-Com Bubble caused the loss of around $5 trillion on U.S. stock exchanges, and exacerbated the conditions of the recession that occurred between 2001 and 2003.

Alan Greenspan to the Rescue?

Following the collapse of the Dot-Com Bubble, Federal Reserve Chairman Alan Greenspan initiated several policies in the United States. The U.S. Federal Reserve System (often referred to simply as the Fed) serves as the country’s central bank; it comprises twelve regional Federal Reserve Banks in major cities across the nation. The Federal Reserve manages the nation’s money supply and its monetary policy, and is responsible for attaining the (sometimes conflicting) goals of maximum employment, stable prices, and moderate long-term interest rates.

In the aftermath of the Dot-Com Crash, Greenspan set the federal funds rate at only 1% (for comparison, note that between 1999 and 2001, the rate had never been set below 4%). It’s been argued that this allowed huge amounts of “easy” credit-based money to be injected into the financial system, and therefore created an unsustainable economic boom. In other words, the economic growth that occurred between 2003 and 2007 is largely attributable to the excessively high level of credit that was sloshing around the economy at the time. All that credit wasn’t backed by enough actual assets, though; this began to become clear in mid-2007, and that’s when the whole house of cards came crashing down.

The Federal Funds Rate

In order to understand the role that the federal funds rate had to play in flooding the economy with credit, one must begin with the realization of a simple fact: that banks create money by lending. A comparison of two hypothetical scenarios will make this a lot clearer. In the first scenario, a fellow that we’ll call Christiano Kaka earns a $1000 bonus for his work as a pro footballer. But since he’s already got millions in his bank account, he figures that there’s no point in bothering to go down to the bank to deposit the money there.

Instead, he stuffs it under his mattress. He happens to be in the habit of losing his wallet, and this safety measure ensures that even if that were to occur again, he could readily get to the cash the next time he’s in the mood to hit the nightclubs. Now here’s the important thing: that $1000 is effectively dead for so long as it stays there under Christiano Kaka’s mattress. It plays no part in the economy, and doesn’t do anything useful for anybody.

In our second scenario, Christiano Kaka realizes that he’ll be travelling past the bank on his way to the nightclubs anyway, so he does deposit the money there. Christiano Kaka’s money gets added to a large pool of money composed of the deposits from all of the bank’s customers. When a young college dropout called Bill Jobs approaches the bank with his crazy schemes of starting a company that deals in personal computers, the bank’s manager decides to throw him a bone, and loans him $1000.

For our purposes, we might as well assume that the $1000 the bank loaned to Bill Jobs is the same $1000 that Christiano Kaka deposited earlier. But, of course, Christiano Kaka hasn’t lost that money; it’s still his, as he could prove by showing us his bank statement. It’s just that the money also happens to be Bill Jobs’ at the same time. As a matter of fact, the bank has created $1000 for Bill Jobs based on the $1000 that Christiano Kaka deposited. Where in the first scenario the $1000 was retired from the economy, in this second scenario it was used to create another $1000 that will go back into the economy (when Bill Jobs rents an office, buys furniture, pays employees, etc). And that’s how banks create money.

But they can’t just go around creating as much money as they please.

The law requires all banks to maintain a certain level of reserves, either as vault cash, or in an account with the Fed. The ratio of bank reserves to money loaned cannot be allowed to fall below the limit set by the Fed. Therefore, the amount of money that any particular bank can create depends upon the amount of actual money that it holds as deposits from customers. Now, whenever a bank makes a loan, the ratio of reserves to loans falls (assuming that reserves remain constant). A bank may decide to issue a loan large enough to cause its ratio to fall below the limit set by the Fed, but it must immediately raise the reserve ratio again by borrowing cash from other banks. The interest rate at which banks borrow from one another is known as the federal funds rate.

When the federal funds rate is as low as 1%, it becomes very cheap for banks to borrow from one another to make up for reserves deficits. Hence, in the interest of making profits, banks can give out much larger loans to many more people, and still remain on the right side of the law. And that’s exactly what happened in the U.S. economy. With banks handing out credit to any and all who cared to ask for it, the economy became flooded in virtual money. The benign economic conditions that prevailed between 2003 and 2007 created not only a nation of spenders, but a nation that spent money that it didn’t really have; a nation that buried itself under a mountain of debt.
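Here’s a deliberately simplified Python sketch of that money-creation loop (it ignores cash leakage, capital requirements and everything else a real banking system involves). Each round, a deposit is lent out minus reserves and re-deposited elsewhere:

```python
def total_deposits_created(initial_deposit: float,
                           reserve_ratio: float,
                           rounds: int = 200) -> float:
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the lendable, re-deposited fraction
    return total

# With a 10% reserve ratio, $1000 of base money supports up to
# $1000 / 0.10 = $10,000 of deposits in the limit:
print(total_deposits_created(1000, 0.10))  # approaches 10000.0
# The looser the effective reserve constraint (e.g. when covering shortfalls
# via cheap interbank borrowing is easy), the more money gets created:
print(total_deposits_created(1000, 0.02))  # approaches 50000.0
```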

Reaganomics Run Amuck

The deluge of credit-based free spending in the economy was the ultimate expression of a socio-political-economic ideology that had had America in its grip for more than twenty years. It was an ideology founded upon the belief that Government was the problem, not the solution. It was an ideology that declared in stentorian tones: The marketplace must be set free!

Reaganomics.

Initiated and popularized by President Ronald Reagan and his advisors in the ‘80s, the economic policies that came to be known as Reaganomics were aimed at reducing the role of the government in the economy, and allowing it to regulate itself instead. Reagan reduced taxes for the rich, decreased government oversight of financial markets, and ushered in an era of astounding fiscal irresponsibility. Reaganomics has been repeatedly criticized for raising economic inequality and throwing both the public and private sectors of the U.S. economy into massive debt.

Traditionally, the U.S. government ran significant budget deficits only in times of war or economic emergency. Federal debt as a percentage of G.D.P. fell steadily from the end of World War II until 1980; that’s when Reagan entered the scene with his own version of the New Deal[1]. Government debt rose steadily through Reagan’s two terms in office and- except for a short hiatus during the Clinton years- continued to rise right until George W. Bush left office in 2009.

The rise in public debt, however, was nothing compared to the skyrocketing private debt.

The pattern of financial deregulation that Reagan set in motion allowed American consumers access to ever-increasing amounts of credit (and hence ever-increasing levels of debt) for decades. America wasn’t always a nation of big debts and low savings: in the ‘70s, Americans saved almost ten percent of their income (even more than in the ‘60s). It was only after the Reagan-era deregulation that thrift gradually disappeared from the American way of life, culminating in the near-zero savings rate that prevailed just before the current economic crisis hit. Household debt was only 60 percent of income when Reagan took office; by 2007 it had zoomed to more than 130 percent.

It was only with the crash of the housing market in 2007 – the second major shock to the U.S. economy in the last decade – that it would become painfully clear that wanton debt as a way of life would have to be abandoned.

If You Can’t Understand ‘Em, Don’t Regulate ‘Em

Returning to the antics of Alan Greenspan in the years following the Dot-Com Crash, we find that he was also responsible for vehemently opposing any regulation of financial instruments known as derivatives. He wasn’t alone in feeling that financial markets could regulate themselves just fine: Securities and Exchange Commission Chairman Arthur Levitt and Treasury Secretary Robert Rubin also held the same view. Together, in the Clinton years, they ensured that investment banks and other financial institutions were given free rein in creating and selling these complex financial instruments.

Nonetheless, those financial institutions have little to thank Greenspan and his cohorts for; they ended up crippling themselves by their involvement in an unregulated market for complex derivatives that no one fully understood. By 2008, large portions of the derivatives portfolios of major investment banks had been reclassified as toxic assets. Lehman Brothers, Bear Stearns and Washington Mutual succumbed to the poison coursing through their veins and had to declare bankruptcy or sell off their assets under duress. Other major financial institutions, such as the American International Group (AIG) and Citibank, sustained huge losses and only managed to stay afloat with the help of the government.

Although most derivatives are relatively benign, the late ‘90s saw the proliferation of two particularly complex instruments that would later threaten the stability of the entire financial sector: collateralized debt obligations (CDOs) and credit default swaps (CDSs). Because these innovative new instruments offered lucrative payments in times of economic growth and rising asset prices, they spread like wildfire in the years leading up to the current financial crisis.

All derivatives derive their prices from the value of an underlying asset (except credit derivatives, which derive their prices from the values of loans). Investors can make profits on derivatives if they correctly anticipate the direction that the prices of underlying assets will move. Since hardly anyone had foreseen the appalling conditions of the current crisis, it’s probably fair to say that losses were made on derivatives of all kinds, but it was the fact that CDOs and CDSs became extremely popular in the (at the time) flourishing housing market that resulted in their posing such a great threat to the stability of many financial institutions.
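To make the mechanics a bit more concrete, here’s a toy cash-flow sketch of a credit default swap from the protection buyer’s side. The numbers are entirely made up, and real CDS contracts involve quarterly payments, recovery rates and much more:

```python
from typing import Optional

def cds_buyer_net(notional: float, annual_premium: float,
                  years: int, default_year: Optional[int]) -> float:
    """Net cash to the protection buyer over the life of the contract."""
    net = 0.0
    for year in range(1, years + 1):
        net -= annual_premium      # premium paid every year the loan survives
        if default_year == year:
            net += notional        # a default triggers the payout and ends it
            break
    return net

print(cds_buyer_net(1_000_000, 20_000, 5, default_year=None))  # -100000.0
print(cds_buyer_net(1_000_000, 20_000, 5, default_year=3))     # 940000.0
```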

The risks of subprime mortgages in the housing market (which we’ll come back to later) were commonly spread out using CDOs and CDSs. It seemed to be a win-win situation for everyone involved: investment bankers could get in on the profits from the housing market, mortgage lenders could spread the risks of questionable loans, and American consumers benefitted from an infrastructure that encouraged offering home financing for all.

But once the Housing Bubble burst and house prices crashed down to nearly-inconceivable levels, those groups were left staring stupidly at one another.

The valuation of CDOs and CDSs is actually so complicated that no one could say for sure what they were worth once the housing market crashed. And without a price that all parties could agree upon, the markets for these derivatives became completely dysfunctional. As a result of the unfortunate marriage between questionable loans and the complex derivatives that securitized those loans, many American citizens lost their homes through foreclosures; mortgage lenders lost billions in loan defaults; and investment banks found themselves laden down with assets that were literally more trouble than they were worth.

Then the government stepped in to clean up the mess.

If only it had done that on a regular basis (and amidst less catastrophic circumstances) through the implementation of a more rigorous regime of financial market regulation.

Houses under the Sea

A Cycle of Folly

It is a regrettable fact that the powers that be in America and other developed nations seem to have very short memories when it comes to the economy. Time and again, the painful lessons learnt during economic hardship were thrown out the window once things turned for the better. The stewards of the American economy would thereby doom themselves and their compatriots to relive the suffering born of past mistakes by making the same mistakes again and expecting different results. Only once catastrophe struck again would they realize that they had brought it upon themselves by ignoring the experiences gained from the last such catastrophe. But catastrophes don’t last forever; and as the most recent one faded into the past, the collective knowledge gained from the last two would disappear from the consciousness of the nation…

And so the cycle of folly that has played such an important role in determining the economic fortunes of the nation has gone on.

It began with the Great Depression. One of the most important initiatives taken to revive the economy under President Roosevelt’s watch was the passage of the Glass-Steagall Act in 1933. The Act provided for more stringent regulation of the banking sector, and aimed to prevent a repeat of the banking collapse of early 1933. Among other things, it prohibited any one institution from acting as a combination of a commercial bank, an investment bank, and an insurance company; it gave the Fed the ability to control interest rates; it created the Federal Deposit Insurance Corporation (FDIC) to insure bank deposits in commercial banks; and it imposed stringent restrictions on mortgage lending.

Roosevelt also spurred the government to increase spending in the economy as part of his New Deal programs. The drop in public expenditure that marked the cessation of these programs created another recession in 1937. From then onwards, significant decreases in public expenditure regularly led to dismal economic conditions. This happened again in 1953 when large portions of public spending were transferred to national security projects during the Korean War. President John F. Kennedy managed to halt the recession of 1960 by calling for increased public spending in the economy. In the ‘70s, the diversion of funds to the military during the Vietnam War (alongside the quadrupling of oil prices by OPEC in 1973) created another major recession.

And then came the ‘80s. Ronald Reagan took the helm at a time of economic prosperity (“It’s morning again in America!” he would proclaim), and undertook the most radical overhaul of the nation’s economic policies since F.D.R.’s New Deal. He convinced American citizens that their government had no business prying into the affairs of the market, and initiated sweeping cuts in public expenditure throughout the economy. And, perhaps more significantly, he overturned many of the regulatory policies that Roosevelt had set in place.

The Depository Institutions Deregulation and Monetary Control Act of 1980 (signed by Carter, but very much in the deregulatory spirit of the era) and the Garn-St. Germain Depository Institutions Act, which Reagan signed in 1982, both chipped away at the regulatory provisions of the Glass-Steagall Act. Of the Garn-St. Germain Act, Reagan said: “This bill is the most important legislation for financial institutions in the last 50 years.” He may have been right about the significance of the Act, though he probably never intended for it to have the effect it eventually did. By liberalizing mortgage lending and the Savings and Loan industry, Garn-St. Germain paved the way towards a debt-ridden American economy that would be woefully unfit to weather the economic storms of the new millennium.

In the ‘90s, the American lifestyle of living beyond one’s means through the use of cheap credit was considered justifiable because once one took into account the rising values of people’s stock portfolios, everything seemed just fine. It was during this time that the final blow to the Glass-Steagall Act came in the form of the Gramm-Leach-Bliley Act of 1999. This Act allowed commercial banks, investment banks, securities firms and insurance companies to consolidate and form conglomerates.

It was believed that the conflicts of interest between these different kinds of institutions – which the Glass-Steagall Act had sought to prevent – would no longer be a problem in a flourishing financial sector. Nonetheless, it was because of the Gramm-Leach-Bliley Act that institutions such as AIG and Citigroup (which started out as Citicorp, a commercial bank, and became a financial services conglomerate by merging with Travelers Group only after the passing of the Act) managed to get embroiled in the problems that the mortgage industry began to face after 2007. And since these two institutions – and others like them – had become “too big to fail”, the government had to spend billions of taxpayer dollars keeping them afloat.

As we’ve already noted, the stellar performances of U.S. stock markets did come to an end in 2000, with the bursting of the Dot-Com Bubble. But once the economy recovered and growth set in between 2003 and 2007, Americans returned to their free-spending ways. This time, they reasoned that a booming housing market would support their costly habits, just as they had assumed with the stock market before 2000. If anything, they were even more confident this time round. Housing was an infallible investment, right?

Wrong.

The Housing Bubble Expands

As we explore the most proximate cause of the global financial crisis- the bursting of the U.S. Housing Bubble- you’ll begin to see why it was necessary to start our discussion with the Dot-Com Crash, and to jump back and forth in time as often as we have. In a very real sense, the Housing Bubble was caused by the Dot-Com Bubble. The economic conditions that led to the formation of the Housing Bubble were created during and after the crash of the Dot-Com Bubble. Similarly, the legislation and economic policies that came into effect at that time had their roots in the policies of several decades ago; and their effects extend into the present day.

For one hundred years, between 1895 and 1995, U.S. house prices rose in line with the rate of inflation. Then, between 1995 and 2005, the Housing Bubble began to envelop the economy and house prices across the country rose at phenomenal rates. During this time, the price of the typical American house rose by 124 percent; house prices went from 2.9 times the median household income to 4.6 times household income in 2006. Where the average number of houses built and sold before 1995 was 609,000, by 2005 that figure had risen to 1,283,000.

Housing appeared to be outperforming nearly every other sector of the U.S. economy, and there were those who would have us believe that it would continue to do so ad infinitum. Influential personalities such as David Lereah, the chief economist of the National Association of Realtors, regularly trumpeted the rock-solid dependability of housing as an investment; consider the title of his bestselling book- Are You Missing the Real Estate Boom? The media joined in, too, and helped inflate the bubble by glamorizing the housing boom with television programs such as House Hunters and My House is Worth What?

Even amidst all the frenzy, however, a small number of astute observers managed to figure out what was actually happening. In 2002, economist Dean Baker was the first to point out the existence of a bubble in the housing market; he put the value of the bubble at $8 trillion. And what’s even more impressive is that he correctly predicted that the collapse of the bubble would lead to a severe recession, and would devastate the mortgage lending industry.

The Subprime Mortgage Crisis

The unreasonably high level of confidence in the housing market, coupled with the lenient regulations that governed mortgage lending, caused the number of mortgage-financed home purchases in the U.S. to shoot upwards after 2003. The real problem, however, was the fact that mortgage lenders got greedy, and began to offer mortgages to thousands of people who had little ability to repay them. Mortgage lenders are expected to assess the suitability of clients by checking their credit histories, income levels, and other relevant factors; but this process was often overlooked (or only nominally undertaken) in the heady years of the housing boom.

All this irresponsible lending created a huge market for what are known as subprime mortgages. Calling them “subprime” is a euphemistic way of saying that they’re extremely risky loans, and that there’s a high probability they won’t be paid back. Lenders offered these subprime mortgages (usually at higher interest rates than “prime” mortgages) to the people who didn’t qualify for prime ones. Lenders such as Countrywide, IndyMac Bank, and Beazer Homes became notorious for the aggressive manner in which they marketed subprime mortgages to low-income consumers. (Two of those companies later declared bankruptcy, and one of them is being investigated for mortgage fraud.)

Worsening the situation was the fact that nearly 80 percent of the subprime mortgages issued in the last few years were adjustable-rate mortgages (ARMs). Pioneered by World Savings Bank in the 1980s, the ARM seemed an innocent enough offering until the housing market turned sour in 2007. The interest rate on an ARM doesn’t remain constant throughout the term of the loan: after an initial period at a low, fixed “teaser” rate, it resets periodically to the level of a benchmark index plus the lender’s margin, so the borrower’s monthly payment rises whenever the index does. The sketch below shows how abruptly that can happen.
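If you’re curious about the mechanics, here’s a minimal sketch in Python of how a typical subprime “2/28” ARM reset could roughly double a monthly payment. Every figure in it (the loan size, teaser rate, index level and margin) is invented purely for illustration:

```python
# A minimal sketch of a hybrid "2/28" ARM reset. All numbers here are
# hypothetical, chosen only to illustrate the mechanics. For simplicity
# we ignore the small amount of principal repaid during the teaser years.

def monthly_payment(principal, annual_rate, months_remaining):
    """Standard amortizing payment over the remaining term."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months_remaining)

principal = 200_000    # hypothetical loan balance
teaser_rate = 0.045    # 4.5%: low fixed rate for the first two years
index = 0.055          # hypothetical benchmark index level at reset
margin = 0.06          # hypothetical lender's margin over the index

before = monthly_payment(principal, teaser_rate, 360)    # years 1 and 2
after = monthly_payment(principal, index + margin, 336)  # year 3 onwards

print(f"During teaser period: ${before:,.0f}/month")
print(f"After the reset:      ${after:,.0f}/month")
```

With those made-up numbers, the payment jumps from roughly $1,000 a month to roughly $2,000 a month, and that is exactly the kind of shock that pushed overstretched borrowers into default.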

Through 2007, house prices plummeted nationwide. It was, however, only the beginning of a prolonged market correction that hasn’t quite ended yet. As the housing market deflated, the teaser periods on millions of ARMs expired and their interest rates climbed steadily upwards. Millions of homeowners across the U.S. found themselves unable to pay the higher interest rates on their mortgages, and were forced to default on their loans. This resulted in banks and mortgage lenders foreclosing on those homes (in other words, throwing the former homeowners out and assuming ownership of the properties). By July 2009, more than 1.5 million homes had been foreclosed, and another 3.5 million were expected to meet the same fate by the end of the year.

Following closely on the heels of the “victims” of foreclosure are those homeowners who are said to be “underwater”: they now owe more on their mortgages than their houses are worth. The technical term for this situation is negative equity. Homeowners in this position couldn’t clear their mortgage debts even by selling off their homes; they are, therefore, extremely vulnerable to foreclosure in the near future. As of December 2008, there were 7.5 million homeowners underwater. Another 2.1 million stood right on the brink, with homes worth only 5 percent more than their mortgages.
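The arithmetic is brutally simple; a tiny sketch (with hypothetical figures again) makes the point:

```python
# Negative equity in two lines: equity is what the house would fetch
# today, minus what is still owed on it. Figures are hypothetical.

house_value = 180_000       # current market value, post-crash
mortgage_balance = 220_000  # outstanding loan balance

equity = house_value - mortgage_balance
print(f"Equity: {equity:,} dollars")  # -40,000: this homeowner is underwater
```

Selling at $180,000 would still leave $40,000 of debt owing, which is why underwater homeowners so often end up in foreclosure anyway.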

The Misfortunes of the Mortgage Lenders

The drastic fall in house prices first affected those financial institutions that were directly involved with the housing industry: the banks and corporations that financed house construction and mortgage lending. As the number of foreclosures soared, these companies lost billions of dollars in unrecoverable loans. And if you’re thinking that at least they were left with the properties they gained through foreclosure, well, that didn’t exactly help a great deal.

Look at it this way: in 2005, a bank puts up a $10,000 ARM so that a nice young couple can buy a house and start a family. The bank expects to profit on this investment through the interest payments it will receive over, let’s say, the next ten years. Since the economy is chugging along just swimmingly, the interest rate on the ARM is kept relatively low. But that’s okay from the bank’s point of view, because the value of the property itself makes up for the low interest rate. You see, even in the regrettable event that the new homeowners fail to keep up with their payments and the bank has to foreclose on the property, it ends up with a very marketable house that’s probably worth even more than the $10,000 it initially cost. Nice.

Once house prices began their steep decline, though, and the interest rates on ARMs reset at much higher levels, things got ugly. There wouldn’t really have been a problem if every mortgagor still had the ability to pay the interest on his mortgage; but the aggressive subprime lending of the preceding few years (even to people who didn’t really qualify for mortgages) meant that there were millions of homeowners who just couldn’t pay the higher rates, and were forced to default on their loans.

After the inevitable foreclosures that followed, banks and mortgage lenders were left in possession of houses that nobody wanted to buy and that were now worth a fraction of their former prices. Returning to our earlier example, we’d find that the bank would have lost nearly the entirety of the $10,000 it initially put up: it would lose out on interest payments after foreclosure, and would be left with a house that could hardly even sell for five hundred dollars.
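To put our toy example in code (using the same illustrative figures as above, not real data), here’s how the same foreclosure looks to the bank in a boom and in a bust:

```python
# Rough tally of the bank's position in the toy example above.
# All figures are the article's illustrative numbers, not real data.

loan = 10_000          # principal the bank put up in 2005

# Boom scenario: the borrower defaults, but the foreclosed house has
# appreciated, so the bank roughly recovers its principal, or more.
boom_recovery = 11_000              # hypothetical post-appreciation sale price
boom_outcome = boom_recovery - loan # a small gain despite the default

# Bust scenario: the borrower defaults after the rate reset, and the
# foreclosed house sells for next to nothing in a glutted market.
bust_recovery = 500                 # the "five hundred dollars" above
bust_outcome = bust_recovery - loan # roughly a 95% loss of principal

print(f"Boom-era foreclosure: net {boom_outcome:+,} dollars")
print(f"Bust-era foreclosure: net {bust_outcome:+,} dollars")
```

The asymmetry is the whole story: rising prices made foreclosure nearly painless for lenders, so they stopped worrying about whether borrowers could actually pay.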

Hence, it’s no wonder that twenty-five major subprime lenders (several of them Fortune 500 companies) had to declare bankruptcy between 2007 and 2008.

Introducing Fannie, Freddie, and the Credit Crunch

Next in the line of fire were the companies that dealt in the trade and securitization of mortgages. The two giants in this industry had come to be known as Fannie Mae and Freddie Mac, quirky names derived from their acronyms: FNMA (Federal National Mortgage Association) became Fannie Mae, and FHLMC (Federal Home Loan Mortgage Corporation) became Freddie Mac. Both Fannie and Freddie were Government-Sponsored Enterprises (GSEs), meaning they operated in a sort of grey area between the public and private sectors.

Fannie Mae and Freddie Mac were responsible for buying mortgages from mortgage lenders, and creating and selling mortgage-backed securities (MBSs). By buying mortgages, they provided banks and other financial institutions with fresh money to make new loans; and by creating and selling MBSs, they created a secondary mortgage market that investment banks and securities traders could participate in. The primary purpose of all this was to give the American housing and credit markets increased flexibility and liquidity. Fannie and Freddie were so deeply enmeshed in the housing market that by 2008 they owned $5.1 trillion in residential mortgages, about half the total U.S. mortgage market.
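To make the securitization idea concrete, here’s a minimal sketch of a simple “pass-through” security: individual mortgages are pooled, and investors buy claims on the pool’s combined payments. The names and figures are invented for illustration, and real MBSs were sliced into far more elaborate structures:

```python
# A minimal sketch of mortgage securitization: buy individual
# mortgages, pool them, and pass the pool's payments through to
# investors. All names and figures here are hypothetical.

from dataclasses import dataclass

@dataclass
class Mortgage:
    balance: float          # outstanding principal
    monthly_payment: float  # what the borrower owes each month

def pool_cash_flow(mortgages, defaulted):
    """One month of pass-through cash: payments from every loan that
    hasn't defaulted flow straight through to the security's holders."""
    return sum(m.monthly_payment for i, m in enumerate(mortgages)
               if i not in defaulted)

pool = [Mortgage(200_000, 1_200), Mortgage(150_000, 950),
        Mortgage(300_000, 1_900)]

# In good times every loan pays, and the MBS looks like a safe bond.
print(pool_cash_flow(pool, defaulted=set()))   # 4050

# When subprime borrowers default, the "safe" cash flow shrinks,
# and the security's market value falls with it.
print(pool_cash_flow(pool, defaulted={0, 2}))  # 950
```

The appeal of pooling was diversification: a few defaults barely dent a large pool. The flaw, as 2007 demonstrated, was that housing defaults turned out to be correlated, so whole pools soured at once.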

The final link in the chain consisted of the members of the shadow banking system: investment banks such as Lehman Brothers, Bear Stearns and Goldman Sachs. They traded MBSs in the secondary mortgage market, and insured pools of mortgages using ridiculously complex financial instruments such as collateralized debt obligations (CDOs) and credit default swaps (CDSs). These derivatives were sold back to mortgage lenders and to commercial banks throughout the economy. Unlike commercial banks, investment banks don’t take deposits or hold reserves against them, but their influence on the economy became increasingly important as the nation’s financial sector loaded up on debt and on the financial instruments that backed that debt.

Therefore, when the subprime mortgage industry imploded, it wasn’t only the mortgage lenders who were affected. An entire industry that dealt in mortgage-backed securities went down with it; Fannie Mae and Freddie Mac had to be effectively nationalized to prevent their complete collapse. The investment banks that purported to spread the risks of the mortgage industry also sustained huge losses, because they had completely failed to foresee the effects of the housing market crash. And finally, banks across the country that held portfolios of credit derivatives were left with worthless, “toxic” junk.

These huge losses across the financial sector created what came to be known as the “Credit Crunch”. Billions of dollars of capital that had been based on the housing market were wiped off the balance sheets of banks and other financial institutions. This left them with very little ability to extend new credit to consumers. It was at this point that the crisis was said to extend its reach from “Wall Street to Main Street”, meaning that it no longer affected just the major financial institutions, but was now impacting the lives of citizens throughout the country.

As credit streams began to freeze, the entire economy slowed, and then descended into recession. Businesses began to shut down and unemployment rose. Investment and consumer spending plummeted. Alongside the U.S., other developed nations experienced similar symptoms. And with the economies of the developed world in tatters, developing nations lost major sources of manufacturing revenue; their economies began to slow, too.

All in all, the global financial crisis had arrived.

And The Rest Is History…

Through most of 2008, the U.S. government scrambled to contain the crisis. It initiated the Troubled Asset Relief Program (TARP) to help financial institutions get rid of their toxic assets, and injected nearly $800 billion into the economy through a stimulus package. The worst economic crisis since the Great Depression isn’t about to go down without a fight, though; many experts believe the economy won’t fully recover until 2011.

But let’s not bother ourselves with speculations about the future. Instead, let’s go back to what we had initially set out to do: have we managed to identify the criminal mastermind behind the global financial crisis? No. Of course not. While we probably managed to gain a few interesting insights into the causes of the crisis, we never really came close to achieving that goal. If anything, we should have come to the conclusion by now that it’s ridiculous to assume that there was any one person behind it all.

Real life is far too boring for that.

-THE END-


[1] In an effort to resurrect the U.S. economy at the height of the Great Depression, President Franklin Delano Roosevelt initiated a sweeping range of economic reforms between 1933 and 1935 that collectively became known as the New Deal. In contrast to Reagan’s policies, Roosevelt’s New Deal stressed the importance of fiscal responsibility and government oversight of the economy.