
Monthly Archives: August 2011

About two posts back (“What Am I” – a short play) I mentioned the question “Does the evidence of our eyes tell us about the true nature of the world?” as one that philosophers have been mulling over for more than 2500 years. I think the question itself might require a bit of explanation.

Let’s say you’re looking at an orange (there’s one right here for your convenience). I know that it’s perfectly natural to believe that your eyes aren’t lying to you about the fact that its colour is, well, orange. It feels as though the evidence of our eyes is enough for us to say something completely non-subjective about the orange – that it is, in fact orange.

An orange.

But that’s not quite true. What is non-subjective (or at least a little more so) is that the orange is reflecting light mainly in the 590–635 nm wavelength range. Your brain perceives that light as being orange in colour, but that doesn’t necessarily mean that “orange-ness” is in any way a fundamental property of that light.

Consider this: visual information is not carried from the orange in front of you to your brain through fibre-optic cables in your head; it’s converted to electrical charges in your eye and then sent along nerves (in a process called phototransduction; and by the way, isn’t it astounding that we actually know this stuff? Three cheers for medical science!) to the brain, where it’s processed to form visual imagery.

In other words, not a single ray of light has ever reached your brain (well, unless you’ve had a lobotomy, but let’s ignore that possibility for now). Light is something that we cannot experience directly. It’s as though we’ll never be able to listen to an opera, but we can make some sense of its sheet music. And more than that, it isn’t even true that the notation in our sheet music is somehow the “right” one and that there’s no other way to record the opera. Many animals see colour very differently from how we do; some can even see ultraviolet light, which we can’t.

As a matter of fact, colour as a whole is an illusion that our brains create in order to help us deal with the world around us. Look at the table below, and consider the “border” between orange and red light – the wavelength of 635 nm. The truth is, it doesn’t really exist. It’s not as though light with a wavelength of 634 nm has a property of orange-ness whereas, once we cross over to 636 nm, the light acquires a property of red-ness. That’s just how our brains perceive things, for some evolutionary reason.
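The arbitrariness of those colour boundaries can be made concrete in a toy sketch (the band edges below are just the conventional approximate values, not physical properties of light): any program that assigns colour names has to pick hard cut-offs somewhere, exactly as our brains do.

```python
# Conventional (approximate) colour-band edges in nanometres.
# The edges are labels we impose, not features of the light itself.
BANDS = [
    (380, "violet"),
    (450, "blue"),
    (495, "green"),
    (570, "yellow"),
    (590, "orange"),
    (635, "red"),
    (700, None),  # beyond ~700 nm lies infrared, invisible to us
]

def colour_name(wavelength_nm):
    """Return the colour-category label a typical human would report."""
    if wavelength_nm < 380 or wavelength_nm >= 700:
        return None  # outside the visible range
    for (lower, name), (upper, _) in zip(BANDS, BANDS[1:]):
        if lower <= wavelength_nm < upper:
            return name
    return None

# The "border" at 635 nm exists only in the labels:
print(colour_name(634))  # orange
print(colour_name(636))  # red
```

Physically, nothing special happens between 634 nm and 636 nm; only the label flips.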

This is how human beings perceive light of adequate intensity between 450 and 700 nm in wavelength

Finally, here’s a treat for having stuck around for all this: a black-and-white photograph of a rainbow. Notice that there’s no way to tell that there are different colours in there – black and white cameras don’t have brains like ours, which lump different wavelengths into fixed categories of colour. A rainbow spans a continuous spectrum of colours; the distinct bands that we normally see are an artefact of human colour vision, and no banding of any type is seen in this black-and-white photo (only a smooth gradation of intensity to a maximum, then fading to a minimum at the other side of the arc).

A black-and-white image of a rainbow

I think it’s to J.K. Rowling’s credit that she managed to include so many classical monsters in her Harry Potter series – in a sense reviving them from obscurity and allowing them to be passed on to later generations of modern readers. Obvious examples include centaurs, the basilisk and the sphinx, all of which come from Ancient Greek myths. I’m sure there are a few others as well.

Until now, though, I didn’t realize that the Hippogriff wasn’t Rowling’s creation either. Somehow, the name “Hippogriff” seems to convey the same sort of whimsy as “Dumbledore” (which I’m pretty sure is something that Rowling created by combining two other words – although, of course, I could be wrong about that, too), and I just assumed that she’d come up with it.

Apparently not. Seems that Hippogriffs have been around for quite some time. An early reference occurs in the epic poem Orlando Furioso, by Ludovico Ariosto, which was first published in its complete form in 1532. Orlando Furioso is, by the way, probably no less of a swashbuckling fantasy story than the Harry Potter series: the plot wanders from Scotland to Japan, and even to the moon; and of course, there is an admirable array of mythical monsters as well. Haven’t read it yet, but I probably should at some point.

I found out about the Hippogriff’s impressive ancestry when I came across two illustrations for Orlando Furioso by Gustave Doré (for more on him: Gustave Doré: Gloom and Glory). Here they are:

"Hippogriff", illustration by Gustave Doré for Orlando Furioso.

"Ruggiero Rescuing Angelica" by Gustave Doré; illustration for Orlando Furioso

By the way, my apologies to readers with delicate sensibilities, but nude women are pretty common in a lot of art.

UPDATE: Who knew Harry Potter could be analyzed this deeply (or weirdly)?

UPDATE 2: For an accessible overview of Orlando Furioso, and its prequel Orlando Innamorato, this is a great link.

“Philosophy isn’t practical, like growing food or building houses, so people say it is useless. In one sense those basic tasks are much more important, it’s true, because we couldn’t live without them… We eat to live, but we live to think. And also to fall in love, play games and listen to music, of course. Remember that the practical things are only about freeing us up to live: they’re not the point of life.”

That’s from If Minds Had Toes, by Lucy Eyre. It’s a wonderful book that works as a sort of advertising campaign for the subject of philosophy (which, let’s face it, is dismissed as a waste of time by a whole lot of people) while also being funny, imaginative and intensely thought-provoking. I wouldn’t quite rate it RREHR, but perhaps just a notch lower, as EM-BGIB: Educated Minds should have a Basic Grasp of the Ideas in this Book… Yeah, that’s not catching on, is it?

'If Minds Had Toes' – a simply-written but intriguing tale of a 15-year-old's introduction to the lofty ideas of philosophy

Anyway, quick summary: Socrates and Ludwig Wittgenstein – two giants of philosophy – are having an argument about whether knowing something about philosophy can actually make people happier. They decide to settle the matter with an experiment: they spend a few weeks guiding a fifteen-year-old boy, Ben, through the world of philosophy. Ben is introduced to timeless philosophical questions such as “Does the evidence of our eyes really tell us about the true nature of the world?” and “Is free will an illusion?”. And even though at first he would rather think about girls and football than that sort of thing, in the end he does come away as a changed person with a new perspective on life. Socrates wins the bet and the “bad guy” Wittgenstein goes back to sulk in a corner somewhere.

Ludwig Wittgenstein (1889-1951), the bad boy of philosophy

In order to summarize one of the most interesting ideas expounded in the book, I decided to write a (short and inelegant) play. Here it is:

“What Am I”

A Short Play

Individual A: Hi there! I just wanted to ask you a simple question. What are you?

Individual B: Um, hey. You again. Well, since I know you’re not gonna leave me alone before I answer your stupid question – I’m a human being.

Individual A: But that doesn’t answer my question. That just places you in a category. It doesn’t tell me what you are. If I were an alien from outer space, who’d never even heard of Earth, do you think “I’m a human being” would tell me anything useful?

Individual B: Ugh. Fine. [Gestures towards himself/herself] This body, and everything in it, is me.

Individual A: [grabs B’s finger] What about this? Is this you?

Individual B: Sure, why not?

Individual A: So if I cut this off and sent it to France, would you say you had gone to France?

Individual B: Damn it, no! Only my whole body is me!

Individual A: So amputees aren’t complete human beings?

Individual B: Er, no, that’s not what I meant to say.

Individual A: Of course not. Let’s say you meant to say that your body is something very closely associated with you, but it’s not you. The most fundamental part of what you are is the part that you would recognize as you even in complete isolation. That’s why you wouldn’t consider a recently-dead corpse of your body to be you. So what about the stuff that’s in the particular part of your body that you call the brain? Your memories and experiences – are they what makes you you?

Individual B: [a little more interested now] Yeah, I guess that could be it. That must be it, right?

Individual A: Sorry, no, I don’t think so. We’re in murkier territory now, but I really don’t think it would be impossible to upload all your sensory and emotional data on to a computer’s hard disk (even if not now, in a decade or two). But that wouldn’t mean that you would become the computer, or that there’d be two yous.

Individual B: But, then… What the hell am I?… All that’s left is… I must have a non-physical, supernatural soul!

Individual A: Thankfully, there’s a way to avoid that. But it’s not easy to grasp. You are not simply your past and your present, but also your future. You are a continuous event that spans a certain time period. This continuous event is made up of lots of little experiences. Even at the moment you are born, the experiences just before your death are a part of you. You are a pan-dimensional being, because your self-awareness transcends space and time.

Individual B: Whoaa. That is so cool. Philosophy rocks!

[Hugs and high-fives ensue…]

–The End–

What is it with German megalomaniacs and moustaches? Observe:

Left: Kaiser Wilhelm II; Right: Adolf Hitler

On the left we have Kaiser Wilhelm II (1859-1941), ruler of the German Empire during World War I (“the most evil German to ever live” – according to the Simpsons, anyway). And on the right is good old Adolf (1889-1945). You’ll notice from the dates that it’s quite possible that they met at some point to discuss their respective moustaches – oh, and World Wars; I guess that was a common interest too.

A friend recently brought to my attention the LHC@Home project, a distributed computing project dedicated to analyzing the data generated by CERN’s Large Hadron Collider (data totaling 15 million gigabytes a year!), and possibly even finding the Higgs boson. A distributed computing project is basically one that attempts to solve a problem using a large number of autonomous computers that communicate via a network (in the case of LHC@Home and other “citizen science” projects, this network is, of course, the Internet). The overall problem is divided into several small tasks, each of which is solved by one or more computers.
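The divide-and-conquer idea behind such projects can be sketched in a few lines of Python (this is only an illustration of the concept, not the actual BOINC API): a big job is split into independent work units, each unit could be processed by any volunteer machine, and the partial results are combined at the end.

```python
def split_into_work_units(data, unit_size):
    """Divide a large dataset into independent chunks (work units)."""
    return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]

def process_unit(unit):
    """Stand-in for the real computation each volunteer PC would perform."""
    return sum(x * x for x in unit)

def combine(results):
    """The project server merges the partial results."""
    return sum(results)

data = list(range(1000))
units = split_into_work_units(data, 100)     # 10 independent work units
partials = [process_unit(u) for u in units]  # done in parallel in reality
total = combine(partials)

# Splitting the work changes nothing about the answer:
assert total == sum(x * x for x in data)
```

The key property is that the work units don’t depend on each other, so thousands of home PCs can each take one without any coordination beyond fetching a unit and returning a result.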

Modern PCs are powerful enough to be useful in solving extraordinarily complex problems, such as modelling the paths of beams of protons. And it’s not even like you’ll notice that your computer seems a bit sluggish and distracted (as human beings often get when thinking about things like the origins of the universe and the Higgs boson), because distributed computing projects use software platforms that only allow your PC’s resources to be shared when your system is idle – i.e. when you’re not doing anything. So you only really notice anything when your PC’s been idle for long enough for a screen saver to start up.

I had the Rosetta@Home project (more on that later) installed on my old PC, and I can tell you this: the visualizations that the software creates as a screensaver while working on the distributed project are actually quite mesmerizing. I expect the same will be true of the LHC@Home project.

Rosetta@Home screen saver

If you’re interested in joining a distributed computing network, I’d recommend first installing a piece of software called BOINC – the Berkeley Open Infrastructure for Network Computing. BOINC was developed by the University of California, Berkeley, and was first used as a platform for the SETI@Home project, but it’s now used in nearly all distributed computing projects.

The BOINC platform is used by many distributed computing projects

Finally, here’s a list of a few other notable distributed computing projects that you might consider joining:

1. Rosetta@home: Geared mainly towards basic research in protein structure prediction, but the results of this research have vast potential in curing dozens of diseases. Implemented by the University of Washington.

2. Folding@home:  Created by Stanford University to simulate protein folding. Note that this is one of the few distributed computing projects that does not use the BOINC framework.

3. SETI@home: The SETI Project does exactly what its name suggests – Search for Extra-Terrestrial Intelligence – by analyzing radio telescope data. Like the BOINC platform, it was created by the University of California, Berkeley.

4. Einstein@home: Analyzes data from the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the USA and GEO 600 (another laser interferometer observatory, in Germany) in order to make the first direct observations of cosmic gravitational waves, which Einstein predicted but which have never been directly detected.

5. MilkyWay@home: Milkyway@Home uses the BOINC platform to create a highly accurate three dimensional model of the Milky Way galaxy using data gathered by the Sloan Digital Sky Survey. This project enables research in both astroinformatics and computer science.

So there you have it: you can help cure cancer, discover alien life or radically change our view of the physical universe. What’re you waiting for? Screen Savers of the world, unite!

UPDATE: Here’s a more complete list of BOINC projects:

UPDATE 2: I now have Rosetta@Home installed again. Yay, I’m making an actual (although tiny) contribution to Science! I’ll just go put a tick on my List of Things To Do in Life next to that.

Wanna know how little physics you know?

Ever wondered how little sense cutting-edge physics research would make to a layperson like you or me? (I’m assuming that you don’t have a PhD in physics; if I’m wrong, please let me know, because – well, it would be pretty cool if there were any physics PhDs reading this blog.) Well, find out now by playing arXiv vs. snarXiv, a game in which you’re asked to choose which of two titles belongs to an actual research paper in physics and which one is made up.

Even though it’s sometimes pretty easy to get the answer right (hint: research papers in any science very rarely have titles that are just two or three words long), try to concentrate on the fact that you (and I) don’t have any clue what a lot of the words and concepts in the actual physics papers’ titles mean.

Just so you know (and so you learn something from this endeavour), the arXiv (pronounced “archive”; the X represents the Greek letter chi) is an archive for electronic pre-prints (i.e. not-yet-peer-reviewed drafts) of scientific papers in the fields of mathematics, physics, astronomy, computer science, quantitative biology and statistics.

The snarXiv, according to its creator, “only generates tantalizing titles and abstracts at the moment, while the arXiv delivers matching papers as well.” Here are the uses he suggests for the site:

  • If you’re a graduate student, gloomily read through the abstracts, thinking to yourself that you don’t understand papers on the real arXiv any better.
  • If you’re a post-doc, reload until you find something to work on.
  • If you’re a professor, get really excited when a paper claims to solve the hierarchy problem, the little hierarchy problem, the mu problem, and the confinement problem. Then experience profound disappointment.
  • If you’re a famous physicist, keep reloading until you see your name on something, then claim credit for it.
  • Everyone else should play arXiv vs. snarXiv.
arXiv vs. snarXiv is fun for a while (not for too long, though), and the game comes up with interesting assessments of your success at guessing. For example, there’s “Worse than a monkey” and “Nobel Prize winner” (that’s for 100% accuracy).

I wonder if anyone’s writing has ever hooked me within ten pages as thoroughly as Arundhati Roy’s The End of Imagination (included in the collection The Algebra of Infinite Justice) just did. It’s brilliantly written and makes a cogent case against nuclear weapons. Having been born a few decades too late, I never really witnessed the anti-nuclear movement in full force. For the first time, I think I am suitably horror-struck by the knowledge of what nuclear weapons truly are.

Somehow it never really struck me that it was horribly wrong for the United States to have detonated nuclear bombs in the hope of getting Japan out of World War II. Let’s say two families, the As and the Bs, are engaged in a century-long vendetta that’s claimed dozens of lives and caused immense suffering. Let’s say family A realizes that if they kidnap the newborn son of family B’s head and subject the infant to the most twisted forms of torture they can come up with, B will be horrified enough to abandon the vendetta forever. Does this make it right – or in any way excusable – for them to do that?

Although on the whole I really liked the essay, there are two points with which I do have issues. The first concerns the following passage, which describes Western society:

“These are people whose histories are spongy with the blood of others. Colonialism, apartheid, slavery, ethnic cleansing, germ warfare, chemical weapons – they virtually invented it all.”

Here’s the thing – I don’t like it when people seem to imply that the people of the East lived in some kind of idyllic Heaven on Earth before the West came along. The Arabs were avid traders of slaves, and the practice of slavery was nearly as common in the Chinese, Japanese, and North African kingdoms as anywhere in the West.

(And in case you’ve never heard of white slaves, I strongly recommend you visit this link:

White female slave captured by the Barbary Corsairs (see link above)

As for apartheid, how much older is the Hindu caste system? And how is it any better? And finally, just because empire-builders in the East (Genghis Khan, for instance, or the Mughals) did not build empires as large, or as recently as the Western nations did does not mean that the peoples of the East are inherently and universally quiet, non-materialistic, peace-loving creatures.

My second problem is with the following:

“…we embrace the most diabolic creation of Western science [i.e. the nuclear bomb] and call it our own.”

This is a fervent request to everyone reading this: please don’t use terms like “Western science”. There’s only one way of doing science. It’s called the scientific method, and it involves principles such as experimentation, observation, accurate prediction, repeatability, openness and so on. Nobody can own any part of the scientific method, and there are no separate Eastern and Western versions of it. (Any body of “knowledge” that is not built upon the scientific method – e.g. astrology, voodoo, homeopathy, feng shui – is not science.)

Having said that, it is, of course, true that nuclear bombs were invented by Western scientists, and that Eastern scientists may never have done something like that if they were in the same position. So call them a “Western invention” if you will. That’s fine.

Apparently, the Ayatollah Ruhollah Khomeini, a revered Shiite cleric and the Supreme Leader of Iran following the Islamic Revolution, made the following prescription for Shiite Muslim men in his book Tahrir al-Wasila:

“A man is not to have sexual intercourse with his wife before she is nine years old; whether regularly or occasionally, but he can have pleasure from her, whether by touching or holding her, or rubbing against her, even if she is as young as an infant… If a man penetrates and deflowers the infant then he should be responsible for her subsistence all her life.”

And no, that was not written some time in the 7th century, but in the 20th.

I came across the quote in an interview of the Lebanese writer Joumana Haddad, who has become renowned (and notorious) for her efforts to bring about gender equality in the conservative Arab world. The quote is included in her book I Killed Scheherazade, which has received favourable reviews from the New York Times, the BBC, and other respectable media organizations.

I have to end this post by being very clear on one thing: I have not been able to find an authoritative online translation of the Tahrir al-Wasila, much less an authoritative version with the quote actually in it. So I can’t even say I’m sure it’s in there. My point, however, remains: religiousness is no substitute for morality.

Probably one of the best optical illusions I've ever seen. The squares A and B are the exact same shade of grey. Seriously. Try using MS Paint to check

The first correct illustration of the structure of DNA

The following link takes you to the article – by James Watson and Francis Crick – in which the structure of DNA was correctly identified for the first time. It was published in Nature in 1953.

I just love the restraint they showed in writing about what their proposed structure might mean for the study of heredity in all living things: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.” And that’s all they say on the matter, preferring to wait for experimental confirmation before pursuing the idea further.

Firstly, let me just say that it’s getting a lot harder to write the kind of full-length posts that I actually prefer writing; so from now on you might find that The Folly of Human Conceits will have a lot of mini-posts about whatever the latest thing to catch my fancy is. And here’s what it is as of right now: The Postmodernism Generator!

So, really quickly, here’s the thing: the word “postmodernism”, outside the limited context of architecture, doesn’t actually mean anything. Yes, you might hear it dropped quite often in certain intellectual circles, but these are generally the kind of intellectual circles in which the intellectuals have quotation marks around them, if you know what I mean. They’re the kind of intellectual circles in which passages of writing such as the following might actually be considered impressive, rather than ludicrous:

“We can clearly see that there is no bi-univocal correspondence between linear signifying links or archi-writing, depending on the author, and this multi-referential, multi-dimensional machinic catalysts.”

You might be tempted to think that postmodernist thought such as that is just a bunch of complete nonsense – and you would pretty much be right! The physicist Alan Sokal concocted gibberish like the above and sent it to the American journal Social Text in 1996. And despite the fact that it was nothing more than a carefully crafted parody of postmodernist writing, the editors of the journal accepted the article and published it!

In the words of Richard Dawkins [in his essay “Postmodernism Disrobed”]:

“Sokal’s paper must have seemed a gift to the editors because this was a physicist saying all the right-on things they wanted to hear… They didn’t know that Sokal had also crammed his paper with egregious scientific howlers, of a kind that any referee with an undergraduate degree in physics would instantly have detected.”

Sokal later expressed regret at having failed to jam even more syntactically correct, but completely meaningless, sentences into the parody article he sent to Social Text. Apparently, he just didn’t have the knack for creating them. And now, finally, enter The Postmodernism Generator – a program that generates an entirely new postmodernist essay every time you visit the site! Every essay is grammatically correct, but complete nonsense! You have to try it to get it. Enjoy!

N.B.: You might also consider submitting one of these to a college professor whom you suspect never reads any of the essays you submit!