
Remember how there’s a spell in the Harry Potter novels that allows you to erase someone else’s memories (I think you’re supposed to say “obliviate” when you cast it)? Well, it turns out that, once again, boring old Science has proved itself capable of replicating the effects of Magic. To some extent, anyway.

You may have heard of electroshock therapy, and you probably pictured something like this when you did:

Electroconvulsive Therapy - Frankenstein's legacy?

But the truth is, electroconvulsive therapy (ECT), as it’s known today, isn’t really that dramatic, and usually doesn’t require a murderously deranged doctor.

ECT is a psychiatric treatment that’s used to relieve the effects of some kinds of mental illnesses (including, prominently, severe depression). As far as I can tell, ECT is almost as simple as it looks: you hook up a few wires and induce a few seizures by passing electric currents through the patient’s brain. But before you let your imagination run away with you, note that the patient is under general anesthesia, is given muscle relaxants to prevent major convulsions, and the currents are usually tiny.

Although it’s great that people often feel a lot better after ECT, it’s a bit scary to think that no one knows why or how it works. We do know, of course, that electrical currents passing along neurons are part of the foundation of how the brain works; but that doesn’t explain why a sudden shock to a generalized area of the brain should relieve a wide variety of symptoms.

Anyway, here’s the interesting thing: studies of patients who’ve received ECT have found that they often experience amnesia after the therapy, sometimes losing memories from as far back as a few years before the treatment. This would be horrible if not for the fact that this kind of severe amnesia following ECT is almost always temporary – the lost memories usually return within a few days or weeks (again, how this happens is a mystery).

However, there are a few memories that usually do not return: those of events immediately before and after the administration of the ECT. And that’s your Memory Charm. It could, of course, be of immense practical value: so long as you’re quick about it, you really could get that certain someone to forget something extremely embarrassing you’ve just done in front of them.

Now if only we could fit ECT apparatus into a little wand-shaped thingy…

I think I might have a real self-control problem when it comes to books. The latest manifestation of this malady came when I just couldn’t keep myself from delving into Living High and Letting Die, by philosopher Peter Unger – even though I was already reading three other books at the time (and had an examination the next day). But then again, the central message of the book can hardly fail to attract attention. In a nutshell, it argues compellingly that if ordinary middle- and upper-class people like you and me thought clearly about the moral demands of living in a world in which literally hundreds of thousands of children starve to death every year, we would realize that just about every single one of us fails quite miserably to meet those demands.

I’ll probably go into a few details on some of the book’s most important ideas in a later post, but right now I want to talk about a very interesting discussion I had with a friend after I told her about Living High and Letting Die. She responded with considerable skepticism to the idea that relatively well-off people should donate significant portions of their income to organizations that provide life-saving aid, such as UNICEF and OXFAM. Much of this skepticism stemmed from the fact that, in all probability, you will never know exactly what an organization such as UNICEF does with the money you donate to it.

This wouldn’t be much of a problem if you were confident that every single dollar you donated was spent on things such as oral rehydration therapy (ORT) packets, and hence directly responsible for saving the lives of children in many parts of Africa and Asia. But that’s clearly not true. There are administrative costs involved in keeping UNICEF running, and some of the money donated to it must be used to cover those costs. This may still not seem objectionable, but what if we focus on one particular administrative cost: the salaries of the top officials at UNICEF?

Here’s the thing: it’s quite possible that the top-ranking officers of a multinational organization like UNICEF draw relatively large salaries. And those salaries are paid, indirectly, by people who donate to the organization. But surely the people who donate to UNICEF do so out of concern for children at risk, rather than out of concern for whether or not top-ranking officials at UNICEF get to live in nice houses, buy expensive cars, and so on? Does this constitute a valid argument against donating to UNICEF?

My own instinctive reaction was that it did not. I tried arguing that running UNICEF is important and difficult work, and that no one should feel bad about paying the salaries of the people who do that work. I tried to focus on the fact that, since hardly anyone else does the work that UNICEF does, you have to choose between helping needy kids while also paying large salaries, or doing nothing at all to help needy kids – which had to be worse.

But it still didn’t feel right.

The people who are supposed to be saving the world’s poor are themselves living very comfortable lives, with money that could have been directed to the poor instead of to them.

I could tell that this was definitely something worth thinking about. But I couldn’t come up with a satisfactory argument either way, so I’m glad I came across this, towards the end of Living High and Letting Die:

“If it’s all right for you to impose losses on some particular person with the result that there’s significant lessening in the serious losses suffered by others overall, then, if you’re to avoid doing what’s seriously wrong, you can’t fail to impose equal losses on yourself when the result’s an equal lessening of serious losses overall.”

[slightly paraphrased for clarity]

Unger calls this the Reasonable Principle of Ethical Integrity, and uses it to argue for his central idea that well-off people like you and me should donate significant amounts of our income and wealth to the people who are most in need of it. But even without following through to that conclusion, I think this points towards a very satisfactory answer to our conundrum regarding rich UNICEF employees.

Let me try to explain. Basically, the Reasonable Principle of Ethical Integrity says that you’re not special: if morality imposes certain demands on UNICEF employees, it imposes the same demands on you. Hence, it might be okay for you to demand that UNICEF employees have caps imposed on their salaries “because there’s kids starving in Africa”. But this would only be fair if you accepted a cap on the salary you may earn, wherever you work, “because there’s kids starving in Africa”, and because your employer is (rightly) more interested in keeping them alive than in keeping you living lavishly.

Do you see what I mean? The gut feeling that UNICEF employees should not be living lavishly does have a lot of moral weight, but only in the sense that you should also not be living lavishly, regardless of whether your job is to help poor kids or not. In fact, it’s crazy to penalize the people who are actually doing something to save the starving kids when you refuse to penalize yourself in the same way. They’re the ones who’ve already given their professional lives to lessening serious loss, which is something you haven’t done. Therefore, they’ve already met a moral standard that you haven’t.

If you’re really interested in lessening serious losses (such as the loss of children’s lives), you must first somehow meet that same moral standard, before asking that it be raised only for the people who’ve already met it.

So, no, we are certainly not in a position to demand that UNICEF employees must live ascetic lives.

If there’s anyone who feels that this is somehow vaguely demoralizing, I offer the following comforting truth: UNICEF employee or not, we are all in an equal position to make personal sacrifices to help the poor and needy; this means that anyone who draws a large salary from whatever organization he or she works at has the ability to give away most of that money towards helping the poor.


Sensible, sensitive, and always richly informed and appreciative of world history, Amartya Sen’s writing is a joy to read. In his eminently readable book Identity and Violence: The Illusion of Destiny, he makes the case that many undesirable consequences can result from certain ways in which people may be encouraged to see themselves. The most insidious of these, he argues, is a “solitarist” approach to human identity, in which any human being is seen as a member of only one particular group, usually defined by civilizational or religious divisions (e.g. Muslim, Hindu, Western, or non-Western).

Under a solitarist view of identity, rather than being appreciated for all that they are, human beings are crammed into little boxes that are usually not of their own choosing. Thus, while in actuality a person may have several important identities – for instance, as a woman, a mother, a teacher, a vegetarian, a person of African descent, a heterosexual, a supporter of gay and lesbian rights, a Muslim, an avid reader, and so on – the solitarist would claim that only one of those identities is of any importance in understanding that individual. Usually, it would be the fact that she was a Muslim, or that she was racially of African descent that would be used as a basis of a singular, all-encompassing identity.

Using mutually exclusive racial or religious categories as the only basis of understanding human beings is an inherently divisive view and, as Sen puts it, it is “a good way of misunderstanding nearly everyone in the world”. By ignoring the many shared identities that cut across the sharp dividing lines of religious or civilizational categorizations, a solitarist view of identity makes it much easier for groups of people to see each other as opponents or enemies, rather than fellow human beings.

Sen gives the example of the Hindu-Muslim riots of the last days of the British Raj, which he witnessed as a child. He points out that while, on one view (the solitarist view), the riots were between Hindus and Muslims, on another, they were between people who were practically indistinguishable. Most of the people who were doing the killing – and being killed – were members of the urban poor; they were day-laborers, they lived in similar neighbourhoods, spoke the same language, and so on. If they had been properly aware of the plural nature of their identities – that each of them was so much more than just a Muslim or a Hindu – and the fact that they had so much in common with one another, it probably would have been impossible to get them to commit such horrific acts.

However, the solitarist view is sometimes extremely hard to resist. And once people fall into its trap, it can lead to disastrous consequences. An acceptance of the idea that the only important defining characteristic of a person is his religion easily gives rise to the view that everyone from a different religious background has to be “the enemy”. It is not hard to see that nearly all of the terrorism in the world today is the result of people being encouraged to see themselves in this way. The 9/11 terrorist attacks are a good example of a solitarist approach being put to use – but not in a Muslim-vs-non-Muslim form, because many of the people killed at the World Trade Center were, in fact, Muslims. Rather, this was an example of a “West-vs-anti-West” understanding of identity – one that is becoming increasingly popular today.

Apart from the divisiveness between mutually exclusive groups that it encourages, a solitarist view of identity also ignores the wide variation that exists within common categorizations such as “the Muslim world”, or “the Western world”. To claim that a person’s being Muslim is the only important attribute needed to understand him/her in a social context is to ignore the fact that Muslims vary considerably in their ideas, practices and beliefs.

Indeed, one of the most compelling arguments that Sen makes in the book is that the Western response to the rise of militant Islamic fundamentalism is thoroughly misguided. This is because it reinforces the solitarist view that the fomenters of terrorism advocate, rather than going against it. In Sen’s own words:

“…In disputing the gross and nasty generalization that members of the Islamic civilization have a belligerent culture, it is common enough to argue that they actually share a culture of peace and goodwill. But this simply replaces one stereotype with another, and furthermore, it involves accepting an implicit presumption that people who happen to be Muslim by religion would basically be similar in other ways as well… The arguments on both sides suffer, in this case from a shared faith in the presumption that seeing people exclusively, or primarily in terms of the religion-based civilizations that they are taken to belong is a good way of understanding human beings.”

The book goes on to explore how views of identity relate to issues such as global democracy and multiculturalism in societies with many different racial and religious minorities. On the latter, Sen makes several more striking points about how Western governments seem to be doing things the wrong way. For instance, in Britain, rather than engaging with people from minority religious communities as citizens of the same country, many efforts are being made to engage with them through their religious representatives. Thus, rather than emphasizing common citizenship, these dialogues focus on a particular difference between people living in the same country, and take that difference to be of supreme importance.

The book ends by emphasizing the role that reason has to play in allowing people to choose whether they wish to associate themselves with any particular identity, and in assigning relative importance to their many different identities in different situations.

There’s a quiet thrill to be had from being able to spot less-than-obvious references in popular media. Does anyone remember this crossover ad on Cartoon Network that featured Johnny Bravo talking about his whirlwind romance with Velma Dinkley (from Scooby-Doo)?

It was set late at night in a lonely cafe, perched at the intersection of two dark, empty streets. Somehow, it wasn’t the sort of thing you usually found in cartoons, and there was unquestionably something very memorable about the scene, because I still remember it, after all these years. In fact, it immediately came to mind when I chanced upon the painting that inspired it: “Nighthawks”, by Edward Hopper (1882-1967).

Nighthawks (1942), by Edward Hopper

The Cartoon Network version

“Nighthawks” is one of the most recognizable American paintings ever (here’s another one; you’ve almost certainly seen references to it, too, even if you never knew it) and it’s easy to see why. There’s a haunting sense of loneliness about it, but unlike the human figures in many paintings that sought to express the isolation that’s inevitably a part of modern urban life (see Mark Rothko’s “Subway Scene”, for example), the people here are real people, not just faceless creatures. You can actually imagine meeting the woman in the red dress, or ordering a drink from the waiter in a sailor’s uniform. And you can’t help asking yourself what the individual circumstances were that brought each of these people to this place.

“Nighthawks” was painted in 1942, almost immediately after the attack on Pearl Harbor (which was on December 7th, 1941). It was a dark time for the United States, and this is reflected in the mood of the painting – it’s hard to tell whether any of the people there are interacting with one another at all. That’s probably how it often is after a tragedy – there doesn’t seem to be anything worth saying to anyone.

Could "Minority Report" become reality?

Well, it’s probably not terribly likely that things’ll go that far, but consider this:

“There has been a long controversy as to whether subjectively ‘free’ decisions are determined by brain activity ahead of time. We found that the outcome of a decision can be encoded in brain activity of prefrontal and parietal cortex up to 10 s before it enters awareness. This delay presumably reflects the operation of a network of high-level control areas that begin to prepare an upcoming decision long before it enters awareness.”

That’s from a paper published in Nature Neuroscience (you’ll find it here) in which the authors used functional MRI scanning to peer into subjects’ brains as they were making the (rather simple) decision of whether to click a button next to their right hand or their left hand. And as they say in that quote from the abstract of the paper, the researchers could predict which hand would be used, up to 10 seconds before the subject himself became aware of his final decision.

That’s a little scary, isn’t it? I mean, what if there comes a time when there are remote fMRI scanners that can be pointed at anyone to see what they’re about to do in the future? What if you’re snatched out of your bed and thrown into prison for a crime you don’t even know you’re about to commit (à la Minority Report)?

Admittedly, there are a few ameliorating factors to counter the scariness of that vision. Firstly, there’s the fact that predictions can only be made about actions to be undertaken in the next few seconds; not minutes, days or weeks. So if you’re planning to assassinate a high-ranking government official, or something like that, don’t worry, they don’t have any chance of catching you until you’ve already got his head centered behind your rifle’s cross-hairs, when it’ll probably be too late anyway (good luck with that, by the way).

Secondly, the accuracy of prediction isn’t terribly impressive right now, to be honest. It was about 60% in the experiments performed for that study. And finally, decisions in real life may be infinitely more complex than “right hand button/left hand button” – and that may make them impossible to predict using brain scanners.
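Just to put that 60% figure in perspective, here’s a quick back-of-the-envelope sketch in Python. The trial count below is made up purely for illustration (the paper reports its own statistics); the point is simply that 60% accuracy on a two-choice task is clearly better than coin-flipping, yet still a long way from mind-reading.

```python
from math import comb

def chance_of_at_least(k, n, p=0.5):
    """Probability of getting at least k of n two-choice guesses right by pure luck."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 100                    # hypothetical number of left/right decisions (not from the paper)
correct = round(0.60 * n_trials)  # roughly the reported decoding accuracy

print(f"P(at least {correct}/{n_trials} correct by guessing) = "
      f"{chance_of_at_least(correct, n_trials):.3f}")
# ~0.028 – unlikely to be pure luck, so the early brain signal is real;
# but a 60% guess about a coin-flip decision is hardly Minority Report.
```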

But this research does still raise some very interesting questions about free will. Don’t be too alarmed, though – just the fact that your subconscious makes a decision long before you become aware of it doesn’t in itself preclude the possibility of free will’s existence. After all, from a non-dualistic (visit this link for more on dualism) point of view, your mind is simply the firing of a whole lot of neurons, so it’s not like your neurons are holding “you” hostage.

But what if there are physical reasons (maybe something about the particular patterns of neural interconnections in a person’s brain, for instance) for why all of us make the kinds of decisions that we do? Does this mean that there may be limits to what we can think? Or what we can feel?

I hope we do find out some day.

About two posts back (“What Am I” – a short play) I mentioned the question “Does the evidence of our eyes tell us about the true nature of the world?” as one that philosophers have been mulling over for more than 2500 years. I think the question itself might require a bit of explanation.

Let’s say you’re looking at an orange (there’s one right here for your convenience). I know that it’s perfectly natural to believe that your eyes aren’t lying to you about the fact that its colour is, well, orange. It feels as though the evidence of our eyes is enough for us to say something completely non-subjective about the orange – that it is, in fact, orange.

An orange.

But that’s not quite true. What is non-subjective (or at least a little more so) is that the orange is reflecting light mainly in the 590-635 nm wavelength range. Your brain perceives that light as being orange in colour, but that doesn’t necessarily mean that “orange-ness” is in any way a fundamental property of that light.

Consider this: visual information is not carried from the orange in front of you to your brain through fibre-optic cables in your head; it’s converted to electrical charges in your eye and then sent along nerves (in a process called phototransduction; and by the way, isn’t it astounding that we actually know this stuff? Three cheers for medical science!) to the brain, where it’s processed to form visual imagery.

In other words, not a single ray of light has ever reached your brain (well, unless you’ve had a lobotomy, but let’s ignore that possibility for now). Light is something that we cannot experience directly. It’s as though we’ll never be able to listen to an opera, but we can make some sense of its sheet music. And more than that, it isn’t even true that the notation in our sheet music is somehow the “right” one and that there’s no other way to record the opera. Many animals see colour very differently from how we do; some can even see ultraviolet light, which we can’t.

As a matter of fact, colour as a whole is an illusion that our brains create in order to help us deal with the world around us. Look at the table below, and consider the “border” between orange and red light – the wavelength of 635 nm. The truth is, it doesn’t really exist. It’s not as though light with a wavelength of 634 nm has a property of orange-ness, while as we cross over to 636 nm the light acquires a property of red-ness. That’s just how our brains perceive things, for some evolutionary reason.

This is how human beings perceive light of adequate intensity between 450 and 700 nm in wavelength
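To see how arbitrary those borders really are, here’s a tiny sketch (in Python) of what our visual system effectively does: take a continuous wavelength and force it into one of a handful of named bins. The cutoff values below are just rough conventional ones (using the 635 nm orange/red border mentioned above) – some cutoff has to be picked, because nothing in the light itself changes at the boundary.

```python
# Toy version of the table above: bin a continuous wavelength (in nm) into the
# discrete colour names humans use. The cutoffs are conventional approximations,
# not properties of the light itself.
COLOUR_BANDS = [   # (upper wavelength limit in nm, perceived colour)
    (450, "violet"),
    (495, "blue"),
    (570, "green"),
    (590, "yellow"),
    (635, "orange"),   # the "border" discussed above
    (700, "red"),
]

def perceived_colour(wavelength_nm: float) -> str:
    if wavelength_nm < 380 or wavelength_nm > 750:   # rough limits of human vision
        return "outside the visible range"
    for upper_limit, name in COLOUR_BANDS:
        if wavelength_nm < upper_limit:
            return name
    return "red"

print(perceived_colour(634))  # orange
print(perceived_colour(636))  # red – yet the light has barely changed at all
```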

Finally, here’s a treat for having stuck around for all this: a black-and-white photograph of a rainbow. Notice that there’s no way to tell that there are different colours in there – black-and-white cameras don’t have brains like ours, which lump different wavelengths into fixed categories of colour. A rainbow spans a continuous spectrum of colours; the distinct bands we normally see are an artefact of human colour vision, and no banding of any kind appears in this black-and-white photo – only a smooth gradation of intensity rising to a maximum and then fading to a minimum at the other side of the arc.

A black-and-white image of a rainbow

I think it’s to J.K. Rowling’s credit that she managed to include so many classical monsters in her Harry Potter series – in a sense reviving them from obscurity and allowing them to be passed on to later generations of modern readers. Obvious examples include centaurs, the basilisk and the sphinx, all of which come from Ancient Greek myths. I’m sure there are a few others as well.

Until now, though, I didn’t realize that the Hippogriff wasn’t Rowling’s creation either. Somehow, the name “Hippogriff” seems to convey the same sort of whimsy as “Dumbledore” (which I’m pretty sure is something that Rowling created by combining two other words – although, of course, I could be wrong about that, too), and I just assumed that she’d come up with it.

Apparently not. Seems that Hippogriffs have been around for quite some time. An early reference occurs in the epic poem Orlando Furioso, by Ludovico Ariosto, which was first published in its complete form in 1532. Orlando Furioso is, by the way, probably no less of a swashbuckling fantasy story than the Harry Potter series: the plot wanders from Scotland to Japan, and even to the moon; and of course, there is an admirable array of mythical monsters as well. Haven’t read it yet, but I probably should at some point.

I found out about the Hippogriff’s impressive ancestry when I came across two illustrations for Orlando Furioso by Gustave Doré (for more on him: Gustave Doré: Gloom and Glory). Here they are:

"Hippogriff", illustration by Gustave Doré for Orlando Furioso.

"Ruggiero Rescuing Angelica" by Gustave Doré; illustration for Orlando Furioso

By the way, my apologies to readers with delicate sensibilities, but nude women are pretty common in a lot of art.

UPDATE: Who knew Harry Potter could be analyzed this deeply (or weirdly)?

UPDATE 2: For an accessible overview of Orlando Furioso, and its prequel Orlando Innamorato, this is a great link.

“Philosophy isn’t practical, like growing food or building houses, so people say it is useless. In one sense those basic tasks are much more important, it’s true, because we couldn’t live without them… We eat to live, but we live to think. And also to fall in love, play games and listen to music, of course. Remember that the practical things are only about freeing us up to live: they’re not the point of life.”

That’s from If Minds Had Toes, by Lucy Eyre. It’s a wonderful book that works as a sort of advertising campaign for the subject of philosophy (which, let’s face it, is dismissed as a waste of time by a whole lot of people) while also being funny, imaginative and intensely thought-provoking. I wouldn’t quite rate it RREHR, but perhaps just a notch lower, as EM-BGIB: Educated Minds should have a Basic Grasp of the Ideas in this Book… Yeah, that’s not catching on, is it?

'If Minds Had Toes': a simply written but intriguing tale of a 15-year-old's introduction to the lofty ideas of philosophy

Anyway, a quick summary: Socrates and Ludwig Wittgenstein – two giants of philosophy – are having an argument about whether knowing something about philosophy can actually make people happier. They decide to settle the matter with an experiment: they spend a few weeks guiding a fifteen-year-old boy, Ben, through the world of philosophy. Ben is introduced to timeless philosophical questions such as “Does the evidence of our eyes really tell us about the true nature of the world?” and “Is free will an illusion?”. And even though at first he would rather think about girls and football than that sort of thing, in the end he does come away a changed person with a new perspective on life. Socrates wins the bet and the “bad guy” Wittgenstein goes back to sulk in a corner somewhere.

Ludwig Wittgenstein (1889-1951), the bad boy of philosophy

In order to summarize one of the most interesting ideas expounded in the book, I decided to write a (short and inelegant) play. Here it is:

“What Am I”

A Short Play

Individual A: Hi there! I just wanted to ask you a simple question. What are you?

Individual B: Um, hey. You again. Well, since I know you’re not gonna leave me alone before I answer your stupid question – I’m a human being.

Individual A: But that doesn’t answer my question. That just places you in a category. It doesn’t tell me what you are. If I were an alien from outer space, who’d never even heard of Earth, do you think “I’m a human being” would tell me anything useful?

Individual B: Ugh. Fine. [Gestures towards himself/herself] This body, and everything in it, is me.

Individual A: [grabs B’s finger] What about this? Is this you?

Individual B: Sure, why not?

Individual A: So if I cut this off and sent it to France, would you say you had gone to France?

Individual B: Damn it, no! Only my whole body is me!

Individual A: So amputees aren’t complete human beings?

Individual B: Er, no, that’s not what I meant to say.

Individual A: Of course not. Let’s say you meant to say that your body is something very closely associated with you, but it’s not you. The most fundamental part of what you are is the part that you would recognize as you even in complete isolation. That’s why you wouldn’t consider your recently dead corpse to be you. So what about the stuff that’s in the particular part of your body that you call the brain? Your memories and experiences – are they what makes you you?

Individual B: [a little more interested now] Yeah, I guess that could be it. That must be it, right?

Individual A: Sorry, no, I don’t think so. We’re in murkier territory now, but I don’t think it would be impossible to upload all your sensory and emotional data onto a computer’s hard disk (if not now, then in a decade or two). But that wouldn’t mean that you would become the computer, or that there’d be two yous.

Individual B: But, then…What the hell am I?… All that’s left is… I must have a non-physical, supernatural soul

Individual A: Thankfully, there’s a way to avoid that. But it’s not easy to grasp. You are not simply your past and your present, but also your future. You are a continuous event that spans a certain time period. This continuous event is made up of lots of little experiences. Even at the moment you are born, the experiences just before your death are a part of you. You are a pan-dimensional being, because your self-awareness transcends space and time.

Individual B: Whoaa. That is so cool. Philosophy rocks!

[Hugs and high-fives ensue…]

–The End–

What is it with German megalomaniacs and moustaches? Observe:

Left: Kaiser Wilhelm II; Right: Adolf Hitler

On the left we have Kaiser Wilhelm II (1859-1941), ruler of the German Empire during World War I (“the most evil German to ever live” – according to the Simpsons, anyway). And on the right is good old Adolf (1889-1945). You’ll notice from the dates that it’s quite possible that they met at some point to discuss their respective moustaches – oh, and World Wars; I guess that was a common interest too.

A friend recently brought to my attention the LHC@Home project, a distributed computing project dedicated to analyzing the data generated by CERN’s Large Hadron Collider (data totaling 15 million gigabytes a year!), and possibly even finding the Higgs boson. A distributed computing project is basically one that attempts to solve a problem using a large number of autonomous computers that communicate via a network (in the case of LHC@Home and other “citizen science” projects, this network is, of course, the Internet). The overall problem is divided into several small tasks, each of which is solved by one or more computers.

Modern PCs are powerful enough to be useful in solving extraordinarily complex problems, such as modelling the paths of beams of protons. And it’s not even like you’ll notice that your computer seems a bit sluggish and distracted (as human beings often get when thinking about things like the origins of the universe and the Higgs boson), because distributed computing projects use software platforms that only allow your PC’s resources to be shared when your system is idle – i.e. when you’re not doing anything. So you only really notice anything when your PC’s been idle for long enough for a screen saver to start up.
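The basic recipe behind all of these projects is easy to sketch. To be clear, this isn’t how BOINC actually works under the hood – real projects ship “work units” to volunteer machines over the Internet and collect the results on central servers – but a toy version in Python, with a local process pool standing in for the volunteers, captures the divide-and-recombine idea:

```python
# A minimal sketch of the idea behind distributed computing: split one big job
# into small, independent work units, farm them out, and combine the results.
# (Here a local process pool stands in for the volunteer machines on the Internet.)
from multiprocessing import Pool

def work_unit(chunk):
    """Pretend analysis of one small slice of a huge dataset."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    total_size = 1_000_000
    chunk_size = 100_000
    # Split the big problem into independent tasks...
    chunks = [range(i, min(i + chunk_size, total_size))
              for i in range(0, total_size, chunk_size)]
    # ...send them out to the "volunteers"...
    with Pool() as volunteers:
        partial_results = volunteers.map(work_unit, chunks)
    # ...and recombine the partial results back at the "server".
    print(sum(partial_results))
```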

I had the Rosetta@Home project (more on that later) installed on my old PC, and I can tell you this: the visualizations that the software creates as a screensaver while working on the distributed project are actually quite mesmerizing. I expect the same will be true of the LHC@Home project.

Rosetta@Home screen saver

If you’re interested in joining a distributed computing network, I’d recommend first installing a piece of software called BOINC – the Berkeley Open Infrastructure for Network Computing. BOINC was developed by the University of California, Berkeley, and was first used as a platform for the SETI@Home project, but now it’s used in nearly all distributed computing projects.

The BOINC platform is used by many distributed computing projects

Finally, here’s a list of a few other notable distributed computing projects that you might consider joining:

1. Rosetta@home: Geared mainly towards basic research in protein structure prediction, but the results of this research have vast potential in curing dozens of diseases. Implemented by the University of Washington.

2. Folding@home: Created by Stanford University to simulate protein folding. Note that this is one of the few distributed computing projects that does not use the BOINC framework.

3. SETI@home: The SETI project does exactly what its name suggests – Search for Extra-Terrestrial Intelligence – by analyzing radio telescope data. Like the BOINC platform, it was created by the University of California, Berkeley.

4. Einstein@home: Analyzes data from the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the USA and GEO 600 (another laser interferometer observatory, in Germany) in order to detect cosmic gravitational waves, which Einstein predicted but which have never been directly observed.

5. MilkyWay@home: Milkyway@Home uses the BOINC platform to create a highly accurate three dimensional model of the Milky Way galaxy using data gathered by the Sloan Digital Sky Survey. This project enables research in both astroinformatics and computer science.

So there you have it: you can help cure cancer, discover alien life or radically change our view of the physical universe. What’re you waiting for? Screen Savers of the world, unite!

UPDATE: Here’s a more complete list of BOINC projects: http://lhcathome.web.cern.ch/LHCathome/Physics/

UPDATE 2: I now have Rosetta@Home installed again. Yay, I’m making an actual (although tiny) contribution to Science! I’ll just go put a tick on my List of Things To Do in Life next to that.

Wanna know how little physics you know?

Ever wondered how little sense cutting-edge physics research would make to a layperson like you or me? (I’m assuming that you don’t have a PhD in physics; if I’m wrong, please let me know, because – well, it would be pretty cool if there were any physics PhDs reading this blog.) Well, find out now – by playing arXiv vs. snarXiv, a game in which you’re asked to choose which of two titles belongs to an actual research paper in physics and which one is made up.

Even though it’s sometimes pretty easy to get the answer right (hint: research papers in any science very rarely have titles that are just two or three words long), try to concentrate on the fact that you (and I) don’t have any clue what a lot of the words and concepts in the actual physics papers’ titles mean.

Just so you know (and learn something out of this endeavour), the arXiv (pronounced “archive”; the X represents the Greek letter “chi”) is an archive for electronic pre-prints (i.e. not-yet-peer-reviewed drafts) of scientific papers in the fields of mathematics, physics, astronomy, computer science, quantitative biology and statistics.

The snarXiv, according to its creator, “only generates tantalizing titles and abstracts at the moment, while the arXiv delivers matching papers as well.” Here are the uses he suggests for the site:

  • If you’re a graduate student, gloomily read through the abstracts, thinking to yourself that you don’t understand papers on the real arXiv any better.
  • If you’re a post-doc, reload until you find something to work on.
  • If you’re a professor, get really excited when a paper claims to solve the hierarchy problem, the little hierarchy problem, the mu problem, and the confinement problem. Then experience profound disappointment.
  • If you’re a famous physicist, keep reloading until you see your name on something, then claim credit for it.
  • Everyone else should play arXiv vs. snarXiv.
arXiv vs. snarXiv is fun for a while (not for too long, though), and the game comes up with interesting assessments of your success at guessing. For example, there’s “Worse than a monkey” and “Nobel Prize winner” (that’s for 100% accuracy).

I wonder if I’ve ever been as hooked by anyone’s writing within the space of ten pages as I was when I got into Arundhati Roy’s The End of Imagination (included in the collection The Algebra of Infinite Justice). It’s brilliantly written and makes a cogent case against nuclear weapons. Having been born a few decades too late, I never really witnessed the anti-nuclear movement in full force. For the first time, I think I am suitably horror-struck by the knowledge of what nuclear weapons truly are.

Somehow it never really struck me that it was horribly wrong for the United States to have detonated nuclear bombs in the hope of getting Japan out of World War II. Let’s say two families, the As and the Bs, are engaged in a century-long vendetta that’s claimed dozens of lives and caused immense suffering. Let’s say family A realizes that if they kidnap the newborn son of family B’s head and subject the infant to the most twisted forms of torture they can come up with, B will be horrified enough to abandon the vendetta forever. Does that make it right – or in any way excusable – for them to do it?

Although on the whole I really liked the essay, there are two points with which I do have issues. The first concerns the following passage, which describes Western society:

“These are people whose histories are spongy with the blood of others. Colonialism, apartheid, slavery, ethnic cleansing, germ warfare, chemical weapons – they virtually invented it all.”

Here’s the thing – I don’t like it when people seem to imply that the people of the East lived in some kind of idyllic Heaven on Earth before the West came along. The Arabs were avid traders of slaves, and the practice of slavery was nearly as common in the Chinese, Japanese, and North African kingdoms as anywhere in the West.

(And in case you’ve never heard of white slaves, I strongly recommend you visit this link: http://www.bbc.co.uk/history/british/empire_seapower/white_slaves_01.shtml)

White female slave captured by the Barbary Corsairs (see link above)

As for apartheid, how much older is the Hindu caste system? And how is it any better? And finally, just because empire-builders in the East (Genghis Khan, for instance, or the Mughals) did not build empires as large, or as recently, as the Western nations did does not mean that the peoples of the East are inherently and universally quiet, non-materialistic, peace-loving creatures.

My second problem is with the following:

“…we embrace the most diabolic creation of Western science [i.e. the nuclear bomb] and call it our own.”

This is a fervent request to everyone reading this: please don’t use terms like “Western science”. There’s only one way of doing science. It’s called the scientific method; it involves principles such as experimentation, observation, accurate prediction, repeatability, openness, and so on. Nobody can own any part of the scientific method, and there are no separate Eastern and Western versions of it. (Any body of “knowledge” that is not built upon the scientific method – e.g. astrology, voodoo, homeopathy, feng shui – is not science.)

Having said that, it is, of course, true that nuclear bombs were invented by Western scientists, and that Eastern scientists might never have done something like that had they been in the same position. So call them a “Western invention” if you will. That’s fine.