Science and Religion – Awkward Bedfellows Through The Ages

Happy Sunday, everyone!

It’s been a while, I know, but I’ve been quite busy of late writing for other sites. First off, I’ve written another article for the University of Liverpool’s Becoming an Expert series, this time about my friend Andrew Swale’s work on Clostridium difficile. I’ve also written a guest post for Manchester University’s fantastic student-run blog, the Brain Bank, all about Komodo Dragons and the mechanism they use to kill prey.

But enough of the self-publicity! Let’s get onto the matter at hand. A while ago I gave a somewhat over-ambitious 5 minute presentation at a local café/bar about how the public perception of science has changed over time (you can check out a video of my talk here – I come on at 46.03). Given the positive reaction it received I felt like it was worth adapting into a blog post. However, as the talk covered a few important topics to which I only had time to pay lip service, I thought I’d focus on the first third of the talk only in this post. That’s a big enough challenge in itself!

So, here we’ll take a quick look at the changing relationship between Science and Religion over the ages. Obviously this in itself is still a massive topic so what I talk about here can only scratch the surface of this complex relationship. But I hope it will shed some light on three key periods in time and, perhaps, encourage you to read into these eras in more detail than I’m able to provide here. So, without further ado, let’s get cracking. Enjoy!

Changing Times

The way in which Science and Religion have impacted upon one another has changed hugely over the centuries. Every civilisation and religion comes with its own views, rules and consequences, affecting that relationship. Of course, somewhat inevitably, many of these cultures and religions have affected one another in turn as a result of wars, conquests and mergers.

One extraordinary example of this was the expansion of the Greek Empire under the rule of Alexander the Great. In the phenomenal decade-long campaign he led up until his death, Alexander laid claim to the entire Persian Empire, spreading his laws and religious beliefs as he went. Alexander seemed to genuinely believe he was the son of Zeus, and word of his divinity spread with his Empire. When he claimed Egypt, he was declared the son of the Egyptian God Amun, after which Alexander often referred to himself as the son of Zeus-Ammon, demonstrating how the two religions were affected by one another and merged together during his brief reign.

Ancient Egypt and Greece are actually two of the three eras I touched upon in my talk, as I believe they exemplify the dynamic relationship between Science and Religion.

The Ancient Egyptians (~3150 BC to ~30 BC)

The Egyptians didn’t have what we would call ‘scientific understanding’. Rather than working out the earthly and natural meanings for the phenomena they observed, they attributed everything to their Gods. Yet, despite not knowing the scientific cause for things, they learned an incredible amount about the world in their bid to understand their Gods’ wishes and to use natural phenomena in the pursuit of worship.

The Egyptian Gods impacted upon every aspect of life (Photo credit: Wikipedia)

One notable example of this is the way in which the great Egyptian minds made use of the stars. Through years of study, they mapped the paths of certain celestial bodies across the sky with such accuracy that they were able to predict their movements throughout the year. This may not seem like much now but a lot of our knowledge of the night skies has stemmed from Egyptian observations so their importance cannot be overstated.

A fantastic application of their knowledge can be seen in Karnak Temple in Luxor. The temple was built in order to worship the Sun God, Amun Re. The Egyptian astronomers, or ‘cosmologists’, realised that the sun rises at a different point along the horizon depending on the time of year. So, when building the temple, the architects positioned the building in such a way that, on the Winter Solstice, the sun rose directly between the two pillars at the front, filling the main area of the temple with light. By all accounts it is a phenomenal sight and one that I’d love to see first-hand some day.

However, whilst the Egyptian architects and thinkers were considered great minds, they were always considered second to the Gods they sought to worship. Religion very much dominated the culture, leaving little room, or perceived need, for Science. As a result of the culture’s faith they never really learned the scientific reasoning behind the wonders they observed.

Karnak Temple, Luxor (Photo Credit: Andrea Prave)


The Ancient Greeks (~800 BC to ~150 BC) 

Arguably, it wasn’t until the Ancient Greeks developed the first recognisable scientific methodology that things began to change. Amongst the Greeks were some of the greatest minds ever known – Thales, Pythagoras, Archimedes, Aristotle (who actually tutored Alexander the Great!)… the list goes on and on! Many of these men began to study the reasons behind phenomena they saw, not content to just accept them as the will of the Gods.

Hippocrates (Photo credit: Wikipedia)

As a result of their desire and ability to understand the world around them, these philosophers and scientists gained a reputation for being geniuses, even in their own time. Awareness of scientific thinking and its importance in society was on the rise.

Now, the Ancient Greeks’ religion overlapped somewhat with that of the Ancient Egyptians. Their often-similar Gods were also thought to influence most aspects of life. As such, there were some things that people just weren’t ready for science to explain. For example, Hippocrates – author of the Hippocratic Oath upon which western medicine is founded – realised that disease wasn’t a divine punishment. It was, in fact, born of earthly causes.

Obviously, such revelations didn’t always go down well. Hippocrates, whilst advancing his society’s understanding of the world, had just diminished the role of the Gods in that world. It is not surprising, therefore, that many fought against such notions. Eventually, however, these ideas took hold and arguably improved Science’s standing in society, as it afforded a greater understanding of the world. Religion remained an integral part of society, but Science had now proved its worth. The ball had started rolling and Science’s role in society would only increase throughout the centuries.

The 19th Century

Charles Darwin (Photo credit: Wikipedia)

At this point, let’s skip forward a few centuries (I told you we’d only be able to scratch the surface here)! Specifically, let’s skip forward to 24th November 1859. This was one of the most important dates in the history of Science and a key turning point in the relationship between Science and Religion. It was the day that Charles Darwin’s seminal work ‘On the Origin of Species’ was first published.

At this time, despite the fact that Gods were no longer considered responsible for absolutely everything, certainly in Western cultures, contradicting religious teachings was still massively controversial. In Christianity, for example, it was an accepted fact that God created the Earth, the Moon and the Stars, as well as all of Life. Humans were created and placed upon this world in the form people currently saw. This was an inalienable truth.

During the first half of the 19th century, there had been rumblings amongst some scientists suggesting that this was not the case. However, scientific establishments had quite a close relationship with the Church of England, so these ideas never really went anywhere. Darwin, however, was so convinced of his own work’s importance that he had it published, courting massive controversy. The Church, naturally, rejected the theory, whilst many scientists also questioned or refuted it. There were, however, some who accepted the theory and, obviously, this number grew over time as his ideas were investigated.

The general public was, at this point, caught in the middle of a fascinating stage in the relationship between Science and Religion. Who did they believe? Did they trust the Church, which held such sway in their lives, or did they trust the ever-growing number of scientists, trusted and revered minds, who dared to disagree with it? Whereas Greek philosophers had, on a simplistic level, prompted people only to question their religious views, the public was now forced to question both their religion and their scientists.

Prompting Progress

It could be argued that the controversy caused by this revelatory work, and others like it, shaped Science’s progress for decades to come. Scientists were now forced to dig deeper and drive scientific understanding even further to answer the questions the public was demanding answers to. Scientists were, and would still be for many decades, considered incredible minds. However, people came to realise that blind acceptance wasn’t necessary. As a result, scientists learned more about the world we live in, advancing society in the process.

The product of all this is the world in which we live now, where Science is driving forward understanding at an ever-increasing pace. It is, however, also leaving its awkward bedfellow Religion behind in some respects. Certainly, Religion remains an ever-present aspect of many people’s lives and it will continue to do so for the foreseeable future. But, for many, Science has greater importance in their lives as it seeks to offer tangible evidence-based answers to the questions we have about the universe. The question now is how the relationship between Religion and Science will change in the future. It is a dynamic relationship, no doubt, with time and location playing massive roles in its development. Only time will tell how they will get along a century from now…


Posted by on September 22, 2013 in Science...ish



Just How Scientifically Possible Are Gremlins?: Part 3

Happy Monday, everyone!

It’s been a while since I’ve written anything here, as I’ve been pretty damn busy of late. I’ve written a couple of articles for different websites and have given a presentation, which I’ll be writing up into a blog post for later on in the week. Plus I’ve had to do a little something called a PhD! But here we are and it’s time for the long-overdue final instalment on Gremlins and Mogwai.

Given how long it’s been since I started this (I’ve had something of a mental block), I’d advise heading to Part 1 and Part 2 for a quick recap on what these posts are about….

…*twiddles thumbs; makes cup of tea….thinks about PhD work and adds whisky…*

All done?

Right then. Let’s get cracking with a reminder of the final rule for keeping a Gremlin or Mogwai. Enjoy!

Rule 3: Never get them wet

If a Mogwai or Gremlin gets wet then it will spontaneously spawn offspring, which pop out of its back.


The best Gremlin in the films – the Brain Gremlin (Photo Credit: Warner Bros/Amblin Entertainment)

Given that there is no sexual intercourse involved here (unless something happened off-camera that we really don’t want to know about), it’s safe to say that Mogwai/Gremlins are asexual. So far so obvious I hear you say; but now we need to look at what type of asexual reproduction they undergo. It turns out there are quite a few types, but only two of them are seen in the animal kingdom. They are known as ‘Fragmentation’ and ‘Parthenogenesis’.

In the former, new organisms grow from a piece, or fragment, that has broken off from the parent organism. You know when you take a cutting of a plant and give it to someone else to grow in their garden? That’s artificial fragmentation and your friend’s plant will have the same genetic material as yours. However, outside of plants and fungi, this is obviously only a feasible method of reproduction in relatively simple animals like worms and starfish. It certainly wouldn’t be possible in something as complex as the mammalian Mogwai and reptilian Gremlins.

That leaves us with Parthenogenesis. In this case, females’ eggs can develop into embryos without needing to be fertilised. This is certainly seen in many different animals, including some sharks, insects and reptilian species, such as the Komodo Dragon. However, I came to the conclusion in Part 1 that Mogwai/Gremlins are mammals, so can a mammal reproduce through parthenogenesis? The answer is….sort of.


Kaguya the Parthenogenetic Mouse (Photo Credit: Wikipedia)

No mammal reproduces in this way naturally. However, in 2004, a Japanese research group did manage to produce a mouse from two mothers, with no sperm involved. The offspring, named Kaguya, even went on to have her own children…in a more conventional manner! The research group concluded that, at least in mice, fathers’ genetic material prevented parthenogenesis occurring naturally (a phenomenon known as genomic imprinting), so as to ensure the need for males. Now, I’m feeling in a forgiving mood as I write this. Despite the fact that parthenogenesis doesn’t occur naturally in mammals, I’d say it’s a reasonably realistic explanation for how Gremlins reproduce. After all, they’re not your typical mammal! So, score one for realism!

Now let’s deal with this business about ‘spontaneously’ spawning offspring. If you haven’t seen the films, this video shows what happens when you get a Mogwai wet. Basically Mogwai and Gremlins that get wet immediately start shaking, at which point fur balls or sacks, respectively, pop out of their backs. These then grow quickly into full-sized Mogwai or Gremlins, depending on which made them. The whole thing takes about 1 minute.

If we ignore the ridiculous idea that dropping water on an animal that is 90% water would cause it to reproduce, just how realistic is this reproduction time? To answer this, let’s have a look at a few records in the field of reproduction.


Clostridium perfringens (Photo Credit: Marler Blog)

The organism with the fastest known reproduction time is the bacterium Clostridium perfringens (if we don’t count viruses as living organisms…ooh future post idea!). This single-celled organism is found pretty much everywhere and is a leading cause of food poisoning. It reproduces asexually, with a new cell budding off from the parent in just 10 minutes.

Now, given that C. perfringens is a unicellular organism, you’d expect it to be able to reproduce much faster than a complex organism like a Mogwai or Gremlin. Its method of reproduction – binary fission – isn’t burdened by the need for embryonic development either, so doesn’t take up as much time. As such, the idea that a mammalian creature could produce fully grown offspring in less than a minute is, I’m afraid to say, a figment of the imagination. To put it into perspective, the title of the shortest known gestation period for a mammal belongs to the Short Nosed Bandicoot. It pops out its sprogs (note, not a technical term) after just 12 days!
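For a sense of scale, the comparison above boils down to simple doubling arithmetic. Here’s a quick sketch (using the 10-minute doubling time quoted above):

```python
# Binary fission with a 10-minute doubling time: the population snowballs,
# but producing even ONE new cell still takes the full 10 minutes.
DOUBLING_TIME_MIN = 10  # C. perfringens, as quoted above

def cells_after(minutes, start=1):
    """Population size after however many complete doubling intervals fit."""
    return start * 2 ** (minutes // DOUBLING_TIME_MIN)

print(cells_after(1))   # after 1 minute: still just the original cell
print(cells_after(60))  # after an hour: 64 cells
```

So even the record-holder produces precisely zero offspring in the Gremlins’ sub-minute timescale.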

And that, as they say, is that. Over these 3 posts we’ve had a look at a fair range of biological processes and phenomena that Mogwai and Gremlins seem to demonstrate in the films. Hopefully you’ve enjoyed reading about them and learned a bit more about the world too – I certainly have! So, what’s the overall verdict? Are Gremlins realistic or not? I’ll answer by leaving you with the final scores – there are 2 points for each post: 1 for realism of the rule and 1 for realism of the sub-topic discussed. Till the next time!

Possible: 3          Not possible: 3

It’s a draw! So…erm…I guess they’re sort of scientifically possible but not quite… I was hoping for a clean outcome there…bugger!


Posted by on August 19, 2013 in Biology, Silly Science



Excuses, excuses…

Happy Monday, everyone!

As I sit here, tucking into an impressively disgusting cheese and tomato sandwich (seriously, Tesco – does it really need a layer of mayonnaise?!), I realised that I haven’t posted anything here in AGES! So, this is just to let you know that I am, in fact, alive and will be blogging again in the near future.

I’ve been ridiculously busy lately what with writing a news article a couple of weeks ago (see link below), writing a piece about my research, preparing a presentation that I’m giving this week, helping a friend move home and doing a little thing called a PhD! Phew!

So, in short, I’ll be back blogging soon but, in the meantime, check out the article I wrote and these 2 excellent blogs run by people I’ve met through the Twitterverse: ‘Memetic Drifting‘ and ‘That’s Interesting’.








Posted by on August 12, 2013 in Non-science posts


Cake, Computers and Curiosity (or, How Science-based PhD Students Survive)

Happy Thursday, everyone!

I’m experiencing one of those weeks from hell in work. You know the type – nothing’s going right; steps you hadn’t even considered a potential problem go awry… But, rather than throwing my computer out of the window in a fit of pique, I thought I’d channel my frustration into remembering why I’m doing my PhD in the first place. The result is this post, which I imagine will resonate with quite a lot of scientists out there! Enjoy!

One simple computer command with the tiniest of mistakes and one tap of the ‘Return’ key on my keyboard. That was all it took to erase the files I’d spent over a week creating. But, wait – a backup of the server existed. I was saved! Then I found out that the last backup was made before the files were created.


The perfect cure for stress: Scones with clotted cream and jam! (Photo credit: hijukal)

Cake was eaten; tea was drunk; and I won’t deny there was some tutting (we are British after all, dear). That was externally. Internally, my mind was a maelstrom of what can only be described as some of the fruitier epithets in the English language!

Regrettably, such experiences are not uncommon in the life of a PhD student. As a lecturer in my department tweeted in response to this latest setback, it’s “all part of the PhD process”. This inalienable truth is what, after nearly three years, I’ve finally learned to accept. My own project has, after all, afforded me many opportunities to ponder it.

I’m studying a human parasite called Entamoeba histolytica. It causes Amoebiasis, a disease characterised by symptoms ranging from mild diarrhoea to fatal liver abscesses. The problem is that nobody quite knows what triggers the different symptoms.

I’m comparing E. histolytica’s genes with those of its harmless relatives to find out which genes the killer has that the others don’t. As only E. histolytica causes Amoebiasis, it’s likely that some of the genes required for the disease’s progress will be in that exclusive gene set. I’m hoping to identify a list of genes potentially responsible for causing Amoebiasis that future medicines might target.
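The comparative logic here is essentially set arithmetic. A minimal sketch of the idea (this isn’t my actual analysis pipeline, and the gene IDs are made up purely for illustration):

```python
def exclusive_genes(pathogen_genes, relative_gene_sets):
    """Return the genes the pathogen has that none of its relatives do."""
    shared = set().union(*relative_gene_sets)
    return set(pathogen_genes) - shared

# Hypothetical gene IDs, purely for demonstration
histolytica = {"gene_A", "gene_B", "gene_C", "gene_D"}
relative_1  = {"gene_A", "gene_C"}  # a harmless relative
relative_2  = {"gene_A", "gene_D"}  # another harmless relative

candidates = exclusive_genes(histolytica, [relative_1, relative_2])
print(sorted(candidates))  # ['gene_B'] – the candidate disease genes
```

The real job, of course, is assembling those gene sets from mountains of sequencing data in the first place.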

Curious George – perfect, if mischievous, PhD student material! (Photo Credit: PBS and ‘Cool Spotters’)

My day-to-day life involves growing cells, extracting their DNA to have it read, or ‘sequenced’, and sifting through the mountains of data this produces on my computer. I’ve had cultures die for no discernible reason other than sheer bloody-mindedness; I’ve had DNA sequences yield data less useful than the Chinese Olympic badminton team; and I’ve spent weeks chipping away at data files to get to the result-filled goodness within, only to find that the results are about as interesting as a Dido album played at half-speed!

So why do I do it? Why do any of us? What drags PhD students up from the ground when our experiments leave us on our posteriors again and again? One word: ‘curiosity’.

No great scientist has ever carried out research for the money or for something to do. Hell, no mediocre scientist has! You simply cannot forge a career in science without being interested in, and enthused by, your research.

If we weren’t interested in reaching that elusive breakthrough we’d surely give up! A PhD isn’t like a normal job – for starters, we work bizarre hours and tasks take months to complete with little reward during that time. It’s our curiosity that gets us through the harder times. I know that, without mine, I couldn’t have created those files all over again to see if I’d discovered anything useful. I hadn’t…. But tomorrow’s another day and the allure of a new idea and possible success will inevitably drag me out of bed to try again!


Posted by on July 11, 2013 in Non-science posts



Lucy’s Legs: The software showing us how our ancestors walked

Based on a presentation by Dr Karl Bates (UoL Institute of Ageing and Chronic Disease):
“Biomechanics – understanding the relationship between anatomy and function”

Happy Sunday, everyone! Hope you’re having a good weekend! Here we have a description of the 2nd piece of research presented at the evening of seminars I attended last week. Enjoy!

Now, the first thing you might be wondering after reading the title is, what exactly is the field of Biomechanics? I’ll admit that I’d never heard of it before this talk! Essentially, it’s the study of an organism’s moving parts in order to understand how their arrangement relates to their function. Dr Bates’ research group is looking at human legs, studying the relationship between their morphology (i.e. the shape and arrangement of muscles, bones and tendons etc) and the way humans walk.

The group wants to find the muscle activation pattern that produces the fastest, or most energy-efficient, way of running. They’re carrying out their work using a technology called ‘Evolutionary Robotics’. This involves a computer program that uses a mathematical model (code for ‘maths 99% of us can’t hope to grasp’!) with values for every single muscle in the pair of legs.

An example of a Gait Lab – this one is at Strathclyde University. The test subject walks/runs between the cameras with sensors attached to their legs so that a computer can recreate the motions they pick up. This is similar to the technology used to create CGI film characters like Gollum in The Lord of the Rings. (Photo Credit: Strathclyde University)

The system takes recordings of a human running through the group’s ‘gait lab’ and matches the pattern of muscle activation it sees.  It then tests every possible combination of muscle contraction strengths and timings as it attempts to create the most energy-efficient version of the running motion it saw.

Sounds relatively straightforward – leave your computer running for a while and let it come back with a neat and tidy result, right? Well….no. There are millions and millions of possible combinations that the computer needs to work through. As such, they need one impressively powerful computer, and it still takes ages!
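To give a flavour of how an evolutionary search like this works, here’s a toy sketch. This is emphatically not the group’s software: the ‘energy cost’ function below is a stand-in for their biomechanical model, and the activation values are invented.

```python
import random

TARGET = [0.3, 0.8, 0.5, 0.1]  # pretend 'ideal' muscle activation levels

def energy_cost(pattern):
    # Stand-in for the simulated energy cost of running with this pattern
    return sum((p - t) ** 2 for p, t in zip(pattern, TARGET))

def evolve(generations=2000, mutation=0.05):
    """Simple (1+1) evolutionary search: mutate the best candidate,
    keep the child only if it costs less energy."""
    best = [random.random() for _ in TARGET]
    for _ in range(generations):
        child = [min(1.0, max(0.0, p + random.gauss(0, mutation)))
                 for p in best]
        if energy_cost(child) < energy_cost(best):
            best = child
    return best

random.seed(42)
solution = evolve()
print(round(energy_cost(solution), 4))  # very close to 0
```

Early candidates are terrible (hence the spinning-legs video), but each generation keeps only the improvements, so the pattern gradually homes in on an efficient gait.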

Brilliantly, the program builds a pair of virtual human legs, including tissues, joints and tendons, so you can see how its current optimum equation works. At this point, Dr Bates showed us a ridiculous video of a pattern the computer suggested early on. The legs rotated round the hips in 360° turns, moving along like some kind of grotesque ball! One of the latest suggestions shows the legs moving normally for a while…before falling over! But it is getting there…

Now, this is all well and good, but what’s the point? Well, once the computer program has mastered the leg movement, the group can use it to understand the changes humans undergo as we age. We know that we lose muscle mass and gain fat, meaning that our bodies can’t operate in exactly the same way as when we were young. But what we don’t know is in what ways our bodies have to compensate for these changes.

Dr Bates said that, once they know how the legs move and which muscles are needed, they can start playing around with the anatomy in a way they couldn’t do in real life. They can, for example, change tendon lengths and muscle masses in a virtual pair of legs to reflect an older person’s physiology. This will allow them to see how energy efficiency changes during a person’s life-time and how different parts of the legs must change to cope with, for example, reduced muscle mass. This will give us a greater understanding of the pressures our bodies come under as they age.

Casts of Lucy’s fossilised remains. These bones were all the team had to work with! (Photo Credit: Wikipedia)

A very happy-looking model of what we think Lucy looked like, at the Museum of Barcelona (Photo Credit: Wikipedia)

The group’s work will also help us understand more about how we have evolved as a species. A really interesting application of the group’s work so far has been to solve the controversy over how one of our ancient ancestors – Australopithecus afarensis – walked. The best-known fossil of the species is a partial skeleton, which has been named Lucy!

Lucy is 3.2 million years young and, despite how little of her was found, researchers have estimated that the ratio between the lengths of Lucy’s humerus (an arm bone) and femur (a leg bone) is right in-between the ratios seen in humans and chimps. So, the question is, did she walk upright like humans or using her arms like chimps?

The group used their computer program to simulate Lucy walking in both ways. They worked out that it was far more energy-efficient for her to walk upright, given her bone structure. As animals very often adapt to be more energy-efficient, it seems likely that Lucy and her Australopithecus afarensis brethren walked upright like us.

To confirm this, the group compared the heel pressure Lucy was predicted to exert when walking upright with the pressure her preserved footprint implied. The two pieces of evidence matched. So, thanks to this research and the group’s remarkably clever computer software, we now know that 3.2 million years ago our ancestors were already walking upright. This suggests that we started walking upright when we were still living in the trees rather than when we’d moved down to the ground, as we previously thought!

I think this is a fascinating piece of research and the findings and potential applications are incredible, offering compelling evidence for how our ancestors have evolved. I look forward to hearing more about the group’s findings as their research continues.

Next up in this mini-series, we have a description of how ‘Personalised Medicine’ will work and how far away it is from being a reality. Come back next week for that one. Till then, have a great few days!


Posted by on June 30, 2013 in Biology



Maternity Care in Low Resource Settings – The art of achieving good outcomes with few resources

Based on an event called ‘Improving the Health of the Nation: An evening of presentations exploring the world’s biggest health challenges’

On Thursday last week I went along to a seminar session hosted by the University of Liverpool and the Medical Research Council (MRC). The MRC is the main government funding body for UK science research. The evening included a series of fantastic short presentations describing some of the MRC-funded work that’s being carried out in Liverpool.

Now, if you want to be cynical about things, it was quite a self-aggrandising exercise, but that doesn’t detract from the amazing science that was described. I’ll be covering the talks over the next few posts (before returning to silliness for the final Gremlins post!). This post describes the first talk of the evening. Enjoy!


Maternity Care in Low Resource Settings – The art of achieving good outcomes with few resources
By Prof Andrew Weeks (Professor of International Maternal Health)

In the UK, the death rate of expectant mothers has dropped massively over the last century. Just take a look at the graph below – look at it go! Between 1890 and around 1935, the number of pregnancies ending in the mother’s death ranged from 1 in 285 to 1 in 200. When you take into account that many women went through multiple pregnancies, that added up to a lifetime risk for some women approaching an incredible 1 in 10!

Graph showing UK death rates in pregnant women from 1890 to 1980 (Photo Credit: Women’s Health)
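That lifetime figure comes from compounding the per-pregnancy risk over large families. A rough sketch, assuming (simplistically) that each pregnancy carries the same, independent risk:

```python
def lifetime_risk(per_pregnancy_risk, pregnancies):
    """Chance of dying in at least one of the given pregnancies."""
    return 1 - (1 - per_pregnancy_risk) ** pregnancies

# Per-pregnancy risk of 1 in 200, compounded over bigger and bigger families
for n in (1, 5, 10, 20):
    print(f"{n:2d} pregnancies: {lifetime_risk(1 / 200, n):.1%}")
```

With a 1-in-200 risk per pregnancy, the cumulative risk climbs from 0.5% for one pregnancy to roughly 9.5% across twenty – which shows how, in an era of very large families, the lifetime toll could approach 1 in 10.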

The tremendous decrease in the death rate came between the mid 1930s and 1960s. There were a number of factors that contributed towards this drop, including the founding of the NHS in 1948 and the introduction into general medical practice of penicillin and other antibiotics from around the same time. Then, of course, there were the massive developments in the field of midwifery and the improvements, and increase in number, of blood transfusions.

Today in the UK, the percentage of women who develop complications during their pregnancy is very much the same as it’s ever been (around 15%) but very few die thanks to these developments in medicine. That’s because most of the improvements described focused on dealing with complications, rather than preventing them.

Now, this is all well and good for the UK, but there’s a shocking difference between figures from the UK and sub-Saharan African nations. Nigeria and Ethiopia in particular still have maternal death rates similar to those seen in Britain prior to the 1930s, largely down to under-resourced healthcare systems in those countries.

A powerful demonstration of the differences between British and sub-Saharan African healthcare is the treatment of pre-eclampsia – that is, high blood pressure during pregnancy. It’s a very common condition, affecting 1 in 20 expectant mothers. In the UK, it is treated easily and is hardly seen as a complication really. Yet, if left untreated, it can kill. Prof Weeks described a patient he’d seen on a recent trip to Nagpur, India who had suffered from untreated pre-eclampsia. The woman had lost her unborn child and had, herself, fallen into a coma from which she would be lucky to awaken.

So why is this easily treatable condition still such a big problem in these African countries? Unfortunately, as with so many things, it boils down to money. Despite the relative simplicity of treating pre-eclampsia, it costs the NHS a whopping £5,330 to treat every single case. Now, the NHS has that money. The sub-Saharan African countries with poor healthcare don’t. A shocking example is Malawi – the health service there is afforded a mere £43 to deal with each case of pre-eclampsia!

As Prof Weeks said, it’s damn near impossible to treat people with so little money. Indeed, the World Health Organisation’s (WHO) recommended way of delivering the baby in women with pre-eclampsia would cost the entire Malawian budget for that patient. That leaves no money to treat the mother, who would likely die as a result.

Sanyu Research Unit logo (Photo Credit: Electronic Product Supplies / Sanyu Research Unit / University of Liverpool)

That’s where Prof Weeks’ research group at the Liverpool Women’s Hospital Sanyu Research Unit comes in. They’re working on ways to get as much as possible out of the meagre sums of money available to these African health services. To date, they’ve looked into two potential alternatives for inducing labour that leave enough money to hopefully treat the mother too:


Misoprostol

This is a hormone made up of lipids, also known as a prostaglandin. It’s used to treat stomach ulcers but is not recommended for use by pregnant women, as it is capable of inducing labour. The group that looked into it realised that this side effect could in fact be turned into the main goal of Misoprostol’s use. After 15 to 20 years of research, the dose required to reliably induce labour has been calculated and it is now used to do so. The great news is that it only costs £1 per course – a considerable improvement over the £43 WHO-recommended method!

Foley catheter


A diagram of a Foley catheter (Photo credit: Wikipedia)

A Foley catheter is a flexible rubber tube with two separate channels, more commonly passed through the urethra (the tube you urinate through) and into the bladder. Just the picture on Wikipedia makes me wince! Urine drains down one of the channels only. At the bladder end of the second channel there’s a small balloon. Once the catheter is in place, sterile salt water is passed up this channel to inflate the balloon, ensuring that the catheter won’t fall out. The Foley catheter can be used to induce labour by inserting it into the cervix rather than the bladder: when the balloon is inflated, it stretches the cervix until it reaches the size necessary for birth.

The MRC is currently funding a trial to work out whether the Foley catheter or Misoprostol is the better way to induce labour. The trial, involving the University of Liverpool, the Medical College in Nagpur and Gynuity Health Projects in America, is comparing the two treatments’ effectiveness in 602 women. You can read the trial proposal here – it’s a bit wordy, as these things often are, but it gives more detail on the trial and the induction methods being compared.

I thought this was a really interesting overview of an area of science of which I had no prior knowledge. Prof Weeks explained things concisely but clearly, and left the audience in no doubt as to the global importance of his research. I hope you found the topic as interesting as I did!

The next post will describe the main points of a short talk about research into Biomechanics. Hope to see you then!


Posted on June 23, 2013 in Biology



Migraines – what are they and why was I cursed with them?! (by Becky Brooks)

Happy Friday everyone!

As promised, we have here the Science Gremlin’s first ever guest post! It comes from one of the lovely ladies who run a fantastic blog called Memetic Drift.

Becky is currently in the second year of a PhD in Biochemistry/Cell Biology at Bristol (because apparently 3 years of Biochemistry in undergrad wasn’t hard enough!) and is a really enthusiastic science communicator in her spare time.

Emily, meanwhile, did the same undergraduate course as Becky and now works in the university’s teaching labs when she’s not teaching herself computer programming, writing about science and generally geeking out (she’s great to talk to if you have anything nerdy to discuss!).

You can follow both of the girls on Twitter at @EmilyCoyte and @Becky_Brooks6 and I wholly recommend checking out their blog here. If you need convincing, just read this brilliant post, written by Becky, on the science of migraines.


It was one of those bright, but cloudy days. I was in the University library, in my usual spot by the window, trying to get my head round my lecture notes. I looked out of the window, then as I looked away, a familiar blind spot started to form.

This quite accurately captures what the onset of a migraine is like! Click on the picture for artist Laura Causey’s website – she’s done a lot of amusing cartoons. (Photo Credit: A Perfect World)


But the blind spot wasn’t going away. Instead, the edges started to dance, and the size of the blind spot increased to fill my vision until I couldn’t see. I rested my head on the desk. Gradually, the dancing went away and I could see again. I felt weird, vague. Every sense felt heightened – sounds were too loud, smells were too strong.

As I wandered home, I was suddenly aware of pain forming behind my right eye, which seemed to spread. Then the right side of my head started to pulsate with the agonizing pain of the worst headache I had ever had. I struggled to bed, and lay there in the dark for a few hours until it went away.

I later learned that I had been experiencing a migraine, preceded by an aura (the dancing blind spot). I’ve had a few of these now, with varying symptoms. But what are migraines? The standard definitions are “severe headaches” or “recurring headaches”, but this doesn’t really begin to cover it – people are often out of action for several days or a week. One of my friends suffers from a form of migraine called a “hemiplegic” migraine – he feels the effects for several months.

Trying to explain how a migraine feels to a non-sufferer is often tricky – not helped by the fact that nobody really knows for certain what migraines are in biological terms.

Let’s start with what we do know

Migraine is the most common neurological condition in the developed world – more prevalent than diabetes and asthma combined – and is more common in women than in men.

There are several types of migraine – about a third of sufferers will experience them as I did, with a warning aura, followed by headache, but people can have just the aura, or just the headache. There are widely accepted to be 5 main stages of a migraine, though not everyone will experience each stage. These are nicely described here, so I won’t dwell on them.

Galen was a notable Roman/Greek physician, surgeon and philosopher. (Photo Credit: Wikipedia)



Certain “triggers” often bring about migraines, although getting to know your own personal triggers can be challenging. My migraines seem to be induced by light, but I don’t get one every time I look at a bright light, so there must be other factors at play. Triggers can be emotional (such as stress or anxiety), physical (such as lack of sleep), dietary (such as low blood sugar, dieting and alcohol), environmental (such as light, and strong smells), and even some medications.

Records of migraine go back as far as the 2nd century, when Galen described a painful disorder affecting one half of the head – he called it “hemicrania” and believed it was caused by humors rising from the liver. The theory of humors has long been discredited; migraines are now thought to be the result of chemical changes in certain regions of the brain, which wreak havoc and change the way our brains respond to sensory information such as light, pain and sound. Precisely why these chemical changes occur, and why only some people are susceptible, is still unknown – but we do have some clues.

Genetics – studies on Familial Hemiplegic Migraine

Having migraines commonly runs in families (my mother suffers from them too), which points to a genetic basis. Migraine is currently believed to be “polygenic”, meaning that it is caused by mutations in many different genes, each contributing a little to the overall result. A lot of our current understanding of the genetic basis of migraine comes from studies of a particularly nasty type, familial hemiplegic migraine (FHM).

Hemiplegic migraine is a rare form of migraine in which the aura is accompanied by temporary weakness down one side of the body – “hemiplegic” means paralysis on one side. Sufferers can experience speech difficulties, confusion and even coma, and it is a really frightening experience, especially since the symptoms can closely resemble those of a stroke or epilepsy.

Mutations within 3 genes have been linked with FHM. Understanding how mutations in these genes might cause a migraine in FHM patients requires an understanding of neurotransmission, which in basic terms is the way the neurons in our brains communicate. When an electrical impulse passes down a neuron, it causes the release of a chemical messenger called a neurotransmitter into the synapse. This then causes an electrical impulse in the next neuron, and so on.

The 3 genes mentioned all code for proteins called ion channels, which sit in the cell membrane and are vital players in the release of neurotransmitter from neurons (see picture below – click to enlarge). The prevailing wisdom is that defects in these channels result in increased release of glutamate (a neurotransmitter) from neurons. This in turn makes the brain more susceptible to a phenomenon called Cortical Spreading Depression (CSD) – essentially an intense wave of neuronal activity, followed by a depression of that activity. CSD is thought to initiate the aura symptoms, as it can spread through the areas of the cortex that control vision.

The process of neurotransmission – where signals are transmitted between two neurons via a synapse. You have about 100 trillion of these synapses in your brain! Click on the picture to make it larger.(Photo Credit: Memetic Drift)



What about more common forms of migraine?

Although there is only a small amount of evidence for it, the causes of FHM might be similar to the underlying mechanisms of more common forms of migraine. For example, a study of more than 50,000 people in 2010 showed that patients with a particular variant in the sequence between two genes have a greater risk of developing migraine (journal article here and commentary here). What links it with FHM is that this region of DNA also seems to regulate the levels of glutamate (the neurotransmitter blamed for FHM). However, we know that genetics isn’t the whole story due to studies on identical twins – sometimes one twin suffers from them but the other one doesn’t.

FHM has given us an insight into the causes of aura and migraine in general, but where does the pain come from? Most areas of the brain do not register pain, but one network of nerves – the trigeminal nerve system – does. This is widely accepted to be the source of the pain during migraine, but what activates this system is unclear. One school of thought is that CSD stimulates the trigeminal nerve system directly. This would explain why it is that some migraine sufferers do not get an aura – it would depend on where the CSD occurred. If it occurred in a place unconnected to the visual side of things for example, you might not get an aura.

Another school of thought is that the trigeminal nerve system is activated not by CSD but by certain clusters of cells in the brain stem that have been shown to be active during and after migraine. The brain stem is the central hub for information passing to and from the body. The clusters of cells mentioned normally inhibit the firing of the trigeminal nerve system (i.e. they tell the nerves not to fire). As yet unidentified changes in the behaviour of these clusters might take the brakes off and allow the trigeminal nerve system to fire, causing the pain of migraine. What makes this idea attractive is that these clusters of cells also control the flow of sensory information, from things such as light, into other regions of our brains, which would explain a migraine sufferer’s sensitivity to light, smells and noises. These cells can also be affected by our emotional state, which would explain why stress is a trigger for some people.


So we have some ideas about how the aura and the pain of migraine might arise.  The neurotransmitter glutamate might be the trickster involved in causing Cortical Spreading Depression and auras, although the link has not been definitively proven in the common migraine yet. What do seem to be important are changes in the normal workings of ion channels, possibly due to genetic changes. The resulting Cortical Spreading Depression, or other factors, might then be responsible for the pain itself.

Let’s hope that the causes of the common migraine will be made clearer in the near future – it could be important for designing new treatments. At the moment, there are no treatments available that are specific to migraine – most drugs used were originally developed for other diseases. I’m willing to bet a specific treatment would be a money-spinner! Moreover, it would be nice to know what’s actually going on in my brain when I’m having a migraine, as it is downright weird.

For anyone interested in finding out more about migraine, The Migraine Trust have a selection of informative and interesting articles about various aspects of it.

Sources & Further Reading

Russell, M.B. & Ducros, A. (2011) Sporadic and familial hemiplegic migraine: pathophysiological mechanisms, clinical characteristics, diagnosis and management. The Lancet Neurology 10(5), p.457–470.

Anttila, V. et al. (2010) Genome-wide association study of migraine implicates a common susceptibility variant on 8q22.1. Nature Genetics 42, p.869–873.


Posted on June 7, 2013 in Biology, Guest Posts



Just how scientifically possible are Gremlins?: Part 2

And so we (finally) come to Part 2 of this series! If you haven’t read Part 1, I suggest you follow this link and get up to speed otherwise this might seem a tad strange! In this part, we’ll look into the second rule that the ‘Gremlins’ films set out for caring for a Mogwai (or Gremlin if you were unfortunate enough not to have read Rule 3 before giving your Mogwai a bath!). We’ll also be asking whether these creatures’ mischievous/dangerous behaviour is like anything seen in nature. We’ll start with a recap of the rule. Enjoy!

Rule 2. Never expose them to bright light.
Bright light scares Mogwai and Gremlins alike, whilst sunlight kills them.

The first question to ask is whether any species in nature is actually scared of light. The short answer is… sort of… but not really! We’ve all seen moths being drawn towards lights at night – this behaviour is called ‘positive phototaxis’. If there’s such a thing as ‘positive phototaxis’, then ‘negative phototaxis’ must exist too, I hear you cry! And indeed it does – many organisms actively avoid exposure to light. Many species of cockroach, for example, will avoid lights and well-lit areas; and a tiny roundworm called Caenorhabditis elegans (catchy, I know) can be made to change direction by shining light on its ‘head’.

Now, none of these species are scared of light as such – their behaviour is simply instinctive (or ‘innate’) and designed to help them survive. Cockroaches are far more visible to many predators when exposed to light, so they instinctively avoid it. The roundworm, meanwhile, lives in soil and uses the detection of sunlight to determine in which direction it should move in order to stay buried in the soil where it feeds on bacteria. So, these creatures aren’t scared of the light itself; but they know to avoid it in order to improve survival. This tends to be the case in all ‘negatively phototactic’ species.

It could, of course, be argued that Gremlins aren’t scared of light in the way we think and that they just instinctively know to avoid it because sunlight kills them. If they have evolved to avoid all sources of bright light then they would increase their chance of survival. On balance, I think I’m going to say that this part is quite realistic and scientifically possible!

The next question is whether anything in nature can be killed by sunlight. Whilst no animals are directly killed by light, they can be killed by the long-lasting effects it has upon their bodies. We are, of course, talking about skin cancer caused by the ultraviolet light emitted by the sun. This radiation is split into 3 categories – UV-A, UV-B and UV-C – depending on the wavelength of the light. UV-A has the closest wavelength to that of visible light and, since visible light does no damage to animals’ skins, UV-A does relatively little. Meanwhile, UV-C is usually absorbed by the Earth’s atmosphere, so its effects on animals are rarely seen.

The light spectrum, showing the wavelengths of the 3 different types of UV light (Photo Credit: Ken Costello at


UV-B is the most common cause of skin cancer. Due to its wavelength, UV-B is absorbed by DNA. In my first post I talked about how DNA is made up of combinations of molecules called A, T, G and C. Well, UV-B radiation causes neighbouring Ts to bind to one another when they shouldn’t. Small amounts of this DNA damage occur frequently with sun exposure, but that is rarely a problem, as healthy cells are capable of repairing their DNA. It is when the level of damage is too great, or the repair mechanisms break down, that mutations build up, increasing the chance of one leading to skin cancer. So, whilst not exactly a direct cause of pain and death, sunlight is involved in killing many animals. The effects are not quite as severe as those Gremlins suffer when exposed to sunlight, though, so I must admit that skin melting in the sun is quite unrealistic!
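To make the “neighbouring Ts” point concrete, here’s a hypothetical little Python helper that scans a DNA sequence for adjacent thymines – the positions where a UV-B photon could cross-link two Ts into a dimer. It’s purely illustrative, not a real bioinformatics tool:

```python
def potential_dimer_sites(dna):
    """Return the indices where two adjacent thymines (TT) sit on one strand -
    the sites vulnerable to UV-B-induced thymine dimers."""
    dna = dna.upper()
    return [i for i in range(len(dna) - 1)
            if dna[i] == "T" and dna[i + 1] == "T"]

print(potential_dimer_sites("ATTGCTTAACGTT"))  # [1, 5, 11]
```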

I should say, as a caveat, that ultraviolet light is capable of killing bacteria and viruses pretty much instantly (even faster than it can kill a Gremlin). However, given how long this post is already going to be, and given that these microorganisms cannot really be compared to Gremlins, I’d best save that one for another time…


UV light breaks the bonds that hold together DNA double helices. Most commonly, it breaks bonds involving thymine molecules, which then bind to one another instead. In this malformed state, the DNA cannot function. (Photo credit: Wikipedia)

What I’d like to go over next is why Mogwai and Gremlins are so aggressive and mischievous. As with many of the characteristics displayed in the film, this can be seen in many species in nature, albeit in a muted, less entertaining way.

In the films, the Gremlins have essentially been dropped into an unknown environment. You could view their anarchic behaviour as attempts to adjust to, and assert their place as a dominant species in, a new food chain. If seen this way, Gremlins could be said to be an ‘invasive species’. There are many examples of such organisms in nature (although obviously none of them take over cinemas or attack New York hotdog stands!). Often they are artificially introduced into an environment by human activity, either intentionally or accidentally.

A high-profile example of an intentional introduction is the Cane Toad, which has been a blight upon Australia ever since being taken there from Hawaii in 1935 (the toad is actually native to Central and South America, and had itself been introduced to Hawaii before that). Originally envisaged as a way of killing off Cane Beetles, which were destroying sugar cane crops, the toads acclimatised far more successfully, and with far more severe consequences, than anyone imagined.


The oh-so-beautiful Cane Toad (Photo credit: Wikipedia)

Since their introduction to their new environment, their numbers have swollen from a few thousand to over 200 million. They have spread diseases, outcompeted native species, poisoned almost anything that tried to eat them and generally disrupted the finely balanced ecosystem through their aggressive behaviour. And to add insult to injury, they’re not even effective at killing Cane Beetles!

So, clearly there can be serious consequences to mankind’s manipulation of nature. People are learning that we cannot predict every change that will be caused by introducing a foreign species into an ecosystem. Unfortunately, sometimes we cause ecological disasters without even intending to alter an ecosystem. Invasive species are often destructive and cause a great deal of harm to native species. In this respect, Gremlins very much fit the bill. As for the mischievousness… well, they aren’t the only naughty animals out there!

(Video credit: Hassanane’s YouTube channel)

That brings us to the end of Part 2 of this post. I hope you’ve enjoyed it, as Part 3’s coming whether you want it or not! Before that, though, I’ll be bringing you the first post from a couple of fellow bloggers, as promised earlier in the week. Keep an eye out for that one – these guys are good!


Posted on May 28, 2013 in Biology, Silly Science



The Necessity of Breadth in a Modern University Education

Based on a presentation on 21/05/13 by:
Greg Petsko (Prof of Biochemistry and Chemistry, Brandeis University & Member of the US National Academy of Sciences)

Happy Wednesday, everyone! I went to an absolutely fantastic seminar yesterday – one that was funny, insightful, informative and quite moving at points. It concerned the current state of university education and how it influences wider society. I’ll attempt to convey some of the passion that Professor Petsko injected into his talk as I explain his views, as well as offering up some of my own. Enjoy!


Teaching science is important, but is it all that scientists should consider? (Photo credit: Wikipedia)

Yesterday’s speaker, Prof. Greg Petsko, was in Oklahoma the day before (for those of you reading this sometime in the future, that was the day a significant part of Oklahoma was devastated by an EF5 tornado). He was 3 miles away from the tornado and had to travel for 24 hours to get to us in time, yet he gave one of the most engaging talks I’ve ever attended. I strongly recommend reading his monthly column on Science and Society in the journal Genome Biology. Most of these columns are free to read and you can find a list of them here.

Prof. Petsko started by making a very interesting point. These days, we consider the arts and sciences to be very different beasts. I regularly blame my logical and analytical way of thinking, born of a scientific education, for making me consider most modern art to be complete twaddle! Yet, centuries ago, the famous scientist Galileo took a great interest in the arts, whilst the renowned artist Michelangelo was a keen follower of the sciences. How did the cross-disciplinary ideals exercised by, arguably, two of the greatest minds ever known disappear over time?

The professor told us of an event that he believes was key in establishing this divide. In 1959, the British scientist and novelist C.P. Snow delivered a controversial lecture entitled ‘The Two Cultures’. He claimed that followers of the sciences and the humanities had split apart, forming two distinct cultures between which communication was failing. He also declared the British education system to be in decline. Petsko believes that the way Snow reported his views led the general public, both in the UK and the US, to accept that this was simply the way society now worked, and that the idea took hold from that moment.

I have to say, I don’t fully agree with Petsko on this point. Whilst I don’t deny that there is a clear divide between the sciences and the arts, I’d argue that the divide must have already existed for Snow to have observed it. His reporting of the matter doubtless increased public awareness of the fact, but I’d hardly label him as the single cause of modern day problems in university-level education. The chances are that, if it was possible for one lecture to make people notice the divide and largely accept it en masse, it was already well-established and known by people other than Snow.

Moving past the contentious issue of how we got here though, I have to admit that Petsko coined an excellent phrase during his lecture. He said that the idea that art and science should be separate is a “zombie idea” – that is, “an idea that should have died a long time ago but keeps coming back to eat our brains”. Brilliant!

This fantastic term applies to many different ideas, including the ridiculous, yet worryingly widely believed, notion that “only subjects of practical value have value”. As Petsko pointed out, in the 1970s almost nobody in America was interested in studying viruses – vaccines existed for all the major viral diseases, so continuing to study them was seen as a pointless occupation. This blinkered attitude meant that when HIV started spreading through the population, few were prepared for it.

This is a compelling demonstration of the fact that we cannot know what will be important to us in the future. As such, it can be argued that universities should be offering students the broadest education possible, whilst still maintaining quality, of course. Cutting edge science will almost always become outdated and intimate knowledge of its workings will become obsolete. Yet languages and politics, for example, remain useful for far longer than any of us will be around and may enrich our later lives in ways we can’t possibly conceive of yet.

Prof. Petsko wrote a hilarious, yet persuasive, open letter to the President of the State University of New York at Albany, regarding this matter. The President had recently announced that several of his arts departments were to be closed, essentially for cost-cutting measures to allow the sciences to continue and flourish. I bet he wished he hadn’t.

As Petsko summarised for us, assuming a piece of information is correct, it is worth something – it has a value – regardless of the subject into which it can be categorised. Information from one academic field can, in fact, impact upon another. For example, in 1865, the German chemist Friedrich August Kekulé reported the correct structure of the compound benzene for the first time. Chemists had known which atoms made up a benzene molecule but couldn’t work out how they fitted together. Kekulé realised that the atoms must be arranged in a ring, rather than in a line as everyone had supposed. His insight came as a result of a dream in which he saw the ancient Ouroboros symbol of a snake eating its own tail, but made up of carbon atoms. Without an appreciation of the arts, it is possible that Kekulé would never have worked out benzene’s structure.


Kekulé and the cyclic structure of Benzene (Photo credit: Wikipedia)


The religious symbol, Ouroboros (Photo Credit: Wikipedia)

During the later stages in his talk, Prof. Petsko attempted to sum up his beliefs with two major reasons why breadth of education is important. I felt that this was the weakest part of the professor’s argument as it seemed odd to try and pick just two reasons for suggesting such an important shakeup in education; plus I didn’t necessarily agree with the reasons he chose.

His first reason was that, without studying the humanities and critical thinking, people would be more easily manipulated by others and taken in by those with authority. It was a fair point that science cannot teach us everything we need to know about the way the world works, though I didn’t feel it was his strongest argument.

Petsko’s second main reason for not separating the sciences from the arts was that, without an appreciation of both, one risks becoming coldly scientific at the expense of one’s humanity. He illustrated this point by describing atrocities committed in the name of science, such as the US Army spraying toxic chemicals over 6 US and Canadian cities during the Cold War to test the dispersal patterns of chemical weapons.

I have to strongly disagree with the professor on this point. It suggests to me that he believes scientists lose all sense of morality and conscience by devoting themselves to science. I would counter that the examples he put forward dated from no later than the Cold War – products of their time and of desperate governments willing to exploit science’s darkest potentials for ‘the greater good’. It seemed a rather odd claim to make, given the compelling arguments that he had previously put forward in an otherwise stellar presentation.

Whilst I may not have fully agreed with everything he said, I hope that, in this post, I’ve conveyed some of the enjoyment I gained from listening to Prof. Petsko talk. I am completely supportive of his overriding view that scope and breadth are essential in modern education and, for the most part, his points were made charismatically and eloquently. Despite my misgivings regarding some of his comments, I think you’d have to agree that he’s clearly struck a chord with me regarding this important modern debate. And I hope that I’ve passed some of that enthusiasm on to you. Whether you agree or disagree, this is a topic that needs to be discussed. For my part, I agree with Prof. Petsko that “the purpose of an education…is to make your life richer”.


Posted on May 22, 2013 in Non-science posts



News and excitement!

Happy Monday, everyone!

As promised, here is the link to the article I wrote for The Conversation news website.

In addition to this, I have some exciting news! First off, I’m planning 2 major posts this week (I know, I know – you wait 3 weeks for a post and then 2 turn up at once…). I’ll be bringing you Part 2 of the Gremlins article and an overview of a seminar entitled ‘The necessity of breadth in a modern university education’. Not strictly on-topic for this blog but it’s in the same ‘Science and Society’ seminar series as my first post, and it promises to raise some very interesting questions.

Secondly, after attending the Science Communication Conference last week, I’ve invited several fellow bloggers to post on this blog! That’ll mean more topics can be covered here and you’ll be introduced to some outstanding science communicators and their takes on the world.

All in all, exciting times are ahead! I’ll see you soon for the next full entry.


Posted on May 20, 2013 in Non-science posts