
Archive for the ‘Science’ Category

A few days ago, news broke of the genome-sequencing of DNA from a 7000-year-old skeleton found in Spain. While the ancient (variously described) man-or-boy’s genetic information is of course interesting on all sorts of levels — for instance, his lactose intolerance gives clues to the timing of pastoralism — both of the news sources I encountered focused primarily on his appearance. You see, he was (OMG) dark-skinned, but … get this … he had … BLUE EYES! I know right! Here’s the Guardian: Swarthy, blue-eyed cave man revealed using DNA from ancient tooth and the New Scientist: Ancient European hunter-gatherer was a blue-eyed boy.

The New Scientist also noted that in addition to dark skin, the man/boy had “hair like his African ancestors”. Both they and the Guardian chose to illustrate the story with this image:

Three days later, the Guardian ran this story about how nearly 20% of Neanderthal DNA lives on in modern humans. The article goes on to detail how much of the DNA that’s been retained is in keratin, a protein found in hair, nails, and skin. Now, I’m no geneticist, but to me that certainly implies that it’s at least possible that things like straight hair and relatively light skin — i.e., the traits shared by most non-African human populations, who carry Neanderthal DNA — might have come from Neanderthals. Indeed, the New Scientist’s version of the same story goes into detail specifically about Neanderthals passing on at least one of the genes involved in skin pigmentation, and speculates that Neanderthal keratin might have influenced Eurasians’ straight hair. The Guardian, though, chose to illustrate that story like this:

(The New Scientist, to their credit, used an illustration of a white guy of apparently indeterminate species.)

Now, look. I’m not an archaeologist, or a geneticist, or in any way qualified to comment on the actual science behind these stories. I’m not commenting on the science behind them. And it’s possible (though it seems unlikely) that the two illustrations above are fair representations of whatever the actual science behind these stories indicates. If so, though, it got well lost in translation. I try very hard, as a matter of general principle, to give people the benefit of the doubt, to extend maximal argumentative charity. But when one news story says “dark-skinned, blue-eyed man/boy with African-textured hair” and is illustrated with a drawing of a white guy with a tan, while another talks about Neanderthals having imparted skin and hair DNA to Eurasian humans, and is illustrated with a picture of a person with light eyes and Neanderthal brow ridges but who looks otherwise African, it’s hard to see that as anything but the tired, insidious repetition of the old idea that African people are somehow more “primitive” than others, particularly Europeans. The modern human, being human, is made as light as possible given the evidence presented in the story the illustration accompanies, and then a few shades lighter than that, just for good measure. While the Neanderthal, an extinct species whose very name has become synonymous with ‘primitive’, well, they’re well ancient, right? Better make them as brown as possible, no matter what the actual evidence being presented is saying. It is as though whoever makes (or matches) the illustrations for these stories did not even read their contents — they just went with whichever image “felt right”, which of course means “moar primitive = moar darker”. It is not only contributing to the stigmatization of Blackness (a drop in the bucket, maybe, but still); it is quite literally dehumanizing it.


There’s been a fair bit of recent and less recent hand-waving about the methodological flaws dogging medical science. The problems seem to be these: First, human trials are difficult and expensive, meaning that a significant chunk of them are done by private (or, perhaps worse, publicly-traded and thus shareholder-beholden) companies with a vested interest in the treatments they’re developing turning out to be effective. Second, where trials are done by universities, something about the structure of grant funding means that researchers are under tremendous pressure to publish positive results – the combined effect of the general academic pressure to publish and the literature’s strong and well-documented (if much-bemoaned) bias towards publishing positive results. Thus, negative and especially inconclusive trial results “slip through the cracks”, going unpublished and leading to an unconscionable level of seemingly avoidable human suffering.

This is clearly a serious problem, but thus far the only concrete solution I’ve seen proposed comes from this New Scientist article, which profiles a start-up agency whose remit is specifically to reproduce trials, with the power to award those that prove reproducible with some sort of “reproducibility certificate”. This sounds great, and I’m all for it, but surely the simpler and more obvious answer is to put in place some mechanism that gets all those unpublished results published in the first place? Indeed, Ben Goldacre’s article notes: “In any sensible world, when researchers are conducting trials on a new tablet for a drug company, for example, we’d expect universal contracts, making it clear that all researchers are obliged to publish their results, and that industry sponsors – which have a huge interest in positive results – must have no control over the data.”

It’s so glaringly obvious that I’m hesitant even to write it, feeling for sure that this must have already been proposed, or is already being proposed by hundreds of people who are much closer to the medical research industry than I am, but: what about a universal research results database? The information-cataloging technology for this certainly exists, and it seems like it would solve several problems simultaneously. All research would be visible, and research proving a negative wouldn’t feel “wasted”. Something like this would presumably benefit all areas of science, but it seems especially pressing in medical research, given the potentially life-threatening consequences of messing that up.
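To make the idea concrete — and purely as a sketch, with every field name and value here hypothetical, not drawn from any real registry — a record in such a database might look like this: trials get registered before any data exist, so unpublished results become visible as records that never progress past “pending”.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TrialRecord:
    """One entry in a hypothetical universal results registry."""
    registry_id: str          # assigned at pre-registration, before any data exist
    hypothesis: str
    sponsor: str
    registered_on: date
    outcome: str = "pending"  # "positive" | "negative" | "inconclusive" | "pending"
    data_url: Optional[str] = None  # raw data deposited regardless of outcome

registry = [
    TrialRecord("REG-0001", "Drug X lowers blood pressure vs. placebo",
                "Example Pharma", date(2014, 1, 26)),
]

# Because registration precedes results, trials whose outcomes were never
# reported are easy to find: they are simply records still marked "pending".
unreported = [t for t in registry if t.outcome == "pending"]
```

The key design choice is that negative and inconclusive results live in the same table as positive ones, so “missing” trials are detectable by query rather than by investigative journalism.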

I’ve spent the last several years coming to the gradual and disappointing realization that scientific research doesn’t usually work the way they tell you it does when they teach you about the scientific method in elementary school – an idealized picture that seems to still inform quite a lot of professional philosophers’ picture of scientific research. I recognize that there are “real world constraints” that make perfect application of the scientific method impossible or unrealistic, or unethical with human subjects. But does it really have to be so bad? For one thing, there is presumably a regulatory body that approves experiments on human subjects. How on earth are the non-publishing gag orders that Goldacre describes making it past their ethics committees? Shouldn’t it be the opposite? And if the data-publishing aspects of the experiments aren’t part of the proposals that have to go before the ethics boards – well, why aren’t they?

Anyway, I’m coming at all this as someone who is not a practicing scientist myself, and would welcome any input or feedback from those of you who are.


Those who know me will remember how, well before finishing my degree, I was already regretting not pursuing my childhood passion of Zoology instead of following my nose into Philosophy — but by that point it was already too late to switch degrees, since I couldn’t afford another two years of undergrad, which is the minimum it would’ve taken to switch, assuming they’d even have let me do so. After finishing my degree, I started looking around for ways to somehow shoe-horn it into some sort of scientific discipline, mostly unsuccessfully. Besides which, I’d found that I pretty much couldn’t afford any kind of further study, since I would still be classed as an “overseas” student until I had been resident for three years “not primarily for the purpose of education”.
(more…)


Today I came across one of those hi-larious comic flowcharts, this one about alternative medicine. Now, it’s hardly new or innovative to make fun of ‘alternative therapies’ (though this is a fairly well-done piece of humour), but I want to draw your attention to one corner of it in particular. That is, the options for those wanting a “wholly ‘natural’ remedy” and who believe that “Yes, Big Pharma are the devil”. The choice is then based on the “Quantity of active ingredients required”. “Bugger all” leads to “Homeopathy”;* “An unknown, uncontrolled & untested amount” leads to “Herbal Medicine”.

This idea of testing has been at the centre of most of the more civil debates I’ve had or seen about herbal medicines, and it’s an important one. Many arguments are marred throughout by both sides’ tendency to argue as though more committed to being on a side than to striving towards Truth, no matter what they may claim. That is: typically, someone on the anti-herbal side will point out that little or no medical testing has been done for most herbal remedies. Then someone on the pro-herbal side will either bemoan the lack of funding for testing in most places — at which point arguments usually end because the opponents see that they are on the same ‘side’ really, the side of scientific testing, they are just coming into it with differing hypotheses — or else the pro-herbalist will question the validity of medical testing itself. And that is when it usually gets nasty.**

It’s this sort of oppositional attitude, I think, that leads people to ridiculously extreme positions of either disregarding all scientific research, or blindly accepting it all just because it’s *~*~science~*~* (though it’s worth noting that the latter view seems to be much more prominent among rationalistic non-scientists than practicing scientists or especially scientific researchers). The trouble, of course, is that a lot of scientific research, and — this excellent article in this month’s Atlantic magazine leads me to believe — medical research in particular, is often riddled with methodological flaws. Some are the result of bias or fraud, but many are simply unavoidable, and probably many more are simple oversights. It is simply not healthy — literally or figuratively — to accept all research uncritically.

In the above-linked article, meta-researcher Dr. John Ioannidis claims, and has come up with a mathematical proof to demonstrate, that under normal conditions, most medical research turns out to be wrong. Moreover: “His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” And yet, of course, it would be wrong to say that this is a reason to automatically distrust all medical research — though it certainly appears to be a reason only to trust randomized trials, and even to take those with a grain of salt. It is still less reason to think we should abandon the concept of medical research altogether. It just means that we need to work to make that research better.
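The Atlantic article doesn’t reproduce the maths, but Ioannidis’s well-known framing (from his 2005 paper “Why Most Published Research Findings Are False”) works in terms of positive predictive value: the chance a “positive” finding is real depends on the pre-study odds that the tested hypothesis is true, the study’s statistical power, and the significance threshold. As a rough sketch, assuming that standard formula (the specific numbers below are illustrative, not taken from his model):

```python
def ppv(prior_odds, power=0.8, alpha=0.05):
    """Positive predictive value: the probability that a statistically
    'positive' finding reflects a true relationship, given the pre-study
    odds R that the tested hypothesis is real."""
    true_positives = prior_odds * power   # R * (1 - beta)
    false_positives = alpha               # significance threshold
    return true_positives / (true_positives + false_positives)

# Well-powered trial, where 1 in 10 tested hypotheses is real (R = 1/9):
print(round(ppv(1 / 9, power=0.8), 2))   # 0.64 -- about a third still wrong
# Underpowered exploratory study with the same prior odds:
print(round(ppv(1 / 9, power=0.2), 2))   # 0.31 -- most "findings" are wrong
```

The sobering feature is that even a “significant” result from a properly run study can be more likely wrong than right when the field is testing mostly long-shot hypotheses with low power — which is consistent with the high observed refutation rates for non-randomized studies quoted above.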

An example from my own life has been niggling at my conscience for years now. St Andrews is a major centre for certain kinds of psychological research, and plays host to plenty of psychology grad students with their own research projects, so it is fairly common for students to earn bits of extra money by participating in experiments. But St Andrews is also a small town, the university is small, and quite a lot of students know each other — even more so, I would think, within the set of students who participate in these experiments, because a lot of them find out about them through friends who’ve done them too. I mean, really, what student anywhere would give up the chance, or fail to pass on word of the chance, to earn almost minimum wage for pressing buttons for 45 minutes?

For most experiments, the fact that a lot of participants know each other is surely a non-issue. But one of the bigger labs within the department is one that researches perceptions of faces. This surely must be affected by the participant’s familiarity with the faces they view within the experiment. In the one I did, I was first given a basic colour-blindness test and then asked to rate how “healthy” various faces looked. There were fifteen or twenty faces in the cycle, and I knew close to a third of them. Two were close friends! I’m sure this must have made a difference, because I could tell where my friends’ faces had been digitally manipulated or stretched or discoloured, which I generally couldn’t with the strangers’.

I tried to tell someone this at the time, but they were all so busy and I was so shy that I didn’t work up the courage to claim anyone’s attention long enough to point out this potential (and potentially serious) methodological flaw. Then they took my picture to add to their database, gave me my handful of coins and sent me on my way. Ever since, I’ve been idly wondering whether or not I should email someone, but I don’t know who I would email, and the more time passes the more embarrassing it would seem to be, to initiate the discussion. But REALLY. It’s probably not something that most experimenters would need to think to control for, if they were in larger cities or had larger or older databases or whatever, but in that particular situation, it seems like a gross oversight — and one quite easily corrected within the experiment, with just a button or something the participant could click if the face generated was an acquaintance’s. Or by having a time lag of a good few years between entering a participant’s photograph into the database and having it show up in experiments. Or something.

The good news, though, is that Dr. Ioannidis’ work has been exceptionally well-received by the medical community. Yet there is apparently controversy within the meta-research community for exactly the reasons described above: some fear that seeding public doubts about scientific research will simply drive people to seek “alternative” therapies or ignore the medical establishment, or their own health, altogether. I much prefer his proposed solution. To quote the Atlantic article: “We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough.”

* As well it should.

** Let us be clear: it also gets nasty because of the anti-herbal camp’s tendency to lump herbal remedies together with all other “alternative” therapies, like homeopathy and crystal healing and bullshit like that, and equivocate between them in their refutations; and by the tendency of many proponents of herbal remedies to also believe in bullshit like homeopathy and crystal healing.


A Conjunction of Drones

As ever more agricultural land is used to grow fuel instead of food, all agriculture may soon become less productive due to the loss of bees. Yeah, the bees are dying, or disappearing, en masse. I have a laptop with an inbuilt wireless card. Since I started using high speedz internet, I seldom read for pleasure and my attention span has dwindled from a lengthy, mighty and powerful focus to a half-absentminded 30 seconds (even now, I have two other tabs open and I’m listening to music while drinking chemical coffee, which I use to replace all the sleep I skip in my busybusy life). I can barely cook, and my instant porridge exploded in the microwave.

What do all these things have to do with each other? Maybe nothing. But this morning, I was leafing through the newspaper and came across an article with a possible explanation for the Case of the Disappearing Bees: (more…)


I’ve started re-reading Rousseau’s ‘The Social Contract’, preparing to write an essay on it. Like so many of his predecessors, Rousseau is concerned with determining which aspects of human society are ‘natural’ and which are ‘artificial’. But I think they are setting up a false dichotomy.

I was particularly struck by a bit in the beginning of Chapter II, in which he claims, first, that the only truly ‘natural’ society is the family. Fair enough. But he goes on to claim that if any connection is maintained between a father and his children after those children have reached adulthood, it is so “no longer naturally, but voluntarily”. As though voluntary human actions were somehow unnatural.

Now, before y’all start correcting me, I get what he’s saying: he’s using ‘nature’ to describe what comes forcefully naturally to our natures, like breathing. (We can control our breath to a certain extent, but we breathe without thinking about it and we cannot stop breathing or we die. I understand everything through analogy because I’m a bit simple like that.) Then he’s using ‘artifice’ to describe those things which we do only through the exercise of our minds.

Our natural, human minds. I suppose my objection stems from the general separation of ‘human’ vs. ‘nature’, which is often simply false. I suspect a bit of it is religiously-based (God created Man and the animals, not Man of the animals), though even without religion there’s a fair amount of egoism in our conception of species.

What I find particularly hilarious is the argument — based on agreement with the above — that ‘believing in’ global warming is somehow egoistic. “As though one single species could influence the whole planet so much!” What do these people think the atmosphere was like before algae and oceanic phytoplankton?

Speaking of which: there was an article in the Guardian today saying that climate scientists are blaming global warming for the huge increase in Atlantic storms over the past decade. Well, freaking duh! I wonder how long it will be before someone takes [more explicit, publicised] note of the fact that the current changing rainfall trends, IIRC, pretty much mirror the changes seen at the end of the last Ice Age. Hmmf.

Anyway. The Guardian also had an article talking about Nature Writing as a genre. Which is all well and good. But what confused me was its tagline:

“A new genre of writing is putting centre stage the interconnectedness between human beings and the wild.”

Excuse me? Okay, granted I haven’t read any of the books it talks about, but in what way is writing about nature and natural things a NEW genre? Surely it’s one of the oldest there is! E.g.:

A noiseless patient spider,
I mark’d where on a little promontory it stood isolated,
Mark’d how to explore the vacant vast surrounding,
It launch’d forth filament, filament, filament, out of itself,
Ever unreeling them, ever tirelessly speeding them.

And you O my soul where you stand,
Surrounded, detached, in measureless oceans of space,
Ceaselessly musing, venturing, throwing, seeking the spheres to connect them,
Till the bridge you will need be form’d, till the ductile anchor hold,
Till the gossamer thread you fling catch somewhere, O my soul.

– Walt Whitman (1819-1892)

Writing about humans’ interconnectedness with nature is new, then, eh?

Finally, there is the natural end of all life.

R.I.P. Ingmar Bergman.
