Sunday 30 March 2014

A Very Reasonable And Moderate Position On Root Canal

     I figured it was about time I weighed in on the root canal controversy. I'm going to assume you know what root canal is, and spare you the gory details the anti-root canal crowd likes to use to shock people. Sometimes they carry around posters, big magnified closeups of actual root canal procedures, to raise awareness of just how horrible it is. Personally, I think that's more than enough to persuade me that I'd never want to have one, but some of the anti-root canal extremists even spread stories about root canal causing cancer and things like that, which don't seem to have much science behind them, but they like to play them up to scare people out of having root canal.
     Anyway, I'm not an extremist. Although I would never choose to have root canal myself, I think it should be up to the individual to decide whether or not to have it done for himself. HOWEVER, I really don't think people should be allowed to use root canal as an alternative to proper dental hygiene. I mean, if you brush and floss regularly, or better yet if you avoid cavity-causing foods entirely, and through no fault of your own you still get a decayed tooth bad enough to need root canal, then you should not be punished for that. But people who don't brush or floss really have no right to then turn around and have an endodontist clean up after their carelessness.
     So I think people who want root canal should need a letter from their dentist confirming that it's actually medically necessary, and that they weren't at fault for the damaged tooth in the first place. If you're not going to take care of your teeth, you shouldn't expect other people to do it for you.

Saturday 29 March 2014

That's not what she said.

     One of the basic rules of quotation marks is that they're supposed to go around actual verbatim quotes, not paraphrases of what you think someone means. This morning a Facebook acquaintance shared an image macro (I hate to call them memes) which showed the following quote over a picture of Senator Dianne Feinstein at a committee hearing:
"Freedom of speech is a privilege intended for educated professionals. It should be illegal for a high-school dropout to promote anti-government propaganda on his 5 dollar blog, if he cannot properly verify his statements."
     That is so preposterous a statement that I could not believe it would have been uttered by anyone with any awareness of basic 1st Amendment thinking. So I tracked down the YouTube video [edited April 2 to add link] from which the still picture was taken, and watched it, and of course she never actually utters those words in that grammatical order. Some of the words appear, but not in the way implied by the false quote.

     For example, she does use the word "privilege", although she never says "Freedom of speech is a privilege". In fact, she's talking about privilege in the legal sense, a special exemption from certain kinds of legal obligations that would otherwise apply.
     Consider "attorney-client privilege". There are circumstances in which a person may be legally required to answer questions truthfully, such as when testifying in court or being examined on an affidavit. Privileged communications, however, are exempt; your lawyer cannot be obliged at law to disclose things you tell her in connection with seeking legal advice. If you go to your lawyer to ask how you can hasten your rich uncle's death so you can inherit, and she tells you "Don't do it, because it's against the law," then you know that's not an option and you can obey the law. If your uncle then dies under suspicious circumstances that you actually had nothing to do with, the fact that you sought legal advice should not be used as evidence to bring you under suspicion. We want people to seek legal advice.
     So what Ms. Feinstein was talking about here was whether or not there should be a similar sort of privilege for people who publish official secrets, and if so, what criteria there should be for someone who would enjoy that privilege. You can promote anti-government propaganda on your blog all you want, but if you publish actual secret documents, you might well be subject to criminal proceedings.
     There's a difficult line to draw here. On the one hand, there is sometimes a legitimate state interest in secrecy (although I tend to suspect the state claims this interest more often than they should). Obviously, we do not want people to be able to publish the new names and locations of people in the witness protection program, and claim freedom of speech as their defence. People who expose legitimate secrets ought to face legal consequences, and people who come into possession of such stolen documents might well be legally compelled to say where they got them.
      On the other hand, government secrecy always carries a great risk of protecting corruption and abuse of power. Sometimes the only defence against this is for a whistleblower to leak the information to the press, but if the reporter you give the documents to can be forced to reveal your identity, you're less likely to blow the whistle in the first place. So, because it might sometimes be in the public interest to facilitate such whistleblowing, maybe there ought to be a form of privilege for actual journalists to protect their sources. (Presumably, actual journalists will be bound by some form of professional ethics when they decide whether or not to publish something.)

     Now, I am not saying anything about whether or not Senator Feinstein's position is an appropriate one. How much government secrecy there should be and how it should be protected is a complicated issue, and you can certainly disagree with Ms. Feinstein's approach to it. You can even use the strawman if you want, and say what you think her position is. But putting quotes around your own paraphrased (mis)interpretation of her position in order to imply that you're quoting her actual words? That is, in a word, lying.

Wednesday 26 March 2014

On Cultural Appropriation


     So, a couple of weeks ago, I saw an article by Randa Jarrar on Salon called "Why I can't stand white belly-dancers", in which she complains about cultural appropriation. Not surprisingly, it provoked a lot of response, and much of that negative. My own reaction was a roll of the eyes, as it was so full of historical grievance rhetoric. Not that the historical grievances don't exist, but I am wary of the righteous victim card; when you devote more ink to how angry you are than to articulating exactly why other people should feel the same way, I feel the eyerolling urge. Magnitude of emotion is not evidence that a position is sound. Arguing passionately is one thing, but too much anger tends to make one incoherent.
     I have long suspected that the concept of cultural appropriation was one of those bits of self-vindicating pseudobabble that escapes serious analysis by dint of angry incoherence. I recall being mystified at the rage with which some people attacked Paul Simon for his album Graceland, accusing him of stealing African music. Part of why I have little sympathy for this view is that I object to the concept of intellectual property generally, and I think "stealing" is just the wrong paradigm. But at the same time, I think there is a legitimate complaint here, even if it isn't being formulated well.


     Back in the mid 1980's, a friend gave me a 300 baud modem for my Apple ][ and the phone number of a few local dial-up bulletin board systems he thought I might enjoy. That evening, when I got home from my weekly RPG, I called one up for the first time, and when asked to specify a user name, the first thing that popped into my head was "Bald Dwarf", because the character I had been playing that evening was a dwarven fighter. (Rebelling against class and race stereotypes, I decided to make my character completely bald, but he always wore a wig and beard to conceal this embarrassingly undwarven trait.) I've been online ever since.
     Now, online trolls are not at all a new thing, and there was one fellow who seemed to make it his mission to disrupt any meaningful conversation he deemed pretentious, elitist, or just too serious, I suppose. Then as now, I loved to debate issues like abortion, gun control, economics, theology and so on, and so I was a frequent target. And, for some reason, he started drawing a comic strip for the university newspaper, and named one of the characters "Bald Dwarf".
     I don't know what sort of reaction he was looking for. Perhaps I was supposed to feel insulted, or honoured, or both. I was annoyed, but it actually had nothing to do with the depiction of me or the character as such. Rather, it was that my handle had been co-opted, and would ever more be associated with something I had nothing to do with, and didn't particularly want associated with me. I didn't even dislike the comic -- sometimes it was very funny -- but I knew that people would now think I had taken my name from the comic, and not the reverse. In short, my name had been appropriated. (I have not mentioned the name of the author or the comic strip because, oddly enough, he is trying to distance himself from it; it turns out that making something to deliberately offend people isn't something you want to be associated with when you're looking for charitable grants to support your medical research.)

     This happens all the time, and in many different ways. The swastika was a beautiful, elegant geometric design, but ever since it was adopted by the Nazis, it has taken on an overwhelming new and hateful meaning that effectively prevents anyone from using it for anything else. The concept of the meme, as originally described by Richard Dawkins in The Selfish Gene, is a powerful concept for understanding cultural evolution, but whenever I talk about memes to people today, they think I'm talking about Facebook macros and I have to spend ten minutes explaining. I get annoyed when journalists talk about something "begging the question" when they mean "raising the question", but it's probably a losing battle trying to make people understand that particular logical fallacy.
     So I acknowledge that there's a real and sometimes infuriating phenomenon here. But calling it "cultural appropriation" is really nothing more than an attempt to shoehorn a few instances of a general process into an ideological narrative of oppression and imperialism. To be sure, oppression and imperialism exist, and play an enormous role in how this process of cultural evolution unfolds (including explaining why certain instances of cultural evolution cause more pain than others), but a white woman who decides she wants to learn how to belly-dance is not engaging in a systematic attempt to disempower women of color.

     I'm not saying one should just meekly accept it when someone else redefines a cherished symbol or practice in a way one doesn't like. Not at all. We can and should engage to vigorously promote the conventions we want to see adopted, and if you think that "belly dancing" carries a particular meaning, then by all means disseminate that meaning so that it might catch on and become the dominant convention that we all think of when we see a belly-dancer. Decry the poseurs who are dumbing it down or missing the critical nuances. Educate. But complaining because you don't like the colour of their skin is no way to stake out a defensible moral claim.

Sunday 16 March 2014

Does God Lie?


     I have on several occasions over the course of this blog referred to Genesis 2:17, where God appears to tell Adam something that is literally not so. God tells Adam that in the same day he eats of the fruit of the tree of knowledge of good and evil, he will die. Adam eats, and promptly dies nine hundred years later.
     Christian apologists attempt to resolve this apparent contradiction by arguing that God was telling the truth, but in a non-literal fashion. That is, they will say that Adam died a spiritual death at that moment, though his body lived on and sired children and so on. Or they may say that prior to that moment, Adam was immortal, but eating of the tree made him mortal, so in a sense that was the day he was killed, even if it took him 900 years to finally succumb to his wounds.
     All of this rationalization is fine, and may well be true, but the very fact that it has to be undertaken at all only goes to illustrate the fundamental paradox of biblical literalism: in some sense, you still have to choose between God and the Bible. The New Testament itself says you cannot serve two masters, and here is a demonstration of why that is so: at the very least, you have to adopt different standards of truth for assessing the claims of each. At most one of the following two postulates can be true:
  1. The Bible is an accurate account of everything it addresses when interpreted literally.
  2. God's utterances are accurate on everything they address when interpreted literally.
     Both postulates might well be false, or only one of them, but they cannot both be true. You have to accept, at a bare minimum, that at least one of God and the Bible should be interpreted figuratively or metaphorically. If the Genesis account is literally true, then God must be speaking figuratively when He says when Adam will die. If God's warning to Adam is to be understood as literally true, then either the Bible must be in some sense speaking figuratively when it exaggerates the last day of Adam's life into 900 years, or it was speaking figuratively when it described God as making that claim in the first place. 

     Now, I don't see anything wrong with figurative or metaphoric truth, so I don't think there's anything especially impious about trying to explain God's warning to Adam that way, and obviously, neither do the Christian apologists who do so. But I do find it very strange that they would hold God (who by definition is supposed to be divine perfection itself) to a lower standard of truth (one in which Bill Clinton's "I did not have sexual relations with that woman" gets a pass) than the Bible.
     The apologists aren't really doing God any favours here by adopting a more relaxed, literary rather than literal standard of truth. Sure, it gets Him out of the charge of lying to Adam, but it also undermines His authority to say What Really Is, because it means that if God clearly says something we find hard (or even just inconvenient) to reconcile with other beliefs, well, we can go ahead and interpret it to fit those other beliefs as we like. 
     And they understand that concept just fine as soon as you suggest that maybe the Bible needs to be interpreted flexibly as a work of literature rather than divinely authoritative dictation. They're all too happy to zealously defend the Bible as the Word of God and above any human reinterpretation or excuse-making: it says what it means, dammit, and that's all there is to it! 
     Yet God doesn't? If the Bible says "God said 'Let there be light!'" then by gum, that's exactly what God said, four English syllables and there was light and it was good, and it's not open to debate that maybe what God said was "Hey, how about some light in here?" 

     The point I am making here is that literalism subjects God to the Bible, rather than vice versa. If the Bible says God lies, then God lies, but if God says the Bible lies, then God is either lying or must mean something else. Even as an atheist, I find that a profoundly impious, idolatrous and deeply offensive idea. 
     And you don't actually get away from that by pretending that God and the Bible are one and the same, that the Bible is just God's Word. Not only is that explicitly idolatrous, but it also entails God Himself saying that God lies, or at least that you shouldn't always take Him to be telling the literal truth. Either way, you're stuck with the conclusion that the Bible, either as a book with fallible human authors or as God's Very Own Utterances, should not be taken as literal truth.


Wednesday 12 March 2014

The "Fair Elections Act"

    Yesterday I wrote to my MP to register my opposition to the government's proposed "Fair Elections Act". I have edited out the portions specific to my MP, and post the rest here:

I write to express my profound opposition to the Fair Elections Act, which seems to have been drafted for the purpose of further freeing the government from the obligation to listen to its citizens. While I am concerned about the whole bill, the part that most upsets me is the attempt to tighten identification requirements and remove the ability to vouch for one's neighbours, because erecting barriers will prevent some eligible voters from voting. I have worked as a deputy returning officer in elections, and had to turn away people who had forgotten their wallets, or whose temporary driver's licences did not show their current address, or who for whatever reason simply didn't have the paperwork needed to prove they were eligible to vote. The point here is that whatever rules you have, some people will occasionally be prevented from exercising their legitimate right to vote, and the tighter you make those rules, the more legitimate voters will be prevented from voting.

Now, you will argue that these rules aren't intended to prevent legitimate ballots, but rather to prevent illegitimate ones. Okay, I agree that it's important to prevent illegitimate ballots from being cast. But is it worth preventing legitimate ballots, and if so, how many? If we manage to prevent one person from voting illegally, well, that's good, but is it good enough to outweigh the bad of preventing another person from voting legally? How many false positives are worth how many true positives?

I submit that it does MUCH more harm to the integrity of our democracy to exclude the voice of one legitimate voter than it does to hear the voice of one illegitimate one. The current rules are more than adequate to prevent ballot stuffing, and the only illegitimate voters casting ballots today (if any) are people who have a genuine connection to Canadian democracy, but may not vote for some technical reason: they're just shy of their 18th birthday, or they moved to the riding just short of 6 months ago, or they came to Canada as landed immigrants as infants and don't even realize they're not citizens. Rules are rules, and of course these people should not expect to be permitted to vote if they are ineligible, but if there is doubt as to their status, I say it is better to risk counting a technically illegitimate ballot than to risk excluding a legitimate one.

It is hard to ignore the similarities between this bill, aimed at tightening up identification requirements for voting, and the Republican push for stricter voter ID laws in the U.S. There is little doubt that such attempts to limit the franchise in the U.S. have been driven by partisan advantage; indeed, you can look at the records of state legislature debates to prohibit felons from voting and see that it was sometimes explicitly intended to prevent blacks from voting, when coupled with a campaign to charge blacks with "vagrancy". More recently, legislators have tried to be more careful in framing the justification for ID laws, but the intent in the land of gerrymandering still shines through; the voters most likely to have trouble coming up with the right ID are also those least likely to vote Republican.

Is it truly a coincidence that the Fair Elections Act will introduce a number of changes that just happen to favour the party in power? I want to believe it is. I want to believe that you genuinely care about preserving and enhancing our democratic governance. I want to believe that it honestly hadn't occurred to you just how insidiously this bill undermines our democracy. 

I urge you to vote against this inherently corrupting bill.

Monday 10 March 2014

On Valuing Truth

     It's too late to change my entry to Sam Harris' Moral Landscape Challenge, but I've just thought of a pithier way to put part of my argument: Science does not lead to values; values lead to science.

     Let me give some background to that first. Sam Harris argues in his book, The Moral Landscape: How Science Can Determine Human Values, that the fact/value distinction is illusory, and that the naturalistic fallacy, that one cannot derive an 'ought' from an 'is', is therefore also invalid. In short, Harris believes we can derive our morality from science.
     In his argument he also points out that science already includes certain values, such as a respect for evidence and logical consistency. But this is where I think he goes wrong, because those values are prior to the choice to use science in the first place. That is, one does not find oneself suddenly valuing logic and evidence because one happens to be a scientist; rather, one becomes a scientist because one already values evidence and logic. You can look at science, sitting there on the shelf among various alternative ways of dealing with the world, read the label and see that it offers reproducible results and reliable predictions, but you won't buy it if you don't care about those things.

     A recent exchange with one of the anonymous commenters to this blog has helped to dramatize this point for me. He or she castigated me for referring to evolution instead of attributing everything to God's will, and I replied that the latter approach was literally stupid.
     Now, as I've written before, the word "stupid" has unfortunate pejorative connotations, and so I tried in my comment to stress that I was using the word in a clinical rather than moral sense. "Clinical" stupidity I define as a deliberate preference of ignorance over understanding. I do not mean it as an expression of contempt or condemnation in and of itself.
     Obviously, of course, given my value system, I see stupidity as something to be avoided, but that's based on my value preferences. I can only persuade you that stupidity is something you should avoid by appealing to shared values of logical consistency and objective truth; if you just don't care about logical consistency and objective truth, there's nothing I can say to make you value them, and your proper response to my calling your beliefs stupid is, "So what? What's wrong with stupid?"
     (This is a symmetrical relationship, of course. If someone values something other than logical consistency, and appeals to me to embrace some paradoxical belief system because it is paradoxical, my proper response is "So what? What's wrong with not being paradoxical?")

     In the real world, there aren't a lot of people who explicitly value stupidity, at least not in themselves.  Where I say I value "objective reality", they aren't going to say, "What's so great about objective reality?" (unless they're maybe Karl Rove); rather, they're more likely to offer an alternative definition of objective reality. In the present instance, they'll say that the Bible just is objective reality, and that any belief system that doesn't recognize it as such is, as I have put it, clinically stupid.
     Yet the problem is still basically the same: we have fundamentally different values, even if we assign them both the same name of "truth". When I appeal to rationality and empirical evidence as part of the value of "truth" to argue for why I believe we're very probably the products of an evolutionary process, the creationist doesn't just disagree with my conclusions; he literally does not care about my standard of "truth", and I could only conceivably persuade him by showing how the Bible supports my view. And my anonymous critic cannot convince me to accept the Bible as literally true without appealing to the particular set of truth values I happen to hold. My response to "but the Bible says..." is necessarily going to be "So what?" To them, rejecting the Bible is clinically stupid, but so what? I do not -- indeed cannot -- care about their definition of stupid if it has no connection to mine.

     The point of all this is not to argue for my particular conception of Truth. If you happen to share it, then great, we can have a meaningful discussion and maybe some of what I have to say will be useful to you, and I can learn from your comments. If you don't, then it's pretty much a waste of your time and mine to try to persuade me to abandon it.
     No, the point of all this is my argument with Sam Harris about whether or not science can lead us to moral truth. It can certainly help us to answer moral questions about how we ought to behave in accordance with our values, but it cannot tell us what those values ought to be. True, almost any person who has any respect for science will probably have values that include a respect for objective truth, but that's a sort of selection bias; people who don't care about this conception of objective truth also won't have any interest in science. (It's actually a variation of the anthropic principle; the fact that everyone who values science also values truth is no more a coincidence than the fact that we happen to live on a planet that has just the right conditions for us to live here; if it didn't we wouldn't be asking the question here.)

Wednesday 5 March 2014

Can Bees Imitate?


I have been thinking a great deal about the evolution of memes, and wondering in particular about just how big a brain has to be in order to support memetic replication. Humans aren’t the only animals that imitate, after all. Many birds are notorious mimics, and not in the way a viceroy butterfly just happens to resemble a monarch butterfly without ever observing monarchs. The viceroy looks like a monarch because predators avoided eating its ancestors who resembled the very yucky-tasting monarch, and the closer the resemblance, the more likely there’d be uneaten offspring. In contrast, birds genuinely imitate, producing sounds that are actually copies of the sounds the birds hear; if the bird hears a different sound, it produces that sound instead. A parrot might hear a sound, imitate that sound and be heard by another parrot, who in turn might imitate it, and so on; the imitated sound then can replicate as a meme.

But birds have pretty advanced brains in the grand scheme of things, and parrots and crows in particular are remarkably clever animals. What about something much smaller? While pondering this question, I remembered the humble honey bee and its remarkable waggle dance.

When a foraging worker ant finds a good source of food, it can leave a scent trail on the way home so that other workers can quickly find that source and better exploit it. Bees, however, fly through the air, which tends to move around and carry away scent trails, so they need another way of communicating the location of resources to their hive. They do this by way of a special dance they perform when they return home laden with food. The dance indicates a vector: the direction to the food source (relative to the position of the sun) and the distance. Other bees observing this dance then know to fly this far in this direction to find the food and bring back more for the hive.

So what we have here is an ability to communicate. Information is transmitted from one tiny bee brain to another, and manifests itself in behavior to the benefit of the hive. These are really, really tiny brains we’re talking about here, but they still demonstrate the basic mechanical ability to formulate, transmit, and replicate behavior. Bee One does the dance communicating vector A and distance B, and a little while later, after observing this dance, Bee Two does a dance communicating the same vector and distance.

But is this actually memetic? That is, is this truly an instance of imitation?

I don’t know, but I suspect not.  I think that if this happens, it's because Bee Two is reporting its own visit to the same food source, not because it is repeating the first bee's report. I expect the bee brain has two distinct registers for storing these vectors, one for navigating to the food site and one for directing the waggle dance, and these registers are isolated from each other; the navigation register can only be loaded by observing the waggle dance, and the dance register can only be loaded by actually flying back to the hive. I predict that a bee cannot watch another bee perform the dance and then turn around and imitate the same dance for a latecomer who missed the original performance.
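
     For what it's worth, the two-register model I'm imagining can be sketched in a few lines of Python. This is purely illustrative, my own speculation rather than actual bee neuroscience, and the class and register names are of course invented:

```python
# A toy model of the hypothesized two-register bee brain.
# The key constraint: there is no pathway from the navigation
# register (loaded only by watching a dance) to the dance register
# (loaded only by a real foraging trip), so a bee cannot
# re-perform a dance it has merely observed.

class ToyBee:
    def __init__(self):
        self.navigation = None  # set only by observing a dance
        self.dance = None       # set only by flying back from food

    def observe_dance(self, vector):
        self.navigation = vector  # learn where to fly

    def forage(self):
        if self.navigation is None:
            return
        food_location = self.navigation  # fly out, find the food itself
        self.dance = food_location       # a first-hand report

    def perform_dance(self):
        return self.dance  # None if the bee never foraged itself

scout = ToyBee()
scout.dance = (120.0, 300.0)  # angle in degrees, distance in metres

watcher = ToyBee()
watcher.observe_dance(scout.perform_dance())
# Having only watched, the watcher has nothing to dance:
assert watcher.perform_dance() is None
# Only after its own trip can it report first-hand:
watcher.forage()
assert watcher.perform_dance() == (120.0, 300.0)
```

The isolation between the two registers is the whole model: delete it, and the watcher could re-broadcast a dance it never verified.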

Why do I think this? Because there is no evolutionary advantage to imitation here, and indeed considerable disadvantage.

First of all, the waggle dance is analog, not digital, which means that with every replication there is a loss of fidelity. Bees indicate direction relative to the current position of the sun by the angle of the straight run in the middle of the figure eight relative to the vertical. (The dance is performed on a vertical surface, a wall in the hive.)

Now, the angle is accurate enough to get the observer near enough to find the target, but it’s probably not much more precise than that. A bee observing the dance may also misread the direction by a few degrees, and then misjudge a crosswind, but still get within spotting range. Also, since the direction is pegged to the current position of the sun, the bee has to adjust for the passage of time, which it does very well but that’s another area for error to creep in. If a bee were to watch and then imitate the dance, all these distortions would multiply, so that the information content of the dance would soon become unreliable; the hive would waste time and energy sending workers to the wrong place.
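
     To put a rough number on the fidelity problem, here is a toy simulation. The noise figure is made up for illustration, not measured bee data, but the qualitative point doesn't depend on it: each observation adds independent error, so a chain of imitators drifts while first-hand reports never carry more than one observation's worth of noise:

```python
import random
import statistics

def mean_error(true_angle=120.0, hops=1, noise_deg=5.0, trials=10000):
    """Average absolute error (in degrees) after `hops` successive
    analog copies of the dance, each adding independent noise."""
    errors = []
    for _ in range(trials):
        angle = true_angle
        for _ in range(hops):
            angle += random.gauss(0.0, noise_deg)  # each observer misreads a bit
        errors.append(abs(angle - true_angle))
    return statistics.mean(errors)

# First-hand reporting: every dancer starts from the true vector,
# so the error is always just one hop's worth of noise.
first_hand = mean_error(hops=1)
# A five-link chain of imitators accumulates noise with each copy
# (the spread grows like the square root of the number of hops):
fifth_hand = mean_error(hops=5)
print(f"first-hand ~{first_hand:.1f} deg, fifth-hand ~{fifth_hand:.1f} deg")
```

With these made-up numbers the fifth-hand error comes out roughly double the first-hand error, which is the chain-letter degradation in miniature.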

Even if the copied dance were perfectly accurate, an imitated dance would introduce another problem, one familiar from internet chain letters: the propagation of messages long after the information in them is outdated. If it were possible for a bee to repeat a second- or third-hand report, other bees might still be following those directions even though the food source in question had been depleted. It makes much more sense to have each and every performance of the waggle dance be a first-hand report.

I would be delighted to learn that bees are capable of supporting memes. Their brains may well be big enough to do so, but I predict that as a matter of fact they do not. And that is the falsifiable prediction of my theory. If there are any entomologists or apiologists out there who have actually observed one bee imitate another bee’s waggle dance, I would love to be proven wrong.