Since I last posted I’ve spent weeks recovering from graduate school (still not done yet), moved 500 miles and started a job, and spent the last week being upset about the recent church shooting in Tennessee.  I was not there and don’t want to borrow someone’s tragedy in the “my cousin’s sister’s mother goes there and I could have gone and been killed!” sense, but I know some of the people involved and have been worrying enough about everyone there that it hurts.  Not a useful thing to do for anyone including me, but hard to stop.

Anyway.  Here’s an interesting article: What don’t we know about the pharmaceutical industry? A Freakonomics Quorum. From Freakonomics (as you might guess) at the New York Times.

The first and last authors describe pharmaceutical industry practices that I think a lot of people actually do know about.  The other three cover chain pharmacies making huge profits off generics (a surprise to me), and market forces and incentives for research and development of new drugs (two entries, both by pharmaceutical employees – you might consider reading them nonetheless).

I’ll be done with grad school in about two weeks and able to start posting more frequently – hooray!  Today’s entry is a response I wrote for a friend who asked why the thimerosal-in-vaccines-causes-autism movement doesn’t go after thimerosal in household products to anywhere near the same degree as in vaccines (especially in vaccines that no longer contain thimerosal).  This entry contains speculation, although it’s speculation based in existing cognitive science, primarily in cognitive anthropology work on what cognitive faculties make some beliefs catchier than others.  In the field this work gets called “epidemiology of beliefs”: what characteristics of human minds make some beliefs compelling in a way that others are not?

The anti-vaccine movement comes in part from anecdotes of kids regressing at about the time they get the MMR vaccine. That’s actually accurate; when I was taking a cognitive development graduate class, the percentage estimated to regress in that way was about 25%. There was some speculation at the time that it’s due to kids’ brains undergoing a major reorganization at (coincidentally) about the same time they get the MMR.

(Kids make a bunch of neuronal connections, then prune out the less useful ones, and the speculation was that the children who lost language and social skills had not pruned as extensively. There was some research on head circumference at the time looking at whether kids with autism had greater head circumference – kind of a crude measure if you ask me but it did seem to be panning out.)

My own impression is that the anecdotes about regression after vaccine (but not anecdotes about regression before vaccine) kick off contamination fears in some people. In the literal “we have a mental faculty that’s highly alert to dangerous contamination by non-visible substances” sense. Which then kicks off a search for an essentialized underlying substance that will explain/justify their intuitions (also an extremely common thing for people to do).

What I’m getting at is that thimerosal isn’t a trigger for concerns; thimerosal-in-vaccines is an explanation for them. It’s the endpoint of a search. Why doesn’t it generalize from there? My speculation is that vaccines are required by authority and contact lens solution etc. is not, and stories about having dangerous things forced on you are much more mentally catchy – and more conducive to righteous indignation and fear – than stories about stuff you can voluntarily avoid. So a lot of people don’t know about thimerosal in household products, and it’s not that their lack of concern comes from lack of knowledge; rather, their lack of knowledge comes from fears of thimerosal-in-household-products never taking root strongly enough to become widespread.

From Salon: Doctors need to be aware of widespread health misinformation on the Web, because patients are going to find it.  Although the article is pitched as “Internet information is good for patients and doctors”, the misinformation aspect is its major point.

And a good point.  This is a basic tenet among the people who study end users of software and web applications, who do user-centered design, human-computer interaction, usability studies, etc:  You have to design your approach for what people are going to do.  Designing (whether it be software/web applications or any kind of information delivery) for what you think they ought to do is an approach destined for failure.

And you can’t stop the Internet.  It spreads information, and misinformation, like nothing else.

(On the other hand, calling doctors who don’t get this – like the article does – ow.  Good way to piss people off.  Doctors are end users of information, too, and starting off your approach with an insult, maybe not so effective.)

I didn’t realize this until I’d spent a summer around dogs, but they’re like people in some very relevant ways.  They get enthusiastic, they get angry, they feel down and icky, they misbehave and know it, they understand a small English vocabulary, and they look up at you adoringly whenever you have control over who gets chicken scraps and who doesn’t.

Let’s say you have a dog (let’s make him a mutt) named Barky (your six-year-old named him, not you).  You’ve had him for five years, you love him and your kid does too, and you know quite well that lying in one place, resisting going out into the sunshine for walks, and eating very little is not at all normal behavior for rambunctious, cheerful Barky.  He’s more than unhappy and you know him well enough to know that this is well outside the normal.  You take him to the vet and nothing’s physically wrong, but the vet suggests that he’s depressed and might benefit from Reconcile, the Prozac for dogs.

What do you do?  Your options include (this is not a complete set): reject the option because he’s a dog, not a human, and only humans feel real emotional pain and/or deserve treatment; reject the option because you ask about the side effects and decide they would be worse for Barky than how he appears to feel now; or accept the suggestion because you think that reducing pain, even animal pain, is desirable, and can be helped by medication. You can also go home and make fun of it. I know what I’d choose to do for someone I cared about, even if they weren’t human, or weren’t adults, or whatever we choose as the boundary line between living beings whose pain matters, and those whose pain we don’t consider real enough to matter.  It wouldn’t necessarily be to give them psych meds (that really would depend on the expected benefit and side effects), but I wouldn’t reject it based on the notion that dogs cannot have serious problems or painful emotional experiences.

The availability of Prozac for dogs is unquestionably an attempt by Eli Lilly to expand their market.  This is no different from other companies.  When you see an ad for life-saving drugs, it’s because they want more people to get those drugs.  When you see an ad for a new sports drink, it’s because they’re trying to get more people to buy it.  And so on.

The question here isn’t whether drug companies are exploitative – we already know they are, so that’s not really a question – but whether we want to use their products to reduce suffering.

Some therapists do not “get” the difference between having life problems, and having mental illness (plus, often, life problems).  My suggestions for dealing with this, if you are mentally ill and seeking therapy, are:

1) Look for therapists affiliated with hospitals; they tend to have more experience with psych patients.

2) Be leery of anyone who makes a big deal out of “situational” versus “chemical” depression.  It’s all chemical; it all happens in your brain and body.  Peter Kramer, a psychiatrist who is pro-therapy and writes books about therapy and books about medication, argues that research shows that depression triggered by repeated situational events comes to look no different from depression with a heavier genetic component.  It’s just that some people start higher or lower down the slope.

Those who start further down the slope – who some people would say have “chemical” depression – can still be helped by therapy.  For example, people who are more prone to depression following negative life events can learn to better anticipate and/or avert those events, and cognitive therapy can help people interpret those events in ways that are less damaging.  It’s not always enough, but it can be very helpful.


I have taught mentally ill college students, given advice to mentally ill college students, and been one myself. Based on those experiences, here is a guide to not tanking your exam / your class / your degree while you’re having a meltdown. This is probably most applicable to college students at universities in the United States, but there’s generally applicable stuff in here too.

None of this is easy to do when things get rough, but many of these suggestions are minor time commitments that can make a huge difference.

1. Decide on your major goal

If you are clear on what you want, it will be easier to decide what to do while things are going south. If your major goal is to get a good education, get a good job, or go to graduate school, your top priority should be using relevant resources (including those related to mental illness). If your major goal is to avoid treating your mental illness (or using accommodations, extensions etc), evaluate what you are willing to sacrifice (grades, time, extracurricular activities, your job, future salary, etc) and plan what you will sacrifice when.

See the bottom of this entry for my opinionated take on this.

2. Know your available resources

If your school has a disability services office, a counseling center, or health services, and you are not already using them, look them up and find out what they offer.

3. Know your course policies

Take time out of your day to go over your syllabuses with a fine-tooth comb. You need to know your teachers’ policies about emergencies, grade appeals, accommodations, and extensions/make-ups. Plan to make use of these where they can help you.

I looked up celiac disease and autism on PubMed the other day.  My mother’s secretary has a daughter recently diagnosed with autism, and diagnosed several years ago with celiac disease.  So we were sitting in the dining room, and I was snacking and looking things up on PubMed.

“There’s not much research, but the two studies I’ve found on autism and celiac did not find a link, except for this one quack guy,” I say, meaning Andrew Wakefield.

“Okay,” my mom says.  “So they haven’t done the research confirming it yet.”

“No,” I say.  “There are two existing studies that have looked, and they did not find evidence of a link.  I’m looking on PubMed, so if there were more studies they would very likely be there.”

“Ah,” my mom says. “So all we’ve got now to go on is anecdotal evidence.”

“No,” I say again.  “Studies looked.  They looked for a link, and they didn’t find a link, suggesting there’s not a link.”

I think at that point we detected mutually incompatible approaches to uncovering truth, and dropped the conversation.

——–

It’s a normal human thing to figure out what you believe is true (often by assuming anecdotes are representative of overall reality), and then seek out social back-up to help convince others of it.  In this approach to truth (which drives scientists up the wall), if science is used, it is used to support one’s own truth claims to others.  If someone is not trying to make truth claims to others, then there is no need for science; it doesn’t tell you anything you don’t already know.

In contrast, the role of science as-generally-agreed-upon is to test what we believe to be true to see whether it really is true. In this approach to science, science can disconfirm anecdotes, and its role is to drive what people believe, not just to back what they already do believe.  So it’s much less useful for normal human goals.

Frustrating, that.   Also frustrating that we don’t have much research on a lot of things.  It could be true that the two studies on autism and celiac disease didn’t pick up on an actual connection…but the point is, two studies that show no connection are a lot more meaningful than no studies that show no connection.

A lot of kids diagnosed with autism would previously have been diagnosed with general mental retardation.  Now genetic testing is finding that some kids with autism diagnoses have specific genetic deletions/duplications.  Does that mean that they’re not really autistic and “autism” was a misdiagnosis? Or that “autistic” will turn out to be a useful umbrella term for a bunch of different things?  My vote is for the latter…

Searching for similar diagnoses through DNA testing

The article stresses both the relief families find at meeting other families with similarly affected kids, and the distress they experience at seeing the degree of impairment that older children with the same problems (still) have. And there’s a demonstration of the way gender roles play into who can back out of their responsibilities and who can’t – note the one father who decided to pick up and leave a week after a conference for families with affected kids, leaving the mom to raise their affected daughter alone.

Ok, probably only partly.

Kids with ADHD have higher levels of lead in their blood than kids without ADHD – even when those levels are below official “safe” thresholds.

It would be great if  ADHD turned out to have an avoidable (or at least reducible) cause.  Seriously great.

Some stuff we still need to know (and the kind of questions you should ask yourself whenever you see a claim about etiology (cause of a disorder) based on a correlational study):

Is it causation? Are kids with ADHD more often from homes with lead-based paint?  (Or lots of toys from China, maybe?) If not, that suggests some other cause for the higher levels of lead in their blood.  For example, they may be more likely to lick the walls.  (Not joking – kids with ADHD, at least those who are hyperactive, tend to seek stimulation.  And a major cause of higher lead levels in blood is licking lead paint because it tastes good.)

Are there kids with ADHD who don’t have higher levels of lead in their blood? It could be a cause, but not *the* cause.  It’s entirely possible ADHD has more than one cause.  As a quick analogy, take pneumonia – it’s a condition that can be caused by bacteria, viruses, physical injury…  We only have so many pathways in the body (including in the brain), and pathways can be interrupted by more than one cause.

And, of course, does it replicate?

I’m glad that bullying is getting national attention.  It seems weird that it has to piggyback on our fears of new technology to do so, though – like the lesson is “new technology is dangerous and we need to protect our children from it” rather than “kids can be mean as fuck and maybe we should pay more attention to that.”

We tend to use technology for the same old human kinds of things.  Staying in touch with each other; hurting each other; scamming each other; etc.  Technology facilitates some ways of doing these things, but the real problem is not kids using technology, it’s abuse and lack of social support for victims.