Intro Stats Videos on YouTube

I’ve been spending a lot of my time (no, I mean a lot of my time) preparing my stats class this semester, because I’ve reconceived it a bit based on feedback from last semester. As part of that, I’ve been revising and rewriting lectures, and re-recording videos.

And this is to say, just in case anyone finds them useful, that I’m posting the videos I make for my class in this playlist on my YouTube channel. I am absolutely not going to win any awards for production values or performance thrills (despite the feats of animation I occasionally achieve with PowerPoint), but perhaps the way I explain things could be useful to students who have had things explained a different way–I am a big believer in the principle of explaining tricky concepts as many ways as possible.

America’s STEM obsession is not only narrow, it is dangerous

In this excellent article, Fareed Zakaria argues that our recent practice of valuing STEM education at the expense of a more broad-based liberal education will screw America over, in the long (and perhaps not-so-long) run. For one thing, it is becoming obvious that critical thinking, creativity, “people skills,” etc. are what make economies flexible and successful (a point that emerging powerhouses like China are starting to emphasize)–and America has been a flexible, vibrant economy for a century, despite the fact that our students’ math and technical skill test averages have never been particularly good, compared to other nations’. Another point is that STEM skills are exactly those skills next on the chopping block for computerization. Computers will soon be able to write their own code, for instance, but they will not (yet) be able to write their own rich, engaging narratives. So we are pushing students increasingly toward those fields whose skills are most likely to be obsolete in a generation or two, and systematically strangling the university programs that teach the skills most likely to help our current and future graduates survive the economic and career upheavals that will only increase in frequency and intensity from here on out.


There are two increasingly cynical possibilities for why the American government and people are so willing to ignore the (potentially obvious) points Zakaria made: First, nobody currently calling the shots for higher education funding has any incentive to think about other people or about the long run, because they are blinded by their own short-term financial motivations and/or by their own thinking, which is itself crippled by the lack of a liberal education’s benefits. Second (and this one is much more cynical; I wish I could find the source, where it was said far better than this): The last thing people currently in power in this government want is a populace capable of imagining alternative systems. I’m pretty sure, however, that even the most Machiavellian government administrator is perfectly OK with a population full of highly skilled technicians who have never taken a humanities or social science course.

Reproducible Research – a big Gotcha

This post by Jeff Leek nicely sums up one of my anxieties about the reproducible research movement. A snippet:

for high-profile and important problems, people  largely use reproducibility to:
1. Impose regulatory hurdles in the short term while people transition to reproducibility…
2. Humiliate people who aren’t good coders or who make mistakes in their code…

I am 100% in support of reproducible research, but I’ve been worried about this; I’m not a coder, so I worry my code will be criticizable (or, worse, mockable). What I suppose everyone is worried about is that we all have warts and scars on our data, so to speak, and we have ways we’ve dealt with these. I suspect that, if the full truth were known, most researchers would have several decisions per published analysis that don’t fit the (largely false) idealized prototype for how a research study should go. I also suspect that, in most cases, we have dealt with these issues in reasonable ways that are very similar to how others have done so. However, publishing all of our data, procedures, and analyses will leave these open to criticism based on a failure to meet that perfect, idealized method. If everyone’s flaws and responses were known, then we could start an important conversation about how to deal with the inevitable glitches in research projects; but if only a few people do it–who, almost by definition, will probably be the most conscientious researchers, as evidenced by their concern for reproducible research–then those people will become targets for absolutist sniping, personal humiliation, and professional ridicule.

Correlation and Causation: Getting to know each other

Thanks to Graham Toal for pointing this out!

As has been noted frequently by others, correlation absolutely does (usually) imply causation (just not necessarily the simplistic X → Y model that immediately forms in our heads after reading “X is correlated with Y”). The problem has always been that correlation by itself is never enough to tell us where this causation came from. There are too many possibilities, such as:

  • X → Y
  • Y → X
  • X + Z → Y
  • X + Z – (H + L) / J + (K + S)TU → Y (I mean it can be really complicated; you get the picture…)

Apparently, however, in relatively simple two-variable systems, causality can be identified accurately about 80% of the time from purely observational data. A recent paper from a team of German, Swiss, and Dutch researchers reports these findings. Using a variety of known and simulated cause-effect situations, and using only the observational aspects of the data (no experimentally manipulated conditions, for example), the researchers report a very high success rate at figuring out whether X caused Y or vice versa by analyzing asymmetries in the “noise” (or error variance) associated with X and Y. The process is called the “additive noise method,” because, as it turns out, noise in the causal variable carries over into the effect, but not the other way around.
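For the curious, here is a toy version of the idea in Python. To be clear, this is my own crude sketch, not the paper’s actual implementation (which uses far more sophisticated regression and independence tests): fit a model in each direction and see which direction leaves residuals whose size has nothing to do with the putative cause.

    # Toy illustration of the additive-noise idea (my own sketch, not the paper's code).
    # If X -> Y with Y = f(X) + noise independent of X, the forward regression leaves
    # residuals unrelated to X, while the backward regression (X on Y) generally does not.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    x = rng.uniform(-3, 3, 2000)                    # the true cause
    y = x + 0.5 * x**3 + rng.normal(0, 1, 2000)     # effect = f(x) + independent additive noise

    def residual_dependence(cause, effect, degree=5):
        """Fit effect = polynomial(cause) + residual, then measure how strongly the size
        of the residuals depends on the candidate cause (a crude stand-in for the
        independence tests the real method uses)."""
        coefs = np.polyfit(cause, effect, degree)
        residuals = effect - np.polyval(coefs, cause)
        rho, _ = stats.spearmanr(np.abs(residuals), np.abs(cause))
        return abs(rho)

    forward = residual_dependence(x, y)     # model assuming X -> Y
    backward = residual_dependence(y, x)    # model assuming Y -> X
    print(f"X->Y residual dependence: {forward:.3f}")
    print(f"Y->X residual dependence: {backward:.3f}")
    print("inferred direction:", "X -> Y" if forward < backward else "Y -> X")

On data built this way, the forward model’s residuals should look like pure noise while the backward model’s residuals clearly depend on Y, and that asymmetry is what the method exploits.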

I suppose it’s still possible that this is bad science or bad reporting or something, but to my not-very-astute eye it looks legit. If so, I’m sure it will be developed quickly for other applications. If it is even moderately effective with the very noisy variables many of us in the behavioral sciences deal with, I think it will get integrated into SEM methodology and its many descendants and cousins. That would be a huge step forward, as SEM models are currently criticizable for almost certainly mis-specifying cause and effect much of the time, and this method could reduce that substantially. It would probably also reduce the number of SEM analyses appearing in behavioral science journals, once it became much harder for a model to fit both the covariance structure of the data and the patterns of error variance indicating which variables sit later in the causal chain and which earlier.

I am excited 🙂

Shell Shock in WWI: Nobody Has Studied This Until Now

Great historical psychological research on “shell shock” among British and German soldiers in WWI. Stefanie Linden, a UK psychiatric researcher, is doing some much-delayed work on the actual symptom presentation of shell-shocked soldiers from WWI, instead of what I suppose we’ve all been doing for a century–assuming we knew what was up.


Fredonia students vs. UTPA students

I’ve been teaching here at Fredonia for only a few weeks, but it certainly seems to me that the students are incredibly similar, as a whole, and in their diversity of personality, educational preparation, etc., to students I got to know in my nine years in Texas. This is hugely reassuring–I came to love the RGV student population, and had started to worry that perhaps other students would be so alien to me, after nearly a decade in Texas, that I could not work with them. I’m glad to have that fear resolved 🙂

Crimes I don’t even remember

I’ve recently learned that I can’t get a driver’s license in NY (though I’m required to within 30 days) because of an unpaid speeding ticket. In Charlotte, North Carolina. In late summer, 2006. While I was apparently driving a Mercedes.

  1. I wasn’t anywhere near Charlotte, North Carolina in 2006.
  2. I don’t have a Mercedes.
  3. I’ve never driven a Mercedes.
  4. Honestly, I don’t even think I’ve touched a Mercedes since approximately 1986 (when I washed a guy’s car). I mean, even in a parking lot, you avoid the rich cars because you figure that if you even brush against them they’ll probably damage your hearing with some hair-trigger alarm system, right?

So, yeah. I gotta get that taken care of.

Taking off… to the Great White North

Two of the three readers of this blog might find it informative (the other one already knows) that I will be leaving my beloved UTPA. I’ve accepted a job at a university in Western NY (more about that, I’m sure, later). It is very similar to UTPA in the kind of job it represents for me, the kind of teaching and research I’ll be doing, and even the background of many of the students I’ll be working with; in other words, it seems like it will be at least as satisfying to me as the last nine years at UTPA have been–as soon as I stop missing people from UTPA, that is. Because of the many similarities, the main factors pushing me toward NY were work opportunities for Alex (my wife) and closeness to her family, who are also, of course, our daughter’s grandparents/aunts/uncles/honorary family. We will be a mere 2 to 5 hours (<< 36!) from her hometown, her parents, her sister’s family, and many of her friends. We’ll be one hour (plus border delays) from her eternally-beloved Canada (technically, as I’m a dual citizen, it’s also my beloved Canada, but she probably beloves it more than I do, growing up there and all).

In the vein of assuaging guilt at abandoning students and colleagues, I’ll say that this was absolutely not an easy decision to make. It will not represent any kind of clear financial or career gain for me (it’s a lateral career move), it will cost us a lot of time, hassle, and money to relocate, it will delay aspects of my research for a little while and send it in a slightly different direction, and I will miss important milestones for some students I care about. On the other hand, the department up there seems to have all the positive characteristics I’ve come to love about the UTPA Psychology Department, and I have no doubt that I will become as involved with students and colleagues there as I have been here. I’m sure the unique characteristics I will miss from the RGV will be replaced by unique characteristics of Western New York. I’m both sad and hopeful. I’m excited for the change, despite the difficulty.

Anyway, change is hard. We all know this. I’ll be around (except the last couple of weeks of June) until August 1, more or less.

Stats and sports: Always a good match

Here is a nice writeup of a statistical analysis of whether you should bench a star player in, say, basketball, when that player is having “foul trouble.” When is the optimal time to pre-emptively bench the player? Answer: never. Don’t bench them at all.

Although parts of this blog post use terms I barely understand, the rest is quite accessible. I think most folks can get the gist of the reasoning behind the author’s insistence that benching a “foul trouble” player is always a bad idea.
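To see the gist in miniature, here is a crude simulation I threw together. All the numbers are made up, and it only captures the expected-minutes piece of the argument (the linked post’s stronger point is about when minutes are most valuable), but it shows the basic asymmetry: benching costs you minutes for certain, while fouling out only might.

    # Crude toy model (all numbers invented; the linked analysis is far more careful).
    # A star picks up fouls minute by minute; reaching the limit ends their night.
    # Benching trades minutes lost FOR SURE against minutes that might never be lost.
    import numpy as np

    rng = np.random.default_rng(0)
    FOUL_RATE = 0.10      # expected fouls per minute while on the floor (made up)
    MINUTES_LEFT = 30     # game time remaining
    FOULS_TO_DQ = 3       # fouls remaining before disqualification

    def average_minutes_played(bench_first=0, sims=20_000):
        """Average minutes the star actually plays if benched for bench_first minutes first."""
        total = 0
        for _ in range(sims):
            remaining = MINUTES_LEFT - bench_first            # benched minutes are simply gone
            fouls = rng.poisson(FOUL_RATE, size=remaining)    # fouls committed each minute
            cumulative = np.cumsum(fouls)
            if cumulative[-1] >= FOULS_TO_DQ:                 # fouled out partway through
                total += int(np.argmax(cumulative >= FOULS_TO_DQ)) + 1
            else:                                             # survived to the final buzzer
                total += remaining
        return total / sims

    for bench in (0, 5, 10):
        print(f"bench {bench:2d} minutes first -> {average_minutes_played(bench):.1f} minutes played")

With numbers like these, the longer you preemptively bench the star, the fewer minutes they end up playing on average.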

Reminds me of the “hot hand” research showing that there’s no such thing as a “hot hand.” 🙂

Dear Cengage: I don’t think so.

I’m thinking of teaching child and adolescent abnormal psychology at the undergraduate level again, sometime in the near future. I last taught it about two years ago. I think. Anyway, I figured I’d check out the new edition of the textbook I used previously, maybe get an instructor copy sent so I could start revising the powerp–

Holy crap!

The textbook is over $200 wholesale! Frankly, I think that’s insane. So it sells for $217 on Amazon.com and up to $280 elsewhere on the web. Pure insanity. I think this represents nearly a 100% increase in price from the last edition.

So, obviously, I’m not going to do that to my students. It’s a pain in the butt to find another textbook and develop the content and revise the course and everything, but no way am I asking students to pay over $200 for a textbook. That’s nearly a full-time week of work at the minimum-wage jobs most students have.

So, Cengage, with all due respect (and it seems there isn’t as much due as I would have thought), freak this shoot. I do realize that the economies of scale for textbooks are quite different from those for, say, a J.K. Rowling novel that costs 1/10 as much, brand new, in hardcover. But it’s really hard for me to imagine that this kind of price is justified. Then again, perhaps the Free Market both giveth and taketh away. Cengage, if cornered, might say something completely unverifiable about their costs, to justify the price; but Adam Smith probably described the operational concepts here: Cengage is charging this insane amount because they think they can (and they’re probably right, because publishers aren’t dumb). But there are other publishers out there, willing to undercut traditional prices, so I will look for one of them. This is ridiculous, and I wouldn’t be able to sleep at night if I asked my students to pay this.

Not that anyone reads this blog, but if you do, and have ideas for cheaper (sustainably cheaper) textbooks that are still high quality and reasonably easy to learn from, please let me know. Under $100 for the latest edition would be ideal.

Validation! I’m not faking all those hours.

This article in Vox describes the typical academic’s work week through a variety of studies. Overall, they fit my experience. When things are at their slowest, my week is maybe around 45 hours. This sounds like a whine, but it’s not a complaint about my job: I really enjoy what I do, and don’t mind 45-50 hours most weeks. When my week goes over 60 hours, which happens fairly often, I occasionally question my life choices, and during those insane weeks (like grant preparation weeks) when I sleep 4-6 hours per night and do almost nothing else (i.e., taking time to pee or eat breakfast seems like minutes I can’t possibly spare) I want to beat myself into unconsciousness with a shovel. I’m really trying to avoid having those weeks ever again… but, realistically, maybe I can just keep them down to once a year.

Correlation => Causation

This excellent blog post makes–much more eloquently than I ever did–a point that I often assert: the “correlation does not imply causation” mantra is technically wrong. Correlation almost always implies some kind of causation somewhere, and in a Bayesian sense it’s also, technically, “evidence” for even the simplistic A → B type of causation we are trying to warn our undergrads against assuming. Neat stuff.
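To make the Bayesian point concrete, here is a toy calculation of my own (the numbers are completely invented, just to show the direction of the update): if a correlation is more probable in worlds where some causal link exists (X → Y, Y → X, or a common cause) than in worlds where it is pure coincidence, then observing the correlation has to raise the posterior probability of a causal link.

    # Toy Bayesian update (all numbers invented, only to show the direction of the effect):
    # observing a correlation raises the posterior probability that SOME causal link
    # (X->Y, Y->X, or a common cause Z) connects the two variables.
    prior_causal = 0.30          # prior probability of some causal link (made up)
    p_corr_given_causal = 0.90   # correlation is likely if a causal link exists (made up)
    p_corr_given_none = 0.05     # correlation by coincidence alone is unlikely (made up)

    evidence = (p_corr_given_causal * prior_causal
                + p_corr_given_none * (1 - prior_causal))
    posterior_causal = p_corr_given_causal * prior_causal / evidence

    print(f"prior:     {prior_causal:.2f}")
    print(f"posterior: {posterior_causal:.2f}")   # ~0.89: the correlation is evidence of causation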

Videos! On YouTube! And Tegrity!

The exclamation points in the title are probably not really warranted. Here is a link to my YouTube channel. It’s getting somewhat populated by videos I’m making for my students, mostly for introduction to statistics for the behavioral sciences.

On a related note, our university uses Tegrity (linked through Blackboard) for recording lectures, etc. The system records the instructor’s voice, plus whatever is on the screen (e.g., PowerPoint slides). Well, after you finish creating a Tegrity video, there’s a slightly Fullerian process for linking to it within Blackboard so your students can see it, but there’s also a handy “upload to YouTube” link. Neat! Except that when I use that link, the ultra-high-res video from my desktop computer (or even minimally-hi-res from a classroom LCD projector and computer monitor) gets uploaded to YouTube as… 480p. I’m sure there’s a way to do it high-res, but instead of spending ten more minutes hunting for those details I did something else.

To upload a Tegrity recording to YouTube in high definition (using Windows 7):

  1. Record the video on Tegrity (OK, this step was to make fun of about.com)
  2. Dig down into your file system on the computer where the video was recorded (for me this is my office desktop); the video is there, somewhere. For me, it’s in
    C:\ProgramData\Tegrity\recordings\<name of the recording you just made>\Class\Projector
    and the file will be called “screen0.asf” (that’s a zero, not an “o”). Anyway, that’s what it’s always been called for me, but maybe just look for the biggest *.asf file.
  3. Upload that sucker to YouTube!

And that’s all. The upload, for me, is always at least 1080p resolution.  Hooray for Google casually solving problems in their sleep.
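If you find yourself doing this a lot, a tiny script can do the digging in step 2 for you. This is just my own convenience sketch (Python, nothing official from Tegrity), assuming the default recordings path shown above; adjust the path if your install puts recordings elsewhere.

    # My own convenience sketch (not a Tegrity feature): list the biggest .asf file
    # in each recording folder so you can grab it for a high-res YouTube upload.
    from pathlib import Path

    # Default location from step 2 above; change this if your install differs.
    RECORDINGS = Path(r"C:\ProgramData\Tegrity\recordings")

    for recording in sorted(RECORDINGS.iterdir()):
        if not recording.is_dir():
            continue
        asf_files = list(recording.rglob("*.asf"))
        if not asf_files:
            continue
        biggest = max(asf_files, key=lambda f: f.stat().st_size)
        size_mb = biggest.stat().st_size / 1_000_000
        print(f"{recording.name}: {biggest} ({size_mb:.0f} MB)")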


Yes, there is some bad statistics advice on the internet.

I just checked out this post on yhathq.com about conducting and interpreting linear regression in R. I thought it was okay, if a little expertnoobish (i.e., the author seems to want to explain what regression is at the same time as explaining how to do it in R… probably not useful for true beginners). AND THEN I got to these sentences:

So if a variables has 3 stars (***), then it means the probability that the variable is NOT relevant is 0%.

That’s so wrong it hurts. Problem #1 (and it’s not even the worst one!) is that “***” means “p<.001.” The astute mathematics student will note that “less than .001” and “zero” are not the same thing. Perhaps the author thinks that, since “<.001” is really small, it might as well be rounded down to zero, but that’s a huge mistake. In some studies, differences between .001 and .0001 are actually quite important. In physics, for example, a standard benchmark for really important, critical results is “five-sigma,” which means (I think) five standard errors from the null-hypothesized expected value. The difference between four-sigma and five-sigma is often of great concern, but this author would just call them both “zero.”
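To put rough numbers on that, here is a quick back-of-the-envelope check of my own, treating a k-sigma result as a one-sided normal tail probability:

    # Back-of-the-envelope: one-sided normal tail probabilities for k-sigma results.
    from scipy.stats import norm

    for k in (4, 5):
        p = norm.sf(k)   # survival function: P(Z >= k) if the null is true
        print(f"{k}-sigma: p = {p:.2e}")
    # 4-sigma: p = 3.17e-05
    # 5-sigma: p = 2.87e-07

Both of those would earn “***” in R’s output, yet they differ by roughly two orders of magnitude, and neither of them is zero.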

The more fundamental problem (even worse than thinking .0001 = 0) is either a fundamental misunderstanding of, or a deeply misguided attempt to oversimplify, the nature of p-values themselves. In the article, the next reference to p-values, tucked into a table of how to interpret regression output, demonstrates the problem once again:

6 Variable p-value Probability the variable is NOT relevant. You want this number to be as small as possible. If the number is really small, R will display it in scientific notation. In or example 2e-16 means that the odds that parent is meaningless is about 15000000000000000

AAAAAAAAAAAAAAA!

That means “run away, screaming!” This is not, to the best of my knowledge, an accurate interpretation of a p-value*. Furthermore, the overall tone of imbuing p-values with such import in the interpretation process is arguably pretty misguided.

Perhaps someone should suspend that author’s driverR’s license for a while… heh heh… okay, that was lame. Anyway, don’t read that post except to learn how NOT to interpret p-values.

*FYI, my interpretation of what the author on yhathq.com has done is to forget that he or she is writing about a frequentist concept. A p-value’s interpretation, to be correct, has to involve the conditional concept of the null hypothesis being true, which this author has forgotten. Additionally, even if the null is true, the p-value only corresponds to the probability of observing data at least as extreme as the data actually observed, not the “importance” or “relevance” of the data… there is no direct connection to effect sizes (which is what importance and relevance are about). p=.03 means that, if H0 is true, then (and only then) there is a 3% chance that results at least as extreme as these would still have been observed due to the vagaries of random sampling from the population (which, if we’re assuming H0 is true, is now, in the universe we are working in, the null hypothesis specification of the population). And then there’s the other possibility… that the null hypothesis is actually false, in which case the p-value means nothing, because (a) it was calculated in reference to the population specified by H0, which doesn’t exist in this universe, and (b) the effect you’re searching for is most certainly there (because the alternative hypothesis is true!).
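If a concrete demonstration helps, here is a toy simulation of that conditional reading (my own illustration, nothing to do with the yhathq post’s data): generate one sample, then ask how often random sampling under a true null would produce a test statistic at least as extreme.

    # Toy simulation of the conditional, frequentist reading of a p-value (my own example).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    observed = rng.normal(0.3, 1, size=50)            # one observed sample (true mean 0.3)
    res = stats.ttest_1samp(observed, 0)
    t_obs, p_analytic = res.statistic, res.pvalue

    # The p-value's actual question: IF the null (mean = 0) were true, how often would
    # random sampling alone produce a t statistic at least as extreme as t_obs?
    null_ts = np.array([stats.ttest_1samp(rng.normal(0, 1, size=50), 0).statistic
                        for _ in range(20_000)])
    p_simulated = np.mean(np.abs(null_ts) >= abs(t_obs))

    print(f"analytic p = {p_analytic:.3f}")
    print(f"simulated p under H0 = {p_simulated:.3f}")   # the two should agree closely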

So I suspect the author believes him- or herself to be a Bayesian, despite evidence to the contrary, and thinks he or she is saying something about posterior probabilities. However, if I’m wrong in my criticism of the approach taken in this article, I hope someone will let me know.

Random… I don’t think that word means what you think it means, Blackboard.

At this point I do (I admit) get a bit of smug satisfaction every time Blackboard Learn fails me, but the disappointment in the failure is still much stronger.

Today’s episode: “random” assignment of number sets in calculated formula questions. In Blackboard Learn (BbL) you can use variables in questions (“calculated formula” questions) and set parameters for those variables so that many number sets can be (purportedly) randomly assigned to test-takers. The process is annoying in a few ways (like the fact that the beautiful graphic formula interface is only for the instructor’s use and can never be seen by students, or that it all depends on Java and therefore on Java’s latest security flaws, or that it’s painfully slow to generate number sets, or that you can’t edit the question wording without going through the whole generate-the-numbers rigamarole…). However, those are the kinds of things one gets with the current incarnation of BbL. The real frustration is that the number sets, in a recent exam I administered, do not seem to be really randomly distributed.

I’m not talking some pedantic difference between pseudorandom and true stochastic processes–I’m talking glaring, in-your-face, why-did-I-even-bother nonrandomness. It’s possible I just set something wrong, or Bb had a one-time glitch. But I’ll report my results here, anyway. With a graph.


I gave an exam to two sections of students, with two extremely similar questions to each section, so that’s four “conditions.” For each condition, I had Bb generate ten (10) different number/answer sets (I’ll designate them a through j here and in the graph, where they are called value sets). The exam as a conceptual entity was actually two separate exams in Bb, one for each section. Here is a lattice plot of the frequency with which Bb assigned the ten different value sets to the students in the four different conditions. q1 and q2 refer to the questions. s1 and s2 refer to the sections.

Notice anything? Ten number sets. Only 1 through 4 were used in any of the four conditions/questions.

Perhaps you see the problem. Ten sets of numbers generated (I have gone back several times to verify this). Only the first four used, in each question/condition. Two separate exams. Yeah, that chi-square is significant.
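For the record, the test I have in mind is just a goodness-of-fit test against uniform assignment over the ten value sets. The counts below are hypothetical stand-ins with the same shape as what I saw (only the first four sets ever used), not my actual class data:

    # Goodness-of-fit test against uniform assignment across the ten value sets.
    # These counts are HYPOTHETICAL stand-ins with the pattern I observed
    # (everything piled onto the first four sets), not my real class data.
    from scipy.stats import chisquare

    observed_counts = [8, 7, 6, 7, 0, 0, 0, 0, 0, 0]   # value sets a through j
    result = chisquare(observed_counts)                # expected: equal counts for all ten
    print(f"chi-square = {result.statistic:.1f}, p = {result.pvalue:.1e}")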

So that’s that. It should be noted that I have not replicated this, and I created an initial question then copied and tweaked (within Blackboard) until I had the four represented here, so maybe something was off with the first one and it got duplicated. But it’s darn annoying. So much for making sure no student sits next to someone with the same number set on their screen.

Amazon MP3 Downloads: Linux users add $2.00 extra per album.

I like Amazon’s MP3 downloads. They’re cheaper than iTunes, they’re often on super-sale, and they’re DRM-free. However, there are two costs they carry that are worth noting:

1. Non-obvious DRM-like restrictions:  When you buy albums, you don’t (as you might naively suppose) get to just download them. Oh, no. They are added to your “cloud player,” and you can only download them to an approved device. And if you want to download the whole album in one operation, you need to install their bloatware downloader application on your computer.

2. Linux users get a bit screwed. If you happen to be using this OS, you’ll find you can’t download an entire album at once. No workarounds (that I know of); you just can’t do it. Amazon briefly had a Linux downloader then, for unexplained reasons, discontinued it. You can download your album from your cloud player one. song. at. a. time. That means each album is more expensive for Linux users, because we pay in time what others save in cash.

The math: Let’s say I buy an album (I just did). Let’s say it was about $10. Let’s say it has about 15 songs on it. That takes about 5 minutes of my clicking time to download on Linux, assuming all goes smoothly (it was actually closer to 10 minutes in this particular case).

The average American’s time is worth about $24.00 per hour (in purely money-based income terms).

As a Linux user, my album costs me $10.00 in cash plus 1/12 of an hour of work (i.e., 5 minutes). The extra time cost is about $24.00 * 1/12 = $2.00.
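Or, spelled out in code (same made-up-but-reasonable numbers as above):

    # The back-of-the-envelope math above, spelled out.
    HOURLY_VALUE = 24.00     # rough dollar value of an average American's hour
    CLICK_MINUTES = 5        # my per-album clicking time on Linux (often closer to 10)

    extra_cost = HOURLY_VALUE * CLICK_MINUTES / 60
    print(f"hidden per-album cost: ${extra_cost:.2f}")   # -> $2.00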

So now, when I look at albums on Amazon that I like, I will mentally add $2.00 to the price.