• taiyang@lemmy.world · 3 months ago

As with others, I also say ignore the polls. Even done right, we're a bit too far out to say how it'll go, and they generally aren't done right. But here's a rant anyway, since it's on my mind:

Pay attention to who is asked, and pay attention to the margin of error. The latter is a simple consequence of sampling error: small-ish samples produce a lot of noise, especially with yes/no questions. I've actually seen news outlets report statistically insignificant findings, especially when they fit the narrative (results that should otherwise be rejected as too close to call). These can be false positives, but pundits aren't exactly scientists, and there's an incentive to report them anyway.
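
If you want to sanity-check a headline yourself, here's a minimal sketch of the standard margin-of-error formula for a sample proportion. The poll numbers in the usage example are made up purely for illustration:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a sample proportion.

    p: observed proportion (e.g. 0.52 for 52%)
    n: sample size
    z: z-score for the confidence level (1.96 ~ 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 800 respondents showing a 52/48 split:
moe = margin_of_error(0.52, 800)
print(f"52% +/- {moe:.1%}")  # ~ +/- 3.5 points
# The 4-point gap is smaller than the two overlapping error bars,
# so "52 beats 48" here is exactly the too-close-to-call case above.
```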

But the biggest issue is validity. Two forms matter here. External validity concerns whether results generalize correctly; e.g., a poll using only landlines excludes a large chunk of people, ruining generalizability. Construct validity concerns whether the question or metric used really gets at the researcher's question: a question with loaded language, or one that primes answers, can shift results. Asking about Gaza and then asking about Biden may produce different numbers than asking about abortion rights and then asking about Biden. (One can argue this is reliability, and it is, but the two concepts are related, and you can't have validity without reliability.)
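
To make the external-validity point concrete, here's a toy simulation of landline-only coverage bias. Every number in it (30% landline coverage, 55% vs. 45% support rates) is invented purely to show the mechanism:

```python
import random

random.seed(42)

# Hypothetical population: 30% reachable by landline, and suppose
# landline owners support the candidate at 55% vs. 45% among everyone
# else. (All numbers are made up to illustrate coverage bias.)
population = [{"landline": random.random() < 0.30} for _ in range(100_000)]
for person in population:
    rate = 0.55 if person["landline"] else 0.45
    person["supports"] = random.random() < rate

def poll(frame, n=1000):
    """Estimate support from a random sample of the given sampling frame."""
    sample = random.sample(frame, n)
    return sum(p["supports"] for p in sample) / n

landline_frame = [p for p in population if p["landline"]]
print(f"landline-only poll: {poll(landline_frame):.1%}")  # ~55%, biased high
print(f"full-frame poll:    {poll(population):.1%}")      # ~48%, the true mix
```

The survey instrument can be flawless and the estimate still lands several points off, because the sampling frame never matched the population.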

Plenty of well-meaning pollsters fall into both traps, whether from a lack of resources or a lack of critical thinking about the metrics used. Doing it right also requires controlling for confounding variables, which takes more advanced models than many of them know how to use.
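
Here's the simplest version of that confounding problem: one hypothetical confounder z drives both x and the outcome y (all effect sizes invented). Leave z out of the regression and you "find" an effect of x that isn't there:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# z (say, age) influences both x and y; the true direct effect of x on y is 0.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 0.8 * z + rng.normal(size=n)  # y depends only on z, never on x

# Naive model y ~ x: the confounder leaks in and x looks predictive.
X_naive = np.column_stack([np.ones(n), x])
b_naive, *_ = np.linalg.lstsq(X_naive, y, rcond=None)
print(f"naive slope on x:      {b_naive[1]:+.2f}")  # ~ +0.39, spurious

# Controlled model y ~ x + z: x's coefficient collapses toward zero.
X_ctrl = np.column_stack([np.ones(n), x, z])
b_ctrl, *_ = np.linalg.lstsq(X_ctrl, y, rcond=None)
print(f"controlled slope on x: {b_ctrl[1]:+.2f}")  # ~ 0.00
```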

That’s my little PSA while I get ready to teach my stats class this evening, haha.