Comments for "Bayesic Instinct" (dagblog)
http://dagblog.com/bayesic-instinct-21699

----
Well, polling practices
by PeraclesPlease, Thu, 12 Jan 2017 13:52:14 +0000
http://dagblog.com/comment/232445#comment-232445
In reply to "Most of what you wrote is" (http://dagblog.com/comment/232443#comment-232443)

Well, polling practices greatly affect the validity of the poll. The current practice of mashing up a bunch of polls and hoping the mix yields validity is certainly a problem: if everyone uses the same flawed method or assumption, the errors simply appear across all of them. The guy at HuffPost considered not massaging the data a PLUS, while Nate Silver continually stresses the importance of human oversight and of tweaking based on intelligent evaluation (while trying to acknowledge where personal bias might enter).

----
Most of what you wrote is
by CVille Dem, Thu, 12 Jan 2017 13:20:25 +0000
http://dagblog.com/comment/232443#comment-232443
In reply to "Exactly. My poorly placed" (http://dagblog.com/comment/232437#comment-232437)

Most of what you wrote is somewhat over my head, but I get the gist. I also found OGD's reference below very helpful. I was hopeful on November 8th until I went to vote. I had never had to wait to vote before, but that day the line was very long. I had a sinking feeling as I realized that most polls are taken of "likely voters," so all the people who were "inspired" by Trump but usually did not vote were not counted.

I went home feeling very uncertain. That was my gut reaction, and though it came late in the game, I don't think it was wrong. What do you think about polling practices as a factor in getting things wrong?

----
Wow, good followup, thanks.
by PeraclesPlease, Thu, 12 Jan 2017 09:52:08 +0000
http://dagblog.com/comment/232442#comment-232442
In reply to "Peracles... Sam Wang at PEC..." (http://dagblog.com/comment/232439#comment-232439)

Wow, good followup, thanks.
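Neither comment spells out the arithmetic, so here is a minimal sketch, with invented numbers, of the shared-error point above: averaging many polls shrinks each pollster's independent noise, but a bias common to every pollster (for instance, the same likely-voter screen) survives the average untouched.

```python
# A minimal sketch, not from any comment here; all numbers invented.
# Each simulated poll = true margin + shared bias + independent noise.
import random

random.seed(1)
TRUE_MARGIN = -0.5     # suppose Trump actually leads by 0.5 points
SHARED_BIAS = 2.0      # every poll overstates Clinton by 2 points
NOISE_SD = 3.0         # ordinary per-poll sampling noise

polls = [TRUE_MARGIN + SHARED_BIAS + random.gauss(0, NOISE_SD)
         for _ in range(50)]
average = sum(polls) / len(polls)

# Averaging 50 polls nearly eliminates the independent noise,
# but the shared 2-point bias is still fully present:
print(f"average of 50 polls: Clinton {average:+.1f}")   # about +1.5
print(f"actual margin:       Clinton {TRUE_MARGIN:+.1f}")
```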
----
Peracles... Sam Wang at PEC...
by oldenGoldenDecoy, Thu, 12 Jan 2017 08:46:37 +0000
http://dagblog.com/comment/232439#comment-232439
In reply to "Bayesic Instinct" (http://dagblog.com/bayesic-instinct-21699)

There are some great take-aways in this... And Sam does interact in his comment section.

"What data got right in 2016 – and what's ahead for PEC"
December 20, 2016
http://election.princeton.edu/2016/12/20/defenses-of-institutionalism-and-the-year-ahead/

--snippet--

  Usually, PEC would close down after the election for two years. But this year I've heard from many of you about your continued appetite for data-based analysis. More than ever, data is necessary to understand public life. Here are some examples of what we learned this year:

  - Data showed us Trump's strength in the primaries in January (http://election.princeton.edu/2016/01/05/what-december-polls-can-tell-us-about-the-gop-nomination/). They showed us Clinton's inevitability in her party's primaries as well.
  - Simulation showed how the GOP nomination rules were stacked in Trump's favor (http://election.princeton.edu/2016/01/13/full-simulation-of-gop-nomination-rules/).
  - In the general election, data showed us just how entrenched voters have become starting in the 1990s (http://prospect.org/article/hardened-divide-american-politics-0), and how close and unmoving the race was (http://election.princeton.edu/2016/10/17/the-polarization-hypothesis-passes-the-access-hollywood-test/).
  - Detailed time-series analysis shows that late-deciding voters post-Comey (http://election.princeton.edu/2016/12/10/the-comey-effect/) were a key factor in the home stretch.

  That is just the analysis done here – there was also much excellent work done at FiveThirtyEight and The Upshot.

---snip---

  The estimate of uncertainty was the major difference between PEC, FiveThirtyEight, and others. Drew Linzer has explained very nicely (https://twitter.com/DrewLinzer/status/804824244748099584?lang=en) how a win probability can vary quite a bit, even when the percentage margin is exactly the same (to see this point as a graph, see http://election.princeton.edu/wp-content/uploads/2016/12/CytOnKQUAAEHlDv.jpg). At the Princeton Election Consortium, I estimated the Election-Eve correlated error as being less than a percentage point. At FiveThirtyEight, their uncertainty corresponded to about four percentage points. But we both had very similar Clinton-Trump margins – as did all aggregators.

  For this reason, it seems better to get away from probabilities. When pre-election state polls show a race that is within two percentage points, that point is obscured by talk of probabilities. Saying "a lead of two percentage points, plus or minus two percentage points" immediately captures the uncertainty.

  Even a hedged estimate like FiveThirtyEight's has problems, because it is ingrained in people to read percentage points as being in units of votes. Silver, Enten, and others have taken an undeserved shellacking from people who don't understand that a ~70% probability is not certain at all. Next time around, I won't focus on probabilities – instead I will focus on estimated margins – as well as an assessment of which states are the best places for individuals to make efforts. This won't be as appealing to horserace-oriented readers, but it will be better for those of you who are actively engaged.

Read the entire piece here: http://election.princeton.edu/2016/12/20/defenses-of-institutionalism-and-the-year-ahead/

~OGD~
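The Linzer point quoted above is easy to reproduce. A minimal sketch, using the simplest possible normal-error model (my simplification, not PEC's or FiveThirtyEight's actual machinery): the same two-point lead becomes roughly a 98% or a 69% win probability depending only on the assumed size of the error.

```python
# A minimal sketch of Drew Linzer's point quoted above: identical
# polled margins, very different win probabilities, driven entirely
# by the assumed error. The normal model here is my own assumption.
from statistics import NormalDist

def win_probability(margin_pts: float, error_pts: float) -> float:
    """P(true margin > 0) if true margin ~ Normal(margin_pts, error_pts)."""
    return 1 - NormalDist(mu=margin_pts, sigma=error_pts).cdf(0)

for error in (1.0, 4.0):  # ~PEC-sized vs ~FiveThirtyEight-sized error
    print(f"lead +2.0, error {error:.0f} pt: {win_probability(2.0, error):.0%}")
# error 1 pt -> ~98% win probability; error 4 pt -> ~69%
```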
----
Exactly. My poorly placed
by PeraclesPlease, Thu, 12 Jan 2017 06:05:00 +0000
http://dagblog.com/comment/232437#comment-232437
In reply to "Post presser, I'm in a pretty" (http://dagblog.com/comment/232429#comment-232429)

Exactly. My poorly placed optimism at the end was that the polls were shifting toward Hillary over the last 3 or 4 days, presumably showing she was countering the Comey and Wikileaks business well, and maybe with another 3 or 4 days... But the magnitude of uncertainty was still there, including the cross-correlation between Midwest states and others. If we thought she was safe in Wisconsin and she wasn't, the same assumptions and like-minded polls may have missed the same sentiments or voters in the other Midwest states. The errors reinforce themselves, and when they collapse or prove flawed, it can be across the board. Silver tried to dial in that uncertainty, warning of too many unknown unknowns to be confident in the prediction. But we demand confidence, a single number, not a range of more and less likely outcomes. We're victims of our own wishful thinking, here and in everything we do in life. It's our nature, but it doesn't have to be (as much).

People are saying "Trump likely won't be so bad" or "Trump will be worse." We need to assign real probabilities to how bad he may be under different scenarios, starting from our prior bias, and then calibrate those predictions against the actual evidence for and against as we move forward. That's one way not to get burned (as badly).

The Buzzfeed release is interesting. We can similarly assign a probability that it's true, mathematically acknowledging our heavy bias toward believing it, and grade our predictions as more evidence comes out. That the FBI and news agencies sit on material like this but not on other material, while still more is unknown to us (and to them), gives an idea of how incomplete our data set and truth models are. In a Bayesian world, Buzzfeed would state the likelihood it assigns the document a priori and how likely it believes each of the described incidents to be, much like Wikileaks' little quiz on which disease Hillary might be suffering from, except that Wikileaks assigned no probability of its own: it left that to the reader and didn't include the possible answer "none."

Anyway, kudos to Buzzfeed, whether or not it complicates our world. We already know news orgs have their fingers on the scale, and their "confirmation" is less of a gold standard than believed. Often it really does come down to one person, just like this, and, like the "Curveball" source in Iraq, often one much dodgier. What were his odds of being right?
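The cross-correlation worry in that comment can be made concrete with a toy simulation. A minimal sketch, all numbers invented: three states each show the same 2-point lead and carry the same total polling error; the only difference is whether that error is shared across the states or independent in each.

```python
# A minimal sketch (invented numbers) of the cross-correlation point
# above: three 2-point Midwest leads are far shakier when the states
# share one polling error (the same missed voters everywhere) than
# when each state's error is independent, even though the total
# error per state is identical in both cases.
import math
import random

random.seed(2)
LEAD = 2.0                      # polled lead in each of three states
SHARED_SD, LOCAL_SD = 3.0, 1.5  # shared vs state-specific error sizes
TOTAL_SD = math.hypot(SHARED_SD, LOCAL_SD)  # same per-state total error
TRIALS = 100_000

def p_lose_all_three(correlated: bool) -> float:
    losses = 0
    for _ in range(TRIALS):
        shared = random.gauss(0, SHARED_SD) if correlated else 0.0
        local_sd = LOCAL_SD if correlated else TOTAL_SD
        margins = [LEAD + shared + random.gauss(0, local_sd) for _ in range(3)]
        losses += all(m < 0 for m in margins)
    return losses / TRIALS

print(f"independent state errors: {p_lose_all_three(False):.1%}")  # ~2%
print(f"correlated state errors:  {p_lose_all_three(True):.1%}")   # ~15%
```

When the errors collapse, they collapse together: the "safe" three-state sweep fails roughly seven times as often under the correlated model.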
----
Post presser, I'm in a pretty
by Peter Schwartz, Thu, 12 Jan 2017 02:09:22 +0000
http://dagblog.com/comment/232429#comment-232429
In reply to "Bayesic Instinct" (http://dagblog.com/bayesic-instinct-21699)

Post presser, I'm in a pretty bad mood, so I'm not sure I followed your argument as a whole. But here's what occurred to me about Silver. Toward the very end, I believe he was giving Hillary a 60% chance of winning. So when she didn't win, people said, "So much for Nate the great."

But if he gave Hillary a 60% chance of winning, he gave Trump a 40% chance of winning, which isn't nothing. And the election just barely (based on the roughly 70,000-vote margin in those key states) fell onto the 40% side. IOW, 60/40 does NOT mean the 40% outcome won't happen. It just means there's a 40% chance of it happening: less chance than the 60% outcome, but a real one.
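To put a number on that closing point, a minimal sketch: for a well-calibrated forecaster, outcomes given a 40% chance should come in roughly four times out of ten, so a single such upset is weak evidence against the forecast itself.

```python
# A minimal sketch of the point above: a 40% chance "isn't nothing."
# Simulate many races where the underdog truly has a 40% chance;
# the underdog wins about 4 in 10 of them.
import random

random.seed(4)
TRIALS = 100_000
upsets = sum(random.random() < 0.40 for _ in range(TRIALS))
print(f"40%-chance outcomes occurred in {upsets / TRIALS:.1%} of trials")
# prints roughly 40.0%
```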