{"id":2185,"date":"2013-06-19T17:09:06","date_gmt":"2013-06-19T21:09:06","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/?p=2185"},"modified":"2013-06-19T17:09:06","modified_gmt":"2013-06-19T21:09:06","slug":"2185","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2013\/06\/19\/2185\/","title":{"rendered":"Probability Spaces"},"content":{"rendered":"<p> Sorry for the slowness of the blog lately. I finally got myself back onto a semi-regular schedule when I posted about the Adria Richards affair, and that really blew up. The amount of vicious, hateful bile that showed up, both in comments (which I moderated) and in my email was truly astonishing. I&#8217;ve written things which pissed people off before, and I&#8217;ve gotten at least my fair share of hatemail. But nothing I&#8217;ve written before came close to preparing me for the kind of unbounded hatred that came in response to that post. <\/p>\n<p> I really needed some time away from the blog after that. <\/p>\n<p> Anyway, I&#8217;m back, and it&#8217;s time to get on with some discrete probability theory!<\/p>\n<p> I&#8217;ve already written a bit about <em>interpretations<\/em> of probability. But I haven&#8217;t said anything about what probability means formally. When I say that the probability of rolling a 3 with a pair of fair six-sided dice is 1\/18, how do I know that? Where did that 1\/6th figure come from?<\/p>\n<p> The answer lies in something called a <em>probability space<\/em>. I&#8217;m going to explain the probability space in frequentist terms, because I think that that&#8217;s easiest, but there is (of course) an equivalent Bayesian description.)\tSuppose I&#8217;m looking at a particular experiment.  In classic mathematical form, a probability space consists of three components (&Omega;, E, P), where:<\/p>\n<ol>\n<li> &Omega;, called the <em>sample space<\/em>, is a set containing all possible outcomes of the experiment. 
For a pair of dice, &Omega; would be the set of all possible rolls: {(1,1), (1,2), (1,3), (1,4), (1,5), (1,6), (2,1), &#8230;, (6,5), (6,6)}.<\/li>\n<li> <em>E<\/em> is an equivalence relation over &Omega;, which partitions &Omega; into a set of <em>events<\/em>. Each event is a set of outcomes that are equivalent. For rolling a pair of dice, an event corresponds to a total: each event is the set of outcomes that add up to the same total. For the event &#8220;3&#8221; (meaning a roll that totalled three), the set would be {(1,2), (2,1)}.<\/li>\n<li> <em>P<\/em> is a <em>probability assignment<\/em>. For each event <em>e<\/em> in <em>E<\/em>, <em>P(e)<\/em> is a value between 0 and 1, where:\n<p><center>&sum;<sub>e &isin; E<\/sub> P(e) = 1<\/center><\/p>\n<p> (That is, the sum of the probabilities of all of the possible events in the space is exactly 1.)<\/p>\n<\/li>\n<\/ol>\n<p> The probability of an event <em>e<\/em> being the outcome of a trial is <em>P(e)<\/em>.<\/p>\n<p> So the probability of any particular event as the result of a trial is a number between 0 and 1. What does that mean? If the probability of event <em>e<\/em> is <em>p<\/em>, then if we repeat the trial <em>N<\/em> times, we expect <em>N*p<\/em> of those trials to have <em>e<\/em> as their result. If the probability of <em>e<\/em> is 1\/4, and we repeat the trial 100 times, we&#8217;d expect <em>e<\/em> to be the result 25 times.<\/p>\n<p> But in an important sense, that&#8217;s a cop-out. We&#8217;ve defined probability in terms of this abstract model, where the third component is the probability. Isn&#8217;t that circular?<\/p>\n<p> Not really. For a given trial, we create the probability assignment by observation and\/or analysis. 
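As an illustration (mine, not from the original post), here is a minimal Python sketch of the (&Omega;, E, P) construction for a pair of fair dice, with events partitioned by total, as described above:

```python
from fractions import Fraction
from itertools import product

# Illustrative sketch of a probability space (Omega, E, P) for two fair dice.
# Omega: all 36 equally likely outcomes.
omega = list(product(range(1, 7), repeat=2))

# E: partition Omega into events, grouping outcomes by their total.
events = {}
for roll in omega:
    events.setdefault(sum(roll), []).append(roll)

# P: each outcome has probability 1/36, so P(event) = |event| / |Omega|.
P = {total: Fraction(len(rolls), len(omega)) for total, rolls in events.items()}

print(P[3])             # 1/18 -- just the outcomes (1,2) and (2,1)
print(sum(P.values()))  # 1 -- the event probabilities sum to exactly 1
```

Nothing here is specific to dice: any finite sample space, partition into events, and counting of equally likely outcomes gives the same bookkeeping.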
The important point is that this is really just a bare minimum starting point. What we really care about in probability isn&#8217;t the chance associated with a single, simple, atomic event. What we want to do is take the probabilities associated with a group of simple events, and use our understanding of them to explore a complex event. <\/p>\n<p> If I give you a well-shuffled deck of cards, it&#8217;s easy to show that the probability of drawing the 3 of diamonds is 1\/52. What we want to do with probability is ask things like: what are the odds of being dealt a flush in a poker hand?<\/p>\n<p> The construction of a probability space gives us a well-defined platform to use for building probabilistic models of more interesting things. Given the probability spaces of two single dice, we can combine them to create the probability space of the two dice rolled together. Given the probability space of a pair of dice, we can construct the probability space of a game of craps. And so on.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Sorry for the slowness of the blog lately. I finally got myself back onto a semi-regular schedule when I posted about the Adria Richards affair, and that really blew up. The amount of vicious, hateful bile that showed up, both in comments (which I moderated) and in my email was truly astonishing. 
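To make the flush example concrete, here is a small sketch (again mine, not the post&#8217;s; note it counts any five cards of one suit, so straight flushes are included):

```python
from fractions import Fraction
from math import comb

# Illustrative sketch: probability of being dealt a flush in five cards.
# A "flush" here is any 5 cards of one suit (straight flushes included):
# choose one of 4 suits, then 5 of its 13 cards, out of all C(52,5) hands.
flush_hands = 4 * comb(13, 5)
all_hands = comb(52, 5)
print(Fraction(flush_hands, all_hands))  # 33/16660, roughly 1 in 505
```

The counting argument is the point: the probability space makes every five-card hand equally likely, so a complex event like "flush" reduces to counting the hands it contains.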
I&#8217;ve written things [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[53],"tags":[],"class_list":["post-2185","post","type-post","status-publish","format-standard","hentry","category-probability"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/s4lzZS-2185","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/2185","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=2185"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/2185\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=2185"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=2185"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=2185"}],"curies"
:[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}