
Commit ba81dad

Author: Hugo Bowne-Anderson
Correct minor typos
1 parent 664f796 commit ba81dad

1 file changed

Lines changed: 3 additions & 3 deletions

File tree

notebooks/2.Parameter_estimation_hypothesis_testing.ipynb

@@ -92,11 +92,11 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "**Key concept:** We only need to know the posterior distribution $P(p|D)$ up to multiplication by a constant at the moment: this is because we really only care about the values of $P(p|D)$ relative to each other – for example, what is the most likely value of $p$? To answer such questions, we only need to know what $P(λ|D)$ is proportional to, as a function of $p$. Thus we don’t currently need to worry about the term $P(D)$. In fact,\n",
+ "**Key concept:** We only need to know the posterior distribution $P(p|D)$ up to multiplication by a constant at the moment: this is because we really only care about the values of $P(p|D)$ relative to each other – for example, what is the most likely value of $p$? To answer such questions, we only need to know what $P(p|D)$ is proportional to, as a function of $p$. Thus we don’t currently need to worry about the term $P(D)$. In fact,\n",
  "\n",
  "$$P(p|D) \\propto P(D|p)P(p) $$\n",
  "\n",
- "**Note:** What is the prior? Really, what do we know about $p$ before we see any data? Well, as it is a probability, we know that $0≤p≤1$. If we haven’t flipped any coins yet, we don’t know much else: so it seems logical that all values of $p$ within this interval are equally likely, i.e., $P(p)=1$, for 0≤λ≤1. This is known as an uninformative prior because it contains little information (there are other uninformative priors we may use in this situation, such as the Jeffreys prior, to be discussed later). People who like to hate on Bayesian inference tend to claim that the need to choose a prior makes Bayesian methods somewhat arbitrary, but as we’ll now see, if you have enough data, the likelihood dominates over the prior and the latter doesn’t matter so much."
+ "**Note:** What is the prior? Really, what do we know about $p$ before we see any data? Well, as it is a probability, we know that $0\\leq p \\leq1$. If we haven’t flipped any coins yet, we don’t know much else: so it seems logical that all values of $p$ within this interval are equally likely, i.e., $P(p)=1$, for $0\\leq p \\leq1$. This is known as an uninformative prior because it contains little information (there are other uninformative priors we may use in this situation, such as the Jeffreys prior, to be discussed later). People who like to hate on Bayesian inference tend to claim that the need to choose a prior makes Bayesian methods somewhat arbitrary, but as we’ll now see, if you have enough data, the likelihood dominates over the prior and the latter doesn’t matter so much."
  ]
 },
 {
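
A minimal sketch of what the corrected cell describes, assuming a discrete grid over $p$ (the grid size and the counts `n`, `k` below are illustrative choices, not from the notebook): because we only need $P(p|D)$ up to a constant, evaluating $P(D|p)P(p)$ on the grid and comparing relative values is enough to find the most likely $p$.

```python
import numpy as np

# Candidate values for p; as a probability, 0 <= p <= 1.
p_grid = np.linspace(0, 1, 1001)

# Uniform (uninformative) prior: P(p) = 1 on [0, 1].
prior = np.ones_like(p_grid)

# Likelihood of k heads in n tosses, up to a constant: p^k * (1 - p)^(n - k).
n, k = 10, 7  # illustrative counts, not taken from the notebook
likelihood = p_grid**k * (1 - p_grid)**(n - k)

# Unnormalized posterior: P(p|D) ∝ P(D|p) P(p); relative values suffice.
posterior = likelihood * prior

print(p_grid[np.argmax(posterior)])  # most likely p; here k/n = 0.7
```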
@@ -119,7 +119,7 @@
  "metadata": {},
  "source": [
  "Now let's generate some coin flips and try to estimate $p(H)$. Two notes:\n",
- "- given data $D$ consisting of $n$ coin tosses & $k$ heads, the likelihood function is given by $L:=P(D|p) \\propto p^k(1p)^{n−k}$;\n",
+ "- given data $D$ consisting of $n$ coin tosses & $k$ heads, the likelihood function is given by $L:=P(D|p) \\propto p^k(1-p)^{n-k}$;\n",
  "- given a uniform prior, the posterior is proportional to the likelihood."
  ]
 },
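
A sketch of the coin-flip experiment this second hunk refers to, with an assumed true $P(H)$ of 0.5 and NumPy's `default_rng` for the simulated tosses (both illustrative assumptions, not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n coin flips with an assumed true P(H) (illustrative choice).
n, p_true = 100, 0.5
flips = rng.binomial(1, p_true, size=n)
k = flips.sum()  # number of heads observed

# Likelihood L := P(D|p) ∝ p^k (1 - p)^(n - k); with the uniform prior,
# the posterior is proportional to the same expression.
p_grid = np.linspace(0, 1, 1001)
posterior = p_grid**k * (1 - p_grid)**(n - k)

# Under the uniform prior the posterior peaks at k/n.
print(p_grid[np.argmax(posterior)], k / n)
```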

0 commit comments
