
STATS 731, 2022, Semester 2

Tutorial 2

Question 1: Binomial Forecasting

Let θ be the success probability parameter of an experiment, and let x be the number of successes achieved out of N trials. The sampling distribution is

x|θ ∼ Binomial(N,θ).                                                     (1)

Let the prior for θ be Uniform(0, 1).

(a) Show that, if the prior for θ is Uniform(0, 1), the prior predictive distribution for x is a discrete uniform distribution. Bayes justified the uniform prior for θ on the grounds that it implies this uniform prior predictive distribution for x.
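
A quick numerical check in R (a sanity check, not a substitute for the derivation): integrate the binomial likelihood against the Uniform(0, 1) prior for each value of x and confirm that every prior predictive probability equals 1/(N + 1). The value N = 5 is an arbitrary choice for the check.

# Prior predictive check: p(x) = integral over [0, 1] of dbinom(x, N, theta),
# which should equal 1/(N + 1) for every x = 0, ..., N.
N <- 5                      # arbitrary choice for the check
prior_pred <- sapply(0:N, function(x) {
  integrate(function(theta) dbinom(x, N, theta), 0, 1)$value
})
print(prior_pred)           # all entries should match...
print(1 / (N + 1))          # ...this value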

(b) Show that the posterior predictive probability of success on the (N + 1)th trial is (x + 1)/(N + 2); this is Laplace’s ‘rule of succession’.
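
Since Uniform(0, 1) is Beta(1, 1), conjugacy gives a Beta(x + 1, N − x + 1) posterior, and the posterior predictive success probability is its mean. A quick R check with arbitrary N and x:

# Compare the numerically integrated posterior mean E[theta | x]
# with Laplace's rule of succession, (x + 1)/(N + 2).
N <- 10; x <- 7             # arbitrary values for the check
post_mean <- integrate(function(theta) theta * dbeta(theta, x + 1, N - x + 1),
                       0, 1)$value
print(post_mean)
print((x + 1) / (N + 2))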

(c) Find the posterior predictive probability that the next two trials are both successes.
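
Once you have a closed-form answer, a Monte Carlo check is straightforward: draw θ from the posterior, simulate two further trials per draw, and take the proportion of simulations in which both succeed. Again, N and x are arbitrary.

# Monte Carlo check for (c): the proportion of posterior draws yielding
# two successes in two new trials estimates the posterior predictive
# probability that the next two trials are both successes.
set.seed(1)
N <- 10; x <- 7             # arbitrary values for the check
n_sim <- 1e6
theta_draws <- rbeta(n_sim, x + 1, N - x + 1)
print(mean(rbinom(n_sim, 2, theta_draws) == 2))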

Question 2: Equidae on a Grid

For this question, include your code (neat and tidy please) in your answers.  You may use R, Python+NumPy+Matplotlib, or Matlab.

Suppose that x = 94 successes are observed out of N = 100 trials in a binomial experiment with parameter θ. Let the prior be θ ∼ Beta(20, 80).

(a) Calculate and plot the prior, likelihood, and posterior for θ on common axes. Make sure the prior and posterior are properly normalised (as densities), but scale the likelihood function arbitrarily so that the plot looks nice.

The plot from part (a) should explain the bad joke about Bayesians who expect to see a horse, catch a glimpse of a donkey, and therefore believe they have seen a mule.
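
A minimal sketch of part (a) in R, using Beta-binomial conjugacy (a Beta(20, 80) prior updated with 94 successes in 100 trials gives a Beta(114, 86) posterior); the likelihood is rescaled purely for display:

# Prior, (scaled) likelihood, and posterior on common axes.
theta <- seq(0, 1, by = 0.001)
x <- 94; N <- 100
prior      <- dbeta(theta, 20, 80)
posterior  <- dbeta(theta, 20 + x, 80 + (N - x))
likelihood <- dbinom(x, N, theta)
scaled_lik <- likelihood * max(posterior) / max(likelihood)  # arbitrary scale
plot(theta, prior, type = "l", col = "blue",
     ylim = range(prior, posterior), xlab = expression(theta), ylab = "density")
lines(theta, scaled_lik, col = "red")
lines(theta, posterior, col = "darkgreen")
legend("topright", legend = c("prior", "likelihood (scaled)", "posterior"),
       col = c("blue", "red", "darkgreen"), lty = 1)

With these numbers the prior concentrates near 0.2, the likelihood near 0.94, and the posterior compromises in between.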

(b) Re-do (a) but with a heavy-tailed prior (specifically, a truncated Cauchy distribution), with probability density function

p(θ) ∝ 1 / (1 + ((θ − θ₀)/γ)²)                                           (2)

for θ ∈ [0, 1], where θ₀ and γ are the location and scale parameters. Does the joke still work?
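
A grid-based sketch for (b) in R, since the truncated Cauchy prior is not conjugate. The location θ₀ = 0.2 and scale γ = 0.05 used below are assumed stand-in values (chosen to sit near the centre of the Beta(20, 80) prior); substitute the values given in equation (2).

# Grid approximation: normalise the truncated Cauchy prior numerically,
# multiply by the likelihood, and renormalise to get the posterior.
theta   <- seq(0.0005, 0.9995, by = 0.001)   # grid midpoints on [0, 1]
d_theta <- 0.001
x <- 94; N <- 100
loc <- 0.2; sc <- 0.05                        # ASSUMED location and scale
prior <- 1 / (1 + ((theta - loc) / sc)^2)
prior <- prior / sum(prior * d_theta)         # normalise over [0, 1]
likelihood <- dbinom(x, N, theta)
posterior  <- prior * likelihood
posterior  <- posterior / sum(posterior * d_theta)
plot(theta, prior, type = "l", col = "blue",
     ylim = range(prior, posterior), xlab = expression(theta), ylab = "density")
lines(theta, likelihood * max(posterior) / max(likelihood), col = "red")
lines(theta, posterior, col = "darkgreen")

Because the Cauchy tails decay much more slowly than the Beta's, the data can pull the posterior further; compare the two plots to decide whether the joke survives.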

(c) Compare the values of the marginal likelihoods in (a) and (b). Which is higher? Why?
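
For (c), the marginal likelihood p(x) = ∫ p(x|θ) p(θ) dθ can be approximated on the same grid under each prior (the Cauchy location and scale below are the same assumed stand-ins as in the (b) sketch):

# Grid approximation of the marginal likelihood under each prior.
theta   <- seq(0.0005, 0.9995, by = 0.001)
d_theta <- 0.001
x <- 94; N <- 100
lik <- dbinom(x, N, theta)
prior_beta   <- dbeta(theta, 20, 80)
prior_cauchy <- 1 / (1 + ((theta - 0.2) / 0.05)^2)  # ASSUMED location/scale
prior_cauchy <- prior_cauchy / sum(prior_cauchy * d_theta)
print(c(beta   = sum(lik * prior_beta   * d_theta),
        cauchy = sum(lik * prior_cauchy * d_theta)))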