
Project 1: Simulation and sampling

Use the template material in the zip file project01.zip in Learn to write your report. Add all your function definitions to the code.R file and write your report using report.Rmd. You must upload the following three files as part of this assignment: code.R, report.html, report.Rmd. Specific instructions for these files are in the README.md file.

The main text in your report should be a coherent presentation of theory and discussion of methods and results, showing the code for chunks that perform computations and analysis, but hiding the code for chunks that only generate figures or tables.

Use the echo=TRUE and echo=FALSE chunk options to control what code is visible.

The styler package addin is useful for restyling code for better, more consistent readability. It works for both .R and .Rmd files.

The Project01Hints file contains some useful tips, and the CWmarking file contains the marking guidelines. Both are attached in Learn as PDF files.

Submission should be done through Gradescope.

1    Confidence interval approximation assessment

As in Lab 4, consider the Poisson model for observations y = {y1, . . . , yn}:

$$
y_i \sim \textsf{Poisson}(\lambda), \quad \text{independent for } i = 1, \dots, n,
$$

which has joint probability mass function

$$
p(\mathbf{y} \mid \lambda) = \exp(-n\lambda) \prod_{i=1}^{n} \frac{\lambda^{y_i}}{y_i!} .
$$

In Lab 4, one of the considered parameterisation alternatives was

1. θ = λ, with maximum likelihood estimator $\widehat{\lambda}_{\mathrm{ML}} = \frac{1}{n} \sum_{i=1}^{n} y_i = \bar{y}$.

Create a function called estimate_coverage (in code.R; also document the function) to perform interval coverage estimation, taking arguments CI (a function object for confidence interval construction taking arguments y and alpha and returning a 2-element vector [see Lab 4]), N (the number of simulation replications to use for the coverage estimate), alpha (1 - alpha is the intended coverage probability), n (the sample size) and lambda (the true lambda value for the Poisson model).
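
A minimal sketch of one possible implementation is shown below; it assumes the supplied CI function follows the Lab 4 convention of taking (y, alpha) and returning a 2-element vector (lower, upper).

```r
# Estimate the coverage probability of a confidence interval construction
# method for the Poisson(lambda) model, by repeated simulation.
# CI: function(y, alpha) returning c(lower, upper)
# N: number of simulation replications
# alpha: 1 - alpha is the intended coverage probability
# n: sample size for each simulated data set
# lambda: true Poisson parameter
estimate_coverage <- function(CI, N, alpha, n, lambda) {
  cover <- 0
  for (k in seq_len(N)) {
    y <- rpois(n, lambda)        # simulate data from the true model
    interval <- CI(y, alpha)     # construct the confidence interval
    cover <- cover + (interval[1] <= lambda) * (lambda <= interval[2])
  }
  cover / N                      # proportion of intervals covering lambda
}
```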

Use your function estimate_coverage to estimate the coverage of this confidence interval construction for samples from Poisson(λ), and discuss the following:

• Since the model involves the discrete Poisson distribution, one might ask how sensitive these results are to the precise values of λ and n. To investigate this, run the coverage estimation for different combinations of the model parameters λ and n (fix N = 10000 and α = 0.1); a sketch of one possible parameter sweep is given after this list.

•  Present your results of estimated coverage in two plots, (1) as a function of λ for fixed n = 2, and (2) as a function of n, for fixed λ = 3.

• Discuss the plots with regard to whether the coverage of the intervals achieves the desired 90% confidence level; if not, identify in which cases it falls short and suggest why.
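
The sketch below shows one possible way to organise the sweep over λ for fixed n = 2. The λ grid and the name CI_construction (standing in for your Lab 4 interval function) are illustrative assumptions only; the sweep over n for fixed λ = 3 follows the same pattern.

```r
library(ggplot2)

# Illustrative grid of true lambda values; adjust as needed
lambda_grid <- seq(0.5, 10, by = 0.5)

# Estimated coverage as a function of lambda, for fixed n = 2
coverage_lambda <- data.frame(
  lambda = lambda_grid,
  coverage = vapply(
    lambda_grid,
    function(lam) {
      estimate_coverage(CI_construction, N = 10000, alpha = 0.1,
                        n = 2, lambda = lam)
    },
    numeric(1)
  )
)

# Plot against the nominal 90% level
ggplot(coverage_lambda, aes(lambda, coverage)) +
  geom_line() +
  geom_hline(yintercept = 0.9, linetype = "dashed") +
  labs(x = expression(lambda), y = "Estimated coverage")
```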

2    3D printer materials prediction

The aim is to estimate the parameters of a Bayesian statistical model of material use in a 3D printer. The printer uses rolls of filament that get heated and squeezed through a moving nozzle, gradually building objects. The objects are first designed in a CAD program (Computer Aided Design) that also estimates how much material will be required to print the object.

The data can be loaded with data("filament1", package = "StatCompLab"), and contains information about one 3D-printed object per row. The columns are

•  Index: an observation index

•  Date: printing dates

•  Material: the printing material, identified by its colour

•  CAD_Weight: the object weight (in grams) that the CAD software calculated

• Actual_Weight: the actual weight of the object (in grams) after printing

If the CAD system and printer were both perfect, the CAD_Weight and Actual_Weight values would be equal for each object.  In reality, there are random variations, for example, due to varying humidity and temperature, and systematic deviations due to the CAD system not having perfect information about the properties of the printing materials.

When looking at the data (see below) it’s clear that the variability of the data is larger for larger values of CAD_Weight. The printer operator has made a simple physics analysis, and settled on a model where the connection between CAD_Weight and Actual_Weight follows a linear model, and the variance increases with the square of CAD_Weight. If we denote the CAD weight for observation i by x_i, and the corresponding actual weight by y_i, the model can be defined by

$$
y_i \sim \textsf{Normal}\!\left(\beta_1 + \beta_2 x_i,\ \beta_3 + \beta_4 x_i^2\right).
$$

To ensure positivity of the variance, the parameterisation θ = [θ1, θ2, θ3, θ4] = [β1, β2, log(β3), log(β4)] is introduced, and the printer operator assigns independent prior distributions as follows:

θ1 ∼ Normal(0, γ1),

θ2 ∼ Normal(1, γ2),

θ3 ∼ LogExp(γ3),

θ4 ∼ LogExp(γ4),

where LogExp(a) denotes the logarithm of an exponentially distributed random variable with rate parameter a, as seen in Tutorial 4. The γ = (γ1, γ2, γ3, γ4) values are positive parameters.

The printer operator reasons that random fluctuations in the material properties (such as the density) and in room temperature should lead to a relative error rather than an additive error, and that the model above is an approximation of this. The basic physics assumption is that the error in the CAD software’s calculation of the weight is proportional to the weight itself.

Start by loading the data and plotting it.

2.1    Prior density

With the help of dnorm and the dlogexp function (see the code.R file for documentation), define and document (in code.R) a function log_prior_density with arguments theta and params, where theta is the θ parameter vector, and params is the vector of γ parameters. Your function should evaluate the logarithm of the joint prior density p(θ) for the four θi parameters.
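
A minimal sketch of one possible definition is given below. It assumes dlogexp takes arguments x, rate, and log in the same style as dnorm (check its documentation in code.R), and it treats γ1 and γ2 as the variances of the Normal priors; adjust the sd argument if you interpret them differently.

```r
# Log of the joint prior density p(theta) for the four theta parameters.
# params = (gamma1, gamma2, gamma3, gamma4); gamma1 and gamma2 are treated
# as prior variances here (an assumption), gamma3 and gamma4 as LogExp rates.
log_prior_density <- function(theta, params) {
  dnorm(theta[1], mean = 0, sd = sqrt(params[1]), log = TRUE) +
    dnorm(theta[2], mean = 1, sd = sqrt(params[2]), log = TRUE) +
    dlogexp(theta[3], rate = params[3], log = TRUE) +
    dlogexp(theta[4], rate = params[4], log = TRUE)
}
```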

2.2    Observation likelihood

With the help of dnorm, define and document a function log_like, taking arguments theta, x, and y, that evaluates the observation log-likelihood p(y|θ) for the model defined above.
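
A sketch that directly translates the model definition above (mean β1 + β2 x_i, variance β3 + β4 x_i^2, with β3 = exp(θ3) and β4 = exp(θ4)):

```r
# Observation log-likelihood log p(y | theta) for
# y_i ~ Normal(beta1 + beta2 * x_i, beta3 + beta4 * x_i^2),
# with theta = c(beta1, beta2, log(beta3), log(beta4)).
log_like <- function(theta, x, y) {
  mu <- theta[1] + theta[2] * x
  sigma2 <- exp(theta[3]) + exp(theta[4]) * x^2
  sum(dnorm(y, mean = mu, sd = sqrt(sigma2), log = TRUE))
}
```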

2.3    Posterior density

Define and document a function log_posterior_density with arguments theta, x, y, and params, which evaluates the logarithm of the posterior density p(θ|y), apart from some unevaluated normalisation constant.
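
Since the posterior is proportional to the prior times the likelihood, a minimal sketch is simply the sum of the two log-densities defined above:

```r
# Log-posterior density log p(theta | y), up to an additive constant
log_posterior_density <- function(theta, x, y, params) {
  log_prior_density(theta, params) + log_like(theta, x, y)
}
```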

2.4    Posterior mode

Define a function posterior_mode with arguments theta_start, x, y, and params, that uses optim together with log_posterior_density and the filament data to find the mode µ of the log-posterior density and to evaluate the Hessian at the mode as well as the inverse of the negated Hessian, S. The function should return a list with elements mode (the posterior mode location), hessian (the Hessian of the log-density at the mode), and S (the inverse of the negated Hessian at the mode). See the documentation for optim for how to do maximisation instead of minimisation.
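
A minimal sketch using optim is shown below; control = list(fnscale = -1) turns the minimiser into a maximiser, and hessian = TRUE requests a numerically evaluated Hessian at the optimum.

```r
# Find the posterior mode and the ingredients of a Gaussian approximation.
posterior_mode <- function(theta_start, x, y, params) {
  opt <- optim(
    par = theta_start,
    fn = log_posterior_density,
    x = x, y = y, params = params,
    hessian = TRUE,
    control = list(fnscale = -1)   # maximise instead of minimise
  )
  list(
    mode = opt$par,          # posterior mode location, mu
    hessian = opt$hessian,   # Hessian of the log-density at the mode
    S = solve(-opt$hessian)  # inverse of the negated Hessian
  )
}
```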

2.5    Gaussian approximation

Let all γi = 1, i = 1, 2, 3, 4, and use posterior_mode to evaluate the inverse of the negated Hessian at the mode, in order to obtain a multivariate Normal approximation Normal(µ, S) to the posterior distribution for θ. Use the start value θ = 0 (a vector of four zeros).
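
A sketch of the call, assuming the filament data have been loaded as described above:

```r
# Load the data and compute the Gaussian approximation Normal(mu, S)
data("filament1", package = "StatCompLab")

fit <- posterior_mode(
  theta_start = rep(0, 4),            # start value theta = (0, 0, 0, 0)
  x = filament1$CAD_Weight,
  y = filament1$Actual_Weight,
  params = rep(1, 4)                  # all gamma parameters equal to 1
)
mu <- fit$mode
S <- fit$S
```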

2.6    Importance sampling function

The aim is to construct a 90% Bayesian credible interval for each βj using importance sampling, similarly to the method used in Lab 4. There, a one-dimensional Gaussian approximation of the posterior of a parameter was used. Here, we will instead use a multivariate Normal approximation as the importance sampling distribution. The functions rmvnorm and dmvnorm in the mvtnorm package can be used to sample and to evaluate densities.

Define and document a function do_importance taking arguments N (the number of samples to generate), mu (the mean vector for the importance distribution), and S (the covariance matrix), as well as any additional parameters needed by the function code.

The function should output a data.frame with five columns, beta1, beta2, beta3, beta4, and log_weights, containing the βi samples and the normalised log-importance-weights, so that sum(exp(log_weights)) is 1. Use the log_sum_exp function (see the code.R file for documentation) to compute the needed normalisation information.
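
A minimal sketch is given below, assuming the additional information is passed as x, y, and params, and that log_sum_exp is the helper documented in code.R; the θ3 and θ4 samples are transformed back to the β scale with exp().

```r
library(mvtnorm)

# Importance sampling from a multivariate Normal(mu, S) proposal.
# Returns beta samples (on the original scale) and normalised
# log-importance-weights, so that sum(exp(log_weights)) equals 1.
do_importance <- function(N, mu, S, x, y, params) {
  theta <- rmvnorm(N, mean = mu, sigma = S)
  log_weights <- vapply(
    seq_len(N),
    function(k) {
      log_posterior_density(theta[k, ], x = x, y = y, params = params) -
        dmvnorm(theta[k, ], mean = mu, sigma = S, log = TRUE)
    },
    numeric(1)
  )
  log_weights <- log_weights - log_sum_exp(log_weights)  # normalise
  data.frame(
    beta1 = theta[, 1],
    beta2 = theta[, 2],
    beta3 = exp(theta[, 3]),   # back-transform to the beta scale
    beta4 = exp(theta[, 4]),
    log_weights = log_weights
  )
}
```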

2.7    Importance sampling

Use your defined functions to compute an importance sample of size N = 10000. Plot the empirical weighted CDFs together with the unweighted CDFs for each parameter, with the help of stat_ewcdf, and discuss the results. To achieve simpler ggplot code, you may find pivot_longer(???,  starts_with("beta")) and facet_wrap(vars(name)) useful.
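
One possible arrangement of the plot, assuming the importance sample is stored in a data.frame called imp, that stat_ewcdf from StatCompLab is available, and that its weights aesthetic is used as in Lab 4:

```r
library(ggplot2)
library(tidyr)

# Reshape to long format so each beta parameter gets its own facet
imp_long <- pivot_longer(imp, starts_with("beta"))

ggplot(imp_long, aes(value)) +
  stat_ewcdf(aes(weights = exp(log_weights), colour = "Weighted")) +
  stat_ecdf(aes(colour = "Unweighted")) +
  facet_wrap(vars(name), scales = "free_x") +
  labs(x = "Parameter value", y = "CDF", colour = NULL)
```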

Construct 90% credible intervals for each of the four model parameters, based on the importance sample. In addition to wquantile and pivot_longer, the methods group_by and summarise are helpful. You may wish to define a function make_CI taking arguments x, weights, and prob (to control the intended coverage probability), generating a 1-row, 2-column data.frame to help structure the code.
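
A sketch of the suggested make_CI helper and one way to apply it per parameter; it assumes wquantile accepts probs and weights arguments as in the labs, and reuses the imp importance sample from above.

```r
library(dplyr)
library(tidyr)

# 1-row, 2-column data.frame with a weighted credible interval for x
make_CI <- function(x, weights, prob) {
  q <- wquantile(x, probs = c((1 - prob) / 2, 1 - (1 - prob) / 2),
                 weights = weights)
  data.frame(lower = q[1], upper = q[2])
}

# 90% credible intervals for beta1, ..., beta4 from the importance sample
pivot_longer(imp, starts_with("beta")) %>%
  group_by(name) %>%
  summarise(make_CI(value, weights = exp(log_weights), prob = 0.9),
            .groups = "drop")
```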

Discuss the results both from the sampling-method point of view and from the 3D printer application point of view (this may also involve, e.g., plotting prediction intervals based on point estimates of the parameters, and plotting the importance log-weights to see how they depend on the sampled β-values).