Applied Microeconometrics, EMET3006/4301/8001, Semester 2, 2022, Tutorial 5 (Week 6)

Write a program that you can use to replicate the figures and tables in Cheng and Hoekstra (2013).

1.  Start your script with the regular preamble that installs and loads all the packages you need; see the lecture notes for details (you will need the plm package).

2. If you haven't already, save the castle-doctrine-2000-2010.dta data set on your H-drive; I have a folder called data. (You can download the data from Wattle.)

3. Peruse the data in R so that you can see what the variables are.  Use the commands from Tutorial 0.

4. Table 2 in the paper provides summary statistics for the dependent and control variables. Replicate the unweighted means.

5. Let's replicate the weighted means. They are weighted by the size of the population.

wmean <- summarise(data,
  weighted.mean(homicide, population),
  weighted.mean(jhcitizen_c, population),
  weighted.mean(jhpolice_c, population),
  weighted.mean(robbery, population),
  weighted.mean(assault, population),
  weighted.mean(burglary, population),
  weighted.mean(larceny, population),
  weighted.mean(motor, population),
  weighted.mean(robbery_gun_r, population))

Why do we need to weight the means by the size of the population in a state?
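As a hint, here is a minimal sketch with made-up numbers: the unweighted mean counts every state once, so a small state with a high rate pulls it up, while the population-weighted mean reflects the rate faced by the average person.

```r
# Hypothetical rates per 100,000 and state populations (illustrative only)
rate       <- c(2, 4, 12)        # two large low-rate states, one small high-rate state
population <- c(8e6, 6e6, 5e5)

mean(rate)                       # unweighted: each state counts equally
weighted.mean(rate, population)  # weighted: dominated by the large states
```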

6. Log all of the outcome variables that are rates per 100,000 population. The log() function is part of base R, so no extra package is needed.

data$lhomicide <- log(data$homicide)

data$ljhcitizen_c <- log(data$jhcitizen_c)

data$ljhpolice_c <- log(data$jhpolice_c)

data$lrobbery <- log(data$robbery)

data$lassault <- log(data$assault)

data$lburglary <- log(data$burglary)

data$llarceny <- log(data$larceny)

data$lmotor <- log(data$motor)

data$lrobbery_gun_r <- log(data$robbery_gun_r)

data$lpolice <- log(data$police)

data$lprisoner <- log(data$prisoner)

data$llagprisoner <- log(data$lagprisoner)
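The assignments above can equivalently be written as one loop over the variable names; here is a base-R sketch on a toy data frame (toy and its columns are made up, standing in for the Castle data):

```r
# Toy stand-in for the Castle data
toy <- data.frame(homicide = c(5, 8), robbery = c(100, 150))

# Create an l-prefixed logged column for each outcome variable
for (v in c("homicide", "robbery")) {
  toy[[paste0("l", v)]] <- log(toy[[v]])
}

toy$lhomicide  # identical to log(toy$homicide)
```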

7. Replicate the first column in the unweighted part of Table 3 for larceny. What do you find? Use either pooled OLS with fixed effects and clustered standard errors or convert the data to panel data. My lecture slides have this code.
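As a sketch of the pooled-OLS-with-dummies route, here is base R's lm() on a made-up noiseless panel, so the slope is recovered exactly; on the real data you would replace the toy variables with llarceny and cdl and cluster the standard errors by state (e.g. with sandwich::vcovCL or plm, as in the slides):

```r
# Synthetic panel: 2 states x 3 years, y = 2*x plus a state effect, no noise
panel <- expand.grid(sid = 1:2, year = 2000:2002)
panel$x <- c(1, 2, 4, 3, 5, 9)
panel$y <- 2 * panel$x + ifelse(panel$sid == 1, 0.5, -0.5)

# Pooled OLS with state and year dummies as fixed effects
fit <- lm(y ~ x + factor(sid) + factor(year), data = panel)
coef(fit)["x"]  # recovers the true slope of 2
```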

8.  Continue adding variables as in the specifications in Table 3. Discuss the coefficient on cdl in each specification. (I can't get exactly the same results for some of the columns; don't let this worry you if it happens to you.)

9. Rerun everything using population as the weight. Discuss the coefficient on cdl.

# With weights
pooling_weights <- plm(llarceny ~ cdl + factor(sid) + factor(year),
  data = pdata, weights = population, model = "pooling")
summary(pooling_weights)
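For intuition about what weights = population does, here is a base-R sketch with lm(), which takes the same weights argument (all numbers made up): the heavily weighted observations dominate the fit. For the plm model you would still want state-clustered standard errors, e.g. via plm's vcovHC with cluster = "group"; check the plm documentation for the exact call.

```r
# Toy data: two heavily populated observations on one line, one small outlier
toy <- data.frame(y = c(1, 2, 10), x = c(1, 2, 3),
                  population = c(100, 100, 1))

unweighted <- lm(y ~ x, data = toy)
weighted   <- lm(y ~ x, data = toy, weights = population)

coef(unweighted)["x"]  # dragged up by the small outlier
coef(weighted)["x"]    # close to the slope through the two big observations
```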

10. Run the regressions in Tables 4 and 5. (Do some of this so you get the hang of it; don't spend ages doing it.)

11. Discuss the results of the regression in terms of placebo, deterrence and homicides.

12. Replicate the graphs in figure 1 and discuss. (I regret asking this.) Here is some code I came up with that just does 2005 for figure 1. Ideally I would have points joined by lines but I gave up. Figure 2 is optional.

# Figure 1
# I'll just do one graph from figure 1.
# I'm using the Castle data, which I've renamed data, and I've logged the
# variables from earlier; everything else is the same as in the original data set.
# We'll use the effective year variable for this, but some states never
# change their law and so their value is NA. We want it to be zero instead.

data[is.na(data)] <- 0

data <- data %>%
  mutate(year_2005 = ifelse(effyear == 2005, 1, 0),
         control = ifelse(effyear == 0, 1, 0))

data_reduce <- data[c("year", "sid", "lhomicide", "effyear",
                      "control", "year_2005")]

data_reduce <- data_reduce %>%
  group_by(year, control) %>%
  mutate(mean_lhom_control = mean(lhomicide))

data_reduce <- data_reduce %>%
  group_by(year, year_2005) %>%
  mutate(mean_lhom_2005 = mean(lhomicide))

# I'm making a new dataset of just the means.
mean_lhom_control <- aggregate(data_reduce$lhomicide,
  by = list(data_reduce$year, data_reduce$control), FUN = mean)
mean_lhom_2005 <- aggregate(data_reduce$lhomicide,
  by = list(data_reduce$year, data_reduce$year_2005), FUN = mean)
mean_lhom_control <- subset(mean_lhom_control, Group.2 > 0)
mean_lhom_2005 <- subset(mean_lhom_2005, Group.2 > 0)

mean_2005 <- merge(mean_lhom_control, mean_lhom_2005,
  by.x = "Group.1", by.y = "Group.1")

ggplot(aes(Group.1), data = mean_2005) +
  geom_line(aes(y = x.x), colour = "red") +
  geom_line(aes(y = x.y), colour = "blue") +
  geom_vline(xintercept = 2005, colour = "black", linetype = 2) +
  xlab("Year") +
  ylab("log(Homicide Rate)") +
  ylim(1, 2)

13. What effect does this analysis say reforming castle doctrine laws has on homicides?

14. What are the key parts of these legislative reforms that you think may be causing this result?

15. Explain what SUTVA requires in order for these estimates to be causal.

16. Assume there are spillovers to neighboring states created by castle doctrine reforms. Does that imply that Cheng and Hoekstra's result is too large or too small? Why/why not?