MRP Redux

Using fake data simulations to understand our MRP model.

Michael DeWitt https://michaeldewittjr.com
04-05-2019

Background

I recently got a question about using MRP, and I thought it would be worthwhile to share some additional explanation of the approach using a simulated data set. Simulating your data and testing your method is a really good way to understand whether your model is sensitive enough to detect differences. This kind of fake data simulation lets you see where your model fails before you use it in production or in the field, where the cost of failure is high.

Population Data

I’m going to generate some synthetic data for this example. This will represent our population and provide a benchmark for “truth.” The data are completely made up and don’t represent anything in particular.

These data represent a population of 1 million persons of binary gender and four different races, living in the US. Again, the proportions are made up.

Let’s imagine that each person has a probability \(\theta\) of supporting a given opinion. Let’s further suppose that this probability of support is partially determined by some combination of gender, race, location, and, of course, some random noise.
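
The generating code isn’t shown in the post, but a minimal sketch might look like the following. The race proportions match the cell percentages shown later, while the effect sizes (a shift for women, a negative shift for Asians, and a positive shift for South Carolina, mirroring the effects we check for below) are illustrative assumptions:


library(dplyr)

set.seed(42)

n_pop <- 1e6

# Census division for each state, from base R's `datasets` package
state_lookup <- tibble(
  state    = state.abb,
  division = state.division
)

population_data <- tibble(
  state  = sample(state.abb, n_pop, replace = TRUE),
  race   = sample(c("White", "Black", "Hispanic", "Asian"), n_pop,
                  replace = TRUE, prob = c(0.4, 0.2, 0.3, 0.1)),
  gender = sample(c("Female", "Male"), n_pop, replace = TRUE)
) %>%
  left_join(state_lookup, by = "state") %>%
  mutate(
    # Log-odds of support: a baseline plus assumed shifts for gender,
    # one race, and one state, plus individual-level noise
    eta = -0.1 + 0.2 * (gender == "Female") - 0.4 * (race == "Asian") +
      0.3 * (state == "SC") + rnorm(n_pop, 0, 0.1),
    true_opinion = rbinom(n_pop, 1, plogis(eta))
  ) %>%
  select(-eta)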

Now I’m going to draw my sample for my analysis. This would represent a completely random sample of my population.
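
The draw itself isn’t shown either; here is a sketch, assuming a simple random sample of 4,000 respondents (the sample size is my assumption):


set.seed(336)

# Completely random sample of the population
survey <- population_data %>%
  sample_n(4000)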

Multi-Level Regression

Now we can step into the first component of MRP: the multi-level, or hierarchical, regression model. Based on the literature and domain knowledge, I believe that race, gender, state, and census division may be important, or at least may help me make inferences about the probability of supporting the given opinion. Additionally, I will use partial pooling to help make inferences for some of the small cell sizes that exist in my survey. I can build that equation using brms.


library(brms)

my_equation <- bf(true_opinion ~ (1 | race * gender) + (1 | state) + (1 | division))

Now let’s see which priors I need to set. Note that brms expands the (1 | race * gender) term into separate varying intercepts for race, gender, and the race:gender interaction, as the output below shows. This step is important, as a model with this many varying effects may have difficulty converging without informative priors.


get_prior(my_equation, data = survey) %>% 
  select(prior, class, coef, group)

                 prior     class      coef       group
1  student_t(3, 1, 10) Intercept                      
2  student_t(3, 0, 10)        sd                      
3                             sd              division
4                             sd Intercept    division
5                             sd                gender
6                             sd Intercept      gender
7                             sd                  race
8                             sd Intercept        race
9                             sd           race:gender
10                            sd Intercept race:gender
11                            sd                 state
12                            sd Intercept       state
13 student_t(3, 0, 10)     sigma                      

Now I can set my priors for the different coefficients in my model.


my_priors <- c(
  set_prior("normal(0,0.2)", class = "sd", group = "race:gender"),
  set_prior("normal(0,0.2)", class = "sd", group = "race"),
  set_prior("normal(0,0.2)", class = "sd", group = "gender"),
  set_prior("normal(0,0.2)", class = "sd", group = "state"),
  set_prior("normal(0,0.2)", class = "sd", group = "division")
)

Now we can run the model in brms.


fit <- brm(my_equation, survey, prior = my_priors, 
           chains = 2, iter = 1000, cores = 2, family = bernoulli(),
           silent = TRUE)

Now we can visualise the outputs.


library(tidybayes)
library(ggplot2)

fit %>%
  gather_draws(`sd_.*`, regex = TRUE) %>%
  ungroup() %>%
  mutate(group = stringr::str_replace_all(.variable, c("sd_" = "", "__Intercept" = ""))) %>%
  ggplot(aes(y = group, x = .value)) +
  ggridges::geom_density_ridges(aes(height = ..density..),
                                rel_min_height = 0.01, stat = "density",
                                scale = 1.5)

We should also run some additional posterior checks, including examining our Rhat values for convergence and our effective sample sizes. Some posterior predictive checks would also be helpful to ensure that the model is performing well. I won’t do all of that here, but it is good practice.
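
For reference, those diagnostics might look something like this with brms (a sketch, not code from the original analysis):


# Rhat and effective sample sizes are printed in the model summary
summary(fit)

# Rhat values near 1 indicate the chains have converged
rhat(fit)

# Graphical posterior predictive check: do simulated outcomes resemble the data?
pp_check(fit, nsamples = 50)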

We can also check some of the intercepts to see if the model detected some of the changes that we introduced.


library(bayesplot)
posterior <- as.matrix(fit)
dimnames(posterior)

$iterations
NULL

$parameters
 [1] "b_Intercept"                             
 [2] "sd_division__Intercept"                  
 [3] "sd_gender__Intercept"                    
 [4] "sd_race__Intercept"                      
 [5] "sd_race:gender__Intercept"               
 [6] "sd_state__Intercept"                     
 [7] "r_division[New.England,Intercept]"       
 [8] "r_division[Middle.Atlantic,Intercept]"   
 [9] "r_division[South.Atlantic,Intercept]"    
[10] "r_division[East.South.Central,Intercept]"
[11] "r_division[West.South.Central,Intercept]"
[12] "r_division[East.North.Central,Intercept]"
[13] "r_division[West.North.Central,Intercept]"
[14] "r_division[Mountain,Intercept]"          
[15] "r_division[Pacific,Intercept]"           
[16] "r_gender[Female,Intercept]"              
[17] "r_gender[Male,Intercept]"                
[18] "r_race[Asian,Intercept]"                 
[19] "r_race[Black,Intercept]"                 
[20] "r_race[Hispanic,Intercept]"              
[21] "r_race[White,Intercept]"                 
[22] "r_race:gender[Asian_Female,Intercept]"   
[23] "r_race:gender[Asian_Male,Intercept]"     
[24] "r_race:gender[Black_Female,Intercept]"   
[25] "r_race:gender[Black_Male,Intercept]"     
[26] "r_race:gender[Hispanic_Female,Intercept]"
[27] "r_race:gender[Hispanic_Male,Intercept]"  
[28] "r_race:gender[White_Female,Intercept]"   
[29] "r_race:gender[White_Male,Intercept]"     
[30] "r_state[AK,Intercept]"                   
[31] "r_state[AL,Intercept]"                   
[32] "r_state[AR,Intercept]"                   
[33] "r_state[AZ,Intercept]"                   
[34] "r_state[CA,Intercept]"                   
[35] "r_state[CO,Intercept]"                   
[36] "r_state[CT,Intercept]"                   
[37] "r_state[DE,Intercept]"                   
[38] "r_state[FL,Intercept]"                   
[39] "r_state[GA,Intercept]"                   
[40] "r_state[HI,Intercept]"                   
[41] "r_state[IA,Intercept]"                   
[42] "r_state[ID,Intercept]"                   
[43] "r_state[IL,Intercept]"                   
[44] "r_state[IN,Intercept]"                   
[45] "r_state[KS,Intercept]"                   
[46] "r_state[KY,Intercept]"                   
[47] "r_state[LA,Intercept]"                   
[48] "r_state[MA,Intercept]"                   
[49] "r_state[MD,Intercept]"                   
[50] "r_state[ME,Intercept]"                   
[51] "r_state[MI,Intercept]"                   
[52] "r_state[MN,Intercept]"                   
[53] "r_state[MO,Intercept]"                   
[54] "r_state[MS,Intercept]"                   
[55] "r_state[MT,Intercept]"                   
[56] "r_state[NC,Intercept]"                   
[57] "r_state[ND,Intercept]"                   
[58] "r_state[NE,Intercept]"                   
[59] "r_state[NH,Intercept]"                   
[60] "r_state[NJ,Intercept]"                   
[61] "r_state[NM,Intercept]"                   
[62] "r_state[NV,Intercept]"                   
[63] "r_state[NY,Intercept]"                   
[64] "r_state[OH,Intercept]"                   
[65] "r_state[OK,Intercept]"                   
[66] "r_state[OR,Intercept]"                   
[67] "r_state[PA,Intercept]"                   
[68] "r_state[RI,Intercept]"                   
[69] "r_state[SC,Intercept]"                   
[70] "r_state[SD,Intercept]"                   
[71] "r_state[TN,Intercept]"                   
[72] "r_state[TX,Intercept]"                   
[73] "r_state[UT,Intercept]"                   
[74] "r_state[VA,Intercept]"                   
[75] "r_state[VT,Intercept]"                   
[76] "r_state[WA,Intercept]"                   
[77] "r_state[WI,Intercept]"                   
[78] "r_state[WV,Intercept]"                   
[79] "r_state[WY,Intercept]"                   
[80] "lp__"                                    

mcmc_areas(posterior,
           pars = c("r_race[Asian,Intercept]",
                    "r_gender[Female,Intercept]",
                    "r_state[SC,Intercept]"),
           prob = 0.8) 

It looks like the model picked up the gender differences as well as the specific difference for Asians. However, it did not do a great job discriminating the difference for South Carolina. We could explore this further; the point is that it is important to check that our model is performing as expected.

Create the Census Data

Now we step into the post-stratification step. Here we have the population values from our fake data; in reality you would probably use estimates from a census. Because we are interested in predicting state-level opinion, we want to stratify at that level. If we wanted to make inferences at a different level, we would stratify to that level instead. I’m going to do both state overall and race within state for this example.
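
Concretely, the post-stratified estimate for a state \(s\) is a population-weighted average of the cell-level estimates,

\[
\theta_s = \frac{\sum_{j \in s} N_j \theta_j}{\sum_{j \in s} N_j},
\]

where \(\theta_j\) is the modeled probability of support in demographic-geographic cell \(j\) and \(N_j\) is that cell’s population count. The perc and perc_2 columns computed below are exactly these weights, at the state level and the race-within-state level, respectively.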


(
  post_strat_values <- population_data %>%
    group_by(division, state, race, gender) %>%
    summarise(n = n()) %>%
    group_by(state) %>% # The level at which you want to measure support
    mutate(perc = n / sum(n)) %>%
    ungroup() %>%
    group_by(state, race) %>%
    mutate(perc_2 = n / sum(n)) %>%
    ungroup()
)

# A tibble: 400 x 7
   division    state race     gender     n   perc perc_2
   <fct>       <chr> <chr>    <chr>  <int>  <dbl>  <dbl>
 1 New England CT    Asian    Female   956 0.0480  0.496
 2 New England CT    Asian    Male     970 0.0487  0.504
 3 New England CT    Black    Female  2011 0.101   0.503
 4 New England CT    Black    Male    1990 0.100   0.497
 5 New England CT    Hispanic Female  3053 0.153   0.512
 6 New England CT    Hispanic Male    2913 0.146   0.488
 7 New England CT    White    Female  4009 0.201   0.501
 8 New England CT    White    Male    3996 0.201   0.499
 9 New England MA    Asian    Female  1023 0.0512  0.502
10 New England MA    Asian    Male    1014 0.0507  0.498
# … with 390 more rows

Now we can add some draws from the posterior distribution to our dataset and then make inferences on them.


pred <- fit %>%
  add_predicted_draws(newdata = post_strat_values, allow_new_levels = TRUE, n = 100) %>%
  mutate(individual_support = .prediction) %>%
  rename(support = .prediction) %>%
  mean_qi() %>%
  mutate(state_support = support * perc) %>% # Post-stratified by state
  mutate(state_race = support * perc_2) # Post-stratified by race within state

Now we can make whatever inferences we want:


by_state_estimated <- pred %>%
  group_by(state) %>%
  summarise(estimated_support = sum(state_support)) %>%
  left_join(population_data %>%
              group_by(state) %>%
              summarise(true_support = mean(true_opinion)))

by_state_estimated_2 <- pred %>%
  group_by(state) %>%
  summarise(estimated_support = sum(state_support)) %>%
  left_join(population_data %>%
              group_by(state) %>%
              summarise(true_support = mean(true_opinion))) %>%
  left_join(survey %>%
              group_by(state) %>%
              summarise(survey_support = mean(true_opinion))) %>%
  gather(method, prediction, -true_support, -state)

Now we can look at how our predictions did for the population. We missed the Southern states, probably because partially pooling on division was a poor choice: the effects we introduced at the state level did not necessarily coincide with census divisions.


by_state_estimated %>%
  ggplot(aes(true_support, estimated_support, label = state)) +
  geom_label() +
  geom_abline(slope = 1) +
  theme_minimal() +
  xlim(.35, .55) +
  ylim(.35, .55)

But we can at least take comfort in the fact that had we made direct predictions from our survey alone, we would have been far more wrong!


by_state_estimated_2 %>%
  ggplot(aes(prediction, true_support, color = method, label = state)) +
  geom_label() +
  geom_abline(slope = 1) +
  theme_minimal() +
  xlim(.35, .55) +
  ylim(.35, .55)

Looking at support by race within a given state requires the second post-stratification variable we created earlier.


(by_state_race_estimated <- pred %>%
  group_by(state, race) %>%
  summarise(estimated_support = sum(state_race)) %>%
  left_join(population_data %>%
              group_by(state, race) %>%
              summarise(true_support = mean(true_opinion))))

# A tibble: 200 x 4
# Groups:   state [50]
   state race     estimated_support true_support
   <chr> <chr>                <dbl>        <dbl>
 1 AK    Asian                0.477        0.415
 2 AK    Black                0.459        0.488
 3 AK    Hispanic             0.480        0.509
 4 AK    White                0.481        0.506
 5 AL    Asian                0.512        0.414
 6 AL    Black                0.573        0.511
 7 AL    Hispanic             0.470        0.502
 8 AL    White                0.602        0.486
 9 AR    Asian                0.460        0.403
10 AR    Black                0.552        0.494
# … with 190 more rows
