Inference and Modeling

Updated: 03 September 2023

Based on this EdX Course

Inference

Inference is the process of using information from a sample to draw conclusions about the whole population it represents

Parameters and Estimates

We can plot the results of a random ‘election poll’ draw with the following

library(ggplot2)
library(tidyverse)
library(dslabs)

ds_theme_set()
take_poll(1000)

Sample Election Poll

The goal of statistical inference is to predict the parameter p using the observed data in a sample of size N

We would like to predict the proportion of blue beads, which is p. Based on this we can also identify the proportion of red beads and the spread

Proportion of Red Beads

1 - p

Spread

p - (1 - p) = 2p - 1

The Sample Average

The sample average is the observed proportion, which we use to estimate the parameter p. It is calculated as follows

\bar{X} = \frac{X_1 + X_2 + X_3 + X_4 + ... + X_N}{N}

In this case, the value of an individual X_i is 1 if it is our outcome of interest, or 0 if not
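As a quick sketch (the values of p and N below are hypothetical, not from the course), we can simulate N such draws in R and compute the sample average:

set.seed(1)
p <- 0.45   # hypothetical true proportion of blue beads
N <- 1000   # sample size
X <- sample(c(1, 0), size = N, replace = TRUE, prob = c(p, 1 - p))
mean(X)     # the sample average, our estimate of p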

Polling vs Forecasting

A poll measures opinion at a specific point in time, whereas forecasting takes into consideration the fact that the probability will change over time and therefore aims to predict the probability of some event at a future time

Properties of an Estimate

The expected value of our estimate is the same as the parameter of interest p

Expected Value

E(\bar{X}) = p

We can decrease our standard error by increasing our sample size, as can be seen from the formula below

Standard Error

SE(\bar{X}) = \sqrt{p(1-p)/N}

By the Law of Large Numbers, the standard error gets smaller as we increase the sample size, so our estimate converges to p
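A small sketch of this (with a hypothetical value of p) shows the standard error shrinking as N grows:

p <- 0.45                      # hypothetical value of p
N <- c(10, 100, 1000, 10000)   # increasing sample sizes
data.frame(N = N, se = sqrt(p * (1 - p) / N))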

Central Limit Theorem in Practice

Suppose we want to know whether or not our estimate is sufficiently accurate (i.e. we want its standard error), but we do not know the actual probability p. We can estimate it with the following

Estimate of the Standard Error

\hat{SE}(\bar{X}) = \sqrt{\bar{X}(1-\bar{X})/N}

Using this we can estimate the probability that our estimate is within 1% of p:

# se is the estimated standard error, sqrt(x_bar * (1 - x_bar) / N)
pnorm(0.01 / se) - pnorm(-0.01 / se)

Margin of Error

The margin of error is two times the standard error. Using this, we can say there is roughly a 95% chance that our estimate will be within two standard errors of p
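A rough Monte Carlo check of this claim (p, N and the number of simulations are hypothetical values):

set.seed(1)
p <- 0.45; N <- 1000; n_sims <- 10000
x_bar <- replicate(n_sims, mean(sample(c(1, 0), N, replace = TRUE, prob = c(p, 1 - p))))
se <- sqrt(p * (1 - p) / N)
mean(abs(x_bar - p) <= 2 * se)   # proportion within the margin of error, close to 0.95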

The Spread

Because we only have two parties, we know the spread can be estimated by

d = 2\bar{X} - 1

Since we are multiplying a random variable by two, the standard error of this new variable is also multiplied by two

2\hat{SE}(\bar{X})
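For example, a sketch with a hypothetical poll result:

x_bar <- 0.48; N <- 1000                      # hypothetical poll result
d_hat <- 2 * x_bar - 1                        # estimated spread
se_d  <- 2 * sqrt(x_bar * (1 - x_bar) / N)    # standard error of the spread
c(d_hat, se_d)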

Bias

Polling is more complex than taking random selections because we do not necessarily know whether we are reaching all groups equally. For example, an internet poll may be inaccurate because it excludes people without access to the internet

Intervals and P-Values

A 95% confidence interval is a range, computed from the data, constructed so that there is a 95% chance it contains p:

Pr(-2 \leq \frac{\bar{X}-p}{\hat{SE}(\bar{X})} \leq 2) \approx 0.95

It is the intervals that are random, not p. The 95% relates to the probability that the random interval we selected contains p
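A minimal sketch of constructing such an interval, using a hypothetical poll result:

x_bar <- 0.48; N <- 1000                  # hypothetical poll result
se_hat <- sqrt(x_bar * (1 - x_bar) / N)   # estimated standard error
x_bar + c(-1.96, 1.96) * se_hat           # 95% confidence interval for p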

Power

Power can be thought of as the probability of detecting a spread different from zero

P-Values

P-values are closely related to confidence intervals: the p-value is the probability of seeing, by chance, a result as extreme as or more extreme than the one observed, under the null hypothesis (for example, that the spread is zero).
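As a sketch, assuming a hypothetical poll with a sample average of 0.52 and N = 100, the p-value for the null hypothesis that the spread is zero (p = 0.5) could be computed as:

x_bar <- 0.52; N <- 100                 # hypothetical poll result
z <- (x_bar - 0.5) / (0.5 / sqrt(N))    # under the null p = 0.5, SE(X_bar) = 0.5/sqrt(N)
2 * (1 - pnorm(abs(z)))                 # two-sided p-value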

Poll Aggregation

Poll aggregation is the task of combining the results of multiple polls to obtain an overall estimate that is more accurate than any individual poll

Poll Data and Pollster Bias

We can run into differences between polls whose expected values do not seem to be aligned. This is known as pollster bias

Data Driven Models

If we make use of a random selection of data from the different polls, our standard error now includes pollster-to-pollster variability, and this standard deviation becomes an unknown parameter. Because we are still using independent random variables, the CLT still applies.

We can, however, still estimate this standard deviation with the following

Sample Standard Deviation

s = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (X_i - \bar{X})^2}

Using the sd function in R we can calculate the sample standard deviation:

sd(polls$spread)
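A sketch of the resulting data-driven estimate, assuming a data frame polls with one spread value per poll (for example built from dslabs' polls_us_election_2016 data):

x_bar <- mean(polls$spread)                  # aggregate estimate of the spread
se <- sd(polls$spread) / sqrt(nrow(polls))   # standard error across polls
x_bar + c(-1.96, 1.96) * se                  # 95% confidence interval for the spread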

Bayesian Statistics

In Bayesian statistics we speak about probability on the basis that the parameter itself is not a fixed value, but a random quantity with its own distribution.

Bayes’ Theorem

Pr(A|B) = \frac{Pr(A \text{ and } B)}{Pr(B)}

Or equivalently

Pr(A|B) = \frac{Pr(B|A)Pr(A)}{Pr(B)}
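A minimal simulation sketch that checks the theorem numerically, using arbitrary made-up probabilities for the events A and B:

set.seed(1)
n <- 1e6
A <- rbinom(n, 1, 0.3) == 1                               # Pr(A) = 0.3
B <- ifelse(A, rbinom(n, 1, 0.8), rbinom(n, 1, 0.1)) == 1 # Pr(B|A) = 0.8, Pr(B|not A) = 0.1
mean(A[B])                        # direct estimate of Pr(A | B)
mean(B[A]) * mean(A) / mean(B)    # Bayes' theorem: Pr(B|A) Pr(A) / Pr(B)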

The Hierarchical Model

The hierarchical model provides a mathematical description for why results may not match what we expect. It takes into consideration two levels of variability: an intrinsic component, like an individual's ability to play a game, and the associated randomness or luck

Posterior Distribution

The probability distribution of p given the observed data y

Expected Value for Posterior Distribution

E(p|Y) = B\mu + (1-B)Y

E(p|Y) = \mu + (1-B)(Y - \mu)

B = \frac{\sigma^2}{\sigma^2 + \tau^2}

From this we can see that B is close to one when σ is large relative to τ, in which case the posterior mean is pulled towards the prior mean μ

Standard Error for Posterior Distribution

SE(p|Y)^2 = \frac{1}{1/\sigma^2 + 1/\tau^2}

This is known as an empirical Bayesian approach, since the prior is based on observed data. It delivers a better interval, known as a Bayesian credible interval

Note that the posterior distribution is normally distributed
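A sketch of the calculation with hypothetical values for the prior (μ, τ) and the observed data (Y, σ):

mu <- 0; tau <- 0.035      # hypothetical prior mean and standard deviation
Y <- 0.02; sigma <- 0.01   # hypothetical observed average and its standard error
B <- sigma^2 / (sigma^2 + tau^2)                 # shrinkage factor
posterior_mean <- B * mu + (1 - B) * Y
posterior_se <- sqrt(1 / (1 / sigma^2 + 1 / tau^2))
posterior_mean + c(-1.96, 1.96) * posterior_se   # 95% credible interval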

Mathematical Representation of Models

Given a collection of polls, from which we sample a random value from a random poll, we can describe the variability of that data with

X_{i,j} = d + \epsilon_{i,j}

where ε is the associated error value

To adjust this value for pollster-to-pollster variability, we can add a term for each pollster's house bias

House Bias Adjusted Sampling

X_{i,j} = d + h_i + \epsilon_{i,j}

To compensate for the general bias that may exist in all polls, we add a further term b

General Bias Adjusted

X_{i,j} = d + b + h_i + \epsilon_{i,j}

Although these bias terms are unknown, we add them because they have a significant effect on the standard deviation of our estimates

Adjusted Average Value

\bar{X} = d + b + \frac{1}{N}\sum_{i=1}^{N} \epsilon_i

Adjusted Standard Deviation

\sqrt{\sigma^2/N + \sigma_b^2}

Note that because b takes the same value in every poll, averaging more polls does not reduce the variance it contributes: the σ_b^2 term does not shrink as N grows
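A small sketch (with hypothetical σ and σ_b) showing how the bias term keeps the standard error from shrinking as the number of polls grows:

sigma <- 0.02; sigma_b <- 0.025   # hypothetical sampling and bias standard deviations
N <- c(5, 25, 100)                # number of polls
data.frame(N = N,
           se_no_bias   = sigma / sqrt(N),
           se_with_bias = sqrt(sigma^2 / N + sigma_b^2))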

Forecasting

Forecasting is about making predictions based on the variability of poll results over time.

Time Variation in Model

Y_{ijt} = d + b + h_i + b_t + \epsilon_{ijt}

Model Trend

Y_{ijt} = d + b + h_i + b_t + f(t) + \epsilon_{ijt}

The T Distribution

Because we introduce additional variability when we estimate σ with s, the resulting confidence intervals are over-confident: they are not wide enough to take this additional variability into consideration

Confidence Interval

Z = \frac{\bar{X} - d}{\sigma/\sqrt{N}}

Confidence Interval with s instead of σ

Z = \frac{\bar{X} - d}{s/\sqrt{N}}

The theory tells us that Z follows a t-distribution with N-1 degrees of freedom, which controls the variability (the heaviness of the tails). This holds even for data that is still somewhat different from a normal distribution
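A sketch of a t-based confidence interval, again assuming a data frame polls with a spread column; qt replaces the normal quantile to account for the extra variability:

N <- nrow(polls)
x_bar <- mean(polls$spread)
se <- sd(polls$spread) / sqrt(N)
x_bar + c(-1, 1) * qt(0.975, df = N - 1) * se   # t-based 95% confidence interval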

Chi Squared Test

The chi-squared test aims to calculate how likely it is that we would see, by chance, a deviation as large as or larger than the one observed, in the case of categorical or binary data
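A minimal sketch using R's chisq.test on a made-up two-by-two table of counts:

tab <- matrix(c(30, 10, 10, 30), nrow = 2,
              dimnames = list(outcome = c("success", "failure"),
                              group   = c("A", "B")))
chisq.test(tab)   # tests whether outcome and group are independent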