- Academic content

- Populations and samples
- Parameters and statistics
- Inference and description
- Why not go straight to the population?

- Data types
- Data from more than one variable
- Data sources

- Observational studies and experiments
- When should we experiment, and when should we observe?
- An advantage of experiments over observational studies
- An advantage of observational studies over experiments

- Bias
- Making a sample random
- Sampling plans
- Simple random sample
- Systematic sample
- Stratified sample
- Cluster sample

- Applying treatments
- Randomizing the groups
- Blocking the groups
- Placebos and control groups
- Blinding the experiment

- Counting
- Bar charts
- Relative frequency bar charts
- Pie charts

- Initial summary of the data
- The histogram
- Reading the histogram
- Reading the histogram - the middle of the data
- Reading the histogram - symmetry and skew
- Manageable amounts of data: using a stem-and-leaf plot
- Time plots

- Scatterplots
- Studying scatterplots
- Types of relationship
- Strength of relationship
- Relationships between one numerical variable and one categorical variable
- Relationships between two categorical variables

- Mean
- Median
- Mode
- Comparing the measures of center
- Sensitive and insensitive measures
- Outliers
- Measures of center and pictures
- Measuring the center of data: a conclusion

- Range
- The inter-quartile range
- The five-number summary
- Box plots
- A closer look at variation
- Developing a formula for variation
- Variance and standard deviation
- Using the mean and standard deviation

- Population measures
- Why population measures?

- Correlation
- Interpreting correlation
- Calculating a correlation
- The nature of the relationship
- Straight lines
- Line of best fit
- Population measures

- Patterns in randomness
- Relative frequency as probability
- Probability rules: those that we have already and those that we’d like

- Events and outcomes
- Special kinds of events
- Properties of events
- Extending the notion of probability
- The rules of probability

- A guiding example
- Counting events
- Examples of calculating probabilities
- The general addition rule

- Conditional probability
- Decision trees
- The general multiplication rule
- Independence

- Relating conditional probabilities
- Developing Bayes’ Theorem
- Using Bayes’ Theorem
- Full version of Bayes’ Theorem

- Random variables
- Probability distributions
- Presenting discrete random variables
- Measures of a discrete random variable
- Studying combinations of random variables
- Rules for expected value and variance

- Situations described by the binomial distribution
- The sample space for the binomial distribution
- The assumptions for the binomial distribution
- Developing the binomial distribution
- Getting successes in a particular order
- The number of particular ways to get x successes
- The binomial distribution formula
- The expected value, variance and standard deviation
- Different binomial distributions
- Examples of using the binomial distribution

- The probability of every outcome is zero
- Getting a sense of probability for continuous variables
- Probability density function - a concept
- The probability density function
- Measures of a continuous random variable

- The probability density function for normal distributions
- Normal distribution probabilities
- The standard normal distribution
- The transformation formula
- The distribution of values in the normal distribution
- The normal distribution as an approximation to the binomial distribution

- Populations of samples
- Sampling distributions

- A concrete example
- Properties of the sampling distribution of the mean

- The population proportion
- Sampling from the population
- Describing the sampling distribution of the proportion

- Using one sample to make a conjecture about the population
- What sort of population would that sample come from?
- Inference
- Estimation: a concept
- Estimation: a method
- The nature of estimates
- Testing: a concept
- Testing: a method

- Estimating a population parameter
- Imprecision and uncertainty
- Confidence intervals
- Estimation assertions

- Using sampling distributions to construct a confidence interval
- So what is confidence?
- Using the standard normal distribution
- Different confidence intervals

- Levels of significance and critical values
- The confidence interval for the population mean
- Constructing the confidence interval for the population mean
- Determining a suitable sample size for a confidence interval

- The t-distributions
- Applying the t-distribution to statistical estimation
- The confidence interval for the mean, σ unknown
- Constructing the confidence interval for the population mean

- The sampling distribution of the proportion
- The z-score approach
- The confidence interval for π

- Making a hypothesis about a population parameter
- The null hypothesis and alternative hypothesis
- To reject or not reject

- Claims become hypotheses
- Assuming the null hypothesis is true
- A sample as evidence
- Testing versus estimation

- The test statistic
- Levels of significance, critical values, and the region of rejection
- Step-by-step guide to conducting a hypothesis test

- Uncertainty in testing
- Type I and Type II errors
- How likely are the errors?
- Using bigger samples
- Power

- Testing the mean
- Is σ known or unknown?
- Hypothesis testing the mean when σ is known
- Hypothesis testing the mean when σ is unknown
- Hypothesis testing the proportion

- How likely is the sample?
- Step-by-step guide to the P-value approach

- The distribution of X₁ - X₂
- Comparing sampling distributions
- Sampling distributions and inference
- Independent samples

- The scenario for comparing two population proportions
- Estimating the difference π₁ - π₂
- Testing the difference π₁ - π₂

- The scenario for comparing two population means
- Assuming σ₁ and σ₂ are known
- Not assuming σ₁ and σ₂ are known
- Assuming σ₁ and σ₂ are equal
- A summary of the different methods

- The mean difference
- The scenario for studying the mean difference
- Inferences about the mean difference

- Association
- Scatterplots
- Correlation
- Line of best fit

- The regression line
- Explained and unexplained variation
- Residuals
- Prediction intervals

- The situation described by the model
- The assumptions of the model
- Inferences about β₀ and β₁
- Predicting values and estimating means

- The effect of multiple explanatory variables
- The multiple regression model
- The sample regression equation
- Interpreting the coefficients of a multiple regression model
- Measuring explained variation: the adjusted coefficient of determination
- Inferences in multiple regression