Chapter 13
Factorial Analysis of Variance
In Chapter 12 we focused on "one-way" analysis of variance, which is the appropriate analysis when you have only one variable (or factor) with multiple levels.
In the current chapter, we will focus instead on situations where we have multiple variables, each with multiple levels.
For example, "fairness" of the midterm as a function of gender and year:
We can ask three questions: (1) Do opinions of fairness differ across the genders? (2) Do opinions of fairness differ across the years? (3) Is the effect of year on opinions different for the different genders?
Terminology
Main effects. The first two questions are examples of what we will be calling "main effects". One way to think of main effects is the following. Assume we have variables A & B; the main effect of A would be whether there was an effect of A collapsing across levels of B. It is as if the variable B were of no interest.
Interactions. The third question is an example of an interaction. In words, an interaction can be stated in the following manner: two variables are interacting when the effect of the first variable is different at different levels of the second variable.
Simple Effects. We will also be talking about simple effects. Simple effects relate to questions like: if we only consider second-year students, do opinions concerning the exam differ depending on gender?
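The three kinds of questions can be sketched on hypothetical numbers. The 2 (gender) x 2 (year) cell means below are invented purely for illustration:

```python
# Invented 2 (gender) x 2 (year) table of mean fairness ratings.
means = {("men", "yr1"): 6.0, ("men", "yr2"): 6.0,
         ("women", "yr1"): 6.0, ("women", "yr2"): 4.0}

def marginal(level, position):
    """Mean of the cell means at one level of a factor (collapsing the other)."""
    vals = [m for key, m in means.items() if key[position] == level]
    return sum(vals) / len(vals)

# Main effect of gender: compare the marginal means, collapsing across year.
gender_effect = marginal("men", 0) - marginal("women", 0)              # 6.0 - 5.0

# Interaction: is the effect of year the same for each gender?
year_effect_men = means[("men", "yr2")] - means[("men", "yr1")]        # 0.0
year_effect_women = means[("women", "yr2")] - means[("women", "yr1")]  # -2.0

# Simple effect question: looking only at women, does year matter? (here, yes)
```

With these numbers there is a gender main effect (marginal means 6.0 vs. 5.0) and an interaction (the year effect is 0 for men but -2 for women).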
In-class example with memory for words of different imageability and frequency.
Stimuli
High Frequency
[word list not reproduced]
Low Frequency
[word list not reproduced]
Notation
Frequency
[table of cell, row, and column means not reproduced]
Plotting the Data
Main Effects:
Interaction:
Is the effect of frequency different at different levels of imageability?
Simple Effects:
Factorial Designs
The experiment we just ran used a factorial design.
What that means is that we included all combinations of the levels of our two variables (sometimes called a fully crossed design).
Between versus Within-Subjects Designs
We also had different subjects in each cell of the design. When you do that, you have a between-subjects design.
We could instead have tested all subjects in all conditions; that would be called a complete within-subjects design because all the variables were manipulated within subjects.
Finally, we could have a mixed design in which one (or more) variables are within subjects, and one (or more) other variables are between subjects.
Chapter 13 only considers between-subjects designs; Chapter 14 will consider within and mixed designs.
Computations in Two-Way ANOVA
Warning: Once again, my way of presenting this stuff will be different from the way the text does it.
Logic:
Things that Stay the Same
SS_{total}. SS_{total} is still calculated as just the total sum of squares. Thus, ignoring all of the manipulated variables, just sum the data points and sum the squares of each data point, then:
SS_{total} = ΣX² − (ΣX)²/N
SS_{within}. SS_{within} again simply equals the sum of the SS for each cell. Thus, you must first calculate the SS for each cell using the formula above, then sum them. I have done the individual SSs; from there:
SS_{within} = SS_{11} + SS_{12} + SS_{21} + SS_{22}
= 5.88 + 17.50 + 16.88 + 5.50 = 45.76
So, SS_{total} = 62.00 and SS_{within} = 45.76 … so far, easy, right?
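The two computations can be sketched in Python. The data below are invented (4 cells, 3 scores each), not the in-class word-memory data, so the totals differ from the 62.00 and 45.76 above:

```python
# Invented 2x2 data set: 4 cells, 3 scores per cell.
cells = {
    ("hi_freq", "hi_img"): [5, 6, 7],
    ("hi_freq", "lo_img"): [4, 5, 6],
    ("lo_freq", "hi_img"): [6, 7, 8],
    ("lo_freq", "lo_img"): [2, 3, 4],
}

def ss(scores):
    """Sum of squared deviations about the mean of `scores`."""
    m = sum(scores) / len(scores)
    return sum((x - m) ** 2 for x in scores)

all_scores = [x for c in cells.values() for x in c]
ss_total = ss(all_scores)                       # ignore the design entirely
ss_within = sum(ss(c) for c in cells.values())  # SS inside each cell, summed
```

For this toy data, ss_total = 34.25 and ss_within = 8.0.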
Things that Stay Pretty Much the Same
The SS_{treat} is also calculated in pretty much the same way as before EXCEPT that now you need to compute an SS_{treat} for each variable in the design.
Thus, for each variable we are going to compute an SS which is simply the sum of squares representing the degree to which the means at each level of the variable deviate from the grand mean, multiplied by the n per cell (because of the CLT … remember?)
The grand mean for our data is 2.75
For the frequency variable, the high-frequency mean is 2.76 and the low-frequency mean is 2.75. So:
Similarly:
Now for Something Completely Different
The last thing we want is the sum of squares due to the interaction between frequency and imageability
To get that we first calculate the SS for all of the cells in the design around the grand mean (multiplied by n per cell)
This "variance" is due to the interaction plus the two main effects, so by subtracting the main effects we are left with the SS for the interaction. So:
So, the SS for the interaction is:
SS_{FxI} = SS_{cells} − SS_{freq} − SS_{img}
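The subtraction logic can be sketched on invented 2x2 data (3 scores per cell; not the in-class numbers): compute the SS of the cell means around the grand mean, then take away the two main-effect SS terms.

```python
# Invented 2x2 data set: 4 cells, 3 scores per cell.
cells = {
    ("hi_freq", "hi_img"): [5, 6, 7],
    ("hi_freq", "lo_img"): [4, 5, 6],
    ("lo_freq", "hi_img"): [6, 7, 8],
    ("lo_freq", "lo_img"): [2, 3, 4],
}
n = 3  # scores per cell
cell_means = {k: sum(v) / len(v) for k, v in cells.items()}
grand = sum(cell_means.values()) / len(cell_means)  # equal ns, so this is the grand mean

def main_effect_ss(position):
    """SS for one factor: level means around the grand mean, times scores per level."""
    levels = {}
    for key, m in cell_means.items():
        levels.setdefault(key[position], []).append(m)
    per_level_n = n * len(next(iter(levels.values())))  # raw scores at each level
    return per_level_n * sum((sum(ms) / len(ms) - grand) ** 2 for ms in levels.values())

ss_cells = n * sum((m - grand) ** 2 for m in cell_means.values())
ss_freq = main_effect_ss(0)
ss_img = main_effect_ss(1)
ss_interaction = ss_cells - ss_freq - ss_img
```

Here ss_cells = 26.25, ss_freq = 0.75, ss_img = 18.75, so ss_interaction = 6.75.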
Now on to the source table … with a brief stopover at degrees of freedom
Demonstration
Degrees of Freedom
df_{FxI} = df_{freq} × df_{img} = 1 × 1 = 1
df_{within} = df_{total} − (df_{freq} + df_{img} + df_{FxI}) = 31 − 3 = 28
Source Table
Source    SS      df    MS      F
Freq              1
Image             1
F x I             1
Within    45.76   28    1.63
Total     62.00   31
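Building a source table is mechanical once you have the SS and df values: MS = SS/df, and each F = MS_effect / MS_within. A sketch using invented SS and df values (not the in-class numbers):

```python
# Invented SS and df values for the three effects of a 2x2 design.
effects = {"Freq": (0.75, 1), "Image": (18.75, 1), "F x I": (6.75, 1)}
ss_within, df_within = 8.0, 8
ms_within = ss_within / df_within

table = {}
for name, (ss, df) in effects.items():
    ms = ss / df                                       # MS = SS / df
    table[name] = {"SS": ss, "df": df, "MS": ms, "F": ms / ms_within}
```

Each F in the table would then be compared to its critical F.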
Remember Hypothesis Testing
Remember, all this ANOVA stuff is done in the context of experimental hypotheses.
In the case of 2 × 2 ANOVAs there are actually three null hypotheses: one for each main effect and one for the interaction.
For example: H0: μ_{high freq} = μ_{low freq}
Once a source table is obtained, each of these hypotheses is then tested by comparing the obtained F for that hypothesis to its appropriate critical F
Simple Effects
Often the ANOVA will tell you that there is a significant interaction, but it stops there.
To properly interpret an interaction we usually need more specific information than that.
For example, consider the following interactions:
In order to accurately describe these interactions, we have to know whether the effect of variable B is significant at each level of variable A
This involves simple effects tests
Computing Simple Effects
The computation of simple effects is no different from the other sums of squares we have been calculating, except that we focus in on one row or column.
SS_{freq at hi image}:
SS_{freq at lo image}:
You evaluate these simple effects just like any other sum of squares. Divide them by their df (number of cells minus 1) to get a MS. Then divide that by MS_{error} to get an F.
For the above two examples, F_{obtained} = 0.36.
This is not significant, implying that there was no frequency effect at either level of imageability.
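The recipe can be sketched directly. The cell means, n, and error term below are invented for illustration, not the in-class values:

```python
# Hypothetical cell size and error term.
n, ms_error = 3, 1.0

def simple_effect_F(cell_means):
    """SS of one row/column's cell means around that row/column's mean,
    times n, then MS = SS/df and F = MS / MS_error."""
    level_mean = sum(cell_means) / len(cell_means)
    ss = n * sum((m - level_mean) ** 2 for m in cell_means)
    df = len(cell_means) - 1  # cells in the row/column minus 1
    return (ss / df) / ms_error

f_freq_at_hi_img = simple_effect_F([6.0, 7.0])  # frequency at high imageability
f_freq_at_lo_img = simple_effect_F([5.0, 3.0])  # frequency at low imageability
```

Each F would then be compared to the critical F with (df, df_within) degrees of freedom.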
An Example From the Top
Say that I am interested in understanding phobias and, as a first step, I want to see if fear builds over time when a phobic is put in a feared situation.
So, I get 24 claustrophobics and 24 control subjects and randomly assign 8 of each to stand in a closed elevator for 2, 5, or 10 minutes, then to rate their fear on a 10-point scale with 1 being fearless and 10 being terrified.
Say that I give you the following information:
Simple Effects
To better understand the interaction, we could either look at the effect of group at each level of time, or look at the effect of time at each level of group.
We will do the latter, as it seems to make the most sense … so:
So, we could describe the interaction by saying that fear increased over time for phobics, but fear did not change at all over time for the controls
Multiple Comparisons
Notice that the simple effects test gives you more information about the interaction, but it still doesn’t tell you which means are different from which other means.
For that kind of information, you can use all of the same multiple comparison techniques described in Chapter 12 with a factorial design as well, and you use them in the exact same way.
For example, if we did a Tukey test on the "Phobics in Elevators" dataset …
CT1 CT2 CT3 PhT1 PhT2 PhT3
5 5 5 7 8 9
CT1  CT2  CT3  PhT1  PhT2  PhT3
 5    5    5    7     8     9
--------------
(means joined by the underline do not differ significantly)
In words then, these results suggest the following.
First, for control subjects time had no effect at all as their mean fear level was not different across the three times examined
At all of the times tested, the phobic subjects showed more fear than the control subjects
Each additional amount of time significantly increased the fear level of the phobic subjects, such that they were more scared at 5 minutes than at 2 minutes, and even more scared at 10 minutes than at 5 minutes
The moral of the multiple comparisons part of this chapter is that when you do multiple comparisons in a factorial design, you basically act like it is a single factor design with each cell of the multifactor design being like a level of the single factor design.
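That moral can be sketched with a Tukey HSD over all six cells at once. The cell means and n = 8 per cell come from the example above; MS_error = 0.4 and the critical q = 4.37 are hypothetical stand-ins for the values you would compute and look up:

```python
import math

# Six cell means, treated as six levels of a single factor.
means = {"CT1": 5, "CT2": 5, "CT3": 5, "PhT1": 7, "PhT2": 8, "PhT3": 9}
ms_error, n, q_crit = 0.4, 8, 4.37  # ms_error and q_crit are hypothetical

# HSD: the smallest mean difference that counts as significant.
hsd = q_crit * math.sqrt(ms_error / n)

names = list(means)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
significant = [(a, b) for a, b in pairs if abs(means[a] - means[b]) >= hsd]
```

With these stand-in values, every pair differs significantly except the three control-vs-control pairs, matching the underline pattern described above.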
Magnitude of the Effect
As described in Chapter 11, it is often desirable to quantify the magnitude of an observed effect
This is also true in factorial designs, with the only difference being that you now have multiple effects that can be quantified.
Once again, one can use η² (the SS relevant to the effect divided by SS_{total}) as a quick and dirty way of calculating how much of the total variation in the data was due to the variable of interest.
However, as mentioned, η² is biased in that it overestimates the true magnitude of an effect.
The textbook goes into a description of a revised ω² estimate that can be calculated for factorial designs.
However, for our purposes, you don’t have to worry about understanding that.
Instead, know why you would want to calculate the magnitude of an effect, know how to do so via η², know that η² is a biased estimator and that ω² is better, and know that if you ever need to calculate ω² the text shows you how.
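A quick sketch of η² for each effect in a factorial design, using SS values invented for illustration:

```python
# Invented SS values from a 2x2 source table.
ss = {"Freq": 0.75, "Image": 18.75, "F x I": 6.75, "Within": 8.0}
ss_total = sum(ss.values())

# Eta squared: SS for the effect divided by SS_total, per effect.
eta_sq = {name: round(ss[name] / ss_total, 3)
          for name in ("Freq", "Image", "F x I")}
```

Here the imageability effect accounts for about 55% of the total variation, the interaction about 20%, and frequency about 2%.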
Power Analysis for Factorial Experiments
Recall again that power is the probability that you will be able to reject a null hypothesis.
Power depends on the size of the effect you expect AND the number of subjects you plan to run
In Chapter 11 we said that to calculate power in a one-way ANOVA, we do the following:
Step 1: Calculate φ′
Step 2: Convert φ′ to φ
Step 3: Get the associated value from the noncentral F table
Power = 1 − β
Now focus on Step 1. That formula can be restated as the square root of the sum of squares relevant to the effect we are interested in, divided by k, and then divided by the mean squared error:
φ′ = √[(SS_{effect}/k) / MS_{error}]
So, let’s say we are using a two-way factorial design … now we have 3 null hypotheses: (1) the main effect of A, (2) the main effect of B, and (3) the interaction of A & B.
Assuming you have some estimate of the mean squared error …
All you need to do to find the power associated with these nulls is to estimate (based on past research or an educated guess) what you think your final means will look like. With those estimates, in combination with your intended n, you can compute sums of squares and use the exact same logic as we did before.
The only real difference is that we now have 3 power analyses we could do (assuming 2 variables)
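The Step 1 and Step 2 arithmetic can be sketched as two small functions; every input you would feed them (SS estimate, k, MS_error, n) is itself an estimate from past research or an educated guess:

```python
import math

def phi_prime(ss_effect, k, ms_error):
    """Step 1: phi' = sqrt((SS_effect / k) / MS_error)."""
    return math.sqrt((ss_effect / k) / ms_error)

def phi(ss_effect, k, ms_error, n):
    """Step 2: phi = phi' * sqrt(n); phi then goes to the noncentral-F chart."""
    return phi_prime(ss_effect, k, ms_error) * math.sqrt(n)
```

In a two-way design you would run this three times, once per null hypothesis, each with its own SS estimate.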
Note: Read the meat of these sections in the text (ignoring their computations if you like)
Unequal Sample Sizes
Unequal sample sizes cause big problems for factorial designs because they mess with the independence of the two variables, allowing effects of one variable to produce apparent effects in the other.
Consider the following example from the text:
If you look at the actual cell totals, there is clearly no effect of state; however, if you look at the row totals, there appears to be an effect of state.
The apparent effect of state is due to the "drinking" effect and the unequal ns in the various cells.
Rough Solution to Unequal ns
The row and column means we calculated are what are called "weighted" means
We could similarly compute an "unweighted" column mean, which would simply be the mean of the cell means, as opposed to the mean of all the numbers that went into the cell means.
Note that when ns are equal, the weighted and unweighted means are the same
However, if we calculate unweighted means in the previous example, notice that they seem to provide a better depiction of the cell data (means of 17 for both states)
We could then do our analysis using the unweighted means instead
However, in order to do this we have to "act as though" we were in an equal-n condition with those row and cell means … but what n do we use?
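The weighted/unweighted distinction can be sketched directly. The cell means and ns below are invented, but arranged like the text's example: both states have the same cell means, so their unweighted means agree (17 for both) while their weighted means do not:

```python
# Each state holds (cell mean, cell n) pairs for drinkers and non-drinkers.
rows = {
    "state_A": [(20.0, 5), (14.0, 15)],
    "state_B": [(20.0, 15), (14.0, 5)],
}

def weighted_mean(cells):
    """Mean of every raw score: cell means weighted by their ns."""
    return sum(m * n for m, n in cells) / sum(n for _, n in cells)

def unweighted_mean(cells):
    """Mean of the cell means, ignoring the ns."""
    return sum(m for m, _ in cells) / len(cells)
```

The weighted means differ (15.5 vs. 18.5) even though the cell means are identical, which is exactly the spurious "state effect" described above.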
Higher-Order Factorial Designs
So far we have been focusing on experiments that manipulate two variables at a time … however, often an experimenter will manipulate three or more variables.
Say we have three variables … then we actually have 3 main effects, 3 two-way interactions, and one three-way interaction.
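The count of testable effects falls out of simple combinatorics: every nonempty subset of the factors is one effect. A sketch (factor names taken from the example below):

```python
from itertools import combinations

factors = ["B07", "Text", "Quiz"]

# Every nonempty subset of factors is a main effect or an interaction.
effects = [c for r in range(1, len(factors) + 1)
           for c in combinations(factors, r)]
main_effects = [e for e in effects if len(e) == 1]
two_way = [e for e in effects if len(e) == 2]
three_way = [e for e in effects if len(e) == 3]
```

With three factors that is 2³ − 1 = 7 effects in total: 3 + 3 + 1.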
For example:
A prof wants to better understand the factors that affect performance in Psych C08. He thinks three variables are important: (1) understanding of basic statistics, which he thinks is reflected in the students’ B07 marks, (2) the textbook, and (3) the use of quizzes to keep the students’ attention.
So, he chooses to teach 4 versions of his class next year, which represent the cells of a textbook (old vs. new) by quiz (have vs. not have) design. However, he also splits performance by mark in B07 (B or better vs. less than B).
Assume he gets the following data:
Less than B                     B or better
Text                            Text
[cell data for the 2 (B07) x 2 (text) x 2 (quiz) design not reproduced]
Assuming there was an equal number of subjects in each cell … then what about the following?
3-way interaction (B07 by Text by Quiz)?
P.S.: forget about the computations for now … just worry about being able to interpret the data.
For example, on a test you might get something like we have been discussing along with the following source table:
Note: I made up the entire source table below … if you did the computations on the above you would not get these numbers
Source        SS    df    MS    F
B07 x T x Q   41    1     41    4.10
[other rows of the made-up source table not reproduced]
Based on this I could ask you to describe the results of the experiment … could you?