Today I want to focus on the impact of an NCAA FBS team's strength of schedule on its winning percentage. To do this, I will look at a simple model of NCAA FBS production where team performance (as measured by winning percentage) is a function of team points scored and opponent points scored. A simple linear regression reveals three helpful results. First, the estimated marginal impact of a point scored is positive and statistically significant, and that of a point surrendered is negative and statistically significant; second, the estimated marginal impacts of a point scored and a point surrendered are equal in absolute value when rounded to three decimal places; and third, this very simple model "explains" about 83% of the variation in winning percentage for the 2011 NCAA FBS season.
Frankly, the first had better be true, since a win is defined to occur when points scored exceed points surrendered. The second is more interesting: I conclude that scoring more points is just as important as preventing the opponent from scoring, in terms of the impact on winning. The third shows that this simple model is at least useful in evaluating team performance.
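For readers who want to see how such a regression could be run, here is a minimal sketch in Python with statsmodels. This is only an illustration, not my actual code; the file name and column names (win_pct, points_for, points_against) are placeholders for the 2011 season data.

```python
# Minimal sketch: winning percentage regressed on points scored and
# points surrendered. File and column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per FBS team for the 2011 season.
teams = pd.read_csv("fbs_2011.csv")

base = smf.ols("win_pct ~ points_for + points_against", data=teams).fit()
print(base.summary())    # coefficients, t-statistics, p-values
print(base.rsquared)     # R-squared, about 0.83 in my data
```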
Many have argued that the strength of a team's schedule should also be included in modeling team performance and in evaluating better quality teams. I have previously disagreed on this point, but I am willing to re-evaluate the impact of strength of schedule on team winning percentage. To do so, I need a measure of strength of schedule for each NCAA FBS team, which I can then use to determine the statistical significance of strength of schedule with respect to team winning percentage. Running the numbers with the simple model above, I find that strength of schedule is statistically insignificant (i.e., statistically, strength of schedule has zero impact on NCAA FBS winning percentage during the 2011-2012 football season). You may be thinking that my measure of strength of schedule is incorrect, so I also ran the numbers using Jeff Sagarin's strength of schedule measure for the 120 NCAA FBS teams and found the exact same result: strength of schedule is statistically insignificant.
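In terms of the sketch above, testing this amounts to adding a strength-of-schedule column (here called sos, again a placeholder name) to the regression and looking at its p-value:

```python
# Sketch: add strength of schedule to the base model and test whether its
# coefficient is statistically different from zero.
sos_model = smf.ols("win_pct ~ points_for + points_against + sos",
                    data=teams).fit()
print(sos_model.pvalues["sos"])   # a large p-value means SOS is insignificant
```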
But what about teams in different FBS conferences? If we take into account teams in different football conferences (and group the four independent schools as one conference), does that make a difference? I like the way you think - so I ran the numbers adjusting for the different NCAA FBS conferences along with points scored and points surrendered, and got very similar results to those from the simple production model above. The estimated coefficients (marginal impacts) are all statistically significant at the 99% level of confidence, so I am at least 99% confident that the effects of points scored, points surrendered, and the adjustment for each of the ten NCAA FBS conferences plus the independent teams are different from zero. I did this as a check on the effect that conferences have on winning percentage, and I find that different conferences do have different effects on winning percentage (though small) and that these effects are statistically significant.
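In the same sketch, the conference adjustments can be expressed as a categorical (dummy-variable) term; the conference column, with the independents grouped as one value, is again a placeholder:

```python
# Sketch: conference fixed effects via a categorical term, alongside points
# scored and points surrendered.
conf_model = smf.ols("win_pct ~ points_for + points_against + C(conference)",
                     data=teams).fit()
print(conf_model.summary())   # includes a dummy coefficient per conference
```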
So now let's add a measure of a team's strength of schedule to this conference-adjusted model. Whether I use my own measure or the Sagarin measure, I still find that strength of schedule is statistically insignificant with respect to team winning percentage.
Thus, I conclude that strength of schedule "does not matter" (it is statistically insignificant) in terms of how well NCAA FBS teams performed in 2011-2012.