This article was submitted by Dave from Goose Joak.
Seasoned fantasy baseball players will tell you that pitchers are harder to forecast than hitters. There are many reasons, but chief among them:
1. Pitchers progress in a less linear fashion than hitters. So predicting a breakout season is tougher.
2. Pitchers are subject to more severe injury problems than hitters.
A Common Question
Given all that, it is imperative in most leagues that you scour the waiver wire for talent. In doing so, I find myself asking some of the same questions over and over. For instance:
Pitcher X (somebody like Tom Gorzelanny, perhaps) has a career ERA of 4.50. Suddenly he’s put up three quality starts in a row and is pitching better than he ever has in his career. Is it time to buy?
It feels like we face this question all the time. Just this year we have cases like Dallas Braden, Carl Pavano, Ian Kennedy, Ricky Romero, Brett Cecil, Tom Gorzelanny, Randy Wells, Kris Medlen, Gio Gonzalez, Doug Fister…the list goes on and on. One way we tend to evaluate this choice, particularly on sites like FB365, is by looking at xFIP.
xFIP is a useful metric for evaluating pitchers. It is the fly-ball-regressed version of FIP: a pitcher’s actual home runs are replaced with the number expected from a league-average HR/FB rate, giving an estimate of what his ERA should be based on his component statistics (strikeouts, walks and fly balls).
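As a rough illustration, the standard xFIP formula can be sketched in a few lines. The league constant and HR/FB rate below are typical ballpark values, not exact figures for any particular season, and the stat line is invented:

```python
# Sketch of the xFIP calculation. The FIP constant (~3.10) and the
# league HR/FB rate (~10.5%) vary by season; these are assumptions.
def xfip(fly_balls, walks, hbp, strikeouts, ip,
         lg_hr_per_fb=0.105, fip_constant=3.10):
    # Key difference from FIP: actual HR allowed are replaced with
    # the HR total expected from a league-average HR/FB rate.
    expected_hr = fly_balls * lg_hr_per_fb
    return (13 * expected_hr + 3 * (walks + hbp)
            - 2 * strikeouts) / ip + fip_constant

# Invented example: 200 IP, 200 fly balls, 50 BB, 5 HBP, 180 K
print(round(xfip(200, 50, 5, 180, 200), 2))  # 3.49
```

Because the home-run term is regressed rather than observed, xFIP is less noisy than ERA or FIP over the short samples this article is concerned with.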
One question I often wrestle with is when to “believe” that a pitcher’s xFIP has significantly changed, particularly when looking at small sample sizes. One of my favorite examples is Justin Verlander circa 2009. His xFIP over his first three years in the majors was pretty uninspiring: not horrible, but not phenomenal.
In 2009, already on a fairly short leash, he began so poorly that he landed on many a waiver wire. His first four starts totaled 21 IP with a 9.00 ERA and 9 walks. But then, just as many people had given up hope, something clicked. He struck out 44 batters in the next 29 innings, walking just 8, which would compute to something like negative infinity xFIP (tongue in cheek). So obviously something had changed for him.
He went on to have a phenomenal season (3.45 ERA, 3.26 xFIP), improving on his previous season by about a run and a half. The impact of this is huge: about 0.25 on your team’s ERA in a 1200 IP league.
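The arithmetic behind that team-level estimate is simple: one starter's ERA improvement moves team ERA in proportion to his share of the innings. The 200 IP workload below is my assumption for a full season from one starter:

```python
# How a 1.5-run ERA improvement from one starter moves team ERA
# in a 1200 IP league (200 IP for the starter is an assumption).
pitcher_ip = 200
league_ip = 1200
era_improvement = 1.5

# Team ERA shifts by the improvement weighted by innings share.
team_era_impact = era_improvement * pitcher_ip / league_ip
print(round(team_era_impact, 2))  # 0.25
```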
Is This Always the Case?
But how often is this really the case? Do short, sudden changes in xFIP (either good or bad) do a better job of predicting future success than a pitcher’s past history?
One way to look at this is to compare a pitcher’s recent xFIP to their previous season’s xFIP, and evaluate which metric better predicts their future success. In other words, does their 2009 xFIP do a better job of predicting future success than their short season of 2010 xFIP?
For the analysis, I selected starting pitchers who threw at least 50 IP in 2009. That way we have a baseline of true MLB performance.
I then added another requirement to ensure they had pitched consistently in 2010. I required at least 20 IP in each of March/April, May and June for 2010. I set the IP cutoff at 10 IP in July since the month is not yet complete.
Given this set of criteria, I had a list of 71 starting pitchers to work with. It does exclude rookies and pitchers coming back from injury, which is not an insignificant omission.
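The comparison itself boils down to two correlations per time slice: recent xFIP against rest-of-season xFIP, and prior-season xFIP against rest-of-season xFIP. A minimal sketch with invented numbers for three pitchers (the real inputs were xFIP values from FanGraphs):

```python
# Toy version of the comparison: which predictor tracks
# rest-of-season xFIP more closely? All values are invented.
import numpy as np

def pearson_r(x, y):
    # Pearson correlation via the off-diagonal of the 2x2 matrix.
    return float(np.corrcoef(np.asarray(x, float),
                             np.asarray(y, float))[0, 1])

recent_xfip      = [3.20, 4.50, 4.00]  # 2010 March/April
last_season_xfip = [3.90, 4.10, 4.40]  # 2009 full season
future_xfip      = [3.40, 4.60, 4.10]  # 2010 May-July

r_recent = pearson_r(recent_xfip, future_xfip)
r_history = pearson_r(last_season_xfip, future_xfip)
print(r_recent > r_history)  # True
```

In the actual analysis each list would hold one value per pitcher for all 71 starters, repeated for each of the three time slices below.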
However, what I found was pretty interesting. The correlations suggested that more recent performance (even in small sample sizes) did a better job of predicting future success than the entire season xFIP from 2009. This was consistent for each of the three slices I looked at.
Predicting 2010 Performance for May, June and July
- 0.63 (recent statistics: 2010 March/April)
- 0.60 (historical statistics: 2009 full season)

Predicting 2010 Performance for June and July
- 0.64 (recent statistics: 2010 March through May)
- 0.57 (historical statistics: 2009 full season)

Predicting 2010 Performance for July
- 0.58 (recent statistics: 2010 March through June)
- 0.55 (historical statistics: 2009 full season)
So in each case, the recent xFIP did a better job of predicting future xFIP than the previous season.
Interestingly enough, this appears to hold even in the shortest sample I looked at. March and April alone, an average sample of just 29 innings, better predicted future success than the entire 2009 season.
I thought that was pretty cool. Then, I decided to look at cases with the largest changes year over year. That is, to look specifically at pitchers who had huge gains or declines in their xFIP. Were their results consistent with the analysis as a whole?
I looked at the second data set from above, which showed the largest gap between the two correlations.
What I found is that of 12 pitchers who improved by 0.50 or better in xFIP from the previous year, most (8) retained at least some of that improvement going forward:
(Table: the 8 pitchers who continued to outperform their 2009 xFIP in June and July, and the 4 who did not.)
Conversely, for the 12 pitchers who declined by 0.50 or more in xFIP, most (10) retained at least some of that loss going forward.
(Table: the 10 pitchers who continued to underperform their 2009 xFIP in June and July, and the 2 who did not.)
This makes me feel pretty confident that in cases of rapid improvement or decline, when it is reflected in component statistics, we should pay attention. These pitchers are more likely to retain some of that success (or decline) going forward.
Furthermore, this analysis doesn’t even take into account cases where a pitcher significantly underperformed, then went on the DL due to injury (I have excluded those cases from this analysis). I would suspect this would further tilt the balance in favor of evaluating recent, short term performance over the previous season’s performance.
So, bottom line: it is useful to evaluate short term fluctuations in xFIP. Perhaps even more so than a full season’s worth of performance the prior year.
Dave has played fantasy baseball since 1994. He enjoys making custom baseball cards over at Goose Joak.
All statistics in this analysis via Fangraphs.com