All of the statistics below can be found on baseball-reference.com or FanGraphs, and many of them can be found on both.

(in alphabetical order)

**FIP** – FIP is a pitching metric whose acronym stands for “Fielding Independent Pitching”. It is meant to assess how well a pitcher independently helps his team by removing from the equation the components of pitching that rely on the defense; the formula is driven by walks, strikeouts, and home run rate. In terms of judging pitcher performance, it is generally more stable than ERA (meaning less likely to fluctuate from year to year) and more predictive of future performance, and thus is generally the superior metric to reference. FIP is scaled to ERA so that meaning can easily be applied to it: whatever you might consider a good ERA is also a good FIP, and the same goes for bad ERAs and bad FIPs. There is a demonstrably better metric than FIP called tRA, but tRA is scaled to runs allowed as opposed to earned runs, which makes it slightly counter-intuitive for me. In addition, from all I’ve read on it, while it is superior to FIP it is only slightly so, and I prefer FIP for the sake of being intuitive. I may bring tRA up from time to time in future blogs, but until I’m comfortable using it with confidence I will lean on FIP as my primary pitching metric here. The con with FIP is that while it takes into account many things that happen on the field, the number itself does not **directly** reference any single on-field event, whereas ERA directly references earned runs. It is therefore safe to say that FIP is more representative of true talent, and of what *is likely* to happen going forward, whereas ERA is more representative of what has already happened. There are also a handful of pitchers (normally exceptionally high-strikeout pitchers, like Valverde) who chronically out-perform their FIP. Once a trend of doing so has been established (“a trend” being 3, 4, 5 years), it is relatively safe to assume that it will continue into the future.
The reason high-strikeout pitchers tend to out-perform their FIPs is that they can make up for their other pitching deficiencies by ending innings with flurries of strikeouts. An example of FIP as a predictive measure is Edwin Jackson on the 2009 Tigers, whose FIP sat between 3.9 and 4.1 even as his ERA hovered around 1.9. It was clear that the bottom was going to fall out, and eventually it did. He still finished with an ERA well below his FIP, but unless he genuinely improves as a pitcher (a gain in true talent), I would expect his ERA going forward to more closely resemble last year’s FIP (4.28) than last year’s ERA (3.62).
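For reference, a commonly cited simplified form of the FIP calculation is short enough to sketch in code. The league constant (recalculated each season to put FIP on the ERA scale; roughly 3.1 in this era) and the sample pitching line below are my illustrative assumptions, not figures from this post:

```python
def fip(hr, bb, k, ip, league_constant=3.10):
    """Fielding Independent Pitching: only homers, walks, and strikeouts count.

    The constant is recalculated each year so league-average FIP matches
    league-average ERA; 3.10 is just a typical value for illustration.
    """
    return (13 * hr + 3 * bb - 2 * k) / ip + league_constant

# Hypothetical pitcher: 20 HR, 60 BB, 180 K over 200 IP
print(round(fip(20, 60, 180, 200), 2))  # 3.5
```

Note how the defense never appears in the formula: hits allowed on balls in play simply don't factor in.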

**ISO** – ISO stands for “Isolated Power”, a very simple-to-derive metric designed to measure how well a player hits for power, counting extra-base hits of all kinds, not just home runs. It is derived by subtracting a player’s batting average from his slugging percentage, effectively removing singles from the equation, since those are not power hits. A good rule of thumb: a player with an ISO of .200 or better is a pretty good power hitter, .250 or better is very good, and .300 is excellent. .150 is about so-so, and .100 generally marks a slap hitter. The 2009 ISO leaders were Albert Pujols (.331), Carlos Pena (.310), Prince Fielder (.303), Ryan Howard (.292), and Mark Reynolds (.284). At the bottom end, you had Luis Castillo (.043), Emilio Bonifacio (.056), Jason Kendall (.064), David Eckstein (.074), and Edgar Renteria (.078). Ichiro had an ISO of .113, and Chone Figgins had an ISO of .096. I will use this statistic periodically when referring to a player’s ability to be a power threat, or to his power generally.
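Since the derivation is a single subtraction, here it is as a quick sketch. Pujols’s 2009 batting average and slugging percentage (.327 and .658) are pulled from memory rather than from this post, but they conveniently reproduce the .331 ISO on the leaderboard above:

```python
def iso(slg, avg):
    """Isolated Power: slugging percentage minus batting average."""
    return slg - avg

# Albert Pujols, 2009: .658 SLG - .327 AVG
print(round(iso(0.658, 0.327), 3))  # 0.331
```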

**OPS** – On-Base Plus Slugging: the statistic is what it says, a player’s on-base percentage added to his slugging percentage. It is an offensive metric. OPS was the premier stat of the early 2000s and is probably the newest statistic to gain mainstream acceptance, as it is now generally a regular part of most television broadcasts and national conversations. While many other stats (see wOBA, wRC+, OPS+) do a better job of evaluating offensive performance, OPS is still a generally accurate quick-and-dirty way to show how well a player does. Generally speaking, 1.000+ places you at the top of the league, .900+ is great, .800+ is good, .750+ is so-so, .700+ is slightly below average, and below .700 isn’t good at all. The top 5 players in MLB in 2009 by OPS were Albert Pujols (1.101), Joe Mauer (1.031), Prince Fielder (1.014), Joey Votto (.981), and Derrek Lee (.972). The top 5 players of all time by OPS are Babe Ruth (1.164), Ted Williams (1.116), Lou Gehrig (1.080), Albert Pujols (1.055), and Barry Bonds (1.051). The major limitation of OPS is that it fails to properly value the fact that OBP is more important than slugging percentage, which means two players with the same .850 OPS might carry fairly different value. A player with a .400 OBP and a .450 SLG is more valuable than a player with a .350 OBP and a .500 SLG, for example. Also, while it is useful for showing generally how good a player is offensively, it doesn’t measure anything specific, and its component statistics don’t even share a denominator: OBP is computed per plate appearance, SLG per at-bat. In spite of its limitations, you can still get a good read through its use, though I will rarely use it in the blog other than in off-handed commentary.
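The mismatched denominators are easiest to see in code. The stat line below is hypothetical, invented purely for illustration:

```python
def obp(h, bb, hbp, ab, sf):
    """On-base percentage: times on base per plate appearance (AB + BB + HBP + SF)."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

def slg(singles, doubles, triples, hr, ab):
    """Slugging percentage: total bases per at-bat -- note the different denominator."""
    return (singles + 2 * doubles + 3 * triples + 4 * hr) / ab

# Hypothetical line: 150 hits (90 1B, 35 2B, 3 3B, 22 HR), 70 BB, 5 HBP, 5 SF, 500 AB
ops = obp(150, 70, 5, 500, 5) + slg(90, 35, 3, 22, 500)
print(round(ops, 3))  # 0.902
```

Because the two pieces are ratios over different totals, adding them is mathematically sloppy, which is part of why the weighted metrics below eventually displaced OPS.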

**OPS+** – Until the unveiling of the superior wRC+ metric (see below) in the fall of 2009, OPS+ was my favorite go-to statistic for comparing players and for general evaluation of offense. Due to the limited availability of wRC+ and the fact that it is still difficult to find in a “sortable” fashion (where I can highlight spans of time and isolate it, for example), I still lean heavily on OPS+ in scenarios where wRC+ is not available. OPS+ is essentially a player’s OPS adjusted for park and league factors, making it an easy comparison statistic across eras as well as within the span of a single season. Nobody would sensibly argue that a .900 OPS by a member of the Chicago White Sox is as impressive as a .900 OPS by a member of the San Diego Padres, for example (though both would be very impressive). OPS+ is scaled so that 100 is exactly league average, and each point above or below 100 represents one percentage point better or worse than a league-average hitter: a player with an OPS+ of 150 is 50% better than a league-average hitter, and a player with an OPS+ of 75 is 25% worse. In 2009, the league OPS+ leaders were Albert Pujols (188), Joe Mauer (170), Prince Fielder (168), Adrian Gonzalez (166), and Joey Votto (155). If you read the explanation of OPS above, you’ll note that Adrian Gonzalez was not in the top 5 in league OPS, but is fourth in league OPS+. This is the beauty of OPS+ at work, and why I prefer it to the raw statistic. The reason is that Gonzalez plays half of his games at Petco Park, which has been one of the best pitchers’ parks in all of baseball since its construction. His OPS+ takes that into account: even though his raw OPS was .958 (7th best in the league), his OPS+ of 166 was fourth, reflecting that if all players were subject to the same playing environment, he would have had the fourth-best production. In the raw numbers, however, he was hurt by his home park.
In the whole of history, the all-time OPS+ leaders are Babe Ruth (207), Ted Williams (191), Barry Bonds (181), Lou Gehrig (179), and Rogers Hornsby (175).
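For the curious, a commonly cited simplified form of the OPS+ calculation looks like the sketch below. The published baseball-reference version folds the park adjustment into the league baselines, which this sketch omits, and the hitter and league lines are invented for illustration:

```python
def ops_plus(obp, slg, lg_obp, lg_slg):
    """Simplified OPS+: 100 * (OBP/lgOBP + SLG/lgSLG - 1), so 100 = league average.

    The real version park-adjusts the league baselines; this sketch does not.
    """
    return round(100 * (obp / lg_obp + slg / lg_slg - 1))

# Hypothetical hitter (.380 OBP / .500 SLG) in a league averaging .333 OBP / .420 SLG
print(ops_plus(0.380, 0.500, 0.333, 0.420))  # 133 -> 33% better than average
```

Notice the formula credits OBP and SLG relative to their own league averages rather than simply summing them, which is one way it improves on raw OPS.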

**Rtot or TZ** – This refers to Total Zone, a fielding metric. It can be found on baseball-reference.com, where it is listed as “Rtot”; I may soon begin using the acronym TZ simply because it is more intuitive. It is generally regarded as the most accurate defensive metric we have both for measuring catchers and for measuring past players. Total Zone data goes back to 1957 (the first year for which the full box score record is intact, thanks to retrosheet.org) and, much like UZR, is an expression of the runs a player saved in the field. A 0 represents a defense-neutral player who isn’t particularly good or bad, plus or minus 5 trends toward slightly above or below average, +/- 5-10 represents a pretty good (or bad) player, and +/- 10 or more represents an outstanding (or awful) one. Looking at both extremes, Andruw Jones had a Total Zone rating of 35.7 in 1999, while in 2009 Adam Dunn had a rating of -26.9. Total Zone is a raw counting stat. Occasionally (if not often) I will use Rtot/yr, which is Total Zone per year. This scales performance to a full year of play and is useful in particular for looking at lengthy periods of time, as well as for attempting to project utility-player types who may not play full seasons at one position. Players with very little playing time can have badly skewed numbers when extrapolated to a full year, though. Tigers outfielder Clete Thomas, for example, posted a -39.6 in center field in 2008, but that was based on just 12 games and 118 innings. It is close to impossible to be that bad over a full season, and such a number isn’t accurate. Remember, it’s not the stats that lie, it’s the poor use of them.
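The small-sample distortion in the Clete Thomas case is just multiplication. The sketch below assumes a 1,200-inning full season and a raw total of about -3.9 runs; both numbers are my illustrative assumptions rather than figures from this post:

```python
def per_year(runs_saved, innings, full_season_innings=1200):
    """Scale a raw fielding-runs total to a full season's worth of innings.

    A tiny sample gets multiplied by a huge factor, which is how a handful
    of bad games turns into an absurd full-season number.
    """
    return runs_saved * full_season_innings / innings

# Roughly the Thomas case: about -3.9 runs over 118 innings,
# extrapolated to nearly -40 over a full season
print(round(per_year(-3.9, 118), 1))
```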

**UZR** – UZR is a fielding metric that stands for “Ultimate Zone Rating”. Because I lack a firm grasp of both the mathematics and the methodology behind it, I will defer to the original study (both its first part and its second part) to explain it in detail. In terms of how to read and understand it, the output is a number that attempts to determine how many runs a player saves defensively over the course of a season. It has component parts: for outfielders these are “range runs” (range), “error runs” (sure-handedness), and “arm” (arm strength); for infielders they are range runs, error runs, and the ability to turn double plays. UZR is currently the best metric for evaluating defense, though that will likely change soon, and it has several limitations. Defensive analysis in baseball is very much in its infancy, and the new technological tools being rolled out by MLB are making the state of defensive analysis very much like a volcanic planet: lava is constantly reshaping the surface and transforming what we know. Among its limitations is that UZR requires a large sample size to be fully reliable, about 3 years (approx. 4,050 innings played). Of course, over the span of 3 years a player’s ability in the field can change, so accurate UZR readings are like trying to hit a moving target. It is still useful in smaller quantities (like one season), but such figures need to be viewed with more suspicion; taking a single year of UZR is similar to drawing complete conclusions about a hitter based on 200 plate appearances. Another limitation is that, due to its methodology, it has some trouble judging first basemen. It still works for them, but it is less reliable. There are no values for it at all for catchers.
A final limitation is that, due to the manner in which it is tabulated, UZR figures only go back as far as 2002. This makes comparison between eras impossible, and it even cuts out the careers of many players who have been in the league a very long time. In general, it is always a good idea to use two or three defensive metrics when evaluating players, along with “the buzz”, given the relative ignorance that we as a baseball community have in objectively quantifying defense. I will almost always use at least two metrics when looking at a player. I would use three, but many places that carry such statistics require you to pay for access (or do not release them until after the season is over, making them not very fluid), so I use the ones I can acquire freely, which are not many. As with Total Zone, 0 represents a defense-neutral player who isn’t particularly good or bad, plus or minus 5 trends toward slightly above or below average, +/- 5-10 represents a pretty good (or bad) player, and +/- 10 or more represents an outstanding (or awful) one. Looking at both extremes, Andruw Jones had a UZR of 29.1 in 2009, while Adam Dunn had a UZR of -22.5 the same season. UZR is a raw counting stat. Occasionally (if not often) I will use UZR/150, which is UZR per year. This scales performance to a full year of play and is useful in particular for looking at lengthy periods of time, as well as for attempting to project utility-player types who may not play full seasons at one position. Players with very little playing time can have badly skewed numbers when extrapolated to a full year, though. Going back to Dunn, in 2003 he posted a UZR/150 of -71.4 in right field, but that was based on just 4 games and 24 innings. It is close to impossible to be that bad over a full season, and such a number isn’t accurate. Remember, it’s not the stats that lie, it’s the poor use of them.

**wOBA** – The acronym stands for “Weighted On Base Average”, and right now wOBA is essentially the best metric for evaluating offense if you’re looking for a single number to encapsulate performance. It is derived by weighting each offensive event (singles, doubles, triples, home runs, walks, stolen bases, caught stealing, etc.) by its value to run scoring (see linear weights). The end value is essentially runs created weighted by linear weights, but it is scaled to OBP in order to make it an intuitive number that can be understood by all fans. Essentially, whatever number one would consider a good OBP also makes a good wOBA; whatever one would consider a bad OBP is a bad wOBA, and so on. It is not a park-adjusted metric, so when using it you must always bear in mind the home park and playing conditions of the player in question. The 2009 wOBA leader was Albert Pujols at .449; second was Joe Mauer at .438.
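A minimal sketch of the weighting, using roughly the linear-weight values published in this era. Treat the exact weights as assumptions (they are recalculated every season, and I’ve omitted minor events like reaching on error), and the stat line is invented:

```python
# Approximate 2009-era linear weights; the real weights are recomputed yearly
WEIGHTS = {"bb": 0.72, "hbp": 0.75, "1b": 0.90, "2b": 1.24, "3b": 1.56, "hr": 1.95}

def woba(events, pa):
    """Weighted On Base Average: run-value-weighted events per plate appearance."""
    return sum(WEIGHTS[evt] * count for evt, count in events.items()) / pa

# Hypothetical season: 70 BB, 5 HBP, 90 singles, 35 doubles, 3 triples, 22 HR in 600 PA
line = {"bb": 70, "hbp": 5, "1b": 90, "2b": 35, "3b": 3, "hr": 22}
print(round(woba(line, 600), 3))  # 0.377 -- reads like a very good OBP
```

Note how a home run counts a bit more than twice a single, rather than the four-to-one ratio slugging percentage assumes; that is the linear-weights correction at work.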

**WPA** – While I find this metric very intriguing, I rarely reference it other than in passing because I find what it measures generally unimportant compared to a lot of other things that happen on the field. The acronym stands for “Win Probability Added”, and it is a metric for both hitters and pitchers (applied equally to both, on the same scale) that shows what a player added to his team’s chance to win games, on a play-by-play basis. It can be positive or negative, as players sometimes do things that lead to their team’s defeat. A team’s “chance to win” is based on an exhaustive study done a few years ago that combined information from every game situation since play-by-play records have existed (roughly back to 1957) with a mathematical formula suggesting what the “average team” would do in a given situation. This produces both a run expectancy (the number of runs an average team would be expected to score in a given situation) and a shift in the odds to win. It is a context-independent statistic, meaning the “odds” don’t change based on the teams or lineups involved. The “odds” in WPA are the same with Albert Pujols batting and Matt Holliday on deck with the bases loaded and 1 out in the 9th inning as they are with Gerald Laird batting and Adam Everett on deck. No regard is given to momentum, either. The whole idea of removing the context is to show what each player adds to the odds of victory if all things were equal.

Still confused? As I happen to have a chart in hand for demonstration, I’ll use it. This chart is from Game 163 of the 2009 MLB season between the Tigers and Twins. Both teams start every game at 50%, since there is a 50/50 chance to win. In the top of the first inning, Curtis Granderson flied out to center field, which increased the Twins’ odds to win from 50% to 52.3%. Thus, for his at-bat, Granderson’s WPA was -0.02 (technically -0.023), and pitcher Scott Baker’s WPA for the at-bat was 0.02. Placido Polanco struck out in the next at-bat, raising the Twins’ odds to win to 53.9%; Polanco’s WPA for the at-bat was -0.016, and Baker’s was 0.016. After having pitched to both Granderson and Polanco, Baker’s WPA for **the inning** was now 0.04 (technically 0.039). Baker closed out the inning by retiring Magglio Ordonez, raising the Twins’ odds to win to 55%.
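The bookkeeping above reduces to a running difference. Using the Tigers’ win probabilities implied by the chart (100% minus the Twins’ odds):

```python
def wpa_deltas(win_probs):
    """Per-play WPA for the batting team: the change in its win probability."""
    return [round(after - before, 3) for before, after in zip(win_probs, win_probs[1:])]

# Tigers' odds: game start, after Granderson's fly out, after Polanco's strikeout
tigers_odds = [0.500, 0.477, 0.461]
print(wpa_deltas(tigers_odds))  # [-0.023, -0.016] -- matching the two at-bats above
```

The pitcher’s WPA for each play is simply the negative of the batter’s, which is why Baker’s inning total mirrors the hitters’ losses.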

Scoring plays obviously move these numbers far more. With the Tigers winning 1-0 in the top of the 3rd, they had a 57.8% chance to win the game at that point. Miguel Cabrera hit a two-run home run to make the score 3-0 and lift the odds to 75.6%. That two-run homer raised his WPA (and lowered Baker’s) by 0.178, considerably more impactful than the 0.02 plays referenced earlier.

Because it is driven by game situation, the biggest WPA gains (and losses), as one might imagine, come from walk-off plays, particularly walk-off home runs. There are also big gains for lead-changing hits and late-inning heroics. Pitchers get big bumps for recording strikeouts or forcing double plays with many runners on base in close games; basically, for getting out of innings. Referenced on a seasonal basis, the statistic is simply an accumulation of a player’s WPA from every play during the year. For an idea of where the bar sits, the 2009 leaders were Albert Pujols (8.24), Prince Fielder (7.79), Joey Votto (6.35), Zack Greinke (6.07), and Chris Carpenter (5.41). Justin Verlander was 5th among all pitchers and 4th among all starters with 4.19, which also led the Tigers as a team.

The worst five players were Brad Lidge (-4.54), Todd Wellemeyer (-4.16), Dioner Navarro (-3.34), Chris Young (-3.06), and Cesar Izturis (-3.06). In every single game, the winning team’s players will have a cumulative WPA of 0.50, and the losing team’s players will have a cumulative WPA of -0.50 (the winning team having gone from 50% chance to win to 100% over the course of the game, and the losing team from 50% to 0%).

The biggest con in applying the statistic is that it completely ignores fielding. Errors count 100% against the pitcher and in favor of the hitter, and poor fielders are not docked via WPA any more than great fielders are rewarded. Curtis Granderson’s catch that literally saved the game against Cleveland in May 2009? It doesn’t show up in his WPA. One reason I don’t like to lean heavily on the statistic is that it loosely measures “clutchness”, which I don’t personally consider a particularly worthwhile skill (more accurately: to whatever extent it is a skill, it doesn’t manifest itself in a fashion that makes it worth much). While consistently good players will dominate the top of the leaderboard, further down you start getting players of various skill levels and skill sets who essentially had the most “clutch hits” in a given year. Getting clutch hits is valuable, but it has been shown not to be repeatable for most players, and therefore a measure of it such as WPA has limited use. I do feel it has utility in judging relief pitchers, who are often brought into situations specifically to get the big outs, so that is where I use it most often.

**wRC+** – wRC+, said aloud as “W-R-C plus”, stands for weighted runs created (plus). It is similar to OPS+, only it is a more accurate metric for judging offensive performance. It is probably my favorite one to use at this time, and for the foreseeable future it is the go-to metric I will use to compare or describe players. It is adjusted for both park and league factors, thus doing as good a job as possible at describing the true talent level of each player. It is based on weighted runs created, and as such it is essentially the “plus” form of wOBA. Like OPS+, the scale is extremely intuitive: 100 is exactly league average, and each point above or below 100 is one percentage point better or worse than the average hitter, such that a player with a wRC+ of 150 is fifty percent better than the average hitter, and a player with a wRC+ of 50 is fifty percent worse. You’ll find that players deviate from 100 far more trending upward than downward, which is intuitive if you think about it: to be 25 percent worse than the average player is really, REALLY bad, but to be 25 percent better is only a borderline all-star, depending on the distribution of contribution (i.e., home runs, stolen bases, and, unfortunately, RBI). For 2009, Albert Pujols had a wRC+ of 184 and Joe Mauer’s was 174. Due to the park and league adjustments, wRC+ is also like OPS+ in that it makes it extraordinarily easy to compare players across eras, since it judges each player against his peers and environment and not simply by bottom-line counting figures. The top 10 hitters of all time by wRC+ are Babe Ruth (204), Ted Williams (196), Lou Gehrig (182), Barry Bonds (177), Ty Cobb (177), Mickey Mantle (177), Rogers Hornsby (175), Albert Pujols (173), Joe Jackson (171), and Jimmie Foxx (168). Note that those are career wRC+ figures, and that individual seasons varied.
Ruth, for example, posted a wRC+ of 248 in 1920. Williams posted a 233 in 1941. Bonds posted a 249 in 2002.
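Reading the scale is one subtraction; a trivial sketch using the career figures listed above:

```python
def pct_vs_average(plus_stat):
    """Any 'plus' stat (wRC+, OPS+): 100 is league average; each point is one percent."""
    return plus_stat - 100

# Ruth's career wRC+ of 204 vs. Pujols's 173, per the list above
print(pct_vs_average(204), pct_vs_average(173))  # 104 73
```

In other words, Ruth was 104% better than his league’s average hitter over his career, Pujols (so far) 73% better than his.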
