Evaluating Defense

Dave · January 24, 2006 at 12:45 pm · Filed Under Mariners 

If you’ve been hanging around the blog for any length of time, you’ve probably come to realize that we like numbers. They give us a better way to evaluate what we think we saw, and they compensate for our internal biases. Since a lot of baseball is essentially a set of isolated individual plays, it’s fairly easy to evaluate a player’s value to the team through his statistics, if you know which ones to use.

However, defensive evaluations have always been elusive to the statistical community. The numbers that were recorded, such as fielding percentage, were basically useless information, more misleading than anything else. For years, the players who have made the most memorable plays have been regarded as the elite defensive players simply because we’ve had no real objective standard of how to evaluate defense.

In the past 3-4 years, however, we’ve seen significant steps forward in the realm of defensive statistics. People interested in understanding the game better have begun purchasing play-by-play data that gives them far more information than we’ve had available previously, and have used that specific information to create systems that do a much better job of figuring out just how much value a player’s defense adds to his team. Defensive statistical analysis is still in its infancy, though, and as such, there is no consensus system that has established itself as the industry standard. There are several systems built on solid theories that evaluate different parts of defensive prowess, and sometimes, these systems give widely contradictory results. So, what do we do then, if two systems, both well designed, can’t agree?

At this point, my preference is to take a prism perspective. All of the systems have strengths, and all have flaws. So I’d rather not take any of them at face value, but instead develop a general idea of a player’s abilities based upon as much good input as I can get. So, since it’s been requested and there’s nothing going on in Mariner-land, here’s an overview, with links for those interested, for the defensive statistics that I lend some credence to, and how I attempt to put them together to get an overall idea of a player’s contributions with the glove.

The most widely accepted system is Mitchel Lichtman’s UZR. It was initially introduced in 2003 in two articles over at Baseball Primer. UZR numbers for 2000-2003 were then posted on TangoTiger’s website, where they can still be found. Complete numbers for 2004 and 2005 are not available, as Lichtman was hired by the St. Louis Cardinals and his work became proprietary data for their club. He has released some UZR numbers for the past two years in different discussion threads at Baseball Think Factory, but for the most part, current UZR data is no longer public information.

After UZR, the best system is likely David Pinto’s PMR, or Probabilistic Model of Range. He has published his data at his Baseball Musings blog, and you can read the explanation of the system here. Pinto’s PMR is similar to Lichtman’s UZR, as they’re based on the same principles and both use proprietary play-by-play data, but the bonus is that Pinto is still publishing his work. A Baseball Think Factory poster going by Blackhawk converted PMR into a run metric on his blog, to help line up PMR with the other available numbers. One downside to PMR that has been discussed is its inclusion of line drives in the model, which UZR leaves out. Most people, including me, prefer a model without line drives, as turning line drives into outs appears at this point to be a non-repeatable skill.
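To make the idea of a run conversion concrete, here’s a rough sketch of how a PMR-style out expectation can be turned into runs. This is not Blackhawk’s actual method; the 0.8 runs-per-play figure is a generic rule of thumb, and the probabilities below are invented for the example.

    # Illustrative only -- not Blackhawk's actual conversion. Assumes a generic
    # value of roughly 0.8 runs for turning a would-be hit into an out.

    def outs_above_expected(plays):
        # plays: list of (out_probability, was_out) pairs from a PMR-style model
        expected = sum(p for p, _ in plays)           # outs an average fielder would make
        actual = sum(1 for _, made in plays if made)  # outs this fielder actually made
        return actual - expected

    def runs_saved(plays, runs_per_play=0.8):
        # convert outs above/below expectation into an approximate run value
        return outs_above_expected(plays) * runs_per_play

    # A fielder who converts 3 of 4 chances that an average fielder converts
    # 2.5 of comes out roughly +0.4 runs.
    sample = [(0.95, True), (0.80, True), (0.50, True), (0.25, False)]
    print(round(runs_saved(sample), 2))  # 0.4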

Not everyone has access to play-by-play data, however, so several people have attempted to find a proxy for UZR or PMR with freely available information. Most of those efforts focus on adjusting the Zone Rating number that is available in every player’s ESPN profile. Chris Dial’s work on ZR is very good, and he’s probably the leading proponent of the value of ZR as an analytical tool. He’s also published an article explaining ZR that is essential reading if you’re interested in defensive statistical analysis. He also posted an interesting article on defense that led to some great discussion, and again, is basically required reading if you’re as fascinated by this stuff as I am. Also, Dial provides a worksheet with all the data for 2005, which is quite useful.

Another Baseball Think Factory poster, who goes by Chone Smith, posted his article on Tweaking Zone Rating, which is a similar effort to Dial’s work. Again, there is some more good discussion in the linked thread that is worth reading. And, like Dial, Smith provides a worksheet that shows all his data for 2005. Hooray for open source.

Using slightly different methodology than the others, David Gassko chipped in with his RANGE system, explained here, with a spreadsheet that contains data for 2004. He also penned an article on the Hardball Times site awarding his Gold Gloves, based on the numbers given by RANGE. The RANGE numbers were also featured in the 2006 Hardball Times Baseball Annual, and David has since tweaked his system a bit. People who purchase the THT Annual also get access to a spreadsheet with the RANGE data for 2005.

In the more subjective category, Tangotiger has published his Fans Scouting Report for several years now, asking people to fill out a survey of defensive evaluations for players they’ve watched on a regular basis. While it’s not numerically based like the other systems we’re discussing, a compilation of subjective opinions can offer some interesting insight, and I’d be remiss if I didn’t mention it.

Lastly, the guys over at Baseball Prospectus also publish defensive numbers based on a system developed by Clay Davenport. The data is available on every player’s Davenport Translation player card on BP’s website, such as this one for Ichiro. BP’s system is the least transparent, however, as the nuts and bolts of how it works haven’t really been explained publicly, and significant changes have reportedly been made over the past few years, though no one really knows what those changes are. Of all the systems mentioned here, I give BP’s numbers the least credibility.

Those seven systems all attempt to evaluate defense on an individual level, which can be quite a challenge. Addressing it on a team level is significantly easier, and Dave Studeman did a great job of presenting team-wide adjusted defensive efficiency in the aforementioned Hardball Times 2006 Annual. Studes’ article is a great sanity check for all the individual defense numbers.

In addition to these, John Dewan is publishing The Fielding Bible in February, which uses batted ball data similar to what’s behind Studes’ article on DER and David Gassko’s RANGE system. It should be worth checking out, and could be a valuable addition to the field.

Phew. That’s enough linking for now. That should give you a good overview of the different systems that I think add something significant to the conversation on defense. However, that’s an awful lot of information, and as mentioned previously, it rarely all agrees. So, once we gather all this information, how do we turn it into a conclusion?

Let’s use a couple of Mariner-centric examples. The first will be Ichiro, widely accepted as the best defensive right fielder in the game. Let’s take a look at how the defensive metrics grade him out.

UZR: +7 runs per season, 2001-2003
PMR: +3.5 runs in 2004; 2005 PMR data for RFs will be available shortly.
Dial’s ZR: -2, 2005
Smith’s ZR: +3, 2005
RANGE: +5 in 2004, +18 in 2005.
BP: +6 runs per season, 2001-2005; +11 in 2005.
Fan Scouting: 92/100, Best Defensive Player in Game At Any Position, third year in a row
Studes’ DER: Mariners outfield defense +20 as a whole for 2005.

And you thought Ichiro would grade out well across the board, didn’t you? Dial’s ZR system is the only one that has him below average, and that’s only for one year of data, while everyone else has him at differing degrees of goodness. The Zone Rating based systems have him as being just solid, while RANGE has him as pretty darn excellent, BP’s metrics have him being quite good, and the Fans Scouting Report thinks he’s the best defensive player alive. On a macro level, we know for a fact that the Mariners outfield defense has been well above average since Ichiro arrived in the States.

So, what would you conclude from that sphere of information? I’d say that it’s extremely unlikely that Ichiro is really a below average defensive player and fooling almost every system and every person who watches him play. Essentially, we “know” that he’s a good defender. We just don’t have a great grasp of how good. Knowing that RANGE has some issues with right field, and that its run conversion numbers look a bit inflated to me, I’d likely settle on an opinion that Ichiro’s glove is worth something like 5 to 15 runs above an average right fielder in any given year. Good? Yes. Best defensive player on the planet? No. In my opinion, the spectacular plays he makes, combined with the friendliness of Safeco Field for outfielders, make him appear to be slightly better than he really is. But there’s almost no question that he is, in fact, a valuable defensive asset.
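If you want to see what that kind of blending looks like in practice, here’s a minimal sketch. The weights are nothing more than a rough, illustrative sense of how much to trust each system, and the RANGE figure is a midpoint of the 2004 and 2005 marks; treat it as a toy, not a formula anyone actually uses.

    # Blend several systems' run estimates into a rough center plus a range.
    # The weights are arbitrary illustrations of relative trust, not published values.

    ichiro = {                    # runs above average per season: (estimate, weight)
        "UZR":      (7.0, 3.0),
        "PMR":      (3.5, 2.0),
        "Dial_ZR":  (-2.0, 1.0),
        "Smith_ZR": (3.0, 1.0),
        "RANGE":    (11.5, 1.0),  # rough midpoint of +5 (2004) and +18 (2005)
        "BP":       (6.0, 0.5),
    }

    def blend(estimates):
        total_w = sum(w for _, w in estimates.values())
        center = sum(v * w for v, w in estimates.values()) / total_w
        low = min(v for v, _ in estimates.values())
        high = max(v for v, _ in estimates.values())
        return center, low, high

    center, low, high = blend(ichiro)
    print(f"center ~{center:.1f} runs, systems span {low:.1f} to {high:.1f}")

With these made-up weights the center lands around +5 runs and the individual systems span roughly -2 to +12; the point isn’t the exact answer, it’s that a range is a more honest summary than any single number.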

How about another example, and one that highlights one of the main flaws of defensive statistics currently available? Yuniesky Betancourt, who, I think, we all agree can play a little defense. All numbers just for 2005, obviously.

UZR: +11.5 (thanks Tango)
PMR: +22
Dial’s ZR: -13.5 (initial post had this number incorrect)
Smith’s ZR: -6
RANGE: -3 (thanks David)
BP: -11
Fans Scouting: 86/100, top rated SS
Studes’ DER: Mariners were about 15 runs below average as an infield.

Talk about divergent opinions. You’ve got anywhere from 11 runs below average over the course of a full season to 22 runs above average. That’s just a massive swing, and obviously, both can’t be correct. Why would the numbers turn out so differently?

Sample size. The generally accepted principle in defensive statistics is that you need at least two years of data to generate any kind of real conclusion about a player’s abilities, and you’d prefer to have more. With Betancourt, we basically have 1/3 of one season. There are just way too many non-fielding factors that could influence the number over that period of time. Ball in play distribution is a huge factor in small sample defensive numbers, for instance. If Betancourt happened to receive more easy-to-field grounders than others, his number would be through the roof. If teams were whacking uncatchable balls into the hole, his rating would suffer, and because of the small time frame, the impact of a few extra balls here and there would be magnified greatly.
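To put a rough number on how much noise a third of a season carries, here’s a back-of-the-envelope simulation under made-up but plausible assumptions: roughly 200 groundball chances, a 75 percent league conversion rate, and 0.8 runs per play. None of these figures comes from the systems above.

    # Simulate an exactly average shortstop over ~1/3 of a season and see how
    # far luck alone can push his run rating.
    import random

    def simulated_rating(chances=200, out_rate=0.75, runs_per_play=0.8):
        outs = sum(random.random() < out_rate for _ in range(chances))
        return (outs - chances * out_rate) * runs_per_play  # runs above/below average

    random.seed(1)
    ratings = sorted(simulated_rating() for _ in range(10000))
    print(f"5th-95th percentile: {ratings[500]:.1f} to {ratings[9500]:.1f} runs")

Even a perfectly average fielder routinely shows up several runs above or below average in a sample that size purely by chance, before ball in play distribution even enters the picture, and the swings only look bigger once you project them out to a full season.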

When it comes to defensive evaluations, you simply cannot ignore the issue of sample size. Limited data samples can be more misleading than informative. If you don’t have a big enough sample, ignore the data. Seriously, I don’t value PMR loving Betancourt any more than I discount BP’s system hating him. I think they’re both nearly worthless, because they are drawing from too small a pool to be taken seriously.

When it comes to defense, historical context is huge. With players like Betancourt, we don’t have that, so we need to use the best available information we have, and in cases like his, that’s scouting reports. The M’s organization loves his defense. We loved his defense. Those who filled out the fans scouting report loved his defense. There’s no way that 50 games of data should overrule that information in your own mind. Scouting matters, especially when the data is flawed.

So, when discussing defensive evaluations, I say use as much good information as possible. Look at all the systems in context. Get as many years of data as possible. Look at the scouting reports. And then, draw conclusions that accurately represent your confidence level. If the systems aren’t accurate enough yet to give us one number (they aren’t), use a range. It’s okay to say that Ichiro is about 5-15 runs above average. That’s the extent of what we know at this point. No need to be more conclusive than we’re able to be.

The defensive systems will get better. This is what we have right now. They’re useful, but more so when used together, rather than viewed as separate entities.

Comments

42 Responses to “Evaluating Defense”

  1. Brian Rust on January 24th, 2006 1:51 pm

    Thank you Dave. I suggest you permanently link to this listing as a reference in your right hand column.

  2. Brian Rust on January 24th, 2006 1:51 pm

    Oops. Other right. The left-hand column, that is.

  3. Evan on January 24th, 2006 1:55 pm

    This was so well written there’s nothing else to add. Well done, Dave.

  4. little joey on January 24th, 2006 2:01 pm

    Could someone do a post on Jeremy Reed’s defense? He’s another player without much sample size, and since defense might be his biggest contribution so far, I’d like to hear more about it. I haven’t seen him yet, having recently returned from the Peace Corps and found quite a nice blog here.

  5. Evan on January 24th, 2006 2:11 pm

    Dave did a quick little analysis of Reed in December, though not this in-depth:

    http://ussmariner.com/?p=3183

    Though, to find that I had to use the old search engine, which is really quite difficult since they replaced it with Google.

  6. Nick on January 24th, 2006 2:47 pm

    I’m interested to see if any of these evaluations make an attempt to account for the psychological effect of superior defense. For example, an outfielder like Ichiro, with a reputation for a strong, accurate arm, undoubtedly has fewer chances at assists simply because fewer runners try to take an extra base on him. So does Ichiro get punished for fewer assists?

  7. eric on January 24th, 2006 2:49 pm

    #6 Jeff at Lookout Landing had a link a while ago to a study someone did on that, which showed how much effect Ichiro’s arm had in preventing runners from even trying to advance.

  8. Sparhawk2k on January 24th, 2006 2:49 pm

    It would really be interesting to see all of those side by side with a lot more examples… to try to see where they tend to differ so much. But I’m thinking that could be rather difficult? Especially since one isn’t even available and it doesn’t sound like they’re all completely calculated for all the players?

  9. Smegmalicious on January 24th, 2006 2:52 pm

    So the questions I’m left with are:
    Does the Lead Glove award Richie Sexson got hold true on other metrics? Is he really a big tall hack out there?
    Do any of these take things like intimidation into account? For instance people are less likely to take an extra base on Ichiro than they are on Randy Winn. Is that borne out in the numbers?

  10. scraps on January 24th, 2006 2:58 pm

    Thanks very much for this.

  11. marc w on January 24th, 2006 3:00 pm

    Good stuff; thanks!
    You’re right of course about the sample size issues at work in Betancourt’s figures, but I still couldn’t stop myself from snorting derisively at BP’s -11. I’m just shocked that there is a way, using normal, human mathematics, to make Betancourt look like a below average fielder.

  12. Dave on January 24th, 2006 3:04 pm

    There’s been a decent amount of work doing mass comparisons between the systems. In fact, a lot of that is presented in the linked articles above. David Gassko’s way to tell if his system was “working” or not was to compare correlation to UZR, and Mitchel Lichtman gave him enough data to compare between the two. I believe he said the correlation was .7 or so, which is fairly high for a system not using play-by-play data.
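    For anyone curious what that kind of check looks like mechanically, here’s a bare-bones version of a correlation comparison. The numbers are invented, not Gassko’s or Lichtman’s actual data.

        # How one might check agreement between two systems: a plain Pearson
        # correlation over players rated by both. The ratings are invented.
        range_runs = [12.0, 5.5, -3.0, 8.0, -10.0, 2.0]
        uzr_runs   = [10.0, 7.0, -1.0, 6.5, -12.0, 0.5]

        def pearson(xs, ys):
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            vx = sum((x - mx) ** 2 for x in xs)
            vy = sum((y - my) ** 2 for y in ys)
            return cov / (vx ** 0.5 * vy ** 0.5)

        print(round(pearson(range_runs, uzr_runs), 2))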

    I wouldn’t put too much stock into Sexson scoring poorly at first base in RANGE. Gassko admits that RANGE has a big problem at first base. Now, the M’s infield defense wasn’t very good last year, so Sexson might be crap with the glove, but I wouldn’t draw that conclusion just based on his RANGE number.

    Also, MGL has added an arm value to supplement UZR, so we can quantify the runs saved by a player’s ability to hold runners. And generally, the effects of “intimidation” are way overstated. The added value of Ichiro’s perceived great arm is, at most, two or three runs a year.

    There are some players where pretty much all the systems agree, by the way. Manny Ramirez is the worst left fielder in baseball by any metric you want to use. Ken Griffey Jr is abysmal defensively in center according to every number you can find. And good luck finding a rating that doesn’t love Scott Rolen.

  13. gwangung on January 24th, 2006 3:26 pm

    Hm. Multiple methods, multiple tests. Remember that from my methodology days as a way to get at some of the underlying concepts I was trying to measure. Rarely is there just ONE measuring stick that will get you where you want….

  14. Mat on January 24th, 2006 3:41 pm

    “You’re right of course about the sample size issues at work in Betancourt’s figures, but I still couldn’t stop myself from snorting derisively at BP’s -11.”

    BP’s numbers still seem to do a poor job at adjusting for context. I’m not sure if that’s exactly the case here, but it could be. Given that Safeco is a good place to catch flyballs, Betancourt winds up with somewhat fewer chances to make outs because there are only 24-27 defensive outs per game, and the outfielders are catching some warning track flyballs that would be home runs elsewhere.

    The example I’m thinking of mainly is the Twins’ middle infield pre- and post-FieldTurf. Pre-FieldTurf, Luis Rivas had about a 92 Rate2 in 3+ seasons at 2B and Cristian Guzman had about a 95 Rate2 in 5 seasons at SS. In 2004, those numbers went up drastically to 110 for Guzman and 114 for Rivas. There’s no way on earth they actually improved that much, and if anything I feel from watching them that they were better 2-3 years ago. That trend continued with middle infielders for the Twins this year, with Juan Castro posting an obscene 124 Rate2 at SS, and Jason Bartlett posting a 118 Rate2 at SS. Castro’s considered a good glove, but his career Rate2 at SS is only 106, and Bartlett’s not generally considered a great fielder.

    The long and the short of it is that there’s definitely something either about park effects or pitching staff effects (replacing Eric Milton with Carlos Silva is a pretty drastic change from flyball to groundball pitching) that the BP defensive metrics are way, way off on. I’d suspect this is the case for more than just a few players or a few teams.

  15. Smegmalicious on January 24th, 2006 3:41 pm

    Can we get like a summation of all the Mariners?

  16. robbbbbb on January 24th, 2006 4:24 pm

    Nice writeup, Dave. Thanks. I work in a lab, and we’re always looking for multiple ways to confirm the same data. Especially since we have small sample sizes to work with. (The joys of a production environment.)

    Two points to re-emphasize, because they’re the most important things you say in the article:

    “If you don’t have a big enough sample, ignore the data.”

    “Scouting matters, especially when the data is flawed.”

    Thanks. Excellent points. I work in a lab, but that doesn’t mean we disregard subjective evaluation. You just have to be careful with it, and get as many points of view as you can.

  17. Evan on January 24th, 2006 4:25 pm

    I don’t see a lot of evidence of intimidation, but baserunners will take advantage of arms that are perceived to be exceptionally weak. I recall an instance where Carlos Delgado, not a fast guy, took second on a medium-deep flyball to center because Randy Winn’s arm is terrible.

    Mat’s point on park effects is also a very good one. Some parks are very different defensive environments. Toronto’s outfield warning track is really strange. The baggie in the Metrodome encourages players to leap into it where they wouldn’t if they were at Wrigley.

  18. Rob G. on January 24th, 2006 4:26 pm

    Curiously, why no mention of Win Shares? Seems to be as well thought out as any other system, beyond being only a cumulative stat. Thoughts?

  19. Dave on January 24th, 2006 4:35 pm

    I don’t know anyone who takes defensive win shares seriously. It’s just not a very good system, especially compared to what we already have.

    And, for what it’s worth, all of the systems I mentioned above account for parks in their numbers. They do so to varying effectiveness, and I agree that BP’s has some serious issues in this regard, but the numbers are park adjusted. The M’s outfielders score off the charts in most of these metrics before the park effect is added. Safeco is where outfield flies go to die.

  20. Mat on January 24th, 2006 5:01 pm

    “They do so to varying effectiveness, and I agree that BP’s has some serious issues in this regard, but the numbers are park adjusted.”

    Just to clarify, I did mean only that BP’s numbers were poor at adjusting for park, not that they didn’t try. One thought I’ve had since then is that BP tends to do park factors over 3-year periods, maybe longer in some cases. This could be a case where averaging over a longer period is worse than living with the statistical fluctuation in one season’s worth of data, thanks to a change in environment.

    On a different defense-related note, I think it’d be really interesting to have data on “scouty” type measurements of fielders. There seems to be no reason to have lengthy, subjective debates about (for example) which SS has the best arm. We have radar guns and other tools to measure the speed of a throw, so we could just point them towards the SS and see who throws hardest in addition to who’s the most accurate (which we generally have pretty good data on). How cool would it be to see a histogram of throwing speed for Betancourt? You’d probably see a noticeable peak at lower speeds, where he’s making routine plays, and then a peak at some higher speed, where he tops out on tougher plays.

  21. tangotiger on January 24th, 2006 5:06 pm

    UZR has Betancourt at +11.5 runs per 150 games.

    I don’t know how PMR’s runs above average was calculated (was 2005 set to zero, or 2002-2005 as the zero… there’s a problem here). He was one of the top ranked SS in PMR.

  22. David Gassko on January 24th, 2006 5:12 pm

    Converting the 2005 PMR baseline to 0, I get Betancourt as +6 RAA. Range has him at -3 runs, but Dave is right — at 450 innings, defensive metrics are pretty meaningless.

    This is a very nice review of all the defensive metrics, and I agree about the order Dave suggests to look at them. UZR is the best, PMR is next (but can be supplemented with other metrics), then Range and ZR, and then BP’s stuff. One quibble: Having re-done the first base ratings, I’m actually pretty confident in them now. There’s one total miss I have which is Travis Lee, but other than that, the ratings match up with UZR quite well.

    I’m going to publish an article on THT once David Pinto has done all the PMR calculations comparing UZR to Range, PMR, Zone Rating, and DFTs (that’s what BP does). Should be fun.

    Once again, this is a great overview of the defensive metrics that are out there.

  23. Rob G. on January 24th, 2006 5:47 pm

    btw, interesting e-mail exchange between Lichtman and Rob Neyer here about defensive win shares and UZR

    http://www.robneyer.com/book_03_BOS.html

    Granted, any play-by-play system would yield better results, but I find it hard to imagine that no one takes it seriously, when I’ve read Neyer and Pinto use it at times and Hardball Times publishes the results. I’m not saying it’s the best system, but from what I read of its methodology, it seems pretty sound. Certainly no worse than BP’s, which no one knows how it works.

    I’m honestly just curious what people consider the faults of the system are?

  24. Dave on January 24th, 2006 5:49 pm

    Without being mean, I’m not sure Rob Neyer still qualifies as a baseball analyst.

  25. Rob G. on January 24th, 2006 5:58 pm

    Granted, I haven’t read a Neyer article since he went behind the wall, and he used to be Bill James’ assistant. Still doesn’t mean he’s an idiot. As I said, I’m curious what the perceived faults of defensive Win Shares are.

  26. DMZ on January 24th, 2006 6:08 pm

    Try using your search engine of choice: many, many people have taken their shots at win shares. They’re horrible.

  27. Dave on January 24th, 2006 6:13 pm

    US Patriot’s blog would be a good start, since he did a five part series exposing the flaws of Win Shares.

    There’s also this article which points them out in all their glory.

    And Neyer’s definitely not an idiot. He’s just irrelevant.

  28. Rob G. on January 24th, 2006 6:34 pm

    Thanks for the links; it’ll take a few days to get through the US Patriot stuff. I don’t know what to make of the other article. It just says he’s a mathematician and he disagrees. That’s cool. I’m sure if you put up the formulas for most of these metrics (let’s just exclude the play-by-play systems), people would disagree. But I’m also guessing that if you compare the best shortstops defensively by any of the systems you mention and the best shortstops by defensive win shares, you’d get remarkably similar results.

  29. Dave on January 24th, 2006 6:43 pm

    Rob,

    I’m fairly certain that’s not true. It’s nothing against Bill James, who obviously was a pioneer of baseball statistics, but win shares is just not a very well designed system. Pretty much every current baseball analyst around debunked it when it came out. It’s not just the math guy who disagrees. It’s everyone who looked under the hood and said “hey, this doesn’t work”.

    If you’re looking for a shorter explanation, try this one. Tom’s about as smart a baseball mind as you’re going to find. There are win shares refutations everywhere. It’s Bill James’ clunker, after decades of great stuff.

    And, by the way, if you put up the formulas for the metrics above, you don’t get major disagreement. Go read the threads that I linked to in Dial’s articles. Gassko and Dial may have their disagreements over how precise ZR is, but no one is saying it’s useless. UZR’s been peer reviewed and generally accepted. The guys who are actively spending their time trying to figure out how to evaluate defense (MGL, Tango, Dial, Pinto, Gassko, Humphreys, and Emeigh, basically) have regular conversations about the design of their systems. And they work out the problems, update the formulas, and make them as good as they can.

    Using Win Shares is like riding a donkey to work. It might get you there, slowly, or it might just stop in the middle of the road and get you hit by a car. The fact that the donkey might be in a good mood on any given day isn’t a good reason to sell your car.

  30. John D. on January 24th, 2006 11:44 pm

    OUTFIELD ASSISTS (see #6) – You would think that in the course of a season base-runners would realize that it’s foolhardy to run on so-and-so; that no outfielder should lead the league in outfield assists two years in a row; but such is not the case.

  31. John D. on January 24th, 2006 11:49 pm

    [# 30, cont.] (Must’ve hit the wrong key.) IIRC, in one of his books, Bill James mentions that some outfielder led his league in outfield assists six years in a row.
    Hmm! Those base-runners.

  32. Mike Lien on January 24th, 2006 11:51 pm

    Does the relative strength of the M’s outfielders over the life of Safeco Field affect its park rating, or is it merely the difference between OF’ers’ performance at Safeco vs. performance on the road?

  33. terry on January 25th, 2006 4:24 am

    #12: hmmmm… flanked by Adam Dunn and Wily Mo Pena while standing in front of Lopez at short and Aurilia at second while Eric Milton pitches… can we please at least entertain the idea that, given the blunt nature of defensive metrics, any conclusion about Griffey in center has to carry with it a caveat so large that an all-star cast of The Biggest Loser could easily find it spacious?

  34. Dave on January 25th, 2006 6:32 am

    Dunn and Pena get abysmal ratings too, and if you look at the rate at which flyballs went for hits against the Reds, they deserve it. They all suck at defense.

    Seriously, there’s no way Griffey’s not a terrible, awful center fielder anymore. His legs are gone. At this point, he should really DH.

  35. terry on January 25th, 2006 8:58 am

    I’m not trying to argue, and I certainly don’t come from a Griffey fan-club point of view (and clearly Griffey ’06 is not the Griffey of ’95). My point is that, if I understand the defensive metrics as they are currently formulated, they are blunt because they can’t eliminate/tease out the influence of the *team* contribution to the raw data (unlike FIP for evaluating a pitcher versus ERA).

    So really, if I understand these metrics, the fact that Pena and Dunn are horrible has to influence Griffey negatively… in essence those guys make center field play twice as big as it should. Please show me the light if I’m in the dark or overstating the weakness of these metrics.

    I would argue with the assertion that Griffey really has no business putting on a glove any longer. He’d make a fine left fielder. Also, even if we’d have to agree to disagree on this point, he certainly could try his hand at first base.

    Personally, if I was his agent, I’d tell him to lobby for a trade to Baltimore, insist that he DH, and probably guarantee at least another 4 years of that special shine on his bat.

  36. Matthew Carruth on January 25th, 2006 9:17 am

    They certainly can isolate the individual player’s “contribution” by using the zones method. The entire field is cut up into zones and BIP data is used to figure out where the balls end up. It really wouldn’t matter who plays LF or RF because you’d be comparing the CF on the balls a normal CFer should get to based on the accumulation of BIP data.

    Or something like that.

  37. Dave on January 25th, 2006 9:58 am

    So really, if I understand these metrics, the fact that Pena and Dunn are horrible has to influence Griffey negatively… in essence those guys make center field play twice as big as it should. Please show me the light if I’m in the dark or overstating the weakness of these metrics.

    This is incorrect. As Matthew notes, the field is cut up into zones of responsibility. For instance, if historically CFs have caught 98 percent of balls hit to a specific spot in the field, then it can easily be assumed that ball is well within a normal CF’s ability to convert it into an out. If Griffey, or whoever, can’t get to that ball and it falls for a hit, they’re penalized, and they should be.

    The fact that Adam Dunn is an immobile oaf doesn’t have any bearing on Griffey’s zone of responsibility. UZR and PMR have done a great job of stripping out the effects of other fielders and the park on a player’s defensive numbers. The biggest hurdle, at this point, is ball in play distribution. But BIP distribution errors will generally pop up in the prism perspective of looking at a player through many different lenses. Good luck finding numbers that don’t say that Griffey is abysmal defensively.
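    To make the zone idea concrete, here’s a toy version of the calculation. The zones and league-wide out rates are numbers invented for the example, and this isn’t UZR or PMR themselves, just the shared principle:

        # Toy version of zone-based evaluation: credit or debit the fielder
        # against the league-wide out rate for each zone. Rates are invented.
        league_out_rate = {
            "shallow_cf": 0.98,
            "gap_left": 0.60,
            "deep_cf": 0.35,
        }

        def outs_above_average(balls):
            # balls: list of (zone, was_out) for every ball hit into a CF zone
            return sum((1.0 if was_out else 0.0) - league_out_rate[zone]
                       for zone, was_out in balls)

        # Missing the 98 percent ball costs nearly a full out of credit,
        # no matter who is playing the corners next to you.
        print(round(outs_above_average(
            [("shallow_cf", False), ("gap_left", True), ("deep_cf", False)]), 2))  # -0.93

    Multiply a total like that by a run value per play and you’re most of the way to what these systems report, though the real ones layer park, batted ball type, and other adjustments on top.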

    I know people are skeptical of defensive metrics, Terry, but they’re far more advanced than you appear to believe. And there’s significant reason to believe that Jr has lost too much range to be useful in the outfield, even in a corner.

  38. Evan on January 25th, 2006 10:29 am

    When I first saw Win Shares, I didn’t like it. I didn’t like it because it seemed like you could affect one player’s Win Shares by changing the performance of his teammates. Essentially, how many Win Shares my specific performance earns differs based on what my team does around me. And that struck me as absurd, but I couldn’t find a sufficiently detailed description of the math to prove it.

    Such evidence appears in part 3 of US Patriot’s Win Shares analysis. So, thanks for the link, Dave.

  39. scraps on January 25th, 2006 1:31 pm

    I didn’t know the Mariner infield was below average overall defensively. Since there’s good reason to believe that Seattle’s near the best in the league at third and short, how bad does that mean we were at second and first? Or is it that Morse and Boone drag the numbers down that far all by themselves?

  40. David Gassko on January 25th, 2006 1:48 pm

    The only metric where bad fielders in the OF might affect the team’s other OFers is ZR, where players are effectively penalized for not playing in their zones. That’s going to cause a myriad of problems for ZR, which is why it’s dangerous to base anything on ZR only, especially if you know that one of a team’s OFers sucks.

    In all other well-constructed metrics, players aren’t going to impact each other’s ratings too much, except with discretionary plays (i.e. Andruw Jones taking all short pop-ups for the Braves instead of second basemen).

  41. JolietJake on January 28th, 2006 7:51 pm

    I have to wonder how much defensive metrics have been affected over the last three years by some teams dropping advance scouting.

    In 2005, using the NLCD for instance, I know the Pirates didn’t have any, the Brewers used interns at home charting pitchers, and the Astros started off the year without it then resumed in June or so.

    The end result, in the Pirates’ case, was an OF often playing out of position, playing too deep (almost no-doubles depth), and a horde of runners advancing on limited arm strength.

  42. vetted_coach on June 22nd, 2012 5:06 am

    I have been mystified for several years at the average fan’s scouting evaluations of Ichiro, especially as the best defensive player “at any position.” My initial reaction upon first seeing this: “absurd.”

    There has long been a sort of elitist debate over the general approach to all baseball assessments as to the relative values of sabermetrics vs. the eyeball, or anecdotal, approach used by most traditional scouts. It is easier to argue with some numbers than others, but my opinion is that the most sensitive (“suspect”?) categories of evaluation are WAR and UZR. In short, a scout takes an accumulated reference of a vast number of seasons, series, games, innings, plays, and pitches, and gathers for himself a sensory impression. Can any mathematical equation, for example, measure what a veteran professional sees when he observes a particular outfielder’s route to a ball? This is a simplification, but still a valid point.

    Let’s strip away our tendencies to become emotional, biased, and elitist. The most important aspect of a sabermetrician’s outlook, according to Bill James, is humility.

    It is a stretch I think for anyone to conclude, given the spectrum of defensive positions, that Ichiro has ever been in any single season the best defensive player in MLB.
