The Attrition War, Methodology notes

DMZ · September 5, 2005 at 7:45 pm · Filed Under General baseball 

This is the really dry part where I talk about problems with the data and issues encountered.

General issues with research

Information on some players is scarce and only hints at their fates. This is especially true for players low on the lists who attracted little attention, and for players ranked earlier in the period. A #10 prospect ranked only once, in 1996, is far harder to research than even a #9 prospect five years later.
As with any arbitrary time span, some teams are favored and others hurt by the distribution of their injuries. If a team destroyed its top pitching prospects early in the 1990s, it may appear progressive here.

Deciding what to look at
I chose 1995-2004 as my time span because it is the ten-year span ending with last season. It covers the period in which the Mariners are perceived to have lost many pitching prospects, while extending far enough back that, I hope, a random clustering of injuries would not throw off the final numbers.

Ten years is a long time, comprising many pitcher-years.

I chose Baseball America’s Top 10 list for a number of reasons, but most importantly because it provides a year-to-year snapshot of each organization, and because injury information is easier to find on those players. I considered and discarded several alternate approaches. For instance, you could attempt to track all major injuries by organization by year. This is quite difficult, but it might provide a better picture of a pitcher’s chances, in a given year, of being brought down by any particular injury. That approach has its own problems, though, which become clearer on further inspection.

The Top 10, 1995-2004 approach has several flaws.

Flaws in using the Baseball America lists to determine sample

Neglects changes in approach, philosophy, and personnel. If a team wracked by injuries decided last year to devote all possible resources to injury prevention, and hired a new crop of effective doctors and coaches, it might appear here to be saddled with responsibility for the poor work of those who went before. The reverse would also be true.

Neglects team age and draft philosophy. A 1997 expansion franchise has fewer players in total and fewer players from 95-96 who would then have many years to develop arm problems. In the same way, if a team recently began to draft many pitchers, they could have 30+ pitching prospects on the lists, the bulk of them recent, giving them little time to become injured.

The number of prospects determines the denominator. If an organization has 20 prospects and one has elbow surgery, that’s 5% of the total. With thirty, it’s only about 3%. A team with a small number of pitchers is much more prone to look good or bad through luck alone. This matters for every team, since the difference between the worst team in the survey and the best is only nine injuries.
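The denominator effect described above can be sketched in a few lines of code. This is just an illustration of the arithmetic; the function name and the prospect counts are hypothetical, not drawn from the actual survey data.

```python
def injury_rate(injuries: int, prospects: int) -> float:
    """Injuries as a percentage of ranked pitching prospects."""
    return 100.0 * injuries / prospects

# The same single elbow surgery looks very different depending on
# how many ranked pitching prospects a team has produced.
small_system = injury_rate(1, 20)  # 5.0%
large_system = injury_rate(1, 30)  # about 3.3%

print(f"{small_system:.1f}% vs {large_system:.1f}%")
```

One extra injury moves a small system's rate far more than a large one's, which is why luck dominates at these sample sizes.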

Does not measure prospect value. It’s much more important to a team to protect a potential future ace than it is to protect a potential reliever. Methods to rate the importance of an injury, though, are even more complicated and prone to error. This also touches on a secondary issue, that organizational strength is not measured.

Affected by team farm system philosophy, competitive stance, and player movement. Taken to an extreme, if a team immediately traded their pitching prospects after Baseball America ranked them, they would have a 0% injury rate, while a team that held every prospect ever ranked would have a much higher injury rate.

The injury responsibility question

I chose to cut off injury tallies on players after they changed organizations. It is hard enough to assign responsibility for injuries (as I’ll discuss in the conclusions), or even to diagnose them correctly, that the simplest approach is to go by possession. This is not an ideal solution, but without being able to ask a labrum whether it was hurt through cumulative wear or acute trauma, any solution will be inadequate.

Teams who trade for a player have a chance to view their medical records and have their doctors look over the player. If they miss an injury, this can then be attributed to poor work by their medical staff or inadequate attention to the matter by the team. Further, players who are found to be injured or break down soon after a trade are sometimes returned to the trading team, or become the object of much publicity, but I rarely came across this.

Comments

2 Responses to “The Attrition War, Methodology notes”

  1. Mat on September 5th, 2005 11:23 pm

    Duly noted. It’s nice to see the full disclosure here, but looking at them, I can’t think of any team offhand that would have been systematically helped or hindered by your system. Just getting a first approximation of the answer seems to be a good step in the right direction, though. Once again, excellent work.

  2. Jeff M on September 13th, 2005 10:32 am

    I just followed a link from DetroitTigersWeblog.com. This was definitely a very interesting read and I’m glad to see you openly discuss the methodologies. There are obviously limits to how far you can take this, but I think you could have factored in organization strength simply by using a league wide prospect ranking, rather than team based.

    The Tigers have certainly had a lot of players crack their Top 10 list briefly, only to disappear completely when it became obvious they only made it due to the lack of real prospects in the organization. I think this has skewed their ranking, as a large number of real prospects (take a look at the last 6 first-round picks) have flamed out.

    As a side note, Bonderman, Zumaya, and Verlander are sidelined with supposedly minor arm injuries at the moment.

    Thanks for tackling such an important, yet often overlooked issue.