Right, but they aren't individually tailored to each event. A player gets credited with a hit whether it's a bloop over the infielders' heads or a line drive smashed into the outfield. Why try to do the same with fielding, crediting how many runs a guy may or may not have saved based on RE24, something itself based only on how things played out and not on a universal principle?
"Allen Sutton Sothoron pitched his initials off today."--1920s article
Because that's currently about the best we can do.
Until this kind of information is made public:
http://www.businessweek.com/magazine...3072802462.htm
Because at this point the best "universal" principle we have is less accurate than probabilities.
Because we live in a quantum world and there is no universal principle.
Or because using a universal principle would not be practical.
What universal principle is going to show how many runs were saved better than something based on RE24? Using an RE24-based weighting is basically equivalent to rolling a die and predicting that each number will appear 1/6 of the time. (I suppose you could argue that that probability is accessible by reasoning about the system itself, without actually rolling the die 10,000 times and recording the outcomes.)
None, and I did not mean to imply that there exists a universal principle. The fact that there is none means we're assigning RE24 values based solely on the outcomes of a thousand or so events per base-out state. By this theory, we're assuming defensive runs can be determined by how things played out. Millions of things go into the difference between each RE event, so many of them that fielders have virtually no control over them. Comparing a player who makes an out with runners in scoring position to a player who makes the same out with nobody on is a poor comparison. Both did their job irrespective of a situation they couldn't control, which is usually the argument against RBI.
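For readers unfamiliar with the mechanics being debated here: the RE24 credit for a single event is just the change in the run-expectancy table's value, plus any runs that scored on the play. A minimal sketch, using made-up run-expectancy numbers (not actual league data) and only two runner states for brevity:

```python
# Illustrative run-expectancy table: (runners, outs) -> expected runs
# for the rest of the inning. These values are invented for the example.
RE = {
    ("empty", 0): 0.48, ("empty", 1): 0.25, ("empty", 2): 0.10,
    ("1st", 0): 0.85,   ("1st", 1): 0.50,   ("1st", 2): 0.22,
}

def re24_value(before, after, runs_scored):
    """Run value of one event: change in run expectancy plus runs scored.
    If the event ends the inning, the 'after' expectancy is zero."""
    re_after = 0.0 if after is None else RE[after]
    return re_after - RE[before] + runs_scored

# A leadoff single (bases empty, 0 outs -> runner on 1st, 0 outs):
print(round(re24_value(("empty", 0), ("1st", 0), 0), 2))  # 0.37

# An out with the bases empty and 2 outs (ends the inning):
print(round(re24_value(("empty", 2), None, 0), 2))  # -0.1
```

This is what makes the credit situation-dependent: the identical out is "worth" more or fewer runs depending on the base-out state it happened in, which is exactly the property being argued over above.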
By "universal principle," I meant something that holds each and every time. A theorem. A constant. Hitting a double is worth two bases forever and always. Assigning fractions based on where it was hit, how hard, how well the fielder fielded it, and how well the base runner did on the paths is nonsense. It's also foolish to assign decimal values based on every possible circumstance in which he could have hit it. When assigning runs to outs made, there's no concrete number. I would buy into it much more if it were a given that a run is undisputedly worth X outs.
Why bother trying to convert outs to runs anyway? Why not measure a fielder on getting outs? The mathematical correlation does not exist.
I think the reason to look at RE24 states for a fielder should not be to estimate how many runs he saved, because players will face different proportions of RE24 states. The reason should be that fielders position themselves differently, and react differently, in different RE24 states, so we want to see how a player does in each state and then translate him to a setting where he faces a normal proportion of the different states. (Something similar could be done for hitters too: see how a hitter does in each RE24 state, then normalize the proportion of states he sees, although batters are put in certain lineup spots in part because they may be suited to certain situations; lots of walks have the most relative value for someone who bats leadoff.)
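The translation step described above amounts to a weighted average: take the player's per-state performance and reweight it by a common (e.g. league-average) distribution of states. A minimal sketch, with all numbers invented for illustration:

```python
# A fielder's out-conversion rate on chances in his zone, by base-out
# state. The state labels and all values are made up for the example.
player_rate = {"empty_0": 0.82, "risp_1": 0.74, "risp_2": 0.78}

# League-wide frequency of each state (shares sum to 1):
league_freq = {"empty_0": 0.55, "risp_1": 0.25, "risp_2": 0.20}

def normalized_rate(rates, freqs):
    """Reweight per-state rates by a common state distribution, so two
    fielders who faced different mixes of situations are comparable."""
    return sum(rates[state] * freqs[state] for state in freqs)

print(round(normalized_rate(player_rate, league_freq), 3))  # 0.792
```

The same reweighting works for the hitter version of the idea: evaluate per-state, then average over a standard mix of states instead of the mix the player happened to see.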
But as for rating it by bases, well, that's where I started my own sabermetric search maybe 12 years ago. A double is not worth twice as much as a single, a triple not three times as much, etc. I think anyone watching baseball for a period of time can see this. Not only are all bases not equal, certain bases are specifically not equal to others in general.