I don't buy the run to win conversion in WAR


  • I don't buy the run to win conversion in WAR

    In 1968, the NL averaged 3.43 runs per game, but WAR credits 0.124 wins per run.
    In 1930, the NL averaged 5.68 runs per game, but WAR credits 0.091 wins per run.

    In other words, the 1930 NL had 1.66x as many runs per game as the 1968 NL, but a run was worth only 1.36x as many wins. In a Pythagorean system, wins per run should be exactly proportional to runs per game; in fact, you don't even need to know total runs to estimate Pythagorean winning percentage, only the ratio of runs scored to runs allowed.

    What is the statistical basis for awarding only 136% of the wins per run in an environment with only 60% as much offense? I don't buy it. I think a team that averages 2 runs per game in a 1-run environment will win as often as a team that scores 8 runs per game in a 4-run environment (both about 80%).
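
    To make that concrete, here is a minimal sketch (mine, assuming the textbook fixed exponent of 2) showing that the classic Pythagorean expectation depends only on the run ratio:

    ```python
    # Classic Pythagorean expectation: winning percentage depends only on
    # the RATIO of runs scored to runs allowed, not on the run environment.
    # Assumes the textbook fixed exponent of 2.

    def pyth_win_pct(rs: float, ra: float, exponent: float = 2.0) -> float:
        """Pythagorean winning percentage from runs scored and runs allowed."""
        return rs**exponent / (rs**exponent + ra**exponent)

    print(pyth_win_pct(2, 1))  # 2 R/G in a 1-run environment -> 0.800
    print(pyth_win_pct(8, 4))  # 8 R/G in a 4-run environment -> 0.800
    ```

    The proportionality claim holds exactly as long as the exponent stays fixed; the catch is that most modern estimators let the exponent itself vary with the run environment.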

  • #2
    Originally posted by brett
    In 1968, the NL averaged 3.43 runs per game, but WAR credits 0.124 wins per run.
    In 1930, the NL averaged 5.68 runs per game, but WAR credits 0.091 wins per run.

    In other words, the 1930 NL had 1.66x as many runs per game as the 1968 NL, but a run was worth only 1.36x as many wins. In a Pythagorean system, wins per run should be exactly proportional to runs per game; in fact, you don't even need to know total runs to estimate Pythagorean winning percentage, only the ratio of runs scored to runs allowed.

    What is the statistical basis for awarding only 136% of the wins per run in an environment with only 60% as much offense? I don't buy it. I think a team that averages 2 runs per game in a 1-run environment will win as often as a team that scores 8 runs per game in a 4-run environment (both about 80%).
    FanGraphs includes a "league adjustment" in its WAR, which may throw something off: https://library.fangraphs.com/misc/war/
    Last edited by layson27; 11-11-2019, 03:05 PM.



    • #3
      Originally posted by brett
      In a Pythagorean system, wins per run should be exactly proportional to runs per game; in fact, you don't even need to know total runs to estimate Pythagorean winning percentage, only the ratio of runs scored to runs allowed.
      This isn't generally true. There are several formulas available for converting RPG to RPW, and they aren't linear or proportional.

      E.g., Pythagenpat has a general formula of RPW = 2*RPG^(1-z), where z is usually around 0.29. A simpler version that works for most baseball run environments is RPW = 0.75*RPG + 2.75. This is fairly similar to a formula developed by Tango, RPW = 1.5 x (RPG + 2), except that his RPG is defined as (runs scored by one team per inning) x 9, whereas the Pythagenpat RPG is runs scored by both teams per game.

      http://walksaber.blogspot.com/2009/0...thagenpat.html
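
      A quick Python sketch (my own, just plugging the OP's league figures into these formulas) shows how the three versions compare for the two seasons in question:

      ```python
      # Comparing the three RPW formulas above, using the thread's 1968 NL
      # (3.43 R/G per team) and 1930 NL (5.68 R/G per team) figures.
      # rpg_both = runs scored by BOTH teams per game, per the Pythagenpat form.

      Z = 0.29  # typical Pythagenpat parameter

      def rpw_pythagenpat(rpg_both: float, z: float = Z) -> float:
          return 2 * rpg_both ** (1 - z)

      def rpw_linear(rpg_both: float) -> float:
          return 0.75 * rpg_both + 2.75

      def rpw_tango(rpg_one_team: float) -> float:
          # Tango's version uses one team's runs per game.
          return 1.5 * (rpg_one_team + 2)

      for year, per_team in [(1968, 3.43), (1930, 5.68)]:
          both = 2 * per_team
          print(year,
                round(rpw_pythagenpat(both), 2),
                round(rpw_linear(both), 2),
                round(rpw_tango(per_team), 2))
      # 1968: ~7.85, ~7.90, ~8.15
      # 1930: ~11.23, ~11.27, ~11.52
      ```

      All three land in the same ballpark as the wins-per-run figures in the OP: 1/0.124 ≈ 8.1 runs per win for 1968, and 1/0.091 ≈ 11.0 for 1930.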
      Last edited by Stolensingle; 11-13-2019, 09:47 PM.



      • #4
        Originally posted by Stolensingle

        This isn't generally true. There are several formulas available for converting RPG to RPW, and they aren't linear or proportional.

        E.g., Pythagenpat has a general formula of RPW = 2*RPG^(1-z), where z is usually around 0.29. A simpler version that works for most baseball run environments is RPW = 0.75*RPG + 2.75. This is fairly similar to a formula developed by Tango, RPW = 1.5 x (RPG + 2), except that his RPG is defined as (runs scored by one team per inning) x 9, whereas the Pythagenpat RPG is runs scored by both teams per game.

        http://walksaber.blogspot.com/2009/0...thagenpat.html
        I strongly suspect that a valid Pythagorean estimator would have a different value for z in different time periods. Do WAR systems use a single value across all eras, or one tailored to the period?



        • #5
          Another problem: say a player produces 20% more than an average player. Based on the example above and the formula, he will get more WAR for producing 120% of the league rate in a 5.7-run setting than in a 3.4-run setting. That does not imply that the player who produces at 120% of the league rate in the 3.4 environment would not also produce runs at 120% of the league rate in a 5.7-run setting; it only means he gets less win value from it, which is unfair in player rankings though appropriate in modelling outcomes.

          So it seems to me that players in low-run environments may be shortchanged in player evaluations, because being 20% better than average in a lower-scoring environment doesn't lead to as many wins as being 20% above average in a higher-scoring one. It seems we should take each player's relative performance level (runs / replacement runs) and place them all in an average run environment for purposes of player evaluation.
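
          For a rough sense of the size of this effect, here's a sketch (using the Pythagenpat RPW form quoted above; the team-level framing and 162-game season are my simplifications):

          ```python
          # The same +20% relative performance converts to fewer wins in a
          # low-run environment. Team-level framing over a 162-game season
          # is a simplifying assumption for illustration.

          def rpw(rpg_per_team: float, z: float = 0.29) -> float:
              # Pythagenpat runs-per-win; input is one team's R/G.
              return 2 * (2 * rpg_per_team) ** (1 - z)

          for env in (5.7, 3.4):
              runs_above_avg = 0.20 * env * 162   # +20% of league scoring
              wins_above_avg = runs_above_avg / rpw(env)
              print(f"{env} R/G: +{runs_above_avg:.0f} runs -> +{wins_above_avg:.1f} wins")
          # 5.7 R/G: +185 runs -> +16.4 wins
          # 3.4 R/G: +110 runs -> +14.1 wins
          ```

          Same 120% relative performance, but about two fewer wins credited in the low-run era.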



          • #6
            Originally posted by brett
            Another problem: say a player produces 20% more than an average player. Based on the example above and the formula, he will get more WAR for producing 120% of the league rate in a 5.7-run setting than in a 3.4-run setting. That does not imply that the player who produces at 120% of the league rate in the 3.4 environment would not also produce runs at 120% of the league rate in a 5.7-run setting; it only means he gets less win value from it, which is unfair in player rankings though appropriate in modelling outcomes.

            So it seems to me that players in low-run environments may be shortchanged in player evaluations, because being 20% better than average in a lower-scoring environment doesn't lead to as many wins as being 20% above average in a higher-scoring one. It seems we should take each player's relative performance level (runs / replacement runs) and place them all in an average run environment for purposes of player evaluation.
            I agree with you that this is, or should be considered, an unfinished subject. Naively, one might think we could determine RPW simply by dividing league runs by league wins, and that RPG for an average team ought to be exactly half of RPW (since it wins exactly half its games). In fact, these values are fairly close to the actual values published at various sites, but not quite the same, and the lower the actual RPW value, the greater the deviation. E.g., for 1930, league runs/league wins = 11.12, which is pretty close to the FG value for RPW of 11.45. For 1968, the values are 6.86 and 8.13, respectively.

            Why? As far as I can tell, RPW is determined by a marginal-gains type of process. That is, one takes the wins and losses of an average team, then calculates how many additional runs it would have to score for the Pythag to add exactly one more win. The higher the run environment, the smaller the gap between this marginal value and the overall league runs/league wins figure.
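
            Here's one way to make that marginal-gains process concrete (a sketch of mine; the Pythagenpat exponent with z = 0.29 is my assumption, not necessarily the published WAR recipe):

            ```python
            # Start from an average (81-81) team and find how many extra runs
            # the Pythag needs to turn 81 wins into 82.

            def marginal_rpw(rpg_per_team: float, games: int = 162, z: float = 0.29) -> float:
                season_runs = rpg_per_team * games
                x = (2 * rpg_per_team) ** z                  # Pythagenpat exponent
                target = (games / 2 + 1) / games             # win% worth one extra win
                ratio = (target / (1 - target)) ** (1 / x)   # required RS/RA ratio
                return season_runs * (ratio - 1)             # extra runs for that win

            print(round(marginal_rpw(3.43), 1))  # ~7.9  (1968 NL; FG publishes 8.13)
            print(round(marginal_rpw(5.68), 1))  # ~11.3 (1930 NL; FG publishes 11.45)
            ```

            The values come out close to, though not exactly equal to, the published ones, which fits the idea that some such marginal calculation is being done.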



            • #7
              This discussion is good, and one I would rely upon to better understand the mechanics of how WAR works at all... for hitters.

              What is utterly baffling to this fan is how WAR works for pitchers. Can anyone explain the mechanics and variables of the calculations for starters and for relievers?

              Here is an example of the lack of common sense in WAR calculations for pitchers. Let's choose two of my favorite pitchers from ancient days: Danny Jackson and Rollie Fingers. As we know, one pitcher is in the Hall and the other is not, and I never questioned the worthiness of that.

              Danny Jackson: 15 years, 2072 2/3 career IP, 112/131 career W/L (19 wins below 0.500), and 18 career WAR.

              Rollie Fingers: 17 years, 1701 1/3 career IP, 114/118 career W/L (4 wins below 0.500), and 25 career WAR. He of course has a saves total that spells the difference for the Hall.

              But on WAR???? How do all those saves of Rollie's amount to only 7 additional WAR?

              Both pitchers had positive career WAR, and both were solid in the postseason, but in the end the career of one pitcher must be judged in subjective terms, on the weight and significance of what saves represent.

              The only conclusion I can reach on WAR as it relates to pitchers is that subjectivity weighs heavily, and we tend to evaluate pitchers subjectively anyway without WAR.



              • #8
                Originally posted by abolishthedh
                This discussion is good, and one I would rely upon to better understand the mechanics of how WAR works at all... for hitters.

                What is utterly baffling to this fan is how WAR works for pitchers. Can anyone explain the mechanics and variables of the calculations for starters and for relievers?

                Here is an example of the lack of common sense in WAR calculations for pitchers. Let's choose two of my favorite pitchers from ancient days: Danny Jackson and Rollie Fingers. As we know, one pitcher is in the Hall and the other is not, and I never questioned the worthiness of that.

                Danny Jackson: 15 years, 2072 2/3 career IP, 112/131 career W/L (19 wins below 0.500), and 18 career WAR.

                Rollie Fingers: 17 years, 1701 1/3 career IP, 114/118 career W/L (4 wins below 0.500), and 25 career WAR. He of course has a saves total that spells the difference for the Hall.

                But on WAR???? How do all those saves of Rollie's amount to only 7 additional WAR?

                Both pitchers had positive career WAR, and both were solid in the postseason, but in the end the career of one pitcher must be judged in subjective terms, on the weight and significance of what saves represent.

                The only conclusion I can reach on WAR as it relates to pitchers is that subjectivity weighs heavily, and we tend to evaluate pitchers subjectively anyway without WAR.
                Relievers tend to pitch in higher-leverage situations, and their value is adjusted for the game situation when they entered. That does suggest that had Jackson pitched in Fingers's situations, and pitched as well under that leverage, he would have had similar results to Fingers. However, so far WAR only accounts for the lineups of the teams a pitcher faced, not the specific batters, so Fingers probably faced somewhat better hitters in those clutch spots. For every pitcher who gets more value because he pitched in high-leverage situations, there is one who gets less: below-average performers lose MORE WAR in high-leverage situations, so the overall balance of value doesn't change, but good relievers generally get a boost and below-average relievers get docked. Still, one could argue that many starters could have surpassed Hall of Fame relievers' careers had they pitched in relief.
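
                For illustration, here is a toy version of the leverage adjustment (as I understand it, FanGraphs multiplies a reliever's value by (1 + gmLI)/2 to account for "chaining"; the wins numbers below are made up):

                ```python
                # gmLI = average leverage index when the reliever entered the game.
                # Splitting the difference between full leverage credit and none,
                # since a team could chain other relievers into those spots.

                def leveraged_wins(wins_above_avg: float, gmLI: float) -> float:
                    return wins_above_avg * (1 + gmLI) / 2

                print(leveraged_wins(1.0, 1.8))   # closer-type usage -> 1.4
                print(leveraged_wins(1.0, 1.0))   # average leverage  -> 1.0
                print(leveraged_wins(-1.0, 1.8))  # bad performance is docked more: -1.4
                ```

                This is why the same underlying performance is worth more (or costs more) in a closer's role.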

