Pitch counts and Strasburg


  • Pitch counts and Strasburg

    As a premise, generally speaking, I'm not a sabermetrics or Moneyball fan, and certainly not a proponent of using recompiled data to drive in-game or day-to-day strategy.

    I think a lot of the recompiled data, like WHIP and WAR, is contrived because of huge differences in playing conditions, because of limited sample size, and because real-life situations are not accounted for. I also think lots of that kind of stuff is outcome oriented, i.e., it supports an existing favored premise, and is not really scientific, i.e., it does not provide a foundation for testing a neutral premise.

    As an example, I believe good players on good teams get their offensive stats skewed downward because those teams usually draw the best opposing pitching. Bad players on good teams get upticks because they are protected in the lineup. Meanwhile, it seems to me that good players on bad teams get a statistical boost because they face the worst opposing pitching. Data like WHIP and WAR doesn't seem to be able to incorporate that kind of reality.

    Nevertheless, I think mining data may help understand situations to some extent, and settling bar bets is one of them.

    When Strasburg was shut down, my conversation partner noted that pitchers of yesteryear threw until their "arms fell off," and blamed the shutdown, and the Nationals putting their Series hopes at risk, on irrational pitch counts versus bloated salaries. I maintained that starters don't really pitch any less than they used to: it seems to me they go deeper into counts than they ever had to, because each at bat is more closely scrutinized given all the money that rides on games and championships, and because teams recognize what they had not before: that bullpens are the soft underbellies of most teams.

    What is probably really at issue for injuries isn't even tracked by statisticians: the number of curveballs a guy throws, the quality of his form and his instruction, and his off-day regimen. That's for another day.

    But if you hold all those other things equal, how does one prove that if pitcher X had 36 starts and went an average of 8 innings per start in the 1950's, he really threw about the same number of pitches in the season as pitcher Y in the 2000's in his 31 starts averaging 6.2 innings, subject to some reserve for a variable number of pitches in an unknown number of postseason appearances? The problem is that reliable counts from that earlier time either don't exist or are hard to find. So you use data you do have that you assume closely correlates with the quantity you want to know, and track that data instead to get a handle on the unknown.
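To make the hypothetical concrete, here is a minimal sketch of the arithmetic. The innings figures are the ones from the paragraph above (reading "6.2" as a decimal average); the 14 pitches-per-inning rate is an assumed, illustrative number, since real 1950's counts are scarce.

```python
# Sketch: season innings for the two hypothetical pitchers, and the
# pitches-per-inning rate the modern starter would need to match the
# 1950's starter's season pitch total. The 14 pitches/IP figure is an
# assumption for illustration, not a measured value.

starts_50s, ip_per_start_50s = 36, 8.0
starts_00s, ip_per_start_00s = 31, 6.2  # read as a decimal average

ip_50s = starts_50s * ip_per_start_50s   # 288.0 innings
ip_00s = starts_00s * ip_per_start_00s   # 192.2 innings

rate_50s = 14.0  # assumed pitches per inning for the 1950's starter
breakeven = ip_50s * rate_50s / ip_00s   # rate needed to equal his total

print(f"{ip_50s:.0f} IP vs {ip_00s:.1f} IP")
print(f"break-even modern rate: {breakeven:.1f} pitches per inning")
```

Under that assumed 14-pitch baseline, the modern starter would need roughly 21 pitches per inning, about 50% more per inning, to equal the older starter's season total, which is the size of the gap the rest of the thread argues about.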

    I tried to evaluate this with my own metrics to see if I could crudely assess what was going on in terms of pitch counts, and came up with the following information, derived from data on Baseball-Reference.com for the period 1951 to 1960 versus the period 2002 to 2011. I used only the National League because its rules were consistent across both eras, and compiled league averages for each decade.

    I was surprised to find that most things about baseball had not changed, no matter how much we think they have, despite philosophical changes, technical changes, the merging of the leagues' umpiring staffs, play at altitude, pitch counting, bandbox ballparks, radar guns, and what have you.


    Runs per game per team: 1950's: 4.414 2000's: 4.504

    Batting average: 1950's: .2596 2000's: .2604

    OBP: 1950's: .3276 2000's: .3299

    Walks per game: 1950's: 6.660 2000's: 6.456

    If you only had that information, since everything looks about the same, you might be tempted to conclude that more or less the same number of pitches were thrown then as now per game, just currently more spread out among larger staffs with more roles to save starters pitches.

    But I also discovered this:

    Stolen bases per game: 1950's: .647 2000's: 1.128

    Strike outs per game: 1950's: 9.24 2000's: 13.71 (noting strong upward trend lines within both periods)

    Ratio of strike outs to walks: 1950's: 1.39 2000's: 2.06 (noting strong upward trend lines within both periods)

    I am inclined to believe that information tends to show that pitchers weren't worse, or relatively worse compared to hitters, in the latter era, given that OBP is virtually spot on. Rather, hitters are taking more pitches to allow runners to steal, and they are trying harder to work counts to get into opponents' bullpens, even though a side effect of that strategy is more strikeouts; that ramification is somewhat attenuated by contact becoming less crucial in a regime of common stealing.
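One way to pressure-test that inclination without historical counts is a rule-of-thumb pitch estimator built from events that were recorded. The formula below (pitches ≈ 3.3·PA + 1.5·SO + 2.2·BB) is a commonly cited approximation, and the plate-appearance figure is my assumption, not a number from the post, so treat this as a rough sketch:

```python
# Sketch: estimate total pitches per game (both teams) for each decade
# from the per-game figures quoted above, using a commonly cited
# rule-of-thumb estimator: pitches ~ 3.3*PA + 1.5*SO + 2.2*BB.
# PA per game is NOT in the post; 76 (both teams combined) is an
# assumed, illustrative value.

def estimate_pitches(pa: float, so: float, bb: float) -> float:
    return 3.3 * pa + 1.5 * so + 2.2 * bb

pa_per_game = 76.0  # assumption
decades = {
    "1950s": {"so": 9.24, "bb": 6.660},
    "2000s": {"so": 13.71, "bb": 6.456},
}
for era, d in decades.items():
    print(era, round(estimate_pitches(pa_per_game, d["so"], d["bb"]), 1))
```

With those inputs the two decades land within a few pitches per game of each other (roughly 279 vs 286), consistent with the "about the same pitches, spread over more arms" reading, though the extra strikeouts do push the modern estimate slightly higher.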

    What does the panel think?

  • #2
    "I think what you guys are into is generally a waste of time. By the way, can you help me out with something?"

    Well played!



    • #3
      Originally posted by rodk View Post
      I am inclined to believe that information tends to show that pitchers weren't worse, or relatively worse compared to hitters, in the latter era, given that OBP is virtually spot on. Rather, hitters are taking more pitches to allow runners to steal, and they are trying harder to work counts to get into opponents' bullpens, even though a side effect of that strategy is more strikeouts; that ramification is somewhat attenuated by contact becoming less crucial in a regime of common stealing.
      I don't know about that. I don't think that SBs have much to do with the number of strikeouts. How would you explain these numbers? The SB are about the same, but the SO are far different.

      2012 - .67 SB, 7.46 SO
      1979 - .71 SB, 4.77 SO
      1920 - .70 SB, 2.94 SO
      (per team/game)



      • #4
        Originally posted by rodk View Post
        As a premise, generally speaking, I'm not a sabermetrics or Moneyball fan, and certainly not a proponent of using recompiled data to drive in-game or day-to-day strategy.

        I think a lot of the recompiled data, like WHIP and WAR, is contrived because of huge differences in playing conditions, because of limited sample size, and because real-life situations are not accounted for. I also think lots of that kind of stuff is outcome oriented, i.e., it supports an existing favored premise, and is not really scientific, i.e., it does not provide a foundation for testing a neutral premise.
        I don't think we've been reading the same materials. First, WAR and WHIP are as different as night and day. One is a casual statistic, hardly different from batting average, which could as well have been invented by a 10-year-old. The other is driven by contextual theories that have been developed by multiple sources, with refinements driven by intensive data analysis. Second, yes, it IS outcome oriented. The purpose of many analytical tools is to explain why teams win. The favored premise is: if it causes wins, it's part of the discussion. Third, our concepts of science differ: the classic scientific method is to make conjectures (hypotheses), derive predictions from them as logical consequences, and then carry out experiments based on those predictions to determine whether the original conjecture was correct. This is what WAR does. If it does not do it perfectly, then it is imperfect. But that is the nature of science.

        Originally posted by rodk View Post
        As an example, I believe good players on good teams get their offensive stats skewed downward because those teams usually draw the best opposing pitching. Bad players on good teams get upticks because they are protected in the lineup. Meanwhile, it seems to me that good players on bad teams get a statistical boost because they face the worst opposing pitching. Data like WHIP and WAR doesn't seem to be able to incorporate that kind of reality.
        This is a fine example. You have a hypothesis. But you have posted no evidence to support it. Instead, you argue that since WAR doesn't help you, it is faulty. WAR was not designed to prove your theory. The bigger problem, however, is that you use the words "good" and "bad" in referring to players as if there is some intrinsic value in players that we improperly measure (your hypothesis). Prove it.

        Originally posted by rodk View Post
        Nevertheless, I think mining data may help understand situations to some extent, and settling bar bets is one of them.
        Mining data helps. We agree.


        Originally posted by rodk View Post
        When Strasburg was shut down, my conversation partner noted that pitchers of yesteryear threw until their "arms fell off," and blamed the shutdown, and the Nationals putting their Series hopes at risk, on irrational pitch counts versus bloated salaries. I maintained that starters don't really pitch any less than they used to: it seems to me they go deeper into counts than they ever had to, because each at bat is more closely scrutinized given all the money that rides on games and championships, and because teams recognize what they had not before: that bullpens are the soft underbellies of most teams.
        Ok. Your theory is that starters pitch as much as they used to. Let's see how you prove it.

        Originally posted by rodk View Post
        What is probably really at issue for injuries isn't even tracked by statisticians: the number of curve balls a guy throws and the quality of his form and his instruction and his off day regimen. That's for another day.
        Ok. You are right. People don't know if pitchers throw more or less curves than 60 years ago. They don't know if their form is worse. Etc. That's your theory. Prove it.

        Originally posted by rodk View Post
        But if you hold all those other things equal, how does one prove that if pitcher X had 36 starts and went an average of 8 innings per start in the 1950's, he really threw about the same number of pitches in the season as pitcher Y in the 2000's in his 31 starts averaging 6.2 innings, subject to some reserve for a variable number of pitches in an unknown number of postseason appearances? The problem is that reliable counts from that earlier time either don't exist or are hard to find. So you use data you do have that you assume closely correlates with the quantity you want to know, and track that data instead to get a handle on the unknown.
        You are comparing pitching 36*8 = 288 innings versus 31*6.2 = 192.2 innings.

        First, we don't need exact pitch counts in every game to be able to make estimates. There are many games in which people did pitch counts and they are available. If you scan BBRef, you will see pitch counts from various games in the 50's and 60's. Second, from experience plus the games we have counts on, it's not hard to derive reasonable estimates of pitch counts. This isn't hard to do if that is your theory.

        Originally posted by rodk View Post
        I tried to evaluate this with my own metrics to see if I could crudely assess what was going on in terms of pitch counts, and came up with the following information, derived from data on Baseball-Reference.com for the period 1951 to 1960 versus the period 2002 to 2011. I used only the National League because its rules were consistent across both eras, and compiled league averages for each decade.
        Mentioned above. The preferred method, scientifically, is to take the best data available. There are pitch counts from those years. It would be better to create a model from the real data first, before guessing.

        Originally posted by rodk View Post
        I was surprised to find that most things about baseball had not changed, no matter how much we think they have, despite philosophical changes, technical changes, the merging of the leagues' umpiring staffs, play at altitude, pitch counting, bandbox ballparks, radar guns, and what have you.


        Runs per game per team: 1950's: 4.414 2000's: 4.504

        Batting average: 1950's: .2596 2000's: .2604

        OBP: 1950's: .3276 2000's: .3299

        Walks per game: 1950's: 6.660 2000's: 6.456

        If you only had that information, since everything looks about the same, you might be tempted to conclude that more or less the same number of pitches were thrown then as now per game, just currently more spread out among larger staffs with more roles to save starters pitches.

        But I also discovered this:

        Stolen bases per game: 1950's: .647 2000's: 1.128

        Strike outs per game: 1950's: 9.24 2000's: 13.71 (noting strong upward trend lines within both periods)

        Ratio of strike outs to walks: 1950's: 1.39 2000's: 2.06 (noting strong upward trend lines within both periods)

        I am inclined to believe that information tends to show that pitchers weren't worse, or relatively worse compared to hitters, in the latter era, given that OBP is virtually spot on. Rather, hitters are taking more pitches to allow runners to steal, and they are trying harder to work counts to get into opponents' bullpens, even though a side effect of that strategy is more strikeouts; that ramification is somewhat attenuated by contact becoming less crucial in a regime of common stealing.

        What does the panel think?
        So you are doing exactly what you accuse others of doing. You are creating a conclusion, then interpreting unrelated and UNSTATED data to prove it.

        You have not created a single pitch count number to be evaluated, tested or discussed, yet you are running right to a conclusion assuming pitch counts are higher per batter.


        IMO: there is nothing wrong with a theory that pitchers are throwing as many pitches as 60 years ago, just in fewer innings. Prove it with real data on pitch counts, not conjecture about stolen bases and bullpens. The scientific method is not: theorize, conjecture about other stuff, then announce the conjecture proves the theory. There's a fact-finding step in there. In this case, you need FACTS about pitch counts, not CONJECTURE about SB and the bullpen.

        Take the numerous games (hundreds) from the 50's and 60's where pitch counts exist; accumulate the BF, hits, BB, SO, SB, etc., along with the pitch counts; and create a model of that era using live, real data. This will give you a data set of pitches per inning with controls for BF and the type of batting result.

        Then compare that to the 2000's when we also have the same type of data, and where we can compare the control set (BF, batting results). Show that pitchers are tossing more pitches per batter while pitching fewer innings.

        THEN, you will have something.
        Last edited by drstrangelove; 09-12-2012, 04:11 PM.
        "It's better to look good, than be good."



        • #5
          Originally posted by drstrangelove View Post
          Second, yes, it IS outcome oriented. The purpose of many analytical tools is to develop tools that explain why teams win.
          My original concept wasn't to rant against WAR per se, or any other egghead evaluations about winning and losing, but to premise my own research by noting the weakness of data recompilations generally, and thus the weakness of theories premised on them, on the way to delving into a more important subject: whether anything at all exists that correlates pitch count to innings and to injuries, something that is hotly contested and close to my heart because my kid pitches. The Strasburg situation suggests that pro coaches believe the one critical factoid for preserving his arm is the number of innings, inasmuch as they indicated he might start again if he didn't max out his innings. In Little League and elsewhere, the assumption is that the critical fact is pitches thrown.

          The ultimate question on the table is whether either or both bear on the subject.

          My small part of it is to see if the historical record supports either or both.

          So far, I don't see the historical record supporting anything because I can't quite put history in context to establish the crucial fact I'm looking for: how many pitches did starters who pitched until their arms fell off throw then and now?

          I know I won't be able to find much on how much TLC pitchers got to protect them; that's subjective anyway. My hope was to find some way to quantify pitches thrown absent anyone keeping records of it. What I think I found was some indication that explains away one thing we do know: that starters 50 years ago pitched more innings than current starters.

          FWIW, this is different than trying to deduce a single winning formula for baseball teams to follow, which is what WAR seems to be. I'm inclined to believe that most of winning or losing individual baseball games and championships has to do with synergy and arraying complementary talent (i.e., a correctly designed batting order), who's hurt, who can't manage their bullpen or their rosters, who is too dumb to remember the signs and follow them, who melts down under pressure, and who plays the game like he's more concerned about his hot date that night, just as football winning and losing has mostly to do with coaching because the talent pool in that sport is mostly very equally distributed. There are just as many fantastic and well-paid ballplayers who are simply losers despite amazing metrics that don't account for any of that (see A-Rod and Jose Reyes) as there are hotheads who win repeatedly and can't do math (see Billy Martin).

          Nevertheless, the approach strangelove suggests, in pursuit of a "unified field theory" producing a single optimal formula that definitively assesses a given player for roster selection on the way to winning championships, sounds an awful lot like what my econometrics prof said about some student projects: good enough for government work.

          I have always understood scientific experimentation itself as necessarily having a neutral design; if a given theory is supported at the far end, fine; if not, better. To design derivative metrics specifically intended to "prove" or "explain" a specific conclusion after the fact, i.e., that players who have certain quantifiable talents necessarily produce winning results, is the province of accountants (I'm not belittling you accountants; you aren't supposed to be doing neutral science) and Wall Street bankers (I am belittling you guys who fudged the Groupon and Facebook stats that led to investor losses and SEC investigations).

          Take the numerous (hundreds of games) from the 50's and 60's where pitch counts exist, accumulate the BF, Hits, BB, SO, SB, etc., and pitch counts and create a model of that era using live real data. This will give you a data set of pitches per inning with controls for BF and the type of batting result.
          That is highly problematic because we don't know if those games are randomly sampled or even if they were, if they constitute a sufficient sample size. Do we have anything on that to indicate that the games where we have counts are typical?

          [WAR] is driven by contextual theories that have been developed by multiple sources with refinements driven by intensive data analysis.
          Interesting, but not sufficient, because passer rating is a similar metric. I am led to believe there are multiple formulas for WAR at various websites. If so, then any one of them seems as arbitrary, though no less thought out from the historical record, as passer rating, a metric that hardly anyone has any real confidence in because, among other reasons, the components put into the formula, as well as those left out, are arbitrary. As a reminder, last year Tony Romo was the fourth-best rated passer and the Cowboys did not make the playoffs. Tim Tebow was 27th, and the Broncos went to the round of 8. A year earlier, top-ranked Tom Brady and the Pats lost at home in the playoffs to #29 Mark Sanchez and the Jets.

          It seems to me that passer rating is as reverse engineered as WAR: you can take the various positive attributes of a given winner and score them, but that doesn't necessarily mean that there isn't more than one winning formula and that talent has to be evaluated against one or the other.

          The scientific method is not: theorize, conjecture about other stuff, then announce the conjecture proves the theory. There's a fact-finding step in there. In this case, you need FACTS about pitch counts, not CONJECTURE about SB and the bullpen.
          I don't think I have done anything of the sort. All that happened here is a finding that there has been negligible change in the well-recognized offensive categories, with two significant exceptions: strikeouts and steals. We do know absolutely that there has been a significant change in the way pitchers are used -- they pitch fewer innings each season. All I did -- without saying it is proved, just that I'm inclined to believe it -- was suggest that the mere fact of fewer innings thrown by starters does not bear on the pitches thrown, that ultimately there really isn't any difference in the way starters are used, and that the changes in the offensive statistics that we do know about have to be scrutinized.

          I think the next step isn't to assume the mostly nonexistent historical record on pitches thrown can be relied upon to make an assessment, but to split the modern record to see whether strikeout pitchers produce more pitches than non-strikeout pitchers, and which kind of pitcher is more inclined to be injured.
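That proposed split of the modern record can be sketched directly, since pitch counts and strikeout rates are both tracked today. Every row below is an invented placeholder (the names, K/9 values, and pitches-per-batter figures are not real data), and the 8.0 K/9 cutoff is an arbitrary choice:

```python
# Sketch: classify (hypothetical) modern starters as strikeout or
# contact types by K/9, then compare average pitches per batter faced
# in each group. All rows are invented placeholders, not real stats.
from statistics import mean

pitchers = [  # (name, k_per_9, pitches_per_batter_faced)
    ("A", 10.1, 3.95), ("B", 9.4, 3.90), ("C", 8.8, 3.82),
    ("D", 5.9, 3.60), ("E", 6.3, 3.65), ("F", 5.1, 3.55),
]

K_CUTOFF = 8.0  # arbitrary threshold for "strikeout pitcher"
so_group = [p_bf for _, k9, p_bf in pitchers if k9 >= K_CUTOFF]
contact_group = [p_bf for _, k9, p_bf in pitchers if k9 < K_CUTOFF]

print(f"strikeout types: {mean(so_group):.2f} pitches/BF")
print(f"contact types:   {mean(contact_group):.2f} pitches/BF")
```

With real season data in place of the placeholders, the injury half of the question would need a second column (days on the disabled list, say) compared across the same two groups.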
          Last edited by rodk; 09-13-2012, 09:14 AM.



          • #6
            Originally posted by rodk View Post
            I think the next step isn't to assume the mostly nonexistent historical record on pitches thrown can be relied upon to make an assessment, but to split the modern record to see whether strikeout pitchers produce more pitches than non-strikeout pitchers, and which kind of pitcher is more inclined to be injured.
            I'd like to respond to a lot of your statements, but I'll try this differently.

            1) The record is not nonexistent. Pontification doesn't change the fact that data exists.
            2) Since pitch counts from the 1950's and 1960's were not taken with the intent of proving a point, they are far, far more likely to be random than non-random. Some pitchers have a full career's worth of pitch counts, and the number of games in total is likely to be in the many hundreds, if not well over a thousand.
            3) Given that number, it's fairly certain that the overall picture will approximate the full population. Of course, it will not be perfect. But the odds that it will be reasonably close are far greater than that it will be vastly inaccurate.
            4) The original thesis you had implied a 50% increase in pitch count per BF today versus the 1950's. The hard data from that period would not contain errors in any manner large enough to disable testing that thesis.
            5) Choosing to ignore hard data that may not be perfect, and which could validate the thesis, and instead to speculate from 40,000 feet, is hardly a comparable choice. The first method would, like so many other scientific approaches, cause people to WANT to fill out more of the data set to further validate or disprove the theory. The second method is armchair theorizing with no end-game, no testable results, no data worthy of concern.

            Please note that you've changed your original thesis:

            A) people today throw as many pitches as they did decades ago

            to

            B) strike out pitchers throw more pitches than non-strike out pitchers



            "I'm inclined to believe that most of winning or losing individual baseball games and championships has to do with synergy and arraying complementary talent (i.e., a correctly designed batting order), who's hurt, who can't manage their bullpen or their rosters, who is too dumb to remember the signs and follow them, who melts down under pressure, and who plays the game like he's more concerned about his hot date that night, just as football winning and losing has mostly to do with coaching because the talent pool in that sport is mostly very equally distributed. There are just as many fantastic and well-paid ballplayers who are simply losers despite amazing metrics that don't account for any of that (see A-Rod and Jose Reyes) as there are hotheads who win repeatedly and can't do math (see Billy Martin)."

            Again: baseball has truckloads of data that will enable you to prove or disprove all the things that you are inclined to believe. FWIW, I'm inclined to believe that these statements are false. But that is not because I thought about it, or discussed it with my friend; it's because I studied a lot of baseball, read works from other people who studied even more baseball, and discussed it with people who were willing to present hard facts to support opinions.

            Science really isn't about what we like to believe: it's about what we can state clearly, prove or test by some objective, repeatable method, and post in a clear manner that others can review.
            "It's better to look good, than be good."



            • #7
              Originally posted by rodk View Post
              There are just as many fantastic and well paid ballplayers who are simply losers despite amazing metrics that don't account for any of that (see A-Rod and Jose Reyes)
              In what way are they losers?



              • #8
                Originally posted by ipitch View Post
                In what way are they losers?
                I assume he's playing the "choke in the clutch" card. That helps the narrative that they're bad ballplayers, even if their hard stats don't support it.
                46 wins to match last year's total



                • #9
                  Originally posted by SamtheBravesFan View Post
                  I assume he's playing the "choke in the clutch" card. That helps the narrative that they're bad ballplayers, even if their hard stats don't support it.
                  No. I have hard stats on this, actually the hardest.

                  World Series appearances: A-Rod: 1 (vs. $300 million earned)
                  J Reyes: 0

                  Compare to: Paul O'Neill: 6 (with 2 teams)
                  David Justice: 6 (with 3 teams)
                  Derek Jeter: 7 (1 with Rodriguez)
                  Jack Morris: 3 (with 3 teams in a 9 year stretch)

                  There are simply guys who get better results than their personal data reveals. No one tries to put a number on the Mona Lisa or Stairway to Heaven; they have qualities that can't be adequately described by quantitative evaluations, and so it is with ballplayers.

                  Please note that you've changed your original thesis:

                  A) people today throw as many pitches as they did decades ago

                  to

                  B) strike out pitchers throw more pitches than non-strike out pitchers
                  Two responses: It is scientifically appropriate to change theories after looking at the data, but in this case that is not what I did. I said we should examine the data for that information but did not posit that to be the case.

                  If it can be shown that pitchers throw more pitches per inning than past pitchers did, there are any number of possible reasons. It could be that batters strike out more for the reasons I suggest. It could be that pitchers have different instructions and instincts than previous ones, and are more inclined to try to strike out hitters and less inclined to challenge them because they have more backup. Maybe there's something to the idea that pitchers won't jam hitters any more.

                  As far as assuming that the limited data about pitch counts from the past is a fair sample, absent some documentation of that, I'm skeptical. You can ask "President" Dewey, who "defeated" sitting President Truman in 1948, about assumptions of fair sampling.

                  Ultimately, the problem is to figure out the big picture of who is getting hurt and why, so we can keep pitchers (like my kid) safe. That means my inclination to believe that various derivative records -- adjusted and recompiled in numerous ways, but not taking into account game situations or team construction -- are all pretty meaningless is itself pretty meaningless.

                  If the Moneyball and sabermetrics skills can be put to a useful purpose like preventing kids from getting hurt, then there will be some meaning.



                  • #10
                    Originally posted by rodk View Post
                    No. I have hard stats on this, actually the hardest.

                    World Series appearances: A-Rod: 1 (vs. $300 million earned)
                    J Reyes: 0

                    Compare to: Paul O'Neill: 6 (with 2 teams)
                    David Justice: 6 (with 3 teams)
                    Derek Jeter: 7 (1 with Rodriguez)
                    Jack Morris: 3 (with 3 teams in a 9 year stretch)
                    Ted Williams - 1 WS appearance in 19 years
                    Ernie Banks - 0 WS appearances in 19 years
                    Bill Skowron - 8 WS appearances in 14 years

                    You really think those first two guys are losers? And would you put Skowron in the HOF? About 95% of the reason a player makes it to the WS is his teammates. Even when a player puts up MVP numbers, his team can still finish in last place. Any baseball player will be a guaranteed loser if his teammates stink. Do you think Jeter or Justice would have any rings if either of them had played for the Royals for their entire careers? No chance.

                    Jeter - batted .111 in the 1998 ALDS (and .200 in the ALCS) - if his teammates hadn't carried him, they would not have made it to the WS
                    - batted .118 in the 2001 ALCS - if his teammates hadn't carried him in the ALCS, they would not have made it to the WS
                    (And, yes, in some years it was the other way around - Jeter did the carrying.)



                    • #11
                      Originally posted by rodk View Post
                      No. I have hard stats on this, actually the hardest.

                      World Series appearances: A-Rod: 1 (vs. $300 million earned)
                      J Reyes: 0

                      Compare to: Paul O'Neill: 6 (with 2 teams)
                      David Justice: 6 (with 3 teams)
                      Derek Jeter: 7 (1 with Rodriguez)
                      Jack Morris: 3 (with 3 teams in a 9 year stretch)

                      There are simply guys who get better results than their personal data reveals. No one tries to put a number on the Mona Lisa or Stairway to Heaven; they have qualities that can't be adequately described by quantitative evaluations, and so it is with ballplayers.
                      This is about the softest, most-flawed "stat" you could have given me to prove this point. ipitch is right; any player will be a loser if his teammates stink.

                      Rafael Belliard made 5 World Series in 8 seasons with the Braves, but I'm sure not calling him better than Jose Reyes because of it. He was not better than his personal data: his bat was historically weak (only Bobby Mathews, who played from 1871-87, had fewer extra-base hits in just 185 more PAs), and he got almost all of his value out of his glove. Both things are well documented in ways besides the numbers.

                      Give me Reyes the loser any day.
                      46 wins to match last year's total



                      • #12
                        Originally posted by rodk View Post
                        No. I have hard stats on this, actually the hardest.

                        World Series appearances: A-Rod: 1 (vs. $300 million earned)
                        J Reyes: 0

                        Compare to: Paul O'Neill: 6 (with 2 teams)
                        David Justice: 6 (with 3 teams)
                        Derek Jeter: 7 (1 with Rodriguez)
                        Jack Morris: 3 (with 3 teams in a 9 year stretch)

                        There are simply guys who get better results than their personal data reveals. No one tries to put a number on the Mona Lisa or Stairway to Heaven; they have qualities that can't be adequately described by quantitative evaluations, and so it is with ballplayers.


                        As far as assuming that the limited data about pitch counts from the past is a fair sample, absent some documentation of that, I'm skeptical. You can ask President Dewey, having defeated sitting Pres. Truman in 1948, about assumptions about fair sampling.
                        So far, there's been reference to the Mona Lisa, the US presidential election in 1948, NFL QB rating systems, a kid, unified field theory, facebook, wall street bankers, accountants and economists, along with some impressions of players who were lucky enough to ride the bench on pennant winners. Clearly, the world is large and fertile enough to support many views.

                        I don't feel that these offer valuable insights to understanding baseball and I'd prefer instead to take a hard objective look at real stats, but that's me.
                        Last edited by drstrangelove; 09-14-2012, 12:16 PM.
                        "It's better to look good, than be good."

