
#28 Prospect Griffin McLarty


Luke-OH


1 hour ago, Scalious said:

Don't know if that is any better for a non-premium college conference.

I mean, feel free to read the explanation for the model, but everything is based on performance relative to league average. So it takes league strength into account. 

For example, the average ERA in the CAA in 2019 was 4.63 vs. McLarty's 1.87, and the average K/BB was 1.8 vs. McLarty's 5.8. The projection model is more sophisticated than that, but you get the drift.
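
As a rough back-of-the-envelope check on just those two numbers (simple ratios only, not the actual projection model):

```python
# Rough league-relative ratios from the 2019 CAA numbers quoted above.
# Simple ratios only; the real projection model does more than this.
league_era, mclarty_era = 4.63, 1.87
league_kbb, mclarty_kbb = 1.8, 5.8

era_ratio = league_era / mclarty_era    # lower ERA is better, so league / player
kbb_ratio = mclarty_kbb / league_kbb    # higher K/BB is better, so player / league

print(f"ERA:  {era_ratio:.2f}x better than the CAA average")   # ~2.48x
print(f"K/BB: {kbb_ratio:.2f}x better than the CAA average")   # ~3.22x
```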

There are obviously flaws to looking only at performance. A model that included velocity wouldn't like McLarty quite as much, would like Rom much less, and would like DL Hall much more. But I don't think it's flawed to think McLarty's college performance is extremely strong.


I'm coming from an angle of math and probability, not talent grading. It's purely a critique of grading statistical performance. Not saying it's garbage, just that it has flaws.

1.) Using logistic regression is going to be less accurate the higher the WAR outcome you project, since the sample size shrinks. It's decent at capturing the probability of making the majors, but rather terrible at capturing a player's upside (see the sketch below).

Griffin is ranked 628th on this list purely on odds of making the majors. That seems more reasonable than his odds of being a 10+ WAR player ranking in the top 100.

2.) Relative to league means nothing if there is no challenge. Age is a proxy metric for a player's maturation, so you can scale the appropriate challenge level to the player when deciding whether the data means something.

He has him in the top 10 in college stats. He doesn't make it clear in the article I read that he is dividing college performance by conference; he mostly just broke down how he does it for the minors.
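
To illustrate the sample-size point with a generic sketch (made-up data, not this author's model): as the career-WAR threshold rises, the number of positive examples a logistic regression has to learn from collapses.

```python
# Generic sketch with invented data: how the positive class shrinks as the
# career-WAR threshold rises. Not the author's model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000                                    # pretend pool of historical draftees
score = rng.normal(0, 1, n)                 # stand-in league-relative performance score
war = np.maximum(0, 2 * score + rng.normal(0, 4, n))   # invented career WAR outcomes

for threshold in [1, 5, 10, 20]:
    y = (war >= threshold).astype(int)      # 1 = reached this career-WAR level
    print(f"WAR >= {threshold:>2}: {y.sum():>4} positives out of {n}")
    if 0 < y.sum() < n:
        model = LogisticRegression().fit(score.reshape(-1, 1), y)
        # Fewer positives means noisier coefficients and shakier tail probabilities.
        print(f"          fitted coefficient: {model.coef_[0, 0]:.2f}")
```

With only a handful of 10+ WAR outcomes in any training set, the top of the probability scale is mostly extrapolation.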

Edited by Scalious

It would be interesting to know what other variables he found to be not predictive, and why. Velocity? Spin rate? # of plus pitches? 

It's just not clear what he considered. The reason I ask is that, read incorrectly, this model gives the impression that player development essentially doesn't matter, and that if you're not ranked highly in the model you don't really have a chance.
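
For what it's worth, the usual way a modeler answers that kind of question (a generic sketch, not necessarily how he did it) is to compare out-of-sample accuracy with and without the candidate variable:

```python
# Generic sketch of testing whether an extra variable (say, velocity) adds predictive
# power. All data and effect sizes here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
performance = rng.normal(0, 1, n)            # league-relative performance score
velocity = rng.normal(92, 3, n)              # fastball velocity in mph (made up)
# Invented outcome: reaching the majors, driven mostly by performance in this toy setup.
majors = (performance + 0.05 * (velocity - 92) + rng.normal(0, 1, n) > 1).astype(int)

candidates = {
    "performance only": np.column_stack([performance]),
    "performance + velocity": np.column_stack([performance, velocity]),
}
for name, X in candidates.items():
    auc = cross_val_score(LogisticRegression(max_iter=1000), X, majors,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name:>24}: cross-validated AUC = {auc:.3f}")
# If the AUC barely moves when velocity is added, the model would call it
# "not predictive" beyond what performance already captures.
```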

I'm sure he doesn't think this way, and I'm certain teams don't either. My take is that I'd rather be on this list than not, for sure, and as a team I might even target guys who fit a model like this. More importantly, teams are trying to find players who project to fit their own models after maturation, and that likely overlaps with this model fairly well.

Also interesting that the O's didn't draft "young" pitching last year, so it's clear their focus is slightly different.


Performance data is the easiest to "mine," so that's what most public-domain models focus on. The data the Orioles used to assess him was likely far more advanced.

I was only asserting that "big fish in a little pond" data is just not that useful in figuring out whether that fish can survive a lake. He was several degrees better than his peers, but his peers are so far removed from the challenges an MLB player faces, and he's not a teenager with tons of growth left to achieve.

Edited by Scalious

3 hours ago, Scalious said:

I'm coming from an angle of math and probability, not talent grading. It's purely a critique of grading statistical performance. Not saying it's garbage, just that it has flaws.

1.) Using logistic regression is going to be less accurate the higher the WAR outcome you project, since the sample size shrinks. It's decent at capturing the probability of making the majors, but rather terrible at capturing a player's upside.

Griffin is ranked 628th on this list purely on odds of making the majors. That seems more reasonable than his odds of being a 10+ WAR player ranking in the top 100.

2.) Relative to league means nothing if there is no challenge. Age is a proxy metric for a player's maturation, so you can scale the appropriate challenge level to the player when deciding whether the data means something.

He has him in the top 10 in college stats. He doesn't make it clear in the article I read that he is dividing college performance by conference; he mostly just broke down how he does it for the minors.

Yeah, you raise good questions. I'm not sure if/how conferences are weighted; I'd guess it's based on previous players from each conference, but as you mentioned, that does bring sample-size issues into it.
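
If it really is based on previous players from each conference, the adjustment might look something like this (completely made-up numbers, just to show the idea):

```python
# Hypothetical conference-strength factor built from historical alumni outcomes.
# Every number below is invented; this is a guess at the approach, not his actual method.
historical = {
    # conference: (drafted pitchers, number who reached the majors)
    "SEC": (400, 120),
    "ACC": (350, 95),
    "CAA": (120, 18),
}

overall_rate = sum(m for _, m in historical.values()) / sum(d for d, _ in historical.values())

for conf, (drafted, majors) in historical.items():
    factor = (majors / drafted) / overall_rate   # > 1.0 = stronger-than-average conference
    print(f"{conf}: strength factor {factor:.2f}")
# League-relative stats could then be discounted or boosted by the factor, which is where
# the sample-size problem you mention comes in for smaller conferences.
```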

I do think you are overstating the lack of competition outside of the power conferences though. 

