
Two Myths I'm Ready to Debunk



One way to test RShack's theory is simply to look back at established major league players and see what their minor league stats were like. If the "stats guy" theory holds up, one would expect them to have good minor league numbers. If RShack's theory holds up, one might expect them to have a variety of performance records in the minor leagues.

While the following research is by no means as comprehensive and detailed as what Drungo and others would do, it's at least a quick snapshot on the question.

I did this in about 20 minutes using Baseball-Reference's minor league stats. I simply took three established players each from the Orioles, Yankees and Red Sox and looked back at their minor league performance. The hypothesis is that minor league stats are a "reliable" predictor of major league performance. Without getting into the definition of reliable, I'd say the following quick example supports that contention.

Before people jump on me, I'll mention some glaringly obvious problems with the examples below. 1) It is, of course, an extremely small sample size; and 2) I didn't bother to take out minor league stats that were accumulated after someone made the majors, as part of a rehab assignment or for some other reason. (In most cases these were extremely small anyway.) And finally, I just grabbed batting average and OBP, under the theory that since power usually develops later, it isn't necessarily as important as other stats for minor league purposes. I also avoided it because I didn't want to get into park/league factors and stuff like that. Leaving it out may have been a mistake... However, most of the following had an 800+ OPS in the minors, if I remember correctly.

At any rate - here they are:

Player           BA     OBP

Miguel Tejada   .272   .347
Brian Roberts   .281   .376
Nick Markakis   .301   .381
David Ortiz     .310   .381
Manny Ramirez   .307   .404
Mike Lowell     .295   .359
Derek Jeter     .308   .380
A. Rodriguez    .327   .386
Jorge Posada    .261   .364

The above, of course, is not meant to be definitive - but merely to add to the discussion. I'll play around some more with the minor league stats on baseball reference as I have time.

This post illustrates the exact problem Shack has identified. Most of those numbers are good, but there are plenty of players with numbers just as good who never make it. A useful study would have to include a bunch of crappy players also.


This post illustrates the exact problem Shack has identified. Most of those numbers are good, but there are plenty of players with numbers just as good who never make it. A useful study would have to include a bunch of crappy players also.
The problem is, you have to use a lot more than those numbers and, on top of that, there may be players who could have made it but were never given the chance.

What if Cust had gone 0 for his first 15 ABs in Oakland this year? They may have released him and never had the chance to enjoy his 900+ OPS. You just never know.


This post illustrates the exact problem Shack has identified. Most of those numbers are good, but there are plenty of players with numbers just as good who never make it. A useful study would have to include a bunch of crappy players also.

Exactly.

At the same time, I do appreciate the effort BRobinsonfan made to look at the MiL numbers of a few selected ML guys. That's a lot more work than most of us have done ;-)

The problem is, you have to use a lot more than those numbers and, on top of that, there may be players who could have made it but were never given the chance.

What if Cust had gone 0 for his first 15 ABs in Oakland this year? They may have released him and never had the chance to enjoy his 900+ OPS. You just never know.

This is true, but you always have whatever data you have. The whole premise of stats guys is that you make the best possible use of the stats that are available. The issue is what "best possible use" requires if you're going to do this responsibly and in keeping with basic principles of stats-in-general. It doesn't work to say that "we can't use stats because we don't have stats about things that never happened". There is *tons* of data somewhere about what actually did happen with MiL guys, including both those who made it and those who didn't. That's what we have to go with, unless you wish to abandon the use of stats altogether (which nobody is advocating).

It won't work to just look at the MiL numbers of ML guys, although that *is* important. One thing that would be interesting to me is to see how often it happens that ML numbers are better than MiL numbers. Miggy appears to be one very mild example; I'd be interested in "how low we can go" in MiL stats and still find a good ML ballplayer. (So would Luis H. fans ;-) I'd be especially interested in those cases where a truly-very-good ML guy was not projected to be. [Edit: see Matt Holliday's MiL stats and scouting reports. He was seen as a "C" prospect.] For example, in that K-State Masters Thesis (the one that BRobinsonfan found for us, in which the stats guy used 2005 outcomes to go back and find predictive formulas that, when applied to 2002 MiL data, would account for 2005 outcomes), Miguel Cabrera was ranked something like #247 of MiL hitters (and Cust was ranked #1 ;-). I know that the K-State guy just looked at a single year of MiL stats, and I'm not claiming that his point of view is adequate, but that is an example of a player doing much better in the bigs than a forecast predicted. To be informed, we need to know more about both that kind of thing and the reverse of it (good MiL numbers leading to bad ML numbers). You can't get a handle on this unless you look at the MiL numbers of everydamnbody and see, for example, how many MiL guys with "decent" (whatever that means) MiL numbers succeeded vs. failed in the ML.

If you only look at early data for people with one kind of outcome, then you are virtually guaranteed to reach wrong conclusions. Like, for example, when they say that X-percent of heroin users started with pot "therefore pot leads to heroin". It's silly because you could use the exact same logic to say that nearly 100% of heroin users started with milk "therefore milk leads to heroin" (so maybe we should imprison Cal for "pushing" milk ;-) I'm sure we can all agree that the correct approach would be to look at everybody who used pot (or milk) and see how many of them became heroin users. The same basic principle about "sound reasoning" applies here.
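Just to put toy numbers on that point, here's a quick sketch. The counts below are completely made up, purely to illustrate why conditioning in one direction looks nothing like conditioning in the other:

```python
# Hypothetical counts, invented only to illustrate the direction-of-conditioning point.
pot_users = 300_000                  # made-up: people who ever used pot
heroin_users = 3_000                 # made-up: people who ever used heroin
heroin_users_who_used_pot = 2_700    # made-up: heroin users who used pot first

# The scary-sounding stat: "90% of heroin users started with pot"
p_pot_given_heroin = heroin_users_who_used_pot / heroin_users   # 0.90

# The question that actually matters: of everyone who used pot, how many went on to heroin?
p_heroin_given_pot = heroin_users_who_used_pot / pot_users      # 0.009

print(f"P(pot | heroin) = {p_pot_given_heroin:.1%}")   # ~90%
print(f"P(heroin | pot) = {p_heroin_given_pot:.1%}")   # ~1%
```

Same shape of problem with MiL stats: looking only at the MiL numbers of guys who made it tells you P(good MiL numbers | made it), when the question we actually care about is P(made it | good MiL numbers).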


Why?

I realize that this is a popular opinion. I don't see why people have it. I think most of it is due to the fact that people really do want something to guide them, and that stats are all that people have available on the web. So, I think it's mainly a case of "if all you have is a hammer, then you treat the world like it's a nail". This is understandable. However, I don't know of any reason whatsoever to think that it's better than a coin flip for the "dilemma guys". For Babe Ruth, sure... but that's not who we wonder about...

I think that given a large enough sample size, you can do much better than a coin flip at figuring out a .05 range for what a player's OPS will be in the majors. Since even superstars may fluctuate by .05 in their OPS from year to year due to luck and other factors, that range, while seemingly large, is acceptable to me. While the correlation between a specific OPS projection (say .775) and the actual outcome may be .65, I think the probability (which, again, is different from a correlation coefficient) of landing within a .05 OPS range around the projection (say [.750, .800]) is well greater than 50%.
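For what it's worth, whether a .65 correlation buys you a better-than-50% chance of landing inside that window depends on how spread out ML outcomes are, not just on the correlation itself. Here's a minimal sketch of that arithmetic, assuming a bivariate-normal relationship between projection and outcome; the spread values are assumptions picked for illustration, not measured numbers:

```python
import math

def prob_within(r, outcome_sd, half_width=0.025):
    """P(actual OPS lands within +/- half_width of the regression projection),
    assuming projection and outcome are bivariate normal with correlation r
    and the ML outcomes have standard deviation outcome_sd."""
    resid_sd = outcome_sd * math.sqrt(1.0 - r * r)   # spread left over after projecting
    z = half_width / resid_sd
    return math.erf(z / math.sqrt(2.0))              # P(|N(0,1)| < z)

r = 0.65
for outcome_sd in (0.040, 0.060, 0.080):   # assumed spreads of ML OPS, for illustration
    p = prob_within(r, outcome_sd)
    print(f"outcome SD {outcome_sd:.3f}: P(within .750-.800 of a .775 projection) = {p:.0%}")
```

With a tight spread the probability does clear 50%; with a wider spread it doesn't. Which is exactly why the correlation number alone doesn't settle the argument either way.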


Exactly.

At the same time, I do appreciate the effort BRobinsonfan made to look at the MiL numbers of a few selected ML guys. That's a lot more work than most of us have done ;-)

This is true, but you always have whatever data you have. The whole premise of stats guys is that you make the best possible use of the stats that are available. The issue is what "best possible use" requires if you're going to do this responsibly and in keeping with basic principles of stats-in-general. It doesn't work to say that "we can't use stats because we don't have stats about things that never happened". There is *tons* of data somewhere about what actually did happen with MiL guys, including both those who made it and those who didn't. That's what we have to go with, unless you wish to abandon the use of stats altogether (which nobody is advocating).

It won't work to just look at the MiL numbers of ML guys, although that *is* important. One thing that would be interesting to me is to see how often it happens that ML numbers are better than MiL numbers. Miggy appears to be one very mild example; I'd be interested in "how low we can go" in MiL stats and still find a good ML ballplayer. (So would Luis H. fans ;-) I'd be especially interested in those cases where a truly-very-good ML guy was not projected to be. For example, in that K-State Masters Thesis (the one that BRobinsonfan found for us, in which the stats guy used 2005 outcomes to go back and find predictive formulas that, when applied to 2002 MiL data, would account for 2005 outcomes), Miguel Cabrera was ranked something like #247 of MiL hitters (and Cust was ranked #1 ;-). I know that the K-State guy just looked at a single year of MiL stats, and I'm not claiming that his point of view is adequate, but that is an example of a player doing much better in the bigs than a forecast predicted. To be informed, we need to know more about both that kind of thing and the reverse of it (good MiL numbers leading to bad ML numbers). You can't get a handle on this unless you look at the MiL numbers of everydamnbody and see, for example, how many MiL guys with "decent" (whatever that means) MiL numbers succeeded vs. failed in the ML.

If you only look at early data for people with one kind of outcome, then you are virtually guaranteed to reach wrong conclusions. Like, for example, when they say that X-percent of heroin users started with pot "therefore pot leads to heroin". It's silly because you could use the exact same logic to say that nearly 100% of heroin users started with milk "therefore milk leads to heroin" (so maybe we should imprison Cal for "pushing" milk ;-) I'm sure we can all agree that the correct approach would be to look at everybody who used pot (or milk) and see how many of them became heroin users. The same basic principle about "sound reasoning" applies here.

This shows a total misunderstanding of things though.

In 2002, Cust had just completed a season in which he was 23 years old... He had 1,500 or so pro ABs at that point and was in a great hitters' league.

MCab was 19 years old, already at a Frederick-level league (think Rowell next year), and put up a 754 OPS.


This shows a total misunderstanding of things though.

In 2002, Cust had just completed a season in which he was 23 years old... He had 1,500 or so pro ABs at that point and was in a great hitters' league.

MCab was 19 years old, already at a Frederick-level league (think Rowell next year), and put up a 754 OPS.

Yeah, also the Miggy example is one where he was rushed to the majors at a very young age (21). My bet would be that Tejada's AA numbers in 1997, adjusted for age and park, would project fairly close to his career numbers. We also can't forget that, according to Canseco's book (which I know may not be reliable), Tejada may have started using steroids after he reached the majors.


SG, you are missing the point Shack and a couple of others here are trying to make. There is no logical reason that a predictive system cannot have a correlation coefficient higher than .65. It is a very difficult system to model, but .65 is not a high enough number to claim much skill in forecasting. I know it is the best to this point in time, but really your position on this subject is no different than the old-school scouting guard defending itself against the sabermetric group. Because people are asking these questions, better systems will be developed over time, and being on the forefront of this will help an organization. Say Drungo, in his spare time, develops a system that raises that number to .85 and only one team has access to this information. That team will consistently outperform all of the other teams, and over time it will be by a large margin. That team will make the right call on players earlier and more often than the other teams will. This is the reason that it is not meaningless.

I've stayed away from this quagmire until now. Does it matter that the system is .65 accurate or .55 accurate or whatever number someone throws at a dartboard and decides is the standard for bona fide USDA Grade A accuracy?

What is the alternative? How do we decide when someone is ready to make the jump to the big leagues? Intuition? Gut sense? What if that person had eaten a big bowl of chili that day? Maybe then the reason his gut is rumbling isn't baseball-related. It's an absurd way to make a point, I realize.

What it comes down to is baseball is a game of numbers. We keep records of them and celebrate when milestones are met. Numbers have been used historically, right or wrong, to make an assessment of a player. I haven't seen anyone argue that minor league numbers are an ironclad foolproof system. Carry on with a debate seemingly prolonged on semantics.


...[stuff deleted]... I know that the K-State guy just looked at a single year of MiL stats, and I'm not claiming that his point of view is adequate...[more stuff deleted]...

This shows a total misunderstanding of things though.

In 2002, Cust had just completed a season in which he was 23 years old... He had 1,500 or so pro ABs at that point and was in a great hitters' league.

MCab was 19 years old, already at a Frederick-level league (think Rowell next year), and put up a 754 OPS.

I know. As you can see, I specifically said that he only looked at a single year, and that I wasn't claiming his model was adequate. Neither was he. I was just using one of his findings as an example of the kind of stuff we'd want to know, that's all.

At the same time, of the 6 factors his research showed were useful, two of them involved age-and-MiL-level, including one that was explicitly called "Over His Head" which referred to whether a guy was fast-tracking through the MiL's. (The other 4 factors he found were specific formulas based on different aspects of hitting performance.)

I thought the K-State guy's findings were interesting, and a good example of how somebody might approach working on this. But his specific work is not the main point... it was just his Masters Thesis, that's all. He took a conventional approach, which is fine. As I mumbled somewhere in this thread, if I had a contract to work on this, I'd focus on two specific and non-traditional AI approaches, but that's just me. Regardless, I think it would be cool if more graduate students in stats did their research about baseball stats. Graduate students work cheap, and they don't keep their results secret ;-)


Does it matter that the system is .65 accurate or .55 accurate or whatever number someone throws at a dartboard and decides is the standard for bona fide USDA Grade A accuracy?

It definitely matters in the kind of conclusions you decide to reach.

What is the alternative? How do we decide when someone is ready to make the jump to the big leagues? Intuition? Gut sense? What if that person had eaten a big bowl of chili that day? Maybe then the reason his gut is rumbling isn't baseball-related. It's an absurd way to make a point, I realize.

No, you do have a good point, one that gets to the heart of it. Stats is all we have easy access to, so that's what people refer to.

The problem arises when people just trust the data that's available without knowing how to take it. Since one theme around here involves buying-and-selling, one useful analogy is the whole Enron stock mess: people were going on what data was available to them, and many people lost their retirement savings. The main reason was that they were taking numbers and running with them without regard to how useful the numbers were. Now, I know this is different, simply because we don't have the Enron executives in cahoots with Arthur Andersen, trying to deceive us. But when people use stat services that provide forecasts based on secret proprietary algorithms, and when we don't know the reliability of the forecasts, then the same basic kind of doubt should be there. Just because it's a number, that doesn't mean it's trustworthy. That's all.

What it comes down to is baseball is a game of numbers. We keep records of them and celebrate when milestones are met. Numbers have been used historically, right or wrong, to make an assessment of a player. I haven't seen anyone argue that minor league numbers are an ironclad foolproof system. Carry on with a debate seemingly prolonged on semantics.

Baseball is a game in which many numbers are kept. But numbers are not what baseball players do. They play Actual Baseball. If you think the issue here is simply about obscure semantics, then I don't know what to tell you. It's not about semantics. It's about using stats properly so that you can reach well-founded judgments, not poor and ill-founded judgments.

Stats are much more "a game of numbers" than baseball is, simply because stats is nothing but a game of numbers. The point is that we are on much more solid ground using stats to evaluate how trustworthy stats are than we are in using MiL stats to forecast the future ML performance of players. It's fine to use them for the second purpose, but our judgment should be influenced by what is known about the first issue. If we don't know that, then we might as well be using a dartboard. Again, just because certain numbers happen to be available, that doesn't tell us how trustworthy they are. Just ask the Enron stockholders.


I know. As you can see, I specifically said that he only looked at a single year, and that I wasn't claiming his model was adequate. Neither was he. I was just using one of his findings as an example of the kind of stuff we'd want to know, that's all.

At the same time, of the 6 factors his research showed were useful, two of them involved age-and-MiL-level, including one that was explicitly called "Over His Head" which referred to whether a guy was fast-tracking through the MiL's. (The other 4 factors he found were specific formulas based on different aspects of hitting performance.)

I thought the K-State guy's findings were interesting, and a good example of how somebody might approach working on this. But his specific work is not the main point... it was just his Masters Thesis, that's all. He took a conventional approach, which is fine. As I mumbled somewhere in this thread, if I had a contract to work on this, I'd focus on two specific and non-traditional AI approaches, but that's just me. Regardless, I think it would be cool if more graduate students in stats did their research about baseball stats. Graduate students work cheap, and they don't keep their results secret ;-)

You said:

I'm not claiming that his point of view is adequate, but that is an example of a player doing much better in the bigs than a forecast predicted

This is a completely wrong statement...This is you looking at something and saying, see, his numbers there didn't show him to be a good MLer...He outperformed them!

That really doesn't tell the whole story.

Again, having some kind of arbitrary % doesn't mean as much as proper analysis based on the factors I said earlier.

Let's put it another way.

Let's say you have a 22 y/o AA player (call him Player A) who has a career OPS of 650 in the minors. Let's say that someone does the stats and finds that 2% of the time, that player becomes a productive MLer.

Now let's say you have another 22 y/o AA player (Player B) who has a career 900 OPS in the minors. The same study done for Player A shows that Player B becomes a productive MLer 80% of the time.

Now, that means 20% of the time Player B will amount to nothing, and more than likely, over some period of time (whether it be 1 year, 2 years, 5 years, 10 years, whatever), a Player A will come around and be a better MLer than a Player B.

Now, in that particular instance, the stats lied to you and you couldn't rely on them.

However, for every one of those occurrences, there are many, many more where Player B was the much better player.

In other words...go with the odds...You will win more than you lose.
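A quick sketch of that go-with-the-odds arithmetic, using the made-up 2% and 80% rates from the Player A / Player B example above:

```python
import random

random.seed(0)

P_A = 0.02   # hypothetical success rate for the 650-OPS guy (from the example above)
P_B = 0.80   # hypothetical success rate for the 900-OPS guy (from the example above)
TRIALS = 100_000

a_only = b_only = 0
for _ in range(TRIALS):
    a_makes_it = random.random() < P_A
    b_makes_it = random.random() < P_B
    if a_makes_it and not b_makes_it:
        a_only += 1   # the "stats lied to you" case
    elif b_makes_it and not a_makes_it:
        b_only += 1   # the ordinary case

print(f"A pans out while B busts: {a_only / TRIALS:.1%} of pairs")   # roughly 0.4%
print(f"B pans out while A busts: {b_only / TRIALS:.1%} of pairs")   # roughly 78%
```

Betting on B loses that bet once in a while, but it wins it something like 200 times for every time it loses, which is the whole point of going with the odds.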


You said:
I'm not claiming that his point of view is adequate, but that is an example of a player doing much better in the bigs than a forecast predicted

This is a completely wrong statement...This is you looking at something and saying, see, his numbers there didn't show him to be a good MLer...He outperformed them!

The K-State guy developed a methodology that best fit the 2002 class of MiL hitters to their 2005 outcomes. When he applied that methodology to rank the 2002 MiL hitters individually, it ranked MCab as #247. That is what I was referring to. I am not saying his methodology was trustworthy; I simply meant to convey that players who are underrated by predictive systems are a relevant phenomenon.
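For anyone wondering what "used 2005 outcomes to go back and find predictive formulas for 2002 data" looks like mechanically, here's a bare-bones sketch of the general fit-then-rank idea. The features and numbers below are placeholders I invented; they are not the thesis's actual factors or formulas:

```python
import numpy as np

# Placeholder training data: one row per 2002 MiL hitter, columns are whatever
# features the model uses (age, BB rate, K rate here -- invented for illustration).
X_2002 = np.array([
    [23, 0.15, 0.30],
    [19, 0.10, 0.18],
    [21, 0.12, 0.22],
    [24, 0.08, 0.25],
    [20, 0.09, 0.20],
    [22, 0.11, 0.28],
], dtype=float)

# The same hitters' known 2005 outcomes (say, ML OPS) -- also invented.
y_2005 = np.array([0.780, 0.820, 0.700, 0.650, 0.800, 0.690])

# Fit a simple linear formula (with an intercept) to the historical cohort.
A = np.column_stack([np.ones(len(X_2002)), X_2002])
coefs, *_ = np.linalg.lstsq(A, y_2005, rcond=None)

# Score the hitters with the fitted formula and rank them, best projection first.
scores = A @ coefs
ranking = np.argsort(-scores)
print("Projected order (row indices, best first):", ranking)
```

That's the whole trick: pick a past cohort whose outcomes are already known, fit formulas to it, then apply the formulas to whoever you want to rank. How good the ranking is depends entirely on how well those formulas generalize.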

[stuff deleted about using MiL OPS to predict ML success]

In other words...go with the odds...You will win more than you lose.

I have zero idea how reliable MiL OPS is as a predictor. I assume it is not among the best. But we really don't know, do we? If OPS is all the information there is, and if the OPS between 2 guys is markedly different, I agree with you. If other information is available that conflicts with that, and if that other information comes from trustworthy sources, then I don't. For example, if Player-A's OPS is somewhat worse than Player-B's, but you have people you trust telling you that Player-A has a list of important strengths, and that his lower numbers are due to known weaknesses that are likely correctable, then that influences my opinion of Player-A way upwards... and if they tell me that Player-B is at his ceiling and that ML curveballs are gonna eat his lunch, then that influences my opinion of Player-B downward.


I have zero idea how reliable MiL OPS is as a predictor. I assume it is not among the best. But we really don't know, do we? If OPS is all the information there is, and if the OPS between 2 guys is markedly different, I agree with you. If other information is available that conflicts with that, and if that other information comes from trustworthy sources, then I don't. For example, if Player-A's OPS is somewhat worse than Player-B's, but you have people you trust telling you that Player-A has a list of important strengths, and that his lower numbers are due to known weaknesses that are likely correctable, then that influences my opinion of Player-A way upwards... and if they tell me that Player-B is at his ceiling and that ML curveballs are gonna eat his lunch, then that influences my opinion of Player-B downward.

Well, you can't just use OPS... I was just using that as an example when we talk about %.

Again, these are the factors I would use:

Age

BB rate

K rates

HR rates

OPS

league factors

park factors

scouting reports

age comps

This is what I use... I don't need a % to tell me what guys will do... I will just use that criteria.

If that criteria is used by 3 different sources and they all come up with a different %, it doesn't change anything for me.

In other words, I rely on those things more than anything else.


I have zero idea how reliable MiL OPS is as a predictor. I assume it is not among the best. But we really don't know, do we? If OPS is all the information there is, and if the OPS between 2 guys is markedly different, I agree with you. If other information is available that conflicts with that, and if that other information comes from trustworthy sources, then I don't. For example, if Player-A's OPS is somewhat worse than Player-B's, but you have people you trust telling you that Player-A has a list of important strengths, and that his lower numbers are due to known weaknesses that are likely correctable, then that influences my opinion of Player-A way upwards... and if they tell me that Player-B is at his ceiling and that ML curveballs are gonna eat his lunch, then that influences my opinion of Player-B downward.

Well, you can't just use OPS... I was just using that as an example when we talk about %.

Apart from the issue of whether you personally rely on OPS, I think it's safe to say that OPS is indeed the usual figure used when people are arguing about whether Player-X deserves to be taken seriously or not. It's generally treated as the end-all number for meaningful differences. It is the main point used in arguing for J.R. House and arguing against Luis Hernandez. For position players, I think it's fair and accurate to say that OPS is indeed what most OH arguments about hitters center on. I think OPS is very much used as the "Official OH Predictive Stat for hitters".

Again, these are the factors I would use:

Age

BB rate

K rates

HR rates

OPS

league factors

park factors

scouting reports

age comps

This is what I use... I don't need a % to tell me what guys will do... I will just use that criteria... If that criteria is used by 3 different sources and they all come up with a different %, it doesn't change anything for me... In other words, I rely on those things more than anything else.

Yes, you have your own rules-of-thumb about what you trust to help you form opinions. Of course you do. I can't imagine why you wouldn't.

However, opinions based on multiple factors like these are not what we usually see here. You said you were "just using that (OPS) as an example". It's not just that, really. It's what you and everybody else always use as the primary basis in arguments. I know that in this thread alone, you've responded several times arguing for basic common-sense judgments, and in almost every one you've cited OPS. So, while your list of relevant factors may be long-ish, that doesn't have much to do with most of the arguments that happen around here. Most of the arguments that happen here use OPS as the proper determinant of a guy's fate. As I said, OPS is the "Official OH Predictive Stat" for hitters. Perhaps we can save syllables and just say OOPS instead ;-)

But getting back to your list of relevant factors, I don't disagree with any of them. Of course that stuff should be factored in. And I agree that the FO doesn't seem to use it as much as it should. But at the same time we must notice that, of the various factors on your list, all are based on numbers, except for scouting reports. This is necessary, simply because everybody has instant access to all the numbers, and there's certainly lots of them... but we don't have much access to the kind of info that you get just a bit of in publicly available scouting reports. A good organization will have player-development personnel who know way more than whatever we're gonna see in scouting reports. (For the moment, let's please not get distracted by critiques of the Warehouse in this regard.) For example, Leo evidently sees progress in DCab even though many people are ready to shoot him. I don't think we have any clue about that. If Leo would come to eat dinner with us, I'm sure he could explain it all in about 15 minutes. But we don't know that stuff. And in a good organization, there's gonna be player-development issues like that for tons of players. That's the kind of stuff that we're just not gonna know unless we have spies in the organization. (This includes more than just scouting reports. Let's call it "Staff Info", just so we can call it something.)

So, just as a practical matter, we're not gonna have access to all of that, simply because we're not inside the club. This basic fact means that our access to information works out to a football score of "Numbers 97, Staff Info 3". If somebody's running the baseball club, a score this imbalanced is bad. I want the information score for AM to be "Numbers: 50, Staff Info: 50, with the game in perpetual overtime."

The point is that because we're on the outside, numbers is pretty much all we've got. We simply don't know the other half of the 2-kinds of info. We just know the info we have from a distance. This means that we are shooting in the semi-dark. I think we should not only admit that we're shooting in the semi-dark, we should also act like we realize it. But what happens is that many people talk as if numbers are the One True Light, and that people who go by nothing but numbers are somehow smarter because they don't have their brains poisoned by silly things like that terribly unreliable Staff Info. I think this is a naive and foolhardy stance. I don't think we're better off because we lack half the information. When it comes to making player-development decisions, I think we're a lot worse off because of that, and in a bigtime way. I understand that numbers is all we've got, so that's what we're gonna use. I just wish we appreciated the fact that this keeps us in semi-darkness. Semi-darkness is way better than complete darkness. Just because it's semi-dark, that doesn't mean we should quit trying to see things. I think it's great to look at things using whatever kind of illumination we've got. For us, that pretty much means numbers. But we shouldn't run around as if the semi-darkness is really the bright blinding sunlight, because it's not.

I think numbers can tell us a lot. I think they can tell us things that we would not notice at all without them. Which is good, because numbers are the greatest supply of data we have to go on about MiL guys. So, we should use them. But we should not use them in naive one-dimensional ways. And that's what usually happens: everybody talks *as if* a MiL player's fate is written primarily in whatever OPS number pops off the screen. That's wrong. The sad thing is that we don't even know how wrong it is. A little wrong? A lot wrong? Are there special guys for whom it is way, way wrong? If you're playing at Vegas, low-reliability info is still good enough to help you get rich. But it's a bad thing to trust when you're trying to decide the fate of a dilemma guy from afar.

I don't expect perfection from stats. But I do think that we should use them wisely. I wish the serious stat guys would help us do better at that. I wish they would help us know how much to trust numbers and where not to trust them. Instead, all we hear is "sample size", as if that's somehow the answer that matters. In fact, sample size is only relevant to the extent that it affects the reliability of the numbers, and that is exactly the context it is always used in around here. But sample size doesn't tell us much about reliability at all. All the people who care about "sample size" care about it only because of its impact on reliability; there is no other reason to care about it. Well, if you know enough to care about sample size, then you also know enough to care about things that are actual measures of reliability. You should stop treating sample size as if it is a measure of reliability. It's a necessary condition of reliability, but it is not sufficient, and it is certainly not a measure or indicator of it. Measures of reliability are what you should be insisting on, way more than sample size. Just like OPS is the "Official OH Predictive Stat", "sample size" is used as if it is the "Official Substitute for Having Info about Trustworthiness" (or OSH!T for short ;-)

Which leaves some of us, including some who actually believe in stats, having frequent reactions to seeing repetitive arguments that usually boil down to little more than OOPS and OSH!T.

(Oh, come on, fess up... it made you smile, didn't it? ;-)
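One way to make the sample-size point above concrete: a bigger sample narrows your uncertainty about a reliability number like a correlation, but it doesn't make the correlation itself any better. A rough sketch using the standard Fisher-z interval; the r = .65 and the sample sizes are just illustrative:

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a correlation coefficient,
    using the Fisher z-transform."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

r = 0.65
for n in (50, 500, 5000):
    lo, hi = fisher_ci(r, n)
    print(f"n = {n:>4}: r = {r}, 95% CI roughly ({lo:.2f}, {hi:.2f})")
```

The interval shrinks as n grows, but the .65 stays .65. Sample size tells you how sure you can be about a reliability number; it is not itself a measure of reliability, which is exactly the distinction above.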

