I have to dust off the old statistics class, so a lot of this may be wrong. But, in the Fangraphs glossary article they say that a stat "stabilizes" when the correlation (R, not R^2, if I'm reading it right) crosses .49 to .50 at a certain sample size. For strikeout rate that's 60 PA. In this case, "stabilize" means that at 60 PA roughly half of the spread you see in strikeout rates is real skill and half is still noise. The practical reading is not that the rate can only move 50% up or down from there; it's that your best guess going forward weights the observed rate and the league average about equally. For example, if after 60 PA you had a strikeout rate of 10% and the league average is around 22%, you'd project something closer to 16% going forward, not 10%.
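If it helps, that "weight the sample 50/50 against the league average" reading can be sketched in a few lines of Python. The padding formula and the ~22% league average are my assumptions for illustration, not something from the Fangraphs article:

```python
# Minimal sketch of the regression-to-the-mean reading of a
# "stabilization point": at the stabilization sample size, the
# observed rate and the league average get equal weight.

def project_rate(observed_rate, n, league_rate, stab_n):
    """Shrink an observed rate toward the league average.

    At n == stab_n the observed rate gets exactly 50% weight,
    which is what "stabilized" means here -- not that the rate
    can only move within some band going forward.
    """
    w = n / (n + stab_n)
    return w * observed_rate + (1 - w) * league_rate

# A 10% strikeout rate after 60 PA, with an assumed ~22% league
# average and a 60 PA stabilization point:
print(round(project_rate(0.10, 60, 0.22, 60), 3))  # -> 0.16
```

With a bigger sample the projection leans more on the player: at 600 PA the same formula weights the observed 10% about 90/10 over the league average.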
However, that is only if the next sample (another 60 PA) behaves like the previous one, which of course it doesn't exactly in baseball. Russell Carleton, the guy who popularized this stuff in baseball, has written on numerous occasions that his "stabilization point" was somewhat arbitrary and is used incorrectly nowadays to explain early-season performances. He says it's really more like 150 PA for strikeout rate over a season to get to the "stabilization point." You should read the linked article and anything else he has written on the subject to understand it better. Here is a quote from the article that I linked.
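To see why any single cutoff is somewhat arbitrary, here's a toy simulation (my own illustration, not Carleton's actual method) of the split-half idea behind these studies: give a bunch of hitters two samples of n PA each and correlate the two observed strikeout rates. The correlation creeps up smoothly with sample size, so "the point where it hits 0.5" is just a line drawn on a gradual curve:

```python
import random

def pearson(xs, ys):
    # Plain Pearson correlation, written out to stay dependency-free.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def split_half_r(n_pa, n_players=2000, rng=random.Random(42)):
    # Simulate hitters whose true K rates center around ~22%,
    # observe each one twice for n_pa plate appearances, and
    # correlate the two halves.
    halves = ([], [])
    for _ in range(n_players):
        true_k = rng.betavariate(20, 70)  # true-talent K rate
        for half in halves:
            ks = sum(rng.random() < true_k for _ in range(n_pa))
            half.append(ks / n_pa)
    return pearson(*halves)

# The correlation rises gradually with sample size; small
# samples are mostly noise, big ones mostly skill.
for n in (15, 60, 240):
    print(n, round(split_half_r(n), 2))
```

The numbers here depend entirely on how much true talent I assumed varies between hitters, which is exactly why different studies and different seasons can land on different stabilization points.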