
View Full Version : Rating computation suggestion




Rhubarb
15-09-2004, 12:11 AM
You cannot seriously consider that heaps of whimsical ideas should be applied to the ratings all the time


No, not whimsical ideas, interesting ideas.


... ratings would end up with no credibility and criticism would be wide-spread.


Sounds like what we have now.

The ACF ratings have more credibility than ever, in my view. The ratings officers have listened to the objections of many and have made changes after thorough testing.

Matt, do you really think that ACF ratings officers should be on call to test your every half-arsed idea about the rating system just because you pose some questions on this BB? Methinks you're taking advantage of Bill's goodwill.

Cat
15-09-2004, 08:50 AM
But they're generally not the things you call for. Take the Gold Coast. There was an issue there, but David completely misidentified what it was, and hence the solutions he proposed were wrong.

The above quote reminds me of the way cryptozoologists try to rub it in to science every time a new species is discovered, even though the new species that scientists find are virtually never the ones the cryptos said were out there.

The mathematics of the Glicko system is beautiful, indeed a thing to be admired. But like all mathematics, if the underlying premise is flawed, all the beauty in the world will not produce the right answer.

Glickman's intuition led him to the assumption that RD is normally distributed. However, this is not necessarily the case, particularly when one applies it to juniors. Common sense tells us that juniors begin with low ratings and improve over time. Field studies tell us the same thing.

A normal distribution implies that a junior with a rating of 1000 is equally likely to perform at 950 and 1050. Experience and intuition tell us that's not so: in general, juniors will improve, not necessarily individually over a prescribed period, but as a population this is so. This is also supported statistically in the following way:

Most juniors improve over time (not all). If a junior reaches a rating of (say) 1000 from an original rating of (say) 600, and for the sake of argument the improvement is linear, then to achieve that rating he must have been performing during the preceding periods at a level greater than 1000, because while he continues to improve, his rating can never reach his performance rating. This is because the historical data from the time of his initial rating is still making a contribution to his total rating. Therefore the probability that the junior will achieve a performance rating of (say) 1050 is statistically greater than that of his achieving a performance of (say) 950, because his performance rating over the preceding periods would have been closer to 1050 than 950.
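
The lag being described can be seen even in a toy model. The sketch below is not the ACF's actual Glicko implementation; the update rule and the K-style factor of 0.4 are invented purely to illustrate how retained history keeps a linearly improving player's rating below each period's performance.

```python
# Toy model of the "rating trails performance" claim. This is NOT the
# Glicko formula: each period the rating simply moves a fixed fraction
# of the way toward that period's performance, so older results keep
# some weight.

def updated_rating(rating, performance, k=0.4):
    """Move the rating fraction k of the way toward the new performance."""
    return rating + k * (performance - rating)

rating = 600.0
for period in range(1, 11):
    performance = 600 + 40 * period      # linear improvement: 640, 680, ... 1000
    rating = updated_rating(rating, performance)
    assert rating < performance          # the rating never catches a still-improving player

print(round(rating))                     # finishes about 60 points short of the 1000 performance
```

Under this toy rule, a player improving s points per period settles about s(1-k)/k points behind, roughly 60 points here, which is the "redundant historical data" effect being argued.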

In essence, the historical data from when his performance was 600 should be redundant, but the system retains it. In other words, can anything really be predicted from a child's performance at age 6, playing at a rating of 600, when that same child reaches 10 and has been performing consistently at (say) 1025? 'Intuitively' we say no, because experience tells us, field studies tell us, and of course, statistically his rating can never reach his performance rating during a period of linear improvement.

One might then ask: why not use a rolling rating system, like the one tennis players use? This would indeed make a lot of sense. If 25, 30 or 35 games provide enough information to register a very reliable rating, then surely it would make more sense to derive a junior's rating from his last 25 to 30 games. What is the point of including the redundant historical data?
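
A rolling scheme of the kind suggested is easy to sketch. The formula below is the common linear performance approximation (average opponent rating plus 400 times (wins minus losses) over games), not any federation's official method, and the game data is invented:

```python
from collections import deque

def rolling_performance(results, window=30):
    """Performance rating over the last `window` games only, using the
    common linear approximation: avg opponent rating + 400*(W - L)/N.
    `results` is a list of (opponent_rating, score) with score 1/0.5/0."""
    recent = deque(results, maxlen=window)   # older games simply fall out
    n = len(recent)
    avg_opp = sum(opp for opp, _ in recent) / n
    score = sum(s for _, s in recent)
    return avg_opp + 400 * (2 * score - n) / n

# An improving junior: 15 early losses, then 25 wins, all vs 1000-rated players.
games = [(1000, 0)] * 15 + [(1000, 1)] * 25
print(round(rolling_performance(games, window=30)))   # last 30 games only
print(round(rolling_performance(games, window=40)))   # whole history
```

The gap between the two windows (1267 versus 1100 here) is exactly the "redundant historical data" at issue.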

But there is one other issue of concern here: the perception that the interaction between junior and rating is one-way. In fact, not only will a junior's performance influence his rating, but the rating will also affect his performance. This is because the child's brain is 'plastic', not solid-state (it's also true of adults, but to a lesser extent). In other words, the child's own expectations, but also social expectations, will influence performance. By chronically under-rating our juniors we are sentencing them to a lifetime of under-achievement.

Now before anyone says, 'ah, but this is chess, this is different', chess does not exist in a vacuum. The scientific principles that underpin brain development and performance are universal; they apply, and have been observed, in all disciplines studied. There is nothing unique about chess; there is no 'chess centre' in the brain dedicated strictly to chess. The brain processes information from the chess board in exactly the same way as it processes information from other sources.

Like Jesus Christ at Calvary, I guess I'll get a stoning for this. But isn't truth more important than one man's enormous ego?

Rincewind
15-09-2004, 09:32 AM
Glickman's intuition led him to the assumption that RD is normally distributed. However, this is not necessarily the case, particularly when one applies it to juniors. Common sense tells us that juniors begin with low ratings and improve over time. Field studies tell us the same thing.

Actually, I believe he assumes a logistic distribution, but this doesn't affect the main thrust of your argument, as that is also a symmetric distribution. Also, I missed the references for these studies.


A normal distribution implies that a junior with a rating of 1000 is equally likely to perform at 950 and 1050. Experience and intuition tell us that's not so: in general, juniors will improve, not necessarily individually over a prescribed period, but as a population this is so. This is also supported statistically in the following way:

I disagree: anyone rated 1000 is equally likely to perform above or below that rating. Again, I missed the reference to your empirical support for this claim.


Most juniors improve over time (not all). If a junior reaches a rating of (say) 1000 from an original rating of (say) 600, and for the sake of argument the improvement is linear, then to achieve that rating he must have been performing during the preceding periods at a level greater than 1000, because while he continues to improve, his rating can never reach his performance rating. This is because the historical data from the time of his initial rating is still making a contribution to his total rating. Therefore the probability that the junior will achieve a performance rating of (say) 1050 is statistically greater than that of his achieving a performance of (say) 950, because his performance rating over the preceding periods would have been closer to 1050 than 950.

The main flaw in your argument is that you assume linear improvement over time. This is a big ask. I think a more reasonable model is that players tend to have random and unpredictable jumps in playing strength (sometimes backwards) interspersed with plateaus where playing strength remains static. Under this model your argument falls down, as there is no evidence that a junior is going to continue to improve in the next or any future rating period.
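
The jump-and-plateau alternative is straightforward to simulate. All the numbers below (jump probability, jump size) are invented for illustration; the point is only that such a trajectory is neither linear nor steady-state:

```python
import random

random.seed(3)                             # fixed seed so the sketch is repeatable
strength = 600.0
trajectory = [strength]
for period in range(32):                   # ~11 years at 3 rating periods/year
    if random.random() < 0.3:              # assumed chance of a jump this period
        strength += random.gauss(60, 40)   # usually forwards, sometimes backwards
    trajectory.append(strength)            # otherwise a plateau: no change

plateau_periods = sum(1 for a, b in zip(trajectory, trajectory[1:]) if a == b)
print(len(trajectory), plateau_periods)
```

Under this model a past jump says nothing about the next period, which is the objection being made to the linearity assumption.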

Therefore the pragmatic approach is also the best: when evidence of improvement is displayed, the rating reflects it. By way of contrast, your argument seems to be:

(P1) Assume juniors improve.
(C1) Therefore juniors improve.

It is logically valid, but only by assuming the conclusion as a given.


In essence, the historical data from when his performance was 600 should be redundant, but the system retains it. In other words, can anything really be predicted from a child's performance at age 6, playing at a rating of 600, when that same child reaches 10 and has been performing consistently at (say) 1025? 'Intuitively' we say no, because experience tells us, field studies tell us, and of course, statistically his rating can never reach his performance rating during a period of linear improvement.

This is incorrect. He can reach his performance rating provided he plays enough games. In fact, it is even possible to overshoot a performance rating, provided one plays enough games.
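
The overshoot point can be demonstrated with a plain Elo-style update (the K factor of 32 is an assumption, and this is not the ACF's Glicko arithmetic). A player who keeps beating 1000-rated opposition gains something every game, so with enough games the rating passes the roughly 1000 + 400 that the linear approximation assigns such a streak:

```python
def elo_expected(r_player, r_opp):
    """Standard Elo expected score for r_player against r_opp."""
    return 1 / (1 + 10 ** ((r_opp - r_player) / 400))

K = 32                       # assumed per-game K factor
rating = 1000.0
games = 0
while rating <= 1400:        # 1400 ~ linear performance approx. for an all-win streak
    rating += K * (1 - elo_expected(rating, 1000))   # win every game
    games += 1

print(games)                 # length of streak needed to overshoot
```

The per-game gain shrinks as the rating rises, but it never reaches zero, so enough games always carry the rating past any fixed performance figure.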


One might then ask: why not use a rolling rating system, like the one tennis players use? This would indeed make a lot of sense. If 25, 30 or 35 games provide enough information to register a very reliable rating, then surely it would make more sense to derive a junior's rating from his last 25 to 30 games. What is the point of including the redundant historical data?

As mentioned above, your conceptualisation is flawed. If you can overshoot your performance, then obviously historical data is not retained in the system ad infinitum.


But there is one other issue of concern here: the perception that the interaction between junior and rating is one-way. In fact, not only will a junior's performance influence his rating, but the rating will also affect his performance. This is because the child's brain is 'plastic', not solid-state (it's also true of adults, but to a lesser extent). In other words, the child's own expectations, but also social expectations, will influence performance. By chronically under-rating our juniors we are sentencing them to a lifetime of under-achievement.

First, you are assuming they are chronically underrated; second, you are saying all juniors are weak-willed. I think both are wrong. There are counter-examples: juniors who have performed well despite temporarily modest ratings, the Songs and Ronald Yu being two recent cases in NSW chess. There are also cases of juniors who ended up overrated, didn't perform to expectations and became discouraged. Not so many cases these days, but more so in the 80s - I won't name names.

However, I will argue that overrating is at least as dangerous as underrating. The aim should be to "correctly" rate all players, regardless of age.


Now before anyone says, 'ah, but this is chess, this is different', chess does not exist in a vacuum. The scientific principles that underpin brain development and performance are universal; they apply, and have been observed, in all disciplines studied. There is nothing unique about chess; there is no 'chess centre' in the brain dedicated strictly to chess. The brain processes information from the chess board in exactly the same way as it processes information from other sources.

Like Jesus Christ at Calvary, I guess I'll get a stoning for this. But isn't truth more important than one man's enormous ego?

This must be some new definition of the word "truth" of which I've been blissfully unaware. I would say truth is underpinned by facts. Without getting into an epistemological argument, just let me say: come back when you have some. :hand:

pax
15-09-2004, 09:34 AM
The mathematics of the Glicko system is beautiful , indeed a thing to be admired.

Actually, it's not that beautiful. The basic concepts are nice, but the implementation requires approximation after approximation. It's not elegant, but it works, and that is, after all, the important thing.

Rincewind
15-09-2004, 09:42 AM
Actually, it's not that beautiful. The basic concepts are nice, but the implementation requires approximation after approximation. It's not elegant, but it works, and that is, after all, the important thing.

That is a fair point. But remember the model is based on assumptions too, so provided the approximations are as good as the assumptions, you at least have something useful. I can't see much advantage in finding an analytical solution to the model in this particular case.

pax
15-09-2004, 09:53 AM
That is a fair point. But remember the model is based on assumptions too, so provided the approximations are as good as the assumptions, you at least have something useful. I can't see much advantage in finding an analytical solution to the model in this particular case.

Yes, quite. My comment was only about the beauty of the mathematics, not about the merits of the system. The two are quite different things.

Bill Gletsos
15-09-2004, 10:43 AM
Actually, I believe he assumes a logistic distribution, but this doesn't affect the main thrust of your argument, as that is also a symmetric distribution. Also, I missed the references for these studies.
Yes, typical DR rubbish. Make a claim based on no evidence and try to make it look like a fact.


I disagree: anyone rated 1000 is equally likely to perform above or below that rating. Again, I missed the reference to your empirical support for this claim.
Same as above, another DR claim based on no evidence to support it.
He has been doing this for well over 18 months.
He really is becoming tiresome.


The main flaw in your argument is that you assume linear improvement over time. This is a big ask. I think a more reasonable model is that players tend to have random and unpredictable jumps in playing strength (sometimes backwards) interspersed with plateaus where playing strength remains static. Under this model your argument falls down, as there is no evidence that a junior is going to continue to improve in the next or any future rating period.
DR continues to make unsupported statements based just on what he believes, not actual facts.


Therefore the pragmatic approach is also the best: when evidence of improvement is displayed, the rating reflects it. By way of contrast, your argument seems to be:

(P1) Assume juniors improve.
(C1) Therefore juniors improve.

It is logically valid, but only by assuming the conclusion as a given.
Yes, DR's logic is always flawed because he bases it on an unfounded assumption.


This is incorrect. He can reach his performance rating provided he plays enough games. In fact, it is even possible to overshoot a performance rating, provided one plays enough games.
This just goes to show how little DR pays attention to previous posts.
The point you make has been discussed a number of times.
In fact, it was discussed on this BB as far back as the ratings debate in the ACF bulletins last March, with regard to Wettstein's rating, and more recently in my discussions with pax.


As mentioned above, your conceptualisation is flawed. If you can overshoot your performance, then obviously historical data is not retained in the system ad infinitum.
Don't expect DR to admit any of his ideas are flawed.


First, you are assuming they are chronically underrated; second, you are saying all juniors are weak-willed. I think both are wrong. There are counter-examples: juniors who have performed well despite temporarily modest ratings, the Songs and Ronald Yu being two recent cases in NSW chess. There are also cases of juniors who ended up overrated, didn't perform to expectations and became discouraged. Not so many cases these days, but more so in the 80s - I won't name names.

However, I will argue that overrating is at least as dangerous as underrating. The aim should be to "correctly" rate all players, regardless of age.
As usual DR just makes statements without any evidence to support his claims.


This must be some new definition of the word "truth" of which I've been blissfully unaware. I would say truth is underpinned by facts. Without getting into an epistemological argument, just let me say: come back when you have some. :hand:
The bottom line is that Glickman is not only a Professor of Statistics but has also been involved with ratings and rating systems for well over 10 years.
DR is an MD and demonstrates no understanding of rating systems.

Therefore when it comes to ratings, the chance that I'm going to take DR's unfounded and unsupported views over those of Professor Glickman is about the same as my chance of becoming Australia's next GM. :owned:

Cat
15-09-2004, 10:44 AM

The main flaw in your argument is that you assume linear improvement over time. This is a big ask. I think a more reasonable model is that players tend to have random and unpredictable jumps in playing strength (sometimes backwards) interspersed with plateaus where playing strength remains static. Under this model your argument falls down, as there is no evidence that a junior is going to continue to improve in the next or any future rating period.

Obviously, in the real world, both situations exist. The changes in junior ratings are obviously not going to increase perfectly linearly, but neither are they steady-state. The evidence is readily available: look at the rating changes we observe. Simply take the average rating of a 6-year-old and the average rating of a 17-year-old, and observe every year in between. Notice the pattern?


Therefore the pragmatic approach is also the best: when evidence of improvement is displayed, the rating reflects it. By way of contrast, your argument seems to be:

(P1) Assume juniors improve.
(C1) Therefore juniors improve.

It is logically valid, but only by assuming the conclusion as a given.

Any attempt to apply modelling requires some field research. Check it out.


This is incorrect. He can reach his performance rating provided he plays enough games. In fact, it is even possible to overshoot a performance rating, provided one plays enough games.

Not if his performance continues to improve. The old data will still affect his rating.


As mentioned above, your conceptualisation is flawed. If you can overshoot your performance, then obviously historical data is not retained in the system ad infinitum.

This is true, but in reality, in the Australian environment, this happens infrequently, whereas under-rating is pervasive. It's a little like saying don't use fluoride because it causes fluorosis. In fact, for every case of fluorosis, thousands of cases of caries are prevented.


First, you are assuming they are chronically underrated; second, you are saying all juniors are weak-willed.

It's nothing to do with being weak-willed. It's like saying depression or schizophrenia is due to weak will. It's simply hard-wiring.



The aim should be to "correctly" rate all players, regardless of age.

That's fair enough.

Cat
15-09-2004, 11:22 AM
Yes, typical DR rubbish. Make a claim based on no evidence and try to make it look like a fact.


Same as above, another DR claim based on no evidence to support it.
He has been doing this for well over 18 months.
He really is becoming tiresome.

OK, very simple test. If there is no trend, and if RD is normally distributed, then performance ratings should be normally distributed around the rating mean. Why not publish performance ratings for each period? Of course, at the moment the information will be distorted by the recent corrections. But over time, if what you say is correct, we should see a bell curve emerge.
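
For what it's worth, the proposed bell-curve test is cheap to run once per-period performance figures exist. The sketch below uses simulated deviations (real ACF data is not available here): for an unbiased pool, (performance - rating) should centre on zero with negligible skewness.

```python
import random
import statistics

random.seed(1)
# Stand-in for real data: per-player (performance - rating) for one period.
deviations = [random.gauss(0, 50) for _ in range(2000)]

mean_dev = statistics.mean(deviations)
sd = statistics.pstdev(deviations)
skew = sum((d - mean_dev) ** 3 for d in deviations) / (len(deviations) * sd ** 3)

# For an unbiased, symmetric pool both diagnostics sit near zero;
# systematic under-rating would push the mean (and the skew) positive.
assert abs(mean_dev) < 5
assert abs(skew) < 0.2
```

Run on real junior data, a mean deviation well above zero would support the under-rating claim; a mean near zero would support the officers' position.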

Bill Gletsos
15-09-2004, 11:32 AM
OK, very simple test. If there is no trend, and if RD is normally distributed, then performance ratings should be normally distributed around the rating mean. Why not publish performance ratings for each period? Of course, at the moment the information will be distorted by the recent corrections. But over time, if what you say is correct, we should see a bell curve emerge.
Firstly, given your behaviour towards me over the past 18 months, I have no intention of doing anything you ask.
Secondly, I'll take Glickman's opinion over yours any day.

Garvinator
15-09-2004, 12:16 PM
Don't worry ppl, if I can get to the tournament on Saturday, I will hopefully be putting a stop to all this rating crap talk.

Cat
15-09-2004, 01:15 PM
Firstly, given your behaviour towards me over the past 18 months, I have no intention of doing anything you ask.
Secondly, I'll take Glickman's opinion over yours any day.

Oh you're too sensitive

Bill Gletsos
15-09-2004, 01:26 PM
Oh you're too sensitive
No, just completely fed up with you and your foolish mate Matt.

Also it appears you cannot even answer Kevin's rating question.

Rincewind
15-09-2004, 01:29 PM
Obviously, in the real world, both situations exist. The changes in junior ratings are obviously not going to increase perfectly linearly, but neither are they steady-state. The evidence is readily available: look at the rating changes we observe. Simply take the average rating of a 6-year-old and the average rating of a 17-year-old, and observe every year in between. Notice the pattern?

No, I have not noticed a pattern, because I have not studied the data. Remember, I was not proposing a model, just offering an alternative based on the same evidence as yours. Actually, I believe the model with non-monotonic, stochastically distributed jump discontinuities is closer to reality, simply because it is more general and can describe almost any function, depending on the distribution of singularities.

I say nothing about the distribution or magnitude of these singularities, but simply point out that to fit into my paradigm you are assuming quite a lot of evenly distributed singularities of roughly the same magnitude.


Any attempt to apply modelling requires some field research. Check it out.

Relax, the field research can be done by others. You can take advantage of this research by searching the literature for reliable studies. As you are the one proposing this linear model, perhaps you are the one to visit the library. Many good databases of journals and texts are also available online these days.


Not if his performance continues to improve. The old data will still affect his rating.

Once his rating has approached his last performance rating, you are assuming his rating is "right", and therefore all history prior to that is irrelevant. This can happen if he has an overshoot due to a large number of games in a rating period. A second scenario is caused by a good run of form, which will cause his performance rating in a period to overstate his "true" rating strength. Obviously, once his rating has surpassed his actual strength, history is again obsolete (for an improving player). A third option is if a player suffers a period of "genuine" strength decline. Again, his rating may wind up higher than his true rating strength.

In all these cases his published rating is greater than his true strength, and therefore the argument that history is holding him back is simply nonsense.


This is true, but in reality, in the Australian environment, this happens infrequently, whereas under-rating is pervasive. It's a little like saying don't use fluoride because it causes fluorosis. In fact, for every case of fluorosis, thousands of cases of caries are prevented.

I think you would need to provide some evidence of this. If this phenomenon is as pervasive as you say, it should be borne out by the literature.


It's nothing to do with being weak-willed. It's like saying depression or schizophrenia is due to weak will. It's simply hard-wiring.

Weak-willed is not the PC term, but you were arguing about the general state of most players. Patently, most people, and probably most chess players, are not clinically depressed or schizophrenic. So while people with these conditions may perform worse if they are underrated, the general population will react in a multitude of different ways. Some will be more motivated and perform better; others may be overwhelmed and perform worse. Most (I suspect) will show no statistically significant difference in their results.

If you want to argue for a general, statistically significant impact, I think it is up to you to provide some evidence to support the claim.

Garvinator
15-09-2004, 02:40 PM
Also it appears you cannot even answer Kevin's rating question.

or my question about the meeting :eek:

Cat
16-09-2004, 08:46 AM
No, I have not noticed a pattern, because I have not studied the data. Remember, I was not proposing a model, just offering an alternative based on the same evidence as yours. Actually, I believe the model with non-monotonic, stochastically distributed jump discontinuities is closer to reality, simply because it is more general and can describe almost any function, depending on the distribution of singularities.

I say nothing about the distribution or magnitude of these singularities, but simply point out that to fit into my paradigm you are assuming quite a lot of evenly distributed singularities of roughly the same magnitude.

I agree that in theory this is so, but it breaks down in the Australian context, not because what you say is incorrect, but because the practical problems of collecting the data sufficiently quickly and processing it sufficiently frequently make accurate results almost impossible to realise. The skewness develops because of insufficient observation, (very) non-random distribution, and rating periods that are, though practical, too wide given the rate of change in the junior population. This is not anyone's fault; it's just that the infrastructure needed to do justice to your reasoning is prohibitive.



Once his rating has approached his last performance rating, you are assuming his rating is "right", and therefore all history prior to that is irrelevant. This can happen if he has an overshoot due to a large number of games in a rating period. A second scenario is caused by a good run of form, which will cause his performance rating in a period to overstate his "true" rating strength. Obviously, once his rating has surpassed his actual strength, history is again obsolete (for an improving player). A third option is if a player suffers a period of "genuine" strength decline. Again, his rating may wind up higher than his true rating strength.

Yes, but during the period of catch-up, which is not an insignificant period of time, the distortion infects the system, again creating skewness.



I think you would need to provide some evidence of this. If this phenomenon is as pervasive as you say, it should be borne out by the literature.

Well, as I say, if the performance ratings are normally distributed around the rating mean in the junior population over time, allowing for the recent correction, I'll eat my hat. Big apology all round, I think.


Weak-willed is not the PC term, but you were arguing about the general state of most players. Patently, most people, and probably most chess players, are not clinically depressed or schizophrenic. So while people with these conditions may perform worse if they are underrated, the general population will react in a multitude of different ways. Some will be more motivated and perform better; others may be overwhelmed and perform worse. Most (I suspect) will show no statistically significant difference in their results.

If you want to argue for a general, statistically significant impact, I think it is up to you to provide some evidence to support the claim.

It's not quite what I meant - the brain has enormous adaptive capability, especially the plastic brain of the child. Given a particular circumstance, the brain will collect information and process the information that seems consistent with its environment. It's a bit like a conjuror's trick: the speed of the hand deceives the eye, and the child believes it's truly magic. Until the trick is revealed, the child will continue to believe.

Look, I'm not really interested in keeping this going; I've said enough already, and Bill has already indicated that he and Graham will be monitoring the situation, and that's good enough for me. Bill, I'm very sorry for the ego comment; it was uncalled-for. I guess I'm still a little defensive after last week.

Rincewind
16-09-2004, 09:52 AM
I agree that in theory this is so, but it breaks down in the Australian context, not because what you say is incorrect, but because the practical problems of collecting the data sufficiently quickly and processing it sufficiently frequently make accurate results almost impossible to realise. The skewness develops because of insufficient observation, (very) non-random distribution, and rating periods that are, though practical, too wide given the rate of change in the junior population. This is not anyone's fault; it's just that the infrastructure needed to do justice to your reasoning is prohibitive.

If you are talking about the time frame from 6 to 17, there are 11 years with at least 3 rating periods per year: that is 33 data points. That should be plenty to determine whether junior ratings tend to increase linearly over that period. Just be very careful with the selection of your study group.
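
Testing the linearity hypothesis on those 33 points is a one-line least-squares fit. The sketch below uses fabricated, perfectly linear ratings just to show the mechanics:

```python
def linear_fit(xs, ys):
    """Ordinary least squares; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

periods = list(range(33))                    # 3 rating periods/year, ages 6 to 17
ratings = [600 + 12.5 * t for t in periods]  # fabricated data: exactly linear

slope, intercept = linear_fit(periods, ratings)
print(slope, round(intercept))               # slope in points per period
```

On real data the residuals are what matter: small, patternless residuals support linearity; long runs above and below the fitted line support the jump-and-plateau view.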


Yes, but during the period of catch-up, which is not an insignificant period of time, the distortion infects the system, again creating skewness.

You are assuming a pervasive catch-up period but haven't demonstrated that one exists.


Well, as I say, if the performance ratings are normally distributed around the rating mean in the junior population over time, allowing for the recent correction, I'll eat my hat. Big apology all round, I think.

I don't know why you continually refer to the normal distribution. But whatever the distribution, as long as the mean is not significantly perturbed from the performance, is there a problem?


It's not quite what I meant - the brain has enormous adaptive capability, especially the plastic brain of the child. Given a particular circumstance, the brain will collect information and process the information that seems consistent with its environment. It's a bit like a conjuror's trick: the speed of the hand deceives the eye, and the child believes it's truly magic. Until the trick is revealed, the child will continue to believe.

Not all juniors are as gullible as you make out. I guess children who believe in, and perform to, their rating could do with a talk from their coach/parent/guardian/mentor. Once they are past the idea of the rating system pre-ordaining their results, I think there won't be a problem. However, I suspect most kids who are better than their rating already know it (especially those whose rating has been increasing linearly since the age of 6).

Cat
16-09-2004, 10:46 AM
If you are talking about the time frame from 6 to 17, there are 11 years with at least 3 rating periods per year: that is 33 data points. That should be plenty to determine whether junior ratings tend to increase linearly over that period. Just be very careful with the selection of your study group.

That might be enough in a completely open pool, such as the internet. Another interesting statistic might be the ratio of junior-vs-junior to junior-vs-adult games, as some measure of the openness of the pools. The three-month rating period might also be a little long for some juniors at least some of the time - another interesting potential point of investigation.



You are assuming a pervasive catch-up period but haven't demonstrated that one exists.

I guess when you look at these things, the models depict what should happen for an individual. Population effects are another thing; in other words, there is an interaction between what's happening individually, as predicted by Glicko, and the habitat with all its vagaries. The population effects are clear: there's a period of overall development where the net effect is one of general population increase.


I don't know why you continually refer to the normal distribution. But whatever the distribution, as long as the mean is not significantly perturbed from the performance, is there a problem?

No, depending on what you define as significant. 30 points might be equivalent to a 6/12 lag, for example.


Not all juniors are as gullible as you make out. I guess children who believe in, and perform to, their rating could do with a talk from their coach/parent/guardian/mentor. Once they are past the idea of the rating system pre-ordaining their results, I think there won't be a problem. However, I suspect most kids who are better than their rating already know it (especially those whose rating has been increasing linearly since the age of 6).

Look, it's nothing to do with gullibility. I'll see if I can find you some references. In some ways, KB's arguments about biochemical influences on free will are similar.

Rincewind
16-09-2004, 12:29 PM
That might be enough in a completely open pool, such as the internet. Another interesting statistic might be the ratio of junior-vs-junior to junior-vs-adult games, as some measure of the openness of the pools. The three-month rating period might also be a little long for some juniors at least some of the time - another interesting potential point of investigation.

The spacing of the mesh points doesn't come into it. Graph rating over 33 roughly evenly spaced observations and see if your hypothesis is disproved.
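The test being suggested here can be sketched as an ordinary least-squares fit with a residual check: if the residuals show systematic curvature, the linear-improvement hypothesis is disproved. The 33 quarterly ratings below are fabricated for illustration (a perfectly linear improver), not real data:

```python
# Sketch: fit a straight line to a junior's quarterly ratings and inspect
# the residuals. The sample ratings are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# 33 quarterly observations (about 8 years of 3-month rating periods).
quarters = list(range(33))
ratings = [600 + 40 * q for q in quarters]   # hypothetical linear improver
a, b = fit_line(quarters, ratings)
residuals = [y - (a + b * x) for x, y in zip(quarters, ratings)]
print(round(b, 1), max(abs(r) for r in residuals) < 1e-6)  # 40.0 True
```

With real ratings, large structured residuals (e.g. a plateau followed by a surge) rather than random scatter would falsify the linearity claim.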


I guess when you look at these things, the models depict what should happen for an individual. Population effects are another thing; in other words, there is an interaction between what's happening individually, as predicted by Glicko, and the habitat with all its vagaries. The population effects are clear: there is clearly a period of overall development whose net effect is one of general population increase.

I think you are being selective in your grouping of individuals. Pick all juniors, including those who stop playing, those who get to various levels and stop playing, etc., and your period of overall development might become a little less clearly defined. That's what I meant by being careful in selecting your study group.


No, depending on what you define as significant. 30 points might be equivalent to 6/12 lag, for example.

We don't care about the lag. We want to see if juniors improve linearly through the ages 6 to 17. The effect of lag won't be appreciable.


Look, it's nothing to do with gullibility. I'll see if I can find you some references. In some ways, KB's arguments about biochemical influences on free will are similar.

You seem to be arguing that children are more inclined to believe what they are told as prima facie truths. This seems to fit my definition of gullible very well. Either way, some reference as to the predisposition of juniors to this mindset would be interesting, since it is not my personal experience.

Cat
16-09-2004, 01:34 PM
The spacing of the mesh points doesn't come into it. Graph rating over 33 roughly evenly spaced observations and see if your hypothesis is disproved.

What if the dynamic change over a period of time is greater than the change the system can generate? Isn't that what volatility addresses?


I think you are being selective in your grouping of individuals. Pick all juniors including those that stop playing, those who get to various level and stop playing, etc, etc, etc and your period of overall developments might become a little less clearly defined. That's what I meant by be careful in selecting your study group.

I might have some old data from the QJR. Do you want me to pm it to you, if I can find it?



You seem to be arguing that children are more inclined to believe what they are told as prima facie truths. This seems to fit my definition of gullible very well. Either way, some reference as to the predisposition of juniors to this mindset would be interesting, since it is not my personal experience.

This is true too, but its not quite what I meant. Until kids enter puberty they lack abstract thinking skills. Tell them a fact and they usually believe it literally. Childhood ability to identify falsehood correlates well with 'success'. What I was relating was something slightly different. I'll see if I can dig something out.

Rincewind
16-09-2004, 02:19 PM
What if the dynamic change over a period of time is greater than the change the system can generate? Isn't that what volatility addresses?

Yes, but you were using the argument that rating improvement can be considered linear. You can disprove that argument using 3-month sampling. If you can't disprove it, then look at better quality data.

Cat
16-09-2004, 04:26 PM
Yes, but you were using the argument that rating improvement can be considered linear. You can disprove that argument using 3-month sampling. If you can't disprove it, then look at better quality data.

You are a very generous and intelligent man, BJC.

Garvinator
16-09-2004, 04:49 PM
here is a question: what is the difference between chesslover (negc) and David Richards in this thread?

ursogr8
16-09-2004, 05:01 PM
here is a question: what is the difference between chesslover (negc) and David Richards in this thread?

> One is all here and one is not.

>> One is helped by Bill and the other is not.

>>> One is good with the statistics and the other is not.

Garvinator
16-09-2004, 05:09 PM
> One is all here and one is not.

>> One is helped by Bill and the other is not.

>>> One is good with the statistics and the other is not.
actually not quite the answer i was thinking of, keep trying, other suggestions welcome ;)

ursogr8
16-09-2004, 05:23 PM
actually not quite the answer i was thinking of, keep trying, other suggestions welcome ;)
You missed the opportunity to say which met each of the criteria. The moment has passed. :hand:

Bill Gletsos
16-09-2004, 05:28 PM
> One is all here and one is not.
true.

>> One is helped by Bill and the other is not.
Yes, I doubt I've ever helped DR.


>>> One is good with the statistics and the other is not.
Ah so chesslover was good with statistics. :owned:

arosar
16-09-2004, 05:33 PM
here is a question: what is the difference between chesslover (negc) and David Richards in this thread?

Chesslover is a genius.

AR

Cat
16-09-2004, 05:49 PM
Chesslover is a genius.

AR

Was, AR, was. I'm flattered by the comparison!

Bill Gletsos
16-09-2004, 06:48 PM
Was, AR, was. I'm flattered by the comparison!
Unless CL is dead then AR's "is" would be correct usage.
Since AR was answering gg's question then he was pointing out the differences between you and chesslover.
Therefore if AR is saying CL was a genius that would make you a ..... .
As such AR was hardly being flattering to you.

Cat
16-09-2004, 06:51 PM
Unless CL is dead then AR's "is" would be correct usage.
Since AR was answering gg's question then he was pointing out the differences between you and chesslover.
Therefore if AR is saying CL was a genius that would make you a ..... .
As such AR was hardly being flattering to you.

That's very clever of you, Bill.

arosar
16-09-2004, 06:53 PM
Unless CL is dead then AR's "is" would be correct usage.
Since AR was answering gg's question then he was pointing out the differences between you and chesslover.
Therefore if AR is saying CL was a genius that would make you a ..... .
As such AR was hardly being flattering to you.

Once again, Bill, you are very exact. But sometimes, this exactitude is suffocating, man.

AR

Bill Gletsos
16-09-2004, 07:09 PM
Once again, Bill, you are very exact. But sometimes, this exactitude is suffocating, man.

AR
You know me AR, I always try to be exact. ;)

Rincewind
16-09-2004, 08:35 PM
You are a very generous and intelligent man, BJC.

This is not rocket science. It is just everyday, garden variety science. You should always aim to disprove your own hypotheses. If you have access to data that might disprove a hypothesis then use it and put it to the test. If you are successful in disproving the hypothesis then you have learned something. You have a hypothesis which has been falsified. That is a good thing.

Cat
16-09-2004, 08:41 PM
This is not rocket science. It is just everyday, garden variety science. You should always aim to disprove your own hypotheses. If you have access to data that might disprove a hypothesis then use it and put it to the test. If you are successful in disproving the hypothesis then you have learned something. You have a hypothesis which has been falsified. That is a good thing.


It's not what you say, it's the way that you say it.

peanbrain
16-09-2004, 08:41 PM
Was, AR, was. I'm flattered by the comparison!

Don't be.

DR, you are an idiot when it comes to ratings and you have no idea what you are talking about.

Tell us which medical facility you work at so we can avoid going there. :hand:

Cat
16-09-2004, 08:49 PM
Don't be.

DR, you are an idiot when it comes to ratings and you have no idea what you are talking about.

Tell us which medical facility you work at so we can avoid going there. :hand:

I'd be delighted to give it to you, would you like a pm?

peanbrain
16-09-2004, 08:58 PM
I'd be delighted to give it to you, would you like a pm?

Firstly what is your area of practice? Is it anything to do with brains?

Cat
16-09-2004, 09:00 PM
Firstly what is your area of practice? Is it anything to do with brains?

Is that what you're in need of?

peanbrain
16-09-2004, 09:04 PM
Is that what you're in need of?

No but I think you need it! :ogre:

Cat
16-09-2004, 09:07 PM
No but I think you need it! :ogre:

Ah, well I can't help you then, can I?

Garvinator
17-09-2004, 01:10 AM
the answer was that both are banging on about a topic after everyone else has grown really tired of it. All angles have been discussed and argued about, i think.

Garvinator
19-09-2004, 07:46 PM
don't worry ppl, if i can get to the tournament on saturday, i hopefully will be putting a stop to all this rating crap talk.
Ok as promised, i have spoken to Graeme Gardiner about comments made on here by David Richards regarding ratings.

Graeme said, in abbreviated form, that David is not speaking for the Gold Coast Chess Club in any manner.

Regarding my suggestions that i wanted David to put forward at the Gold Coast Chess Club meeting, according to Graeme, David failed to do this.