July 31, 2011

Elo Rating System History

As everyone knows, the Elo rating system measures how strong a player is. It converts the results of a game, tournament, or chess event into numerical ratings that are easy to read.

For the basics: if your performance rating is 2450 and you have met the other International Master norm requirements, you get an IM norm. If your performance rating is 2600 and you have met the Grandmaster norm requirements, you get a GM norm.

History
Arpad Elo was a master-level chess player and an active participant in the United States Chess Federation (USCF) from its founding in 1939.[2] The USCF used a numerical ratings system, devised by Kenneth Harkness, to allow members to track their individual progress in terms other than tournament wins and losses. The Harkness system was reasonably fair, but in some circumstances gave rise to ratings which many observers considered inaccurate. On behalf of the USCF, Elo devised a new system with a more statistical basis.

Elo's system replaced earlier systems of competitive rewards with a system based on statistical estimation. Rating systems for many sports award points in accordance with subjective evaluations of the 'greatness' of certain achievements. For example, winning an important golf tournament might be worth an arbitrarily chosen five times as many points as winning a lesser tournament.

A statistical endeavor, by contrast, uses a model that relates the game results to underlying variables representing the ability of each player.

Elo's central assumption was that the chess performance of each player in each game is a normally distributed random variable. Although a player might perform significantly better or worse from one game to the next, Elo assumed that the mean value of the performances of any given player changes only slowly over time. Elo thought of a player's true skill as the mean of that player's performance random variable.

A further assumption is necessary, because chess performance in the above sense is still not measurable. One cannot look at a sequence of moves and say, "That performance is 2039." Performance can only be inferred from wins, draws and losses. Therefore, if a player wins a game, he is assumed to have performed at a higher level than his opponent for that game. Conversely if he loses, he is assumed to have performed at a lower level. If the game is a draw, the two players are assumed to have performed at nearly the same level.

Elo did not specify exactly how close two performances ought to be to result in a draw as opposed to a win or loss. And while he thought it likely that each player might have a different standard deviation to his performance, he made a simplifying assumption to the contrary.

To simplify computation even further, Elo proposed a straightforward method of estimating the variables in his model (i.e., the true skill of each player). One could calculate relatively easily, from tables, how many games a player is expected to win based on a comparison of his rating to the ratings of his opponents. If a player won more games than he was expected to win, his rating would be adjusted upward, while if he won fewer games than expected his rating would be adjusted downward. Moreover, that adjustment was to be in exact linear proportion to the number of wins by which the player had exceeded or fallen short of his expected number of wins.
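The update rule described above can be sketched in a few lines of Python. This is a minimal illustration, not the official USCF or FIDE implementation: the expected score here uses the logistic formula that both organizations later adopted (Elo's original tables were based on the normal distribution), and the K factor of 20 is just one commonly used development coefficient.

```python
def expected_score(rating, opponent_rating):
    """Expected score (between 0 and 1) against a single opponent,
    using the logistic curve with the conventional 400-point scale."""
    return 1.0 / (1.0 + 10 ** ((opponent_rating - rating) / 400.0))

def update_rating(rating, opponent_rating, score, k=20):
    """Adjust the rating in linear proportion to the amount by which
    the actual score exceeded or fell short of the expected score.

    score is 1 for a win, 0.5 for a draw, 0 for a loss.
    k (the development coefficient) is an assumed value here.
    """
    return rating + k * (score - expected_score(rating, opponent_rating))
```

For two equally rated players the expected score is 0.5, so a win with K = 20 gains exactly 10 points; this is the "exact linear proportion" the paragraph above describes, and it is why a competitor with a pocket calculator can reproduce the published figure.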

From a modern perspective, Elo's simplifying assumptions are not necessary because computing power is inexpensive and widely available. Moreover, even within the simplified model, more efficient estimation techniques are well known. Several people, most notably Mark Glickman, have proposed using more sophisticated statistical machinery to estimate the same variables. On the other hand, the computational simplicity of the Elo system has proven to be one of its greatest assets. With the aid of a pocket calculator, an informed chess competitor can calculate to within one point what his next officially published rating will be, which helps promote a perception that the ratings are fair.

Implementing Elo's scheme
The USCF implemented Elo's suggestions in 1960,[3] and the system quickly gained recognition as being both more fair and more accurate than the Harkness rating system. Elo's system was adopted by FIDE in 1970. Elo described his work in some detail in the book The Rating of Chessplayers, Past and Present, published in 1978.

Subsequent statistical tests have shown that chess performance is almost certainly not normally distributed. Weaker players have significantly greater winning chances than Elo's model predicts. Therefore, both the USCF and FIDE have switched to formulas based on the logistic distribution. However, in reference to Elo's contribution, both organizations are still commonly said to use "the Elo system".


Source: Wikipedia

July 30, 2011

Crouching Tigran, Hidden Dragon

Here's a game I used to play over and over, for a few reasons: a rare game of Petrosian as both a stable defender and an attacker, a nice queen sacrifice ending, and a weird but brilliant 42nd move, Qa8.

Even now I'm still wondering how White went wrong. Petrosian proves that his bad reputation is undeserved. Who says Petrosian was afraid to give up a piece? As for Qa8, all I can think is that it enabled 43... Nd3, opening up the a8-h1 diagonal in case White slays the knight on d3. You can check the game below.

White: Paul Keres
Black: Tigran Petrosian
Date: 1959
Event: Bled
ECO: B39