
View Full Version : Skill differences (comparison of chess and go)



Schu
01-07-2009, 03:51 PM
Some of the questions I hear about board games kinda make me laugh, as do some of the statements. One, from a rabid asiophile and big fan of go, was "go is more skillful than chess", after which I challenged him to a game of chess and he declined. I also told him about chess' origins being Indian and therefore Asian, just to mess with him a little.

But I find it interesting when people ask (or say) which game is harder, more skillful, whatever. I don't think most people realise what they're asking. I mean, first of all, a game itself can't be skillful. Second of all, what definition of harder are you going for? Requires more effort/time? In an even game, both chess and go should take equal time and effort from both players, so that's not it. Harder to learn? Both games have comparatively simple rules, though chess's would have to be the harder in this case, but I don't think that's what they're asking. More prone to failure? Failure at chess is extremely contingent on your opponent's skill relative to yours.

I think the only real way to give any answer at all to the question is to frame the question as "which game makes it such that the smallest increase in skill results in one player having a reliable advantage against the other" or something like that. Now obviously skill is only measured in relative terms, so the "smallest increase in skill" has to be determined in a different way, most likely through analysis of rankings and ratings.

The way I'd do it is something like this: look up results for when players of a game ranked about #10 play players ranked about #100, and see what kind of Elo point difference that would imply. Do the same for #100 and #1000, and if world rankings still allow, #1000 and #10000. Then, to approximate players ranked about #100,000, you're probably looking at club champions etc., and for #1,000,000, maybe active competitive players.

Now, if multiplying the ranking by a factor of 10 results in a higher implied Elo difference for one game than another, I think that game would have at least some claim to being the "more skillful game", or something like that; there are probably better words for it. I wonder, though, whether checkers could possibly pip both, considering that the top player went decades with fewer than 10 defeats. How about sporting pursuits? Rarely do you see Federer or Nadal lose to someone not ranked in the top 20, but it certainly does happen.
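To make the arithmetic concrete, here's a minimal sketch of the conversion I have in mind, using the standard Elo expected-score formula (the 65% figure is a made-up example, not real data):

```python
import math

def implied_elo_gap(score):
    """Elo difference implied by an observed average score (0 < score < 1)."""
    return -400 * math.log10(1 / score - 1)

# Hypothetical example: players ranked ~#10 score 65% against players
# ranked ~#100, giving the implied gap for that factor-of-10 ranking step.
print(round(implied_elo_gap(0.65)))  # 108
```

The game where this per-step gap comes out consistently larger would be the one with the stronger claim to being "more skillful" in the sense above.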

Any thoughts?

ER
03-07-2009, 07:48 AM
A very interesting topic for discussion!
I think rules and their application must also be taken under consideration in deciding the complexity, depth, skills required for a game/sport.
Luck (or dearth of it) is another factor as well as the physical, mental, psychological condition of players involved!
Upsets in mental sports such as chess are very frequent, even at the highest level!
BTW and in passing, could the Fischer vs Spassky (1972) and Kramnik vs Kasparov (2000) World Championship results be considered upsets?

Schu
03-07-2009, 02:53 PM
Fischer's win was perhaps an upset in terms of psychology, since he'd never beaten Spassky before, but I think most people realised that Fischer had been the best chessplayer around for about a decade, and if he ever got the chance to play at his full capacity for a WC and got over his own hoodoos, he'd have as good a chance as anyone. Just think of him destroying Taimanov and Larsen 6-0, and destroying Petrosian too. Kramnik's was a genuine upset. Euwe was another, Khalifman another; they happen.

Upsets are still going to happen though; you can't control everything, and people will play better than usual sometimes, worse other times. The only thing I'm concerned with in this context is how much skill difference is needed before other factors become almost impossibly unlikely to change the outcome.

pappubahry
03-07-2009, 03:58 PM
The problem with making these sorts of cross-sport/game comparisons is that there are two things that are changing:

- the structure of the game (this is what you're interested in studying)
- the number of people who play the game/sport

To elaborate on the second point, suppose someone wanted to answer your questions, but instead of chess and go, he compares the Australian sport of chess and the Russian sport of Шахматы.

They would see that in chess, the best player is Zhao, and he has about a 500-point Elo gap to the 50th best player. But the best Шахматы player is Jakovenko, and he's only about 200 points clear of the 50th best player. You can't conclude that the top tier of Australian chess is more skilful than Russia's....

It should be possible to properly control for differences in participation rates, but I'm not sure off the top of my head how you'd go about doing so.

Kevin Bonham
03-07-2009, 05:34 PM
Kramnik's was a genuine upset. Euwe was another, Khalifman another, they happen.

Kramnik did have a very good record against Kasparov specifically and it was one of the rare cases where head-to-head scores would have been a better predictive tool than ratings. (I recall Sonas listing Kasparov-Kramnik as one of only five established player pairings for which this was demonstrably true though I do not recall how he derived that.)

Most of the FIDE knockout pseudo-World Championships were won by upset winners - Khalifman, Ponomariov and Kasimdzhanov were all nowhere near being the strongest players in the events they won. Only Anand was an expected winner. The short-match format with fast tiebreakers made it highly unlikely that the best player would actually win.

Kevin Bonham
03-07-2009, 05:58 PM
I wonder, though, whether checkers could possibly pip both, considering that the top player went decades with fewer than 10 defeats.

That is probably simply because checkers is more drawish when played at top level than chess; it is easier to avoid defeat.

Determining how to compare skill factors between different games seems very challenging to me. One issue is straining out the influence of luck. In a game that contains a strong chance element even great differences in skill will not necessarily result in a reliable victory for the superior player. But it may still be the case that massive differences in skill exist. I previously mentioned the Monopoly club I used to be involved in at the Uni, in which I ran a rating system and found that win rates by player in games with an average of about five players could be as low as 5% or as high as 40%, over very large numbers of games.

Another issue, related to chance, is the influence of sample size. Is a game of chess really comparable to a five-set match of tennis in terms of the extent to which the result will reflect a skill difference should one exist? Or is a game of chess more comparable to one set, or to a three-match series? If you rated tennis on a system similar to Elo, the ratings would be closer together if you did the ratings set by set than if you did them match by match.
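A rough illustration of the set-versus-match point, assuming (unrealistically) that each set is an independent trial won with a fixed probability; the 0.75 just mirrors the usual 75% rule of thumb:

```python
from math import comb, log10

def elo_from_score(p):
    """Elo gap implied by an expected score p (0 < p < 1)."""
    return -400 * log10(1 / p - 1)

def best_of_5(p):
    """Probability of winning a best-of-5 match if each set is won with probability p."""
    # Win the 3rd set while having lost k = 0, 1 or 2 sets along the way.
    return sum(comb(2 + k, k) * p**3 * (1 - p)**k for k in range(3))

p_set = 0.75
print(round(elo_from_score(p_set)))             # 191: gap if rated set by set
print(round(elo_from_score(best_of_5(p_set))))  # 375: gap if rated match by match
```

Same underlying skill gap, but the match-level ratings sit nearly twice as far apart - exactly the compression effect described above.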

Sports with "granular" scoring systems also mess with the idea that a great difference in skill leads to a reliable victory. In soccer one team will sometimes win the match with a goal scored against the run of play, converting 1 of its 2 serious opportunities while the other team converts say 0 of 5. But that doesn't mean there is less skill in the game than in, say, Aussie Rules; what it means is that the disconnect between the skill and the scoreboard might be greater. Chess is a bit like this too because in a game between two good players of slightly different strengths, the most likely result is a draw.

Schu
04-07-2009, 12:07 AM
The problem with making these sorts of cross-sport/game comparisons is that there are two things that are changing:

- the structure of the game (this is what you're interested in studying)
- the number of people who play the game/sport

To elaborate on the second point, suppose someone wanted to answer your questions, but instead of chess and go, he compares the Australian sport of chess and the Russian sport of Шахматы.

They would see that in chess, the best player is Zhao, and he has about a 500-point Elo gap to the 50th best player. But the best Шахматы player is Jakovenko, and he's only about 200 points clear of the 50th best player. You can't conclude that the top tier of Australian chess is more skilful than Russia's....

It should be possible to properly control for differences in participation rates, but I'm not sure off the top of my head how you'd go about doing so.

I'm sure there are ways. You could work out where players lie on a percentile score of all chess players that at the very least play club chess, or something like that. I'm no statistician though, so I wouldn't know how to go about that exactly.

Schu
04-07-2009, 12:11 AM
That is probably simply because checkers is more drawish when played at top level than chess; it is easier to avoid defeat.

Well, since checkers is solved as a draw, that is not so surprising. Clearly in checkers it is extremely difficult to lose to someone who's worse than you, but the question remains how much advantage is required to overcome checkers' demonstrated drawishness and get a win.

Desmond
04-07-2009, 08:17 AM
I'm sure there are ways. You could work out where players lie on a percentile score of all chess players that at the very least play club chess, or something like that. I'm no statistician though, so I wouldn't know how to go about that exactly.

That would tell you how skillful a chessplayer is at chess against other chessplayers, but I'm not sure what inferences could be drawn compared to other games.

Schu
04-07-2009, 04:41 PM
That would tell you how skillful a chessplayer is at chess against other chessplayers, but I'm not sure what inferences could be drawn compared to other games.

One could infer that if, by comparing game x and game y, x players come out as having results such that smaller degrees of skill difference (measured in possibly many different ways: percentile, standard deviation etc.) result in, say, 80% results, then that game is "more skillful" in the sense that this topic is concerned with.

Desmond
04-07-2009, 06:44 PM
One could infer that if, by comparing game x and game y, x players come out as having results such that smaller degrees of skill difference (measured in possibly many different ways: percentile, standard deviation etc.) result in, say, 80% results, then that game is "more skillful" in the sense that this topic is concerned with.

But is skill just the probability of winning? Let me put this to you:

Let's say that to get from the level of beginner to master of a given game there is a number of "units of skill" that need to be attained. These units might be nuances, patterns, strategies, tactics etc whatever you want to call them depending on the given game, but let's just give them a number for simplicity.

The number of these units varies game to game. So for example game A might have 100 units and game B might have 1000.

A master in each game might have that 80% edge you mention, but how is your system going to report on the 100 vs 1000 thing?

The other point I was thinking of was that maybe in game A those 100 units of mastery might only yield a 10% advantage whereas in game B those 1000 units might yield 80% (or vice versa) but maybe that would be caught by the different measures you mention.

Schu
04-07-2009, 09:15 PM
I'm not really sure why you bring units into it. Having more elements to a game doesn't necessarily make it more skillful. I don't think the number of elements in a game makes a difference to what I say, and that is how I'm interpreting your "units"; correct me if I'm wrong there. You can't really measure skill with these units, because people will not have complete mastery of each of those elements. The only way you can measure skill is by results.

All that my idea is trying to measure is how much of a skill difference (measured in percentiles, standard deviations, whatever system is appropriate at the time) translates into what kind of advantage in terms of average results, and my assertion is that the game requiring the least skill difference for a certain named advantage is the "most skillful", or at least the one where the elements affecting the result other than skill have the least influence.


Determining how to compare skill factors between different games seems very challenging to me. One issue is straining out the influence of luck. In a game that contains a strong chance element even great differences in skill will not necessarily result in a reliable victory for the superior player. But it may still be the case that massive differences in skill exist. I previously mentioned the Monopoly club I used to be involved in at the Uni, in which I ran a rating system and found that win rates by player in games with an average of about five players could be as low as 5% or as high as 40%, over very large numbers of games.

Another issue, related to chance, is the influence of sample size. Is a game of chess really comparable to a five-set match of tennis in terms of the extent to which the result will reflect a skill difference should one exist? Or is a game of chess more comparable to one set, or to a three-match series? If you rated tennis on a system similar to Elo, the ratings would be closer together if you did the ratings set by set than if you did them match by match.

Sports with "granular" scoring systems also mess with the idea that a great difference in skill leads to a reliable victory. In soccer one team will sometimes win the match with a goal scored against the run of play, converting 1 of its 2 serious opportunities while the other team converts say 0 of 5. But that doesn't mean there is less skill in the game than in, say, Aussie Rules; what it means is that the disconnect between the skill and the scoreboard might be greater. Chess is a bit like this too because in a game between two good players of slightly different strengths, the most likely result is a draw.

I agree that this would be fairly problematic. There must be some statistical method for scaling to compensate for drawishness. I suppose a simple way would be to force all games to replay draws until a result is achieved.
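For what it's worth, the replay-draws adjustment has a simple closed form if you assume fixed win/draw/loss probabilities per game (the numbers below are invented purely for illustration):

```python
def decisive_score(win, draw, loss):
    """Expected score if every draw is replayed until someone wins."""
    assert abs(win + draw + loss - 1) < 1e-9
    # Each replay is independent, so the draw probability simply drops out.
    return win / (win + loss)

# Hypothetical slightly-stronger player: 20% wins, 70% draws, 10% losses.
print(round(decisive_score(0.20, 0.70, 0.10), 3))  # 0.667
```

So a drawish game can hide a substantial edge: a player scoring 55% under normal scoring becomes a two-thirds favourite once draws are forced to a result.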

Hobbes
04-07-2009, 09:38 PM
I remember seeing some sort of comparison between chess skill and go skill once (30 seconds of googling failed to find it, sorry).

The method used there was to start with the best player in the world. Find somebody he scores 75% against. Then find somebody who player B scores 75% against, and so on, until you reach the level of a complete beginner. Count the number of steps.

I think a 200 rating point advantage means you expect to score 75%? Then in chess it would take about 15 steps to go from the world champion to a complete beginner. If I remember correctly, there were significantly more steps in Go than in chess.
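The arithmetic behind that, using the standard Elo expected-score formula (the world-champion and beginner ratings below are rough guesses on my part):

```python
def expected_score(elo_diff):
    """Expected score for a player rated elo_diff points above the opponent."""
    return 1 / (1 + 10 ** (-elo_diff / 400))

print(round(expected_score(200), 5))  # 0.75975, i.e. the ~75% rule of thumb

# Ladder of 200-point (75%) steps from a ~2800 champion to a ~-200 beginner:
print((2800 - (-200)) // 200)  # 15
```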

Desmond
04-07-2009, 09:43 PM
I'm not really sure why you bring units into it.

To make it easier to compare different games. So maybe the Lucena position in chess is a skill. Maybe you want to assign that 1 unit of skill. Maybe in Go there is a skill that requires similar knowledge/experience/aptitude and you also want to assign that to be 1 unit. Then you add up all the skills in each game and you get the totals.

Not sure if that is what you meant by "elements" or not.

Schu
04-07-2009, 11:28 PM
To make it easier to compare different games. So maybe Lucena position in chess is a skill. Maybe you want to assign that 1 unit of skill. Maybe in Go there is a skill that requires similar knowledge/experience/aptitude and you also want to assign that to be 1 unit. Then you add up all the skills in each game and you get the totals.

Not sure if that is what you meant by "elements" or not.

That's what I understood it to be. But I don't really get the applicability to this topic, if only because you can't quantify chess like that: you can accumulate skills that way, but it won't represent your actual skill (though there will be some correlation, of course). You can only judge a person's skill by results.

Hobbes: that's roughly what I'm thinking about, yeah, though I think taking the top player isn't that good an idea; he might be an outlier. Thanks for your recollection :) A 200-point Elo difference gives an expected score of 75.974%, so close enough :)

Desmond
05-07-2009, 12:00 AM
That's what I understood it to be. But I don't really get the applicability to this topic, if only because you can't quantify chess like that: you can accumulate skills that way, but it won't represent your actual skill (though there will be some correlation, of course). You can only judge a person's skill by results.

OK, I see what you're getting at now. What you call skill I would call playing strength.

Schu
05-07-2009, 12:40 AM
Fair enough :)

Capablanca-Fan
19-08-2014, 12:19 AM
One comparison is that computers are now clearly better than the best human players at chess. But computers are still a lot worse than the best go players. Since a computer beat a top professional on a four-stone handicap, it looks like computer go programs are up to the level of all but the strongest amateurs and a little below professional level.

The Mystery of Go, the Ancient Game That Computers Still Can't Win (http://www.wired.com/2014/05/the-world-of-computer-go/?sf2919505=1)
BY ALAN LEVINOVITZ 05.12.14

pax
23-08-2014, 10:11 AM
Of course, that's more about the width of the search tree than about Go being intrinsically more "difficult" than chess. It just means it is much more strategic than tactical, since it is much harder to think through all possible moves to a useful depth.

Desmond
23-08-2014, 10:38 AM
One comparison is that computers are now clearly better than the best human players at chess. But computers are still a lot worse than the best go players. Since a computer beat a top professional on a four-stone handicap, it looks like computer go programs are up to the level of all but the strongest amateurs and a little below professional level.

The Mystery of Go, the Ancient Game That Computers Still Can’t Win (http://www.wired.com/2014/05/the-world-of-computer-go/?sf2919505=1)
BY ALAN LEVINOVITZ 05.12.14
grandmaster Norimoto Yoda - master Yoda for short.

antichrist
23-08-2014, 05:38 PM
Alternatively it could be just because maximum computer effort has not yet been applied to the belly of Go, whereas we know that chess has copped a full frontal attack for years. Computers can't beat fiddlesticks either yet, but that does not mean that fiddlesticks is more complicated than chess.

Rincewind
23-08-2014, 05:59 PM
Alternatively it could be just because maximum computer effort has not yet been applied to the belly of Go, whereas we know that chess has copped a full frontal attack for years. Computers can't beat fiddlesticks either yet, but that does not mean that fiddlesticks is more complicated than chess.

From the link above...


This is not for lack of trying on the part of programmers, who have worked on Go alongside chess for the last fifty years, with substantially less success. The first chess programs were written in the early fifties, one by Turing himself. By the 1970s, they were quite good. But as late as 1962, despite the game’s popularity among programmers, only two people had succeeded at publishing Go programs, neither of which was implemented or tested against humans.

Finally, in 1968, computer game theory genius Alfred Zobrist authored the first Go program capable of beating an absolute beginner. It was a promising first step, but notwithstanding enormous amounts of time, effort, brilliance, and quantum leaps in processing power, programs remained incapable of beating accomplished amateurs for the next four decades.

Capablanca-Fan
05-09-2014, 05:49 AM
I doubt that a perfect chessplaying machine could give knight odds to a top grandmaster. However, a top go professional would probably need four stones to have a chance against a perfect go player.

antichrist
27-12-2014, 08:40 PM
I doubt that a perfect chessplaying machine could give knight odds to a top grandmaster. However, a top go professional would probably need four stones to have a chance against a perfect go player.

What is Go's complexity due to? Just having so many pieces (stones)? From what I gather it is like rounding the sheep up.

Rincewind
28-12-2014, 10:56 AM
What is Go's complexity due to? Just having so many pieces (stones)? From what I gather it is like rounding the sheep up.

Two factors are the large number of possible moves, meaning an exhaustive search approach is almost useless, and the fact that although there is a large degree of symmetry, it is an approximate rather than exact symmetry. Humans seem to be good at distinguishing when the near symmetry is important, meaning two similar moves are approximately equal in value, and when the differences between two similar moves are important.
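The first factor is easy to see with ballpark numbers: the commonly cited average branching factors are roughly 35 legal moves per position in chess and roughly 250 in go (both order-of-magnitude figures), and the gap explodes within a few moves:

```python
# Commonly cited ballpark branching factors: ~35 moves per position in
# chess, ~250 in go. Even at a shallow depth of 4 plies the gap is huge.
chess_branching, go_branching = 35, 250
depth = 4

print(chess_branching ** depth)  # 1500625
print(go_branching ** depth)     # 3906250000
```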

antichrist
31-12-2014, 11:58 AM
Two factors are the large number of possible moves, meaning an exhaustive search approach is almost useless, and the fact that although there is a large degree of symmetry, it is an approximate rather than exact symmetry. Humans seem to be good at distinguishing when the near symmetry is important, meaning two similar moves are approximately equal in value, and when the differences between two similar moves are important.

Yep, sounds like rounding up the sheep to me, which a sheep dog can run rings around a human doing. But then, they are trained by humans. So those dogs equate to the computer, but without breakdowns and cyber attacks.

Someone once said that with a few sentences I can make a complex world simple.

FM_Bill
01-04-2015, 11:11 AM
TicTacToe (or noughts and crosses) is a game of pure skill with a low skill level. It does not take long for a novice to reach a level where they can score equally against anyone (given they learn the correct strategies).

A game with a high skill level would require many hours of study and play before a player would have a chance against top players.

How many hours of study/play did it take players to reach the top 5% (say) in both Go and Chess?

ER
22-04-2015, 11:42 AM
How many hours of study/play did it take players to reach the top 5% (say) in both Go and Chess?

In both, or in either? And it very much depends on the individual's talent as well as studying methods (availability of books, coaches etc.).
Chess World Champion J.R. Capablanca claimed he never studied books, whereas Chess World Champion Dr Alexander Alekhine insisted 'I am the book'! Not sure if Chess World Champion Dr Emanuel Lasker, apparently a strong Go player, devoted much time to studying it!

Rincewind
28-01-2016, 11:58 AM
AlphaGo: Artificial intelligence milestone as computer beats professional at Go (http://www.abc.net.au/news/2016-01-28/computer-beats-professional-at-'most-complex-game-ever'-go/7120230)


Scientists have created a computer program that beat a professional human player at "the most complex game ever devised by humans", Go, in a milestone achievement for artificial intelligence.

...

The world's top Go player Lee Sedol has agreed to play AlphaGo in a five-game match in Seoul in March, Mr Hassabis said.

"I heard Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win, at least this time," Mr Sedol said in a statement.

...

Should be a good matchup. Regardless of the result, it looks promising for the Go AI community.

Patrick Byrom
07-01-2019, 06:42 PM
...How many hours of study/play did it take players to reach the top 5% (say) in both Go and Chess?

It seems that Go prodigies are even younger than chess prodigies: (https://www.theguardian.com/world/2019/jan/07/go-getter-japanese-girl-nine-becomes-strategy-games-youngest-professional)

A nine-year-old girl in Japan will become the youngest-ever professional player of the strategy board game go when she makes her debut later this year. Sumire Nakamura, who attends primary school in Osaka, started playing go at the age of three and will start her career at the lowest rank of shodan on 1 April, according to Japanese media. She will comfortably beat the record for the youngest professional held by Rina Fujisawa, who was aged 11 years and six months when she turned professional nine years ago.

Although I'm not sure what level would qualify you as a 'professional' in chess. If it's having a GM title, then the youngest would be 12; for the IM title it would be 10.

Capablanca-Fan
03-02-2019, 04:29 AM
It seems that Go prodigies are even younger than chess prodigies: (https://www.theguardian.com/world/2019/jan/07/go-getter-japanese-girl-nine-becomes-strategy-games-youngest-professional)

A nine-year-old girl in Japan will become the youngest-ever professional player of the strategy board game go when she makes her debut later this year. Sumire Nakamura, who attends primary school in Osaka, started playing go at the age of three and will start her career at the lowest rank of shodan on 1 April, according to Japanese media. She will comfortably beat the record for the youngest professional held by Rina Fujisawa, who was aged 11 years and six months when she turned professional nine years ago.

Although I'm not sure what level would qualify you as a 'professional' in chess. If it's having a GM title, then the youngest would be 12; for the IM title it would be 10.

That's quite something. In the photo she is with Yuta Iyama, currently Japan's top player. Rina Fujisawa is probably Japan's top female player now. Probably a Japanese pro is closer to a GM, since the lowest pro can give an amateur shodan (1-dan) 6 or 7 stones.

FM_Bill
23-09-2019, 10:26 AM
Another way of looking at this is to look at time limits. Capablanca once said something like 40 moves in two hours was OK for optimal chess. That's an average of 3 minutes a move. Most chess games are over by move 60. Go games on average run about 3 or 4 times as many moves.

Go tournament games are played at a much faster rate per move than tournament chess games. I have not heard of players clamouring for slower time limits. This is evidence that Go is easier to play than chess.

Sight of the board takes much longer to acquire in chess. Even a Go beginner could quickly learn to visualise many moves ahead. Seeing ahead in chess is made more difficult by the presence of different pieces and pieces moving differently. Sight of the board is a higher skill in chess than in Go.

antichrist
23-09-2019, 11:07 AM
Hi Bill, amazingly it only took four years for you to have your question answered. Fred Flatow of Sydney was a keen Go player a few decades ago, I have no idea now, he even attempted to set up a Go club I think. I considered him a heretic at the time.

ER
23-09-2019, 02:52 PM
Re chess and go!


antichrist, 31-12-2014, 11:58 AM:
Yep, sounds like rounding up the sheep to me, which a sheep dog can run rings around a human doing. But then, they are trained by humans. So those dogs equate to the computer, but without breakdowns and cyber attacks.

By far the best analogy ever! Couldn't find the relevant "famous analogies" thread to stick it there!