Fritz 6 Computer Chess Program Isolation

In my opinion, this would be a lot more difficult. I am slightly naive as to exactly how Stockfish works, but: computers can do a lot of accurate brute-forcing; humans must see the position in a more holistic, intuitive way. Excellent human players and excellent computer players are presumably doing completely different calculation tasks. I would suggest that computers are still bad at approaching the task in a human-like way, but they will always be able to improve their method via Moore's law (at a minimum), whereas humans are stuck at their current level.

Stockfish might be able to tell you what it was doing, but not in a way that it would be reasonable for a human to follow. What you are looking for is a teacher. You can pay for those ;) The closest we have got to a teacher app is perhaps a MOOC: not much computational progress has been made.

"Stockfish might be able to tell you what it was doing, but not in a way that it would be reasonable for a human to follow." I (a non-chess player) just tried to play a game against the highest-level AI (and lost, obviously).

Doing an analysis of the game afterwards, this is exactly what I experience: I play 'f4' and I'm told (through the analysis tool) that the best move was 'Nf3'. Now, the obvious question this leads to is: why? Why was this a better move? I don't think that, as a human being, memorizing 'best moves' is going to lead to much improvement: we need to know WHY that move was the best move. I'm sure there is a human-friendly way to explain why one move is the best move, and why my move wasn't, but the computer probably doesn't know this explanation, because it's approaching it from a brute-force perspective. Surely a chess computer can brute-force all possible combinations and deduce that this was the best move. But when this is not possible for a human being, just informing me that 'what you just did was not the best move' doesn't really do much to help me (as an amateur player).
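For what it's worth, the raw comparison itself is easy to get out of an engine; it's the human-friendly explanation that's missing. Here is a minimal sketch of asking Stockfish to score two candidate moves, assuming the python-chess library and a local Stockfish binary on the PATH; the starting position stands in for the actual position from the game above:

```python
import chess
import chess.engine

# Assumes python-chess is installed and "stockfish" is a local UCI binary on the PATH.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")

board = chess.Board()  # stand-in for the position where f4 was played

for san in ("f4", "Nf3"):
    move = board.parse_san(san)
    # Ask the engine to evaluate only this candidate move.
    info = engine.analyse(board, chess.engine.Limit(depth=18), root_moves=[move])
    score = info["score"].white()           # evaluation from White's point of view
    line = board.variation_san(info["pv"])  # principal variation: the engine's "why"
    print(f"{san}: {score}  line: {line}")

engine.quit()
```

The output is just a centipawn score and a line of moves; turning that into the kind of explanation a human teacher would give is the part nobody has solved.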

The game, for reference. It's a place to start. It's not a huge leap from modelling the game of chess to modelling the skill of chess-playing.

Stockfish already offers post-game analysis. Imagine extending that to analyze a player's entire career, and offer a series of problems, games against specially-constructed opponents, analysis of relevant historical games etc., all of which are designed to help the player improve. Given Siri, Google Now, Watson and so on, we're probably not far from being able to have a meaningful natural-language conversation with a computer on a narrow subject like chess. One could imagine this kind of thing being extended to teaching other, similarly focussed skills, at least for beginners. Rock climbing. Maybe even things like soccer and basketball.

But that kind of teaching-based-on-deep-analysis is a long way off for subjects like physics or electrical engineering. Computers can't do physics, much less evaluate human physicists. The best we can do in these areas is something like the Khan Academy, where computers present 'content' created by humans, administer standardized tests designed by humans and present the results to humans for interpretation. So yeah, teaching chess in a really sophisticated way isn't all that useful in the sense that physics or EE are useful. But really, if we could teach computers to understand physics better than people do, we'd use them to make breakthroughs in physics, and that would be a much bigger deal than being able to teach physics more effectively.

On the other hand, we don't play chess or tennis or piano because they're useful, so expert AI teachers for these subjects would be really valuable. By far, the fastest way to improve at chess is with a coach. Books work in the beginning, but soon you are crawling around in the dark.

You can't identify your weaknesses, so you can't correct them. After studying the wrong thing for a year, you fix one of your weaknesses by accident, and you improve.

A coach bypasses all of that wasted time. The challenge is not to automate a chess curriculum. That already exists on many websites selling chess software. Those are useful to learn certain theory and burn it into your brain by drilling over and over. The challenge is to create a chess teacher that can identify your specific weaknesses and correct them. A middle ground approach might work well, where you take one mental model of chess and develop a program to train that specific mental model.

For example, there is the Nimzowitsch model, where chess is seen as siege warfare, with specific meta-strategies. If that model fits with how you think, then great. But it doesn't fit everyone. One day this super-efficient learning will be reality. It sounds kind of boring: with an exponential game like chess, everyone will be at about the same level, except a few who throw their life away chasing n+1 while everyone else settles for n and has a life.

One of my long-term goals is to write a chess program that plays more like a lower-level human.

Every program I've played seems to have the same pattern: play like a master for ten moves, then make an insultingly obvious blunder to let me catch up. Lather, rinse, repeat. It's neither realistic nor fun. Might as well play with a handicap. Real players at each level tend to have characteristic weaknesses that are expressible in terms of the same factors used in a program's evaluation function.

Consider some of the following, which players at a certain level will exhibit and then get past as they improve. Bringing the queen out too early. Missing pins and discoveries. Failing to contest the center. Creating bad bishops.

Bad pawn structure. Failing to use the king effectively in the endgame. These are all quantifiable. They could all be used to create a more realistic and satisfying opponent at 1200, 1500, 1800, etc. All it takes is some basic machine learning applied to a corpus of lower-level games, and a way to plug the discovered patterns into the playing engine.
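To make "quantifiable" concrete, here is a minimal sketch of measuring one such pattern (bringing the queen out too early) over a PGN corpus, assuming the python-chess library; the filename and the move-number threshold are hypothetical illustrations, not anything from an existing tool:

```python
import chess
import chess.pgn

def queen_out_early(game, by_move=6):
    """True if White moves the queen within the first `by_move` full moves."""
    board = game.board()
    for i, move in enumerate(game.mainline_moves()):
        if i // 2 >= by_move:                       # only look at the opening
            break
        if i % 2 == 0 and board.piece_type_at(move.from_square) == chess.QUEEN:
            return True
        board.push(move)
    return False

count = total = 0
# "lower_rated_games.pgn" is a hypothetical corpus of, say, ~1200-rated games.
with open("lower_rated_games.pgn") as pgn:
    while (game := chess.pgn.read_game(pgn)) is not None:
        total += 1
        count += queen_out_early(game)
print(f"{count}/{total} games bring the queen out in the first 6 moves")
```

Run the same kind of counter per rating band and you get exactly the sort of frequency table that could be fed back into an engine's evaluation or move selection to mimic that band.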

That's a big problem in Scrabble AIs too. There are heuristics for how to dial down the difficulty level, but they mostly involve things like finding the list of best possible moves and then randomly picking a suboptimal one, with how often you do it and how far down the list you go weighted by the handicap level, or constraining the dictionary, or not using lookahead. The problem is that this doesn't always model how actual low-rated Scrabble players play, and so it does not really make for a satisfying opponent: every now and then you hit a 'well, that was stupid' move that just feels implausible for even a weaker human player to make. The other problem is that the devs of the current top Scrabble engines, Quackle and Elise, are (naturally) focused on getting better and better at playing, not on plausible ways to play badly. It's something I keep meaning to work on when I have some spare time; I have a few ideas, but nothing I've had the time to explore properly.

During the final event, after playing 64 games against Komodo, Stockfish won with a score of 35½-28½.

No further doubt is allowed: Stockfish is the best chess player ever!

From a statistical point of view, this isn't actually significant, despite the fact that draws help reduce the variance. 45 of those games are draws, leaving a 13-6 score in favor of Stockfish.

Considering a null hypothesis of a binomial distribution with n=19 and an equal chance of winning each decisive game, the two-sided p-value for that score is 0.115 (a quick way to check this is sketched below). Unless you already have strong evidence that Stockfish is better than Komodo, you shouldn't conclude anything about which one is best.

What sets Stockfish apart is its open testing framework. Many volunteers allocate CPU time to run test games, checking whether a pull request actually adds Elo rating points. Also, its open development model is a big advantage compared to closed-source chess engines, which usually have just one or two developers. The ideas used are mostly the same in all top engines; the advantage of Stockfish comes from tuning/testing/polishing/tweaks.
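For the statistical point above, a quick sanity check using SciPy is sketched here; the exact p-value depends on which two-sided convention you use, but any reasonable choice lands well above 0.05, which is the point being made:

```python
from scipy.stats import binomtest

# Null hypothesis: each of the 19 decisive games is a fair coin flip.
# Observed: 13 Stockfish wins vs 6 Komodo wins.
result = binomtest(13, n=19, p=0.5, alternative="two-sided")
print(result.pvalue)  # comfortably above 0.05, so not significant on its own
```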

A year ago Stockfish was near the top, but not quite there. Since then its development model has become more open, and the open testing framework was introduced. The rate at which it has been adding Elo rating points since then has been amazing.

You can watch Magnus Carlsen play a chess program on his phone here: As you can tell from his commentary, he thinks the machine is rather stupid (and he is winning after the opening), but the machine is a much better calculator than he is, and when the situation becomes more concrete he has to force a draw. Of course, the computer on his phone is considerably worse than the best chess engines, but top chess players generally consider computers to be excellent calculators but dumb in terms of general strategy.

"In chess I'm good at strategy but terrible at calculating." Chess is 99% tactics/calculation. What we call 'strategy' is just a set of heuristics that we use to avoid having to do endless calculations. However, a lot of those heuristics are already included in most chess-playing software. So if you're a weak player overall, even if you have some strategic acumen, your contribution in an assisted chess setting will be negligible. The computer will be doing all the work anyway.

My rating is around 2000 and I have done some assisted chess playing, and I can tell you that it's extremely hard not to just take the computer's suggestion at every move. The chance that I'll come up with some brilliant move that the computer missed is very slim.

I think I misunderstood what 'calculating' means in a chess setting. I thought it meant checking the current position of the pieces and making sure you are not about to be attacked. But googling it suggests it's more about thinking of the value of each move relative to the others.

If that's the case, I'm not actually bad at that.

"I can tell you that it's extremely hard not to just take the computer's suggestion at every move." Is that how it works?

The computer just basically plays and shows you some moves it likes? That's not what I meant. I was thinking that you tell the computer something like: I want to capture piece X using Y 10 to 20 moves from now, perhaps by going via this direction. Tell me the best series of moves to get there while avoiding traps. Or, even better, give it 2 or 3 such scenarios and have it tell you how dangerous each one would be, so you can pick one. Basically, really narrow down the permutations the computer has to calculate.

There is a chess app on iOS that suggests moves and warns you about obvious situations where you would lose a piece. It's using an old but still popular engine: I had the most fun with this chess engine.

The Stockfish engine was not fun at all; it was playing like an 'asshole' most of the time, and when the AI strength was reduced it was like playing against a stupid 'asshole'. When playing against a human you feel like your opponent is on to something, but against the AI you feel like you're running for your life.

Mind you, though, I am a very amateur chess player; I play only recreationally.

Stockfish has been available for quite a while, and over the last few years has been recognised as the strongest open-source chess engine. It's fantastic to see it take on the best commercial engines toe-to-toe and come out on top. These ratings are computer ratings, and because the engines play very much in isolation (computers vs other computers), without enough encounters against humans, the ratings are consistent within the computer sphere but not really like-for-like with human rating lists. A lot has happened since Deep Blue - that was just a 6-game match, which really only proved that machines can play near grandmaster level, but without nerves. And at times that is good enough to beat a world champion in match conditions, because the human is more susceptible to the psychological events. Since Deeper Blue we've seen the emergence of chess engines on personal computers (rather than racks and racks of RS6000 mid-range servers), with Chess Genius, Fritz, Shredder, Junior, Rybka, Houdini and now Stockfish pushing chess engines ahead.

The Chess Genius - Junior era showed chess engines capable of matching grandmasters at blitz/active-play speeds, but still struggling at slower time controls. Rybka onwards shows engines that seriously challenge elite grandmasters. Chess hasn't been solved. Just watching Magnus Carlsen's play shows we are expanding chess knowledge incrementally, one game at a time.

Yes, progress has slowed down, but the incremental improvements are still there. Chess technique has been refined to a super high degree, but is still far short of 'solving chess'. Perhaps this current generation can beat grandmasters when psychological factors are minimised; it's hard to tell. It's certainly not as clear-cut as a Porsche vs Usain Bolt over 100 meters. A six-game match is not a statistically significant set of data.

And Deep Fritz hasn't played many humans in those 8 years since. The Deep Fritz on the rating list - is that the same Deep Fritz that played Kramnik? (Notably the hardware and software version used may be different - perhaps the Deep Fritz on the rating list is weaker than the one that played Kramnik?) And also, be careful trying to compare computer-list Elo with human-list Elo.

Because participants on the two lists are isolated from each other, there is a tendency for one set of ratings to be inflated or deflated compared to the other. We see this in international chess: in countries that don't have regular FIDE tournaments, players tend to be rated either significantly higher or significantly lower than the prevailing norms in the bigger pool of players. South Africa and Myanmar are two such examples.

Computer programs don't play that many human tournaments; it was a rare sight even when they were clearly weaker. There's a lack of data, so the result is inconclusive.

Computer software doesn't play chess. It doesn't understand positions. It has a bunch of algorithms and processes that take a board position, try each candidate move, and turn the resulting position into some sort of number through an evaluation function. It goes down a tree like that until it either reaches a definite conclusion or reaches a depth where trying to go deeper takes far too much time.
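As a rough illustration of that process (not how any particular engine is actually implemented), here is a tiny negamax search using the python-chess library, with a bare material count standing in for the evaluation function:

```python
import chess

PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

def evaluate(board):
    """Static 'guess' used at the horizon: material balance, side to move's view."""
    return sum(PIECE_VALUES[p.piece_type] * (1 if p.color == board.turn else -1)
               for p in board.piece_map().values())

def negamax(board, depth):
    """Try each candidate move, recurse down the tree, and fall back on the
    static evaluation once the depth budget (the horizon) runs out."""
    if board.is_checkmate():
        return -100_000                    # definite conclusion: side to move is mated
    if board.is_stalemate() or board.is_insufficient_material():
        return 0                           # definite conclusion: drawn
    if depth == 0:
        return evaluate(board)             # horizon reached: static guess
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

print(negamax(chess.Board(), 3))   # even depth 3 visits thousands of positions
```

Real engines add alpha-beta pruning, move ordering, quiescence search and so on, but the shape is the same: a tree of candidate moves capped off by a static guess.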

All of this is an artificial simulation of how chess is played. And it is, long-term, at the mercy of the accuracy of the evaluation function, because right at the end of its search depth it has to evaluate that position, which is effectively a guess.

The most accurate way of evaluating a position is to try each move and go down a search tree of best moves until you reach a definite conclusion. But the evaluation is used because at that point it is computationally too expensive to go down the move tree any deeper. It's a fudge of brute-force analysis: there aren't enough computational resources available to go any deeper, so the computer must guess. This is the horizon effect; computers can't see past it. That evaluation function is a human-written piece of code that weighs various on-board factors: piece placement, pawn structure, strong and weak squares, king safety, piece activity, central control.
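To make that concrete, here is a toy, hand-written evaluation in the spirit of those factors, assuming the python-chess library; every weight here is an invented illustration, which is exactly the problem being described - a human has to guess the numbers:

```python
import chess

CENTER = (chess.D4, chess.E4, chess.D5, chess.E5)
PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

def evaluate_white(board):
    """Hand-written evaluation terms, score in centipawns from White's view."""
    score = 0
    # Material, plus a small bonus for occupying a central square.
    for square, piece in board.piece_map().items():
        sign = 1 if piece.color == chess.WHITE else -1
        score += sign * PIECE_VALUES[piece.piece_type]
        if square in CENTER:
            score += sign * 20
    # Crude pawn-structure term: penalise doubled pawns.
    for color, sign in ((chess.WHITE, 1), (chess.BLACK, -1)):
        pawns = board.pieces(chess.PAWN, color)
        for file in range(8):
            on_file = sum(1 for sq in pawns if chess.square_file(sq) == file)
            if on_file > 1:
                score -= sign * 15 * (on_file - 1)
    # Piece activity, very roughly: legal-move count for the side to move.
    score += (1 if board.turn == chess.WHITE else -1) * 2 * board.legal_moves.count()
    return score

print(evaluate_white(chess.Board()))   # small score for the near-symmetric start
```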

In effect, the human is trying to program intuition into the machine. Humans don't really understand intuition, let alone how to program a computer to use it. The chess strength of software is determined substantially by the number and speed of CPUs, memory capacity, and the evaluation function.

The evaluation function is the weakest part of that chain, so there's considerable effort to hold off the use of the evaluation function for as long as possible. I think only the Rybka developers spent more time trying to teach the engine how to play chess, and for a few years that proved more successful than the 'more power, quicker' approach that led the industry. The evaluation function is the most difficult part of a chess engine.

It is the typical AI problem, and humans can only take it so far. How do you find a human who can comprehend how a grandmaster thinks about a chess position and determines the right move, and who is still capable of emulating that process in software? At least Rybka's developers were International Masters. Also, humans have the strength to adapt and refine - to patch their own chess-playing abilities. Look at Carlsen's style: it's an ever more refined Karpov style, which itself was a more refined Capablanca style.

Carlsen excels in the kinds of positions computers don't manage well: deep, strategical, long-range plans, well executed. A lot of Carlsen's chess strength is intuition and feel, with impeccable analysis to confirm his hypotheses. And he's one of the post-ChessBase crowd, grown up learning chess with computers. That is an opponent a computer should fear, if it could ever comprehend the notion. So yes, in terms of chess-playing strength, humans still play chess better than computers.

The computer is incredibly strong and there is no interest in a Man vs Computer match, but I wouldn't say chess is close to being solved or that it's impossible to make further progress, for three reasons: 1) Computer vs Computer games can still go both ways, so clearly one chess engine makes mistakes that others can exploit; 2) some sequences are still hard for computers to find - e.g. the long king walk by White is beyond the horizon of the computer; 3) Computer + Man against Computer + Man correspondence matches do benefit from human intervention.

It's actually not that common.

Players will use chess engines to analyze positions, but they will rarely play against them because it doesn't provide realistic practice for an actual chess match. For example, it may be reasonable to play an aggressive variation against an opponent because you think they might have difficulty finding a response in time pressure, but a computer can make precise calculations in any situation, and so such a strategy almost always backfires. What's more, chess is often abstracted at higher levels in terms of things like long term plans ('I want to put pressure on the c7 pawn and control c6') instead of concrete material gains, which allows players to make progress even in positions where there are no direct threats and no sensible exchanges of pieces.

Computers, when faced with such situations, will know that the position is objectively drawn and so will just shuffle their pieces around aimlessly, and playing against this kind of thing is not very good practice for actual human opponents, who will try to find ways to beat you anyway.

Playing against engines would only be realistic if they were preparing for some sort of Man vs Computer tournament. Computers don't play like humans, and they are consistent in their weaknesses as well as their strengths.

As players are more exposed to playing against various chess programs, they learn what positions computers are not so good at (long-range strategical themes, beyond the typical move horizon). A local optimisation, of sorts. So players start adopting an anti-computer style of play, which requires a different mind set to pull off.

That in turn affects their natural style of play. So I'd argue the opposite: when preparing for top-flight tournaments, players probably avoid or severely limit playing against chess engines, to keep this anti-computer bias out of their play. Sure, the engines are good for checking variations and analysis, as an assistant. But not as a leader, nor as an opponent.

Nearly all the evaluation terms in Stockfish have been extensively tuned. Joona Kiiski used the SPSA algorithm on a lot of them.
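SPSA (simultaneous perturbation stochastic approximation) is worth a quick illustration: it estimates a gradient from just two noisy measurements per iteration, no matter how many parameters there are, which is what makes it practical when each measurement means playing a batch of games. A toy sketch follows; the quadratic objective is a made-up stand-in, not anything taken from Stockfish's actual tuning setup:

```python
import random

def spsa_maximize(theta, objective, iterations=200, a=0.1, c=0.05):
    """Minimal SPSA: estimate a gradient from two noisy measurements per step
    by perturbing every parameter at once with a random +/-1 vector."""
    theta = list(theta)
    for _ in range(iterations):
        delta = [random.choice((-1, 1)) for _ in theta]
        plus  = [t + c * d for t, d in zip(theta, delta)]
        minus = [t - c * d for t, d in zip(theta, delta)]
        # In engine tuning, objective() would be the score of a batch of games.
        g = (objective(plus) - objective(minus)) / (2 * c)
        theta = [t + a * g * d for t, d in zip(theta, delta)]
    return theta

# Made-up stand-in for "playing strength as a function of two evaluation weights".
best = spsa_maximize([0.0, 0.0], lambda w: -(w[0] - 1.0) ** 2 - (w[1] + 0.5) ** 2)
print(best)   # drifts toward the optimum (1.0, -0.5) without an explicit gradient
```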

Others have been hand-tuned using tens of thousands of games per attempt on fishtest. Fishtest actually just recently got support for running SPSA tuning as well. There is also a strong bias towards simplification, so if an evaluation feature is not proven to be an improvement it will be removed. Over the last few Stockfish versions, the number of lines of code has actually decreased with each version.

What separates one chess engine from another is its ability to efficiently navigate and evaluate the game tree. Stockfish effectively discards probably 95% or more of the nodes in any position.

Easier said than done: better not discard the best move in that 95%. Evaluation is relatively simple - a fancy lookup table. They say love covers a multitude of sins.

Well, searching one move deeper than your opponent covers a multitude of static positional evaluation mistakes. Searching deeper can figure out the complexity of the position better than anything that can be statically evaluated. That's where virtually all the improvements are today: figuring out which moves can be safely discarded.

Are chess engines still trying to play against humans in an interesting way? (I understand they beat human players, but that people feel computers play in dull ways.)

Is there a Turing Test for computer chess, where humans and computers play each other and they, and commentators, analyse the play, but no one knows who is a computer and who is a human until after the commentary is published? And if we ignore humans, are people playing computers against other computers for some kind of machine-learning play? And how optimized for speed is the software? Do they really crunch out all the performance they can? (Sorry for the barrage of questions, but I don't know enough about this space to do efficient web searches.)

"Are chess engines still trying to play against humans in an interesting way? (I understand they beat human players, but that people feel computers play in dull ways.)" Depends on what you mean by dull. Computers play so well that it tends to be a complete mop-up regardless of any 'anti-computer' strategies people may try.

The dull aspect is the one-sidedness. What's hard to do is make computers play weakly in a human-like way. Lobotomized computers tend to make very inhuman blunders. "Is there a Turing Test for computer chess, where humans and computers play each other and they, and commentators, analyse the play, but no one knows who is a computer or human until after the commentary is published?"

Not that I've ever seen, though it still wouldn't be much of a challenge. Computer moves are pretty easy to spot: generally unintuitive or seemingly paradoxical moves that have a very concrete justification. Especially, as I said above, if they were set to play at a weaker level. "And if we ignore humans, are people playing computers against other computers for some kind of machine-learning play?" I'll let others answer this, but it would surprise me if nobody was. Still, improving evaluation heuristics and analysis efficiency seems to be yielding better results than just machine learning. "And how optimized for speed is the software? Do they really crunch out all the performance they can?" The more positions you can examine per second, the deeper your search tree can go, and the better your evaluations, etc.