- Artificial Intelligence (AI) is at the heart of IT research and pioneers automated problem-solving. A recent breakthrough that could shift the AI field was reported by Silver and colleagues of the famous Google DeepMind developer team around Demis Hassabis. They reported that a new adaptation of their in-house generalised neural network (NN) machine-learning algorithm, termed AlphaZero, outperformed the world's best-rated chess engine, Stockfish, after only four hours of self-learning, starting only from the rules of chess.
- AlphaZero originates from the superhuman AlphaGo program and beat the best chess engine via tabula rasa ("blank slate") self-play reinforcement machine learning, i.e. only by learning from many games played against itself. This claim of the world's strongest AI performance was drawn from a 100-game match between the AlphaZero and Stockfish engines and attracted much attention from the media, especially in the world of chess, which has historically been a key domain of AI. AlphaZero did not lose a single game and won 28 times; the remaining 72 games were draws.
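Tabula rasa self-play reinforcement learning can be illustrated in miniature. The sketch below is emphatically not AlphaZero's actual method (which combines a deep neural network with Monte-Carlo tree search); it is a tabular, Monte-Carlo-style value update that learns the game of Nim purely from games played against itself, starting from nothing but the rules. All names and parameters here are invented for illustration.

```python
import random

# Nim: a pile of stones; players alternately remove 1-3 stones;
# whoever takes the last stone wins.

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def train(episodes=20000, pile_size=10, eps=0.2, alpha=0.5):
    """Self-play training: one value table Q plays both sides.

    Q maps (pile, move) to the estimated value for the player to move.
    """
    Q = {}
    rng = random.Random(0)  # fixed seed for a reproducible sketch
    for _ in range(episodes):
        pile = pile_size
        history = []  # (state, move) for each ply of this self-play game
        while pile > 0:
            moves = legal_moves(pile)
            if rng.random() < eps:          # explore
                move = rng.choice(moves)
            else:                           # exploit current estimates
                move = max(moves, key=lambda m: Q.get((pile, m), 0.0))
            history.append((pile, move))
            pile -= move
        # The player who made the final move took the last stone and wins.
        # Walking backwards, the reward alternates sign between the players.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, pile):
    return max(legal_moves(pile), key=lambda m: Q.get((pile, m), 0.0))
```

After training, the table recovers the known winning strategy for small piles (e.g. take all stones when 1-3 remain; from a pile of 5, take 1 to leave the opponent the losing multiple of 4) without ever being told anything beyond the rules.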
- General reinforcement learning is very promising for many applications of mathematical solution-finding in complex domains. However, independently verifying whether this breakthrough AI claim can be upheld poses some major difficulties and raises inevitable doubts as to whether final proof has been given.
- Machine and method details are not available, and only 10 example games were given. This research starts with reproducibility testing of all 10 example games and reveals that AlphaZero shows signs of human-like openings and might have outperformed Stockfish due to an irregular underperformance of Stockfish 8, such as sub-perfect game moves and post-opening novelties.
- At this juncture, the testing revealed that AI quiescence searches could be improved via multiple roots for both engines, which could boost all future AI performances. In light of the lack of tournament conditions and an independent referee, and the challenges of comparing software and hardware configurations such as AlphaZero's TFLOP-scale calculation power, this work suggests that a final best-AI-engine claim requires further proof. Overclaim biases are found in all of today's sciences owing to publishing imperatives and the wish to be first.
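For readers unfamiliar with the quiescence searches mentioned above: a chess engine that stops its search at a fixed depth can misjudge positions where a capture is still pending, so at the horizon it keeps searching "noisy" (tactical) moves only, until the position is quiet. The following is a minimal, self-contained sketch of that idea over an invented toy game tree; the `Node` structure and all values are illustrative assumptions, and the paper's proposed "multiple roots" refinement is not shown.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    static_eval: int            # static evaluation from the side to move
    noisy: bool = False         # True if this node was reached by a tactical move
    children: List["Node"] = field(default_factory=list)

def quiesce(node: Node, alpha: int, beta: int) -> int:
    """Extend the search along tactical lines only, until the position is quiet."""
    best = node.static_eval     # "stand pat": the side to move may decline tactics
    if best >= beta:
        return best
    alpha = max(alpha, best)
    for child in node.children:
        if not child.noisy:
            continue            # skip quiet moves beyond the horizon
        score = -quiesce(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best

def negamax(node: Node, depth: int, alpha: int = -10**9, beta: int = 10**9) -> int:
    if depth == 0 or not node.children:
        return quiesce(node, alpha, beta)   # resolve pending tactics at the horizon
    best = -10**9
    for child in node.children:
        score = -negamax(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best
```

In this toy tree, a move that appears to win material at the horizon (static evaluation +5 for the mover) is correctly revised downwards once the opponent's pending recapture is searched, so the quieter alternative is preferred.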
- See Re-evaluation of AI engine alpha zero.
- The author is from "The University of Truth and Common Sense, Department of Theoretical Sciences, Germany", which sounds somewhat dodgy! I couldn't find anything about it on-line.
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2019
- Mauve: Text by correspondent(s) or other author(s); © the author(s)