Checkmate: DeepMind's AlphaZero AI clobbered rival chess app on non-level playing, er, board
Katyanna Quach
Source: The Register, 14 Dec 2017


Full Text, including Comments

Checkmate: DeepMind's AlphaZero AI clobbered rival chess app on non-level playing, er, board
Good effort but the games were seemingly rigged
By Katyanna Quach, 14 Dec 2017 at 01:05


Analysis DeepMind claimed this month its latest AI system – AlphaZero – mastered chess and Shogi as well as Go to "superhuman levels" within a handful of hours.

Sounds impressive, and to an extent it is. However, some things are too good to be completely true. Now experts are questioning AlphaZero's level of success.

AlphaZero is based on AlphaGo, the machine-learning software that beat 18-time Go champion Lee Sedol last year, and AlphaGo Zero, an upgraded version of AlphaGo that beat AlphaGo 100-0.

Like AlphaGo Zero, AlphaZero learned to play games by playing against itself, a technique in reinforcement learning known as self-play.

“Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case,” DeepMind's research team wrote in a paper detailing AlphaZero's design.

AlphaZero faced Stockfish, a chess-playing AI program that won the Top Chess Engine Championship (TCEC) last year. AlphaZero won 28 games of chess, drew 72, and lost none against Stockfish.

Shogi, a Japanese strategy game similar to chess, is more complex. Here, AlphaZero won 90 games against Elmo, a shogi computer engine, drew two, and lost eight.

The rules of the two board games were provided to AlphaZero, and the system learned to master them both over the course of 68 million self-play games. To put it another way, AlphaZero took four hours to grasp chess to a level where it could beat Stockfish, spending nine hours in total on the game format – and took less than two hours to master Shogi to the point where it could see off Elmo. AlphaZero also creamed DeepMind's Go-playing AI AlphaGo Lee after eight hours of training.
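For readers wondering what "self-play" looks like in practice, here is a rough Python sketch of the general loop – not DeepMind's code; mcts_move and update_network are hypothetical stand-ins for the search and training steps the paper describes.

    def self_play_game(network, game):
        # Play one game against itself, recording each position and the search's move probabilities
        history = []
        state = game.initial_state()
        while not game.is_terminal(state):
            move, search_probs = mcts_move(network, state)   # tree search guided by the network
            history.append((state, search_probs))
            state = game.apply(state, move)
        outcome = game.result(state)   # +1 win, 0 draw, -1 loss (per-side sign flips omitted for brevity)
        # Each recorded position becomes a training example: (position, search probabilities, final outcome)
        return [(s, p, outcome) for (s, p) in history]

    def train(network, game, num_games):
        for _ in range(num_games):
            examples = self_play_game(network, game)
            update_network(network, examples)   # fit the policy to the search probs, and the value to the outcome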

It’s an impressive feat – but one that was achieved by carefully manipulating the experiment, Jose Camacho Collados, an AI researcher and an international chess master, argued in an analysis this week.


Firstly, DeepMind is part of Google-parent Alphabet, and thus has access to massive computing power. AlphaZero was trained on 64 TPU2s – the second generation of Google’s TPU accelerator chip – plus a whopping 5,000 first-generation TPUs that generated the self-play games from which it learned.

That means, as Camacho Collados pointed out, the training AlphaZero received works out at roughly two years of computation on a single TPU. In contrast to that processing power, Stockfish and Elmo were given only 64 x86 CPU threads and a 1GB hash table, meaning the two engines were not on an equal footing to begin with.
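As a back-of-the-envelope check of that figure (our arithmetic, not anything from the paper): four hours of wall-clock training spread across roughly 5,064 TPUs comes to a couple of years of single-TPU time.

    tpus = 64 + 5000               # training TPU2s plus the TPU1s generating self-play games
    hours = 4                      # wall-clock time to reach Stockfish-beating strength at chess
    tpu_hours = tpus * hours       # about 20,256 TPU-hours
    print(tpu_hours / (24 * 365))  # roughly 2.3 years on a single TPU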

AlphaZero ran on math-crunching hardware dedicated to neural networks, while its opponents ran on PCs. Think supercar versus a Ford Focus.

“The experimental setting does not seem fair,” Camacho Collados said. “The version of Stockfish used was not the last one but, more importantly, it was run in its released version run on a normal PC, while AlphaZero was ran using considerable higher processing power. For example, in the TCEC competition engines play against each other using the same processor.”

Next, DeepMind's paper stated that both systems, AlphaZero and Stockfish, were given one minute to make a move. That is highly unorthodox for tournament play. As everyone knows, in a chess match, players are typically given a bank of time in which to make all their moves, not a countdown per move. For example, the World Chess Federation gives players "90 minutes for the first 40 moves followed by 30 minutes for the rest of the game with an addition of 30 seconds per move starting from move one."

That means some actions, such as early moves, can be performed quickly, banking extra time – more than a minute if needed – for later-stage maneuvers. Stockfish was designed to manage a whole-game clock in this way, not to play against a minute-long shot clock.
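To see why that matters, here is a toy illustration in Python – emphatically not Stockfish's actual time-management code, and the function and figures are made up for illustration – of how an engine built for a whole-game clock budgets its thinking, something a flat 60-second cap throws away.

    def budget_for_move(remaining_seconds, moves_to_go, critical=False):
        # Spread the remaining clock evenly over the moves still expected...
        base = remaining_seconds / max(moves_to_go, 1)
        # ...but splurge on positions the engine judges to be critical.
        return base * 3.0 if critical else base

    print(budget_for_move(90 * 60, 40))                 # routine move: about 135 seconds
    print(budget_for_move(90 * 60, 40, critical=True))  # critical move: about 405 seconds, impossible under a 60s cap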

AlphaZero, on the other hand, was optimized for minute-by-minute play. For every move, the neural network took the positions on the board as input, spat out a range of candidate moves, and chose the one with the highest chance of winning. It learned this through self-play, using a Monte Carlo tree search algorithm to sort through the potential strategies.
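For the curious, the move-selection step in that kind of tree search typically looks something like the Python sketch below – a generic PUCT-style formula of the sort used in this family of algorithms, not AlphaZero's published code; the Node class and the constant c_puct are illustrative assumptions.

    import math

    class Node:
        def __init__(self, prior):
            self.prior = prior        # move probability suggested by the neural network
            self.visits = 0           # how many times the search has tried this move
            self.value_sum = 0.0      # accumulated evaluations from those visits
            self.children = {}        # move -> Node

        def value(self):
            # Average evaluation of this move so far (0 if never visited)
            return self.value_sum / self.visits if self.visits else 0.0

    def select_child(node, c_puct=1.5):
        # Pick the child move that best balances known value against unexplored promise
        total_visits = sum(child.visits for child in node.children.values())
        def score(child):
            exploration = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visits)
            return child.value() + exploration
        return max(node.children.items(), key=lambda kv: score(kv[1]))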

Camacho Collados noted:

The selection of the time seems odd. Each engine was given one minute per move. However, in the vast majority of human and engine competitions each player is given a fixed amount of time for the whole game, and then this time is administered individually. As Tord Romstad, one of the original developers of Stockfish, declared, this was another questionable decision in detriment of Stockfish, as “lot of effort has been put into making Stockfish identify critical points in the game and decide when to spend some extra time on a move.”

The decision to go with one-minute timeouts, as well as under-powering its competitors, seems awfully convenient for DeepMind.

It’s also difficult to really scrutinize AlphaZero since DeepMind has not released the code for any of its game-playing systems. It’s impossible to test any of the claims made, or to check whether the results are reproducible.

In the paper, ten games played between AlphaZero and Stockfish were cherry-picked by the researchers to show AlphaZero winning. The losses it faced against Elmo in Shogi have not been published, so it’s impossible to see where the software was inferior.

“It is customary in scientific papers to show examples on which the proposed system displays some weaknesses or may not behave as well in order to have a more global understanding and for other researchers to build upon it,” Camacho Collados wrote.

“We should scientifically scrutinize alleged breakthroughs carefully, especially in the period of AI hype we live now. It is actually responsibility of researchers in this area to accurately describe and advertise our achievements, and try not to contribute to the growing (often self-interested) misinformation and mystification of the field.

“I personally have a lot of hope in the potential of DeepMind in achieving relevant discoveries in AI, but I hope these achievements will be developed in a way that can be easily judged by peers and contribute to society."

Other machine-learning experts El Reg chatted to this week privately agreed that while AlphaZero is a cool research project, it is not quite the scientific breakthrough the mainstream press has been screaming about.

A spokesperson from DeepMind told The Register that it could not comment on any of the claims made since “the work is being submitted for peer review and unfortunately we cannot say any more at this time.” ®




Comments (47)


Charlie Clark
Re: Google /Alphabet PR
Not just. The advances in self-teaching demonstrated by going from AlphaGo to AlphaZero are very impressive. As is the work done by Google on its own TPU chips.

Anonymous Coward
Re: Google /Alphabet PR
The advances in self-teaching demonstrated by going from AlphaGo to AlphaZero are very impressive. As is the work done by Google on its own TPU chips.

If they're that impressive, why did they have to have to rig the games? Is anyone planning to sell a block of shares soon?

Charlie Clark
Re: Google /Alphabet PR
why did they have to have to rig the games

Despite what the article suggests the games weren't rigged at all and Google had no influence in them. In terms of hardware it wasn't a level playing field but the hardware advantages don't really explain the difference in the scores.

I'm sure Google would be more than happy for a rematch with beefier opponents, though it might be worth noting that more hardware might not help the other side much.

BarryUK
Re: Google /Alphabet PR
They kind of do - you would expect Stockfish running on 100 CPUs to beat Stockfish running on 1 CPU, which seems to be about the level of disparity here. Unless the computing power is the same you can't say whether Google's algorithm is superior.

Given how amenable chess is (unlike Go) to the brute force style approach I would be surprised if the neural network AI could really produce a better engine.

ma1010
Re: Google /Alphabet PR
@Charlie Clark

Like HELL the hardware doesn't matter. Let's you and me have a motorcycle race. I get to ride my 1800 CC Gold Wing. You get a 50 CC moped. All else being equal, I will win quite easily.

skepticdave
Re: Google /Alphabet PR
It would be interesting to see AlphaZero against a grandmaster under tournament conditions. There was nothing in its play to suggest that it could outplay a GM. And Stockfish (under its crippled conditions) made mistakes that a club player could have exploited.

skepticdave
Re: Google /Alphabet PR
"I'm sure Google would be more than happy for a rematch with beefier opponents,"

I beat the blind cripple in a boxing match - easily! I COULD have beaten Anthony Joshua, but I simply CHOSE to fight the blind cripple. But trust me, if my opponent had been Anthony Joshua, I would STILL have won. And you know this must be true, because my friend says so.

Randy112235
Re: Google /Alphabet PR
They didn't rig the games; half the stuff in the article is FUD. The two TPUs AlphaZero ran on while playing Stockfish are not that much more powerful than the CPU Stockfish ran on – it's just that AlphaZero needs TPUs (similar to GPUs) and Stockfish needs a CPU.

Charlie Clark
Re: Google /Alphabet PR
@ma1010

Sure, let's go but I get to choose the surface we ride on.

Lots of computer problems do not scale linearly. Google did not deliberately set up a crippled opponent: the advantage of AlphaGo is mainly in the approach and the training.

iron
Re: Google /Alphabet PR @Randy112235
> The 2 TPU's AlphaZero ran on while playing Stockfish is not that much more powerful than the CPU Stockfish ran on

You need to visit an optician, the article clearly states AZ ran on 64 TPU2s and 5,000 TPU1s. If 5,064 specialised chips are "not that much more powerful" than a single general purpose x86 CPU then clearly Google has invented a total lemon of a chip that should be consigned to the dustbin of history asap.

Notas Badoff
The lies come at no extra charge!
So this sounds a lot like strategies used while benchmarking _our_ systems vs. _their_ systems. Whatever you could do to make your stuff look N times better, lies included.

Yes, this is how good that model mainframe is. Look at our benchmark numbers! (Done on a 4 CPU installation, and we're selling you the 2 CPU installation...)

Lysenko
Let's be realistic...
What is this stuff for? What is Google for? This isn't about playing chess or helping people search the internet, it's about advertising. Google is an ad delivery network that uses a search engine as bait and this AlphaZero thing is an ad delivery optimisation engine that just so happens to be able to play chess as a side effect.

On that basis, it is perfectly understandable that Google refuses to publish all the test data. Sophistry, mendacity and psychological manipulation are the pillars on which the advertising industry stands. Criticising an ad firm for being economical with the truth is like criticising the sea because you can't drink it.

DocJD
Re: Let's be realistic...
As I understand it, the company is now called Alphabet at the top level because they are, in fact, more than one company. Google is still the search engine/advertising part, but there are other companies split off from Google under the Alphabet umbrella. The fact that these other companies may not be making a profit right now doesn't mean they don't plan to at some time in the future.

Jellied Eel
Re: Let's be tax efficient
The fact that these other companies may not be making a profit right now doesn't mean they don't plan to at some time in the future.
Now that's something an AI could help with, assuming the CFO/Treasurer lets them. So Alpha-AI is an R&D shop, so tax credits. If it generates a loss, no tax and possibly some relief. Then maybe once it's working, the AI servers are installed in Alphabet's mysterious barges, and off-shored. Then it can provide AI as a service to other Alphabet companies to make sure they're not profitable. Shifting revenues and costs around subsidiaries is far more profitable than shifting virtual game pieces..

Rebel Science
DeepMind is clueless about how to achieve AGI
DeepMind has never made a breakthrough in AI and never will. They essentially apply well-known techniques invented by others (Monte Carlo search, deep learning and reinforcement learning) to games chosen for their limited number of behavioral options. I would be infinitely more impressed if they made a robot that could walk in any generic kitchen and fix a meal of scrambled eggs with bacon, toast and coffee.

As an aside, Demis Hassabis and his team at DeepMind are on the record for suggesting that the human brain uses backpropagation for learning. They published a peer-reviewed paper on it. I cringe when I think about it.

Dave 126
Re: DeepMind is clueless about how to achieve AGI
Google X, and later Alphabet, did have walking robots, but they sold off Boston Dynamics to Softbank Group - presumably because the most promising market sector was the military.

Many of the other skills, such as image recognition and environment awareness, involved in the cooking task you outline are still being researched by Alphabet.

Tomato Krill
Re: DeepMind is clueless about how to achieve AGI
Well they bought Boston Dynamics first so don't get a bunch of credit for that...

John Savard
Flawed, Perhaps, but Valuable Still
Comments on the initial AlphaZero announcement fairly quickly took note of the large floating-point power used by AlphaZero, and the fact that Stockfish's hash tables were restricted to 1 GB.

But it is also a fact that chess experts noted AlphaZero's play included consideration of very subtle positional factors – something Stockfish does not excel at, though it is known to be a strength of the commercial chess engine Komodo.

It may well be that if one tried using equal hardware power to play chess by techniques similar to those used by AlphaZero, the result wouldn't be much better than had been achieved by the Giraffe chess engine. That took 72 hours, rather than 4, to teach itself to play chess - and it only got to International Master level, significantly inferior to that of Stockfish.

The thing is, though, it is still very significant to prove that something can be done at all, even if not necessarily in an efficient manner. Something can be a significant scientific advance in AI without being the most cost-effective way to make a strong chess engine.

It may well be that AlphaZero's feat, by demonstrating the validity of the neural network and Monte Carlo search approaches, will allow technology from Giraffe to be incorporated into programs like Stockfish to make them better.

sabroni
Re: it is still very significant to prove that something can be done at all
Indeed. They've proved that the more computing power you throw at a problem the faster you can fix it. TBF we did know that already. The way this "experiment" was designed it's clear it's mainly an advert for Google.

Charlie Clark
Re: it is still very significant to prove that something can be done at all
They've proved that the more computing power you throw at a problem the faster you can fix it. TBF we did know that already.

What we know is that this is rarely the case without doing work to improve the algorithms and parallelisation. Google has demonstrated that it has done this and also worked on improving the hardware by making it more suitable for the task at hand.

MonkeyCee
Re: Flawed, Perhaps, but Valuable Still
"That took 72 hours, rather than 4, to teach itself to play chess "

That 4 hours is meaningless.

It took 4 hours on 64 TPU2s and 5000 TPU1s. The quoted researcher reckons that's about 2 years per TPU (didn't specify gen 1 or 2), so being conservative AlphaZero took the equivalent of 128 YEARS (over a million hours) to get to the level it's at. Or if the TPU1s count, over ten thousand years.

Anonymous Coward
Re: Flawed, Perhaps, but Valuable Still
Two years total if they had used one TPU: four hours on 5,064 TPUs equals about two years on a single TPU. I have no idea where you are getting 128 years or more from.

Milton
AI = Marketing = Lying
I've bored the assembled commentardsphere more than once by pointing out that there is presently no such thing as "AI" and probably won't be for at least another decade—not the "artificial intelligence" that people meant when using the term for the last 50 years, before the marketurds got their slimy hands on it, anyway.

But if even the Reg simply won't be bothered to call out this brazen misuse of the term, slapped onto anything that uses "machine learning" techniques, I guess there's little chance for the rest of the media, scientifically and technically illiterate as 97% of it is.

Let's be clear, though, that the fibs, bias and propaganda associated with the various Alpha achievements are absolutely to be expected in this context. "AI" is being relentlessly hyped and exaggerated, the label is being misused, sometimes hilariously, machine learning tech is frequently being misapplied and wasted, and everyone who might have a dollar to spend is being told they've got to have it (usually via eyewateringly awful web ads). We saw this with "cloud", when the 1970s architecture of connecting remotely to powerful computing resources was resurrected as if it was a Wonderful New Thing; we've been seeing it with the Internet of Shyte, as every greedy idiot on the planet comes up with increasingly ludicrous reasons for connecting your toaster, dog, toothbrush, greenhouse and left lower molar to the internet, thereafter to be infected by malware and used for mining {Enter This Week's New Bit Currency Here} before it steals your identity, money, wife and aforementioned dog.

If politicians and marketurds are the scourge of our age, it's because they have one thing above all else in common: lies, lies, constant lies.

And poor dog.

diodesign
Re: AI = Marketing = Lying
"But if even the Reg simply won't be bothered to call out this brazen misuse of the term"

Holy balls, we just published hundreds of words calling out DM's approach - and we're still the bad guys. We use "AI" as shorthand for various related technologies just as "the cloud" covers IaaS, PaaS, SaaS, etc. The exact tech is defined, and "AI" is used to avoid repeating the same phrase over and over. We're not a dry technical manual.

You may have noticed we bounce between terms – IBM, Big Blue, Intel, Chipzilla, Microsoft, Redmond, crypto-currency, digi-dosh, etc – because it's more interesting to read, easier on the eye and mind, and still conveys overall the same message.

Trust me, trust us, after decades of writing and publishing, combined as a team, an article with the same terms repeated over and over and over and over stops being engaging – and becomes bland documentation.

C.

Androgynous Cupboard
Re: AI = Marketing = Lying
Techies are well aware that artificial intelligence is not intelligence, but it's still the blanket term that's used for this range of technology. Like the use of "hacker" for "cracker", your argument was lost a long time ago. See also "decimate", "tea" instead of infusion, the list goes on.

Charlie Clark
Re: AI = Marketing = Lying
Any form of technology sufficiently advanced can be considered magic

Add to this any form of inference engine sufficiently advanced can be considered intelligence. Games may be an extremely limited domain but even so the computer has effectively taught itself how to play and beat the best. This may make it more of an idiot savant than an Einstein, but I'm reasonably happy to class this as a kind of intelligence, similar to any rules-plus stuff like claims handling that we currently employ people to do.

AMBxx
Did it come up with anything new?
I'm crap at Chess, but do understand the notion that there are certain patterns of opening moves, mid-game, end-game etc. Did the AI come up with any new approaches?

Just like to know.

Anonymous Coward
Re: Did it come up with anything new?
Yes. It now sells more adverts.

astrax
Re: Did it come up with anything new?
AlphaZero is causing a stir in the chess community. I'm a big fan of agadmator's Youtube chess channel and I watched an analysis of one of the games between AlphaZero and StockFish. The greatest surprise was the way AlphaZero willingly gave up a whole piece (a Knight) to keep up its own momentum and refuse to allow StockFish to develop its pieces. Note this behaviour is very human; usually a chess engine will sacrifice a piece for some tangible, strategic gain or to implement a tactic.

This is the key difference here. The point to take away from this is that Google have not merely developed a more powerful chess engine that runs on more powerful hardware, rather they have created something that behaves much like an *extremely* strong human Grandmaster, not simply a super-powerful logic-monster. This will probably change the way elite chess players train for tournaments.

Keep in mind that even with the hardware handicap, StockFish could analyse up to 70 million positions a second and play with an Elo rating of 3300+. That AlphaZero took just 4 hours to learn the game from scratch and beat a well-honed engine like StockFish is pretty impressive.

Linky: Link

Sil
Re: Did it come up with anything new?
I am no expert, but the imposed time management (1 min per move) does not seem adequate if the standard is no limit per move / nn minutes total max.

It would be unfair to humans / computers following this standard.

AlphaZero's victory is impressive, but if it really just is throwing a lot of processing power on well known techniques such as Montecarlo search & reinforcement technique, it becomes much less impressive. Any company with access to supercomputers could do the same.

Dave 126
Re: Did it come up with anything new?
> I'm crap at Chess, but do understand the notion that there are certain patterns of opening moves, mid-game, end-game etc. Did the AI come up with any new approaches?

I haven't looked deeply into this Chess, but Alpha Go certainly came up with moves and strategies that human grandmasters said they had never seen before.

Archtech
Re: Did it come up with anything new?
For a different viewpoint, see this article: Link

These things are relative, and compared to grandmasters I too am a crap chess player. But I have spent a lot of time playing and studying the game, and I think The Register's article is far too dismissive. Chess players see AlphaZero as playing at a completely different level from any previous chess engines. Even though programs like Stockfish can easily make mincemeat of any human player - including the world champion - they still play in a distinctive, highly tactical style. There are still positions that completely baffle them, because all they really do is apply minimax to the deepest level they can.

From the ChessBase article linked to above, it would seem that AlphaZero combines the strengths of previous chess engines with those of very strong human players. The games it played against Stockfish are very impressive, as it completely outthinks Stockfish in a very human - or, rather, superhuman way.

Time will tell. On the one hand, if it's a genuine breakthrough, this could be one sign that AI is real. Remember, there was a lot of difference between the Wright Brothers' collection of bicycle parts and, say, a 747 - but the principles are the same and the time to go from one to the other not all that long.

Anonymous Coward
Bitter people are bitter...
Some guy who didn’t build an AI algorithm to play chess doesn’t understand what Google were testing.

For example, Stockfish won’t benefit from additional hardware. Stockfish being configured not using an opening book is to compare engine to engine strength (which is probably academically more interesting).

I am going to guarantee that in a few weeks DeepMind will redo the tests, jumping through the hoops that the waste-of-space detractors invent.

caesium
Re: Bitter people are bitter...
"Stockfish won’t benefit from additional hardware."

Of course it would, especially given the small time limit. Deeper search translates to stronger play with alpha-beta all else being equal. It might not have mattered significantly if the games were using a standard time limit.

"Stockfish being configured not using an opening book is to compare engine to engine strength"

Which is incompatible with how this was advertised. If one of the ideas is to compare machine-learning with human tuning, then we need to use all of the relevant strategy's resources, not ban a certain type of human tuning (which Stockfish was designed to play with) and then declare victory...

"jumping through the hoops that the waste-of-space detractors invent."

No hoops here, just use standard tournament rules, vanilla SF configuration and publish _all_ of the games. Or else we'll conclude Google spiced up their (already interesting) work.

Red Ted
Painting oneself in the best possible light
As with any research activity, it is to the benefit of the researchers to paint themselves in the best possible light. This then gets headlines for them and their funding body.

Always the devil is in the detail.

The same applies to the research that leads to "Cure for cancer found" headlines.

Simon Rockman
Chess and Go? It should be playing online poker.

Maybe it already is...

Missing Semicolon
Peer review test
Given the current issues around peer review, it will be interesting to see if the valid criticisms in the article will be re-iterated by the reviewers and result in the paper being updated.

Ugotta B. Kiddingme
... just until I need glasses?
"playing against itself, a technique in reinforcement learning known as self-play."

For some reason, I read that second word as "with"

hellwig
This is NOT AI
The neural network took the positions on the board as input, and spat out a range of moves and chose the one with the highest chance of winning at every move. It learned this by self-play and using a Monte Carlo tree search algorithm to sort through the potential strategies.

I f*cking KNEW it! I said this months ago when it beat GO. At some point, it could catalog enough moves to basically know where to go from any point. So it basically kept playing itself at each move, and had enough processing power to compute probably billions of moves in the one minute it allocated to each real move. The less possible moves remaining, the more likely it was to find a successful path through to nearly guarantee against a loss (it tied 72% of the time).

In high-school I tried this approach with a Mancala game. I gave the game a few seconds to basically try each possible move and build up a tree of most-likely-to-succeed moves. It sucked because I didn't really understand proper Mancala strategy and my high-school coding skills sucked, but apparently I was just lacking the hardware. If only I was programming in a 32-bit instruction set at the time.

Randy112235
Re: This is NOT AI
You can't brute force Chess let alone Go with the computing power we have now. The way you are describing is actually similar to what normal engines do now, Brute Force+pruning.

Destroy All Monsters
Re: This is NOT AI
The way to think about NN processing is: It's compression (throwing away "irrelevant" details):

Totally readable with a large picture of the bottled dog of John Bull:

Link

Also check out the Youtube video.

Destroy All Monsters
GOGGLE RAGE == GOOGAGE!!
A spokesperson from DeepMind told The Register that it could not comment on any of the claims made since “the work is being submitted for peer review and unfortunately we cannot say any more at this time.”

This not being legal proceedings or an IPO manoeuver, I call utter bullshit by the usual suspects.

Of course Google is at liberty to discuss AlphaWhatsists. The peer-reviewing researchers certainly won't mind, and one hopes that they evaluate the paper on its merits, and reject it if they hear the show was rigged.

Anonymous Coward
Science or marketing?
A key element of science is reproducible results, and expanding the wealth of knowledge for the benefit of mankind. If DeepMind aren't releasing any of their code, how is what they're doing 'science'?

Sounds more like marketing to me.

Bob Dole (tm)
Re: Science or marketing?
Ask the IPCC to release the underlying unmodified data they've used. oh wait...

Archtech
Re: Science or marketing?
Now that's a great suggestion... in principle. Have you any idea how much of the "science" that is currently being done relies heavily on secret code?

And that's research that deals with really important matters - not just playing a game as a first approach to developing AI techniques.

geremore
Check the games won by Alpha
What the "standard" chess programs fail to see, are crippled pieces. Almost all games won, there was a material balance or even a plus for Stockfish, but that program fails to notice a long inactivity of its pieces.
