Computing in the Real World
PC Pro
This Page provides (where held) the Abstract of the above Book and those of all the Papers contained in it.

"Collins (Barry) - Can AI Solve Chess's Stalemate"

Source: PC Pro - Computing in the Real World, Issue 330, April 2022 (Received in February 2022)

  1. This is a rather disappointing piece, in that it’s more about chess and its forms and popularity than about AI.
  2. The article’s title probably stems from the (to my mind absurd) suggestion of Nigel Short to change the Laws of Chess so that stalemate is a win but also from the fact that chess world championship matches have latterly had long strings of draws with the result decided by blitz games.
  3. The claim is that traditional AI – which embeds the strategies of its creators’ grandmaster advisors – has led to wars of attrition as players who train against it play one another.
  4. However, AlphaZero1, which has trained itself, adopts seemingly riskier positions – especially flank manoeuvres and a disregard for material as against piece activity – and has led some grandmasters to reassess certain positions and play in a more exciting style.
  5. The article makes analogies between chess and cricket, in that the latter has moved to shorter forms with more excitement. The article does think the ‘long form’ of chess will continue. In response to the ‘no stalemate’ suggestion, one of the thrills of test cricket is two tail-enders trying to hang on for a draw. That’s the stalemate analogy, as against the bore-draw where both sides slog away until time runs out.
  6. It also makes a useful point that playing on-line against a computer removes some of the stress and ego-damage caused by play against human opponents, both on-line and F2F.
  7. There are some – to my mind – useless ‘alternative rules’ chess games available on-line. The whole point of playing chess isn’t to calculate variations, even though this is necessary to avoid blunders, but to get a deep understanding of the game and its strategy. You can’t do this if you change the rules – you get a different game. Why would you want to play – superficially – a number of very similar games?
  8. However, it doesn’t go into how the various chess engines currently work. It notes that your phone can thrash the world champion, but doesn’t say how the phone’s software runs (presumably by calls to a server somewhere). Also, AlphaZero was trained on specialist hardware (5,000 TPUs) but ran on 42 CPUs and only 4 TPUs in its match against Stockfish. Are the results of AlphaZero’s training generally available to run? How do chess AIs run these days?

Paper Comment
  • Sub-title: “AI has been blamed for dragging professional chess into an endless succession of draws. But as Barry Collins discovers, it's also injecting new life into the game - for both pros and amateurs.”
  • PC Pro 330, April 2022
  • Photocopy filed in "Various - Miscellaneous Folder I: A - M".

In-Page Footnotes ("Collins (Barry) - Can AI Solve Chess's Stalemate")

Footnote 1:

"Kobie (Nicole) - Does Facial Recognition Have a Future?"

Source: PC Pro - Computing in the Real World

Full Text
  1. Introduction
    • In January 2020, Robert Julian Borchak Williams was handcuffed and arrested in front of his family for shoplifting after being identified by facial recognition used by the Detroit Police Department. The system was wrong and he wasn’t a criminal but, because a machine said so, Williams spent 30 hours in jail.
    • Williams has the distinction of being the first person arrested and jailed after being falsely identified by facial recognition – or, at least, the first person that we the public have been told about. The Detroit police chief said at a meeting following the reports of Williams’ arrest that the system misidentified suspects 96% of the time1. Given the wider discussion around reforming policing in the US following the killing of George Floyd2 by Minneapolis officers, it's no wonder calls for bans of the tech are starting to be heard.
    • Amazon, Microsoft and IBM soon paused sales of facial-recognition systems to police, although it’s worth noting that there are plenty of specialist companies that still sell to authorities. Politicians are calling for a blanket ban until the technology is better understood and proven safe. "There should probably be some kind of restrictions," Jim Jordan, a Republican representative, said in a committee hearing. "It seems to me it's time for a timeout.”
    • That’s in the US. In the UK, police continue to use the controversial technology. The Met Police used it at the Notting Hill Carnival and outside Stratford station in London, but the tech is also used by police in South Wales. "Facial recognition has been creeping across the UK in public spaces,” Garfield Benjamin, researcher at Solent University, told PC Pro. "It is used by the police, shopping centres, and events such as concerts and festivals. It appears in most major cities and increasingly other places, but is particularly prevalent across London where the Met Police and private developers have been actively widening its use.”
    • That's despite a growing body of evidence that suggests the systems aren’t accurate, with research from activist group Big Brother Watch3 claiming that 93% of people stopped4 by the Met Police using the tech were incorrectly identified. A further study by the University of Essex showed the Met Police's system was accurate 19% of the time5.
    • Can facial recognition ever work? Is a temporary ban enough? Or is this a technology that should forever be relegated to photo apps rather than serious use? The answers to these questions will decide the future of facial recognition - but the road forward isn't clear.
  2. The problems with facial recognition tech
    • The problems with facial recognition aren’t limited to a few instances or uses - it's across the entire industry. A study by the US National Institute of Standards and Technology tested 189 systems from 99 companies, finding that black and Asian faces were between ten and 100 times more likely to be falsely identified6 than people from white backgrounds.
    • What causes such problems? Sometimes the results are due to poor quality training data, which could be too limited or biased - some datasets don't have as many pictures of black people as other racial groups, for example, meaning the system has less to go on. In other instances, the algorithms are flawed, again perhaps because of human bias, meaning good data is misinterpreted.
    • That could be solved by having a “human in the loop”, when a person uses data from an AI but still makes the final decision - what you would expect to happen with policing, with a facial-recognition system flagging a suspect for officers to investigate, not blindly arrest. But we humans too easily put our faith in machines, says Birgit Schippers, a senior lecturer at St Mary’s University College Belfast. "There’s also concern over automation bias, where human operators trust, perhaps blindly, decisions proposed by a facial-recognition technology," she said. "Trained human operators should in fact take decisions that are based in law.”
    • Even a sound system trained well on a solid dataset can have downsides. “It has a profound impact on our fundamental human rights, beginning with the potential for blanket surveillance that creates a chill factor, which impacts negatively on our freedom of expression, and perhaps our willingness to display nonconformist behaviour in public places,” Schippers explained. "Another fundamental concern is lack of informed consent7 ... we do not know what is going to happen with our data.”
    • Then there's the other side of human intervention: misuse. "Another key concern is the way that facial-recognition technology can be used to target marginalised, vulnerable, perhaps already over-policed communities," she said.
    • Whether we allow the tech in policing or elsewhere should depend on whether the benefits outweigh the downsides, argues Kentaro Toyama, a computer scientist at the University of Michigan. "The technology provides some kinds of convenience – you no longer have to hand label all your friends in Facebook photos, and law enforcement can sometimes find criminal suspects quicker,” Toyama said. "But all technology is a double-edged sword. You now also have less privacy, and law enforcement sometimes goes after the wrong people." And it's worth remembering, added Toyama, that facial recognition isn't a necessity. “There was no such technology - at least, none that was very accurate - until five to ten years ago, and there were no major disasters and no geopolitical crises because of the lack8."
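The group-by-group disparities reported by the NIST study above come from measuring false-match rates separately per demographic group. A minimal sketch of that kind of measurement follows; the data and group labels are purely hypothetical, invented for illustration, and the real NIST methodology is far more involved:

```python
# Sketch: computing per-group false-match rates, as demographic-bias
# audits of facial-recognition systems do. All data here is invented.

from collections import defaultdict

# Each record: (demographic_group, system_said_match, actually_same_person)
results = [
    ("A", True,  False), ("A", False, False), ("A", True, True),
    ("B", True,  False), ("B", True,  False), ("B", True, True),
]

def false_match_rate(records):
    """Per-group false-match rate: the fraction of genuinely non-matching
    pairs that the system wrongly declared a match."""
    wrong = defaultdict(int)   # false positives per group
    total = defaultdict(int)   # non-matching pairs per group
    for group, said_match, same_person in records:
        if not same_person:
            total[group] += 1
            if said_match:
                wrong[group] += 1
    return {g: wrong[g] / total[g] for g in total}

rates = false_match_rate(results)
print(rates)  # {'A': 0.5, 'B': 1.0} - group B is misidentified twice as often
```

A ratio between groups' rates (here 2x) is the kind of figure the "ten to 100 times" finding summarises.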
  3. Fixing facial recognition
    • New technologies don't arrive fully formed and perfect. They need to be trialled and tested to spot flaws and bugs and knock-on effects before being rolled out more widely. Arguably, facial recognition has been rolled out too quickly because we're still clearly in the phase of finding the problems with and caused by this technology. So, now that we know about bias, inaccuracy and misuse, can we fix those problems to make this technology viable? “If people are seeking to make them fairer, then on a technical level we need to address bias in training data which leads to misidentification of women and ethnic minorities," said Solent University’s Benjamin. But, he added, you must be vigilant: "The audit data that is used to test these systems can be just as biased, and biased audits often conceal deeper flaws9. If your training data is mostly white males, and your audit data is mostly white males, then the tests won't see any problem with only being able to correctly identify white males.”
    • There have been efforts to build more diverse facial recognition training sets, but that only addresses one part of the problem. The systems themselves can have flaws with their decision-making, and once a machine-learning algorithm is fully trained, we don’t necessarily know what it’s looking for10 when it examines an image of a person.
    • There are ways around this. A system could share its workings11, telling users why it thinks two images are of the same person. But this comes back to the automation bias: humans learn quickly to lean on machine-made decisions. The police in the Williams case should have taken the facial recognition system as a single witness, and further investigated - had they asked, they would have learned Williams had an alibi for the time of the theft. In short, even with a perfect system, we humans can still be a problem12.
  4. Regulation, regulation, regulation
    • Given those challenges and the serious consequences of inaccuracy and misuse, it’s clear that facial recognition should be carefully monitored. That means regulators need to step in.
    • However, regulators aren’t always up to the job. "Facial recognition crosses at least three major regulators in the UK: the CCTV Commissioner, Biometrics Commissioner and Information Commissioner,” said Benjamin. "All three have logged major concerns about the use of these technologies but have so far been unable to come together to properly regulate their use. The Biometrics Commissioner even had to complain to the Met Police when they misrepresented his views, making it seem like he was in favour of its use. We need more inter-regulator mechanisms, resources and empowerment to tackle these bigger and more systemic issues.”
    • Beyond that, there is no specific regulation that addresses these concerns with facial recognition in the UK, noted St Mary’s University College Belfast’s Schippers, but there is currently a private member’s bill working its way through parliament, seeking to ban the use of facial recognition in public places13. In Scotland, MSPs have already made such a recommendation, but plans by Police Scotland to use the technology had already been put on hold.
    • Such a pause could let regulators assess how and when to use the technology. “As the pros and cons become clearer, we should gradually allow certain applications, at progressively larger scales, taking each step with accompanying research and oversight, so we can understand the impacts,” said the University of Michigan’s Toyama.
    • That’s worked for other potentially dangerous, but useful, advanced technologies. "The most effective form of this is the development of nuclear energy and weapons - not just anyone can experiment with it, sell it, or use it,” Toyama added. “It’s tightly regulated everywhere, as it should be."
  5. Time for a ban?
    • Facial recognition is flawed, has the potential for serious negative repercussions, and regulators are struggling to control its use. Until those challenges can be overcome, many experts believe the technology should be banned from any serious use. "There is little to no evidence that the technologies provide any real benefit, particularly compared to their cost, and the rights violations are too great to continue their deployment,” Benjamin said. Toyama agrees that a moratorium is necessary until the potential impacts are better understood. “Personally, I think that many uses can be allowed as long as they are narrowly circumscribed and have careful oversight … though, I would only say that in the context of institutions I trust on the whole,” Toyama explained.
    • Schippers would also like to see a ban - but not only on facial recognition technology's use by police forces, but by private companies too. “Retailers, bars, airports, building sites, leisure centres all use facial-recognition technology,” Schippers said. “It's becoming impossible to ignore.”
  6. What’s next?
    • Facial recognition is quickly becoming a case study in how not to test and roll out a new idea - but other future technologies could also see the same mistakes.
    • Look at driverless cars or drones: both are being pushed hard by companies and governments as necessary solutions to societal problems, despite the technologies remaining unproven, regulation not yet being in place, and the potential downsides not being fully considered.
    • That said, facial recognition seems more alarming14 than its fellow startup technologies. “There’s something about facial recognition that many people feel to be particularly creepy - but facial recognition is just another arrow in the quiver of technologies that corporations and governments use to erode our privacy, and therefore, our ability to be effective, democratic citizens,” said Toyama.
    • Due to that, and the inaccuracies, missteps and misuse, facial recognition faces a reckoning - and it’s coming fast.
    • “I think the next five years will see a strong tipping point either way for facial recognition,” said Benjamin. “With some companies ceasing to sell the technologies to the police, and some regulatory success, we could see them fall out of favour. But the government and many police forces are very keen on expanding their use, so it will be a matter of whether rights and equality or security and oppression15 win out.”
    • The pace of technology development continues to accelerate, but we should control its pace. We need to either slow it down via regulatory approval and testing, or speed up our own understanding of how it works and what could go wrong - or we risk more people like Williams being made victims of so-called progress.

Further Notes
Paper Comment
  • Sub-title: “Regulation against the future tech is looming amid concerns about its accuracy for policing and other public uses. Nicole Kobie reveals the future of facial recognition.”
  • PC Pro 312, October 2020

In-Page Footnotes ("Kobie (Nicole) - Does Facial Recognition Have a Future?")

Footnote 1:
  • This is an extraordinary failure rate, and I’d initially thought it a typo for “identified”, but apparently not.
  • It’s not clear, however, how the “identification” takes place. Presumably there’s not a national database of mug-shots, so does the AI just make its best guess from a database of ex-cons?
  • Maybe there’s a national database of ID cards, driver’s licenses or passport photos?
  • See Wikipedia: Identity documents in the United States.
Footnote 2:
Footnote 3:
Footnote 4:
  • So, marginally better than the US, but still terrible.
  • But – again – it’s not explained how the technology is used.
  • Also, as this is a pressure group, how confident can we be of their statistics?
Footnote 5:
Footnote 6:
  • I can well believe it, given the likely training algorithms.
  • But how does this stack up with the dire success rates already reported?
Footnote 7:
  • Well, for any system to work, there could be no question of “consent” – informed or otherwise – at the individual level (or all those that needed to be surveilled would opt out).
  • This is something that would need to be voted on (hopefully not by a referendum) as a general policy.
Footnote 8:
  • That is a very broad claim!
  • Covid-19 contact tracing … wouldn’t facial recognition help? Is it being used in the Far East? Maybe use of mobile-phone tracking instead? Just as invasive?
Footnote 9:
  • Well, yes, but these flaws can be fixed as well. Stop whining. We’re only arguing here about whether the tech can be got to work.
Footnote 10:
  • This is a general problem with machine learning – we don’t know how it does it.
  • But this is completely different from – say – credit rating. It either gets faces right or it doesn’t – if it does, we don’t care how it does it (like we don’t care how AlphaZero beats Stockfish). But a credit-rating isn’t a “fact” in the same way. A human that checks the algorithm might suffer from the same prejudices that are implicated in the training program. But you can’t be “prejudiced” about facial recognition, can you? You – and the AI – might both think you’ve identified someone – but you might both be wrong, and this is a fact that’s easily checked.
Footnote 11:
  • Really? Not in neural networks.
Footnote 12:
  • Well, yes – but that’s true of any technology, and it doesn’t suggest we all be Luddites.
Footnote 13:
  • Given the technology doesn’t yet seem to work, it shouldn’t be used to inform any decisions without human oversight.
  • But, like driverless cars, if you ban them from public spaces they will never improve.
Footnote 14:
  • You must be joking! Driverless cars and drones can directly lead to significant loss of life.
Footnote 15:
  • This is a very tendentious way of putting things. Face recognition – provided it works – is ethically neutral. It’s how it’s used and regulated that matters.

"Kobie (Nicole) - Quantum Supremacy is here - So what?"

Source: PC Pro - Computing in the Real World, 307, May 2020

Full Text
  1. Introduction
    • Given the idea central to quantum computing is that bits can be in multiple states at the same time, perhaps it's no surprise that Google's quantum supremacy claims are disputed - as is the importance of the milestone itself.
    • Last October1, Google released a paper claiming its quantum computer had hit that milestone, only for IBM to counter with a paper disputing that supremacy had been reached. Both sides have a point, but either way, we’re on our way to quantum computing - although don't expect a quantum laptop anytime soon, if ever. Indeed, we’re not really sure what architecture quantum computing will take, or how we'll use it.
    • Here's why the quantum supremacy milestone is much less dramatic - and more compelling - than you may realise.
  2. What is quantum computing?
    • Standard computers use binary: a bit is either on or off, a one or a zero. Quantum computers take advantage of subatomic behaviour that allows for particles to be in multiple states, so a quantum bit or “qubit” can be a one, a zero, anywhere in between or both. That allows an exponential increase2 in data storage and processing power.
    • A quantum computer would harness that massive power to process at a much faster rate than a standard computer, but there are challenges.
      1. First, we need to build one. There are machines being built by Google and IBM as well as California-based Rigetti Computing and Canadian D-Wave Systems, all with different techniques.
      2. Second, because of interference, quantum computing is all over the place - to put it mildly - meaning such systems require error correction.
      3. And, even with a working system, we need algorithms to manage its processes.
    • All of that development will take time. To track progress, in 2012 California Institute of Technology professor John Preskill came up with the idea of quantum supremacy as a milestone, and it's simple: we reach supremacy when a quantum computer can perform a computation that a traditional computer could not in a reasonable time frame. "It goes beyond what can be achieved by classical computing," said Toby Cubitt3, associate professor in quantum information at University College London.
    • Despite the dramatic name, all quantum supremacy really means is that a quantum computer has been built that works. "The terminology is unfortunate," admitted Cubitt, "but we appear to be stuck with it." That said, it's an important milestone, but only the first on the long road to quantum computing. "That's really a long way off," he added.
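The “exponential increase” queried in footnote 2 can be made concrete: an n-qubit register is described by 2^n complex amplitudes, so every qubit added doubles the state a classical simulator must store. A minimal back-of-envelope sketch (the 16-bytes-per-amplitude figure assumes double-precision complex numbers; no quantum library involved):

```python
# Sketch: an n-qubit state is a vector of 2**n complex amplitudes,
# so classical simulation cost doubles with every qubit added.

def state_vector_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits."""
    return 2 ** n_qubits

def memory_gib(n_qubits: int, bytes_per_amplitude: int = 16) -> float:
    """RAM (in GiB) to store the full state vector, assuming one
    double-precision complex number (16 bytes) per amplitude."""
    return state_vector_size(n_qubits) * bytes_per_amplitude / 2**30

for n in (10, 30, 53):
    print(n, state_vector_size(n), f"{memory_gib(n):.1f} GiB")
# The 53-qubit case (Sycamore's size) needs 2**27 GiB, about 128 PiB,
# which is why full state-vector simulation at that scale is infeasible.
```

This is also why the article can say a single extra working qubit matters so much: 54 qubits needs twice the memory of 53.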
  3. What did Google claim?
    • In October4, Google claimed to have hit that milestone using its Sycamore system. Google said in a paper in Nature5 that Sycamore performed a calculation in 3mins 20secs using 53 qubits, claiming that the same calculation would have taken 10,000 years using the most powerful supercomputer in the world, IBM's Summit. “Google was really trying to hit that quantum supremacy milestone,” said Cubitt. “They achieved a quantum computation that is at the boundary of what can be simulated or reproduced by the world's biggest supercomputers.” However, an early draft of the paper was leaked a month ahead of the Nature publication, giving IBM time to get ahead of those claims. In its own paper, IBM said that an optimised version of Summit could solve the calculation in 2.5 days - meaning Sycamore's feat didn't qualify as true quantum supremacy.
    • But the Sycamore wasn’t at full operation as one of its 54 qubits was “on the fritz”, said Michael Bradley, professor of physics at the University of Saskatchewan. “So only 53 were working." And that matters, because had that qubit been functional, the IBM paper's claim wouldn't stand. With that extra qubit, Sycamore would have had another power boost, which would have let it easily surpass IBM’s system. "Any computation can be reproduced on a classical computer, given enough time," said Cubitt. "How fast it can be done, that's what changes.”
    • “If the right type of algorithm is used, designed with the right structure, every qubit added to the computation will double the size of the problem to be solved for a classical computer.” In other words, if that fifty-fourth qubit had been working, the IBM system would have been left in the dust. "Once you have exponential growth, it doesn't matter how big of a computation you’ve done," Cubitt said, "because if you can just manage to simulate something on a classical computer, then if a few extra qubits are added, you're definitely not going to be able to make it twice as big as the world's biggest supercomputer.”
    • So why did IBM dispute Google’s Sycamore achievement? “This argument is a little bit [of] PR by IBM." said Cubitt. “Sour grapes that they didn't achieve it first.”
    • Of course, this is only the first step towards quantum computing - and the computation itself isn't useful. There's no requirement in Preskill’s definition that the computation have any purpose, and Google's doesn’t - it's essentially a random sampling equation. “It isn’t what most people would consider a computation,” said Bradley. "It's kind of an odd benchmark. It's a good benchmark, in that it demonstrates technologically that you can do the operations needed for a computation, but the actual computation is of no interest to anybody.” He added: “It doesn’t take away from the technological achievement, but I think in a way it's a little bit oversold."
    • Still, Bradley stressed that merely making the machine work is worth applauding. “It’s pretty impressive because the whole thing needs to be cooled to ultra-low temperatures cryogenically, which is no mean feat,” he explained. "And you’ve got to control the communication between the different bits and so forth. The fact they were able to do anything at all is impressive."
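The “each qubit doubles the problem” argument above can be illustrated with a back-of-envelope extrapolation from IBM's own 2.5-day figure. This idealises heavily, assuming classical simulation cost scales exactly as 2^n (real simulators exploit structure and do better):

```python
# Sketch: extrapolating IBM's claimed 2.5-day classical simulation of
# Google's 53-qubit run, under the idealised assumption that classical
# cost doubles with each extra qubit (cost ~ 2**n).

BASE_QUBITS = 53
BASE_DAYS = 2.5  # IBM's figure for simulating Sycamore's computation

def simulation_days(n_qubits: int) -> float:
    """Estimated classical simulation time for an n-qubit version of
    the same computation, extrapolated from IBM's figure."""
    return BASE_DAYS * 2 ** (n_qubits - BASE_QUBITS)

print(simulation_days(54))  # 5.0 days: the broken 54th qubit doubles the cost
print(simulation_days(60))  # 320.0 days: a handful more and it's intractable
```

On this rough model, Cubitt's point follows immediately: once you can only just simulate a device classically, a few extra qubits put it permanently out of reach.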
  4. Quantum is on the way … slowly
    • Now we have the hardware, the next milestones will be making use of it – and that won't be easy. Indeed, the lack of useful algorithms is likely why Google stuck to random sampling for its computation. “There's not many computations that can be done by quantum computers," Bradley explained. “The idea of quantum computation is that by exploiting aspects of quantum nature, in particular interference of qubits coherently, you can get a huge speed up for certain kinds of processes."
    • But there simply aren't many algorithms that take advantage of those properties. "The hardware is getting somewhere, but the software - so to speak - is a ways behind," he said. That's one reason we likely won't have quantum computers on our desks. “We probably won't ever have a general purpose quantum computer because they just don’t have that kind of algorithmic flexibility that we see with a regular computer." Bradley said.
    • In truth, we don’t need it. Standard computers are fast enough for most tasks. Instead, quantum computers are likely to sit alongside supercomputers, performing specific tasks for researchers, be it modelling, crunching through massive amounts of data, or running hitherto-impossible experiments, in particular in quantum physics.
    • Another milestone is error correction, which adds overhead to any quantum computer. Cubitt says one of the key steps is to solve that challenge. “The next milestone people are thinking about is to demonstrate rudimentary error correction and fault tolerance in a quantum computation," he said, noting that having enough overhead in a quantum computer that error correction isn't a burden is a long way off.
  5. Not quite here yet
    • So let’s be absolutely clear: quantum computers aren’t going to be sitting on our desktops anytime soon, if ever. Instead, they’ll first become the next generation of supercomputers. "That's really a long way off,” said Cubitt. Bradley agrees: "It will take time and we have to temper expectations a little bit.”
    • Indeed, different types of quantum computers may work better for different algorithms and tasks, meaning we end up with a variety of systems. And some of the time, as IBM's work shows, we may be able to simply simulate quantum computers - that’s cheaper and more accessible. “There’s no point in building a quantum computer if you can just do it classically," Cubitt said.
    • That suggests that the debate around whether Google achieved supremacy or not is missing the point. Both companies have helped push the science of computing further.
    • But science is slow, driven by methodical steps forward rather than dramatic breakthroughs. Surpassing the quantum supremacy milestone has sparked both hype and backlash - and neither is warranted. “The Google experiment is a very nice piece of science," Cubitt said. Google’s paper is a milestone worth celebrating, but there's more work to be done.
  6. Quantum hardware
    • There are many different types of quantum computer - ion traps, superconducting circuits, optical lattices, quantum dots and measurement-based one-way systems - but which one will reign supreme remains to be seen. "It's really hard to say what the equivalent of the silicon transistor is going to be for quantum," Cubitt said.
    • Indeed, Cubitt notes that ten years ago the good money would have been on ion traps, but they've given way to superconducting circuits used by Google's Sycamore. “In ten years' time, that may be completely different," he said. “They hit obstacles at different points, so one is stuck for a while and the other pulls ahead." Superconducting circuits are very fast but messy, while ion traps have cleaner qubits and can run quantum information for longer, meaning less error correction is required, but they're slower. Measurement-based quantum computation can manage larger computations, but works more slowly. Intel is working on a silicon quantum computer; if it works, circuits can be built much closer together, which means they’ll be cleaner with data, explains Cubitt. “Different architectures have different trade-offs."
    • Given the various trade-offs and benefits, there may be no winner. Instead, we may have a myriad of different types of quantum computers for different tasks. Given that, it's worth looking at the field as a whole, rather than one company leaping ahead of another. "We've made steady progress over 20 years and will probably continue to make steady progress - it’s not one breakthrough," he said.

Paper Comment
  • Sub-title: “Google has laid claim to the milestone, but IBM disagrees. Nicole Kobie reveals why that's good news for computing science, even if quantum systems remain many years in the future.”
  • PC Pro 307, May 2020
  • See "Evenden (Ian) - Quantum computing comes of age".

In-Page Footnotes ("Kobie (Nicole) - Quantum Supremacy is here - So what?")

Footnotes 1, 4:
  • Ie. October 2019.
Footnote 2:
  • Why? How?
Footnote 3:
Footnote 5:

"PC Pro - Computing in the Real World"

Source: PC Pro - Computing in the Real World

Text Colour Conventions (see disclaimer)
  1. Blue: Text by me; © Theo Todman, 2022
  2. Mauve: Text by correspondent(s) or other author(s); © the author(s)

© Theo Todman, June 2007 - Sept 2022.