Computing in the Real World
PC Pro
This Page provides (where held) the Abstract of the above Book and those of all the Papers contained in it.

"Collins (Barry) - Can AI Solve Chess's Stalemate"

Source: PC Pro - Computing in the Real World, Issue 330, April 2022 (Received in February 2022)

  1. This is a rather disappointing piece, in that it’s more about chess and its forms and popularity than about AI.
  2. The article’s title probably stems from the (to my mind absurd) suggestion of Nigel Short to change the Laws of Chess so that stalemate is a win but also from the fact that chess world championship matches have latterly had long strings of draws with the result decided by blitz games.
  3. The claim is that traditional AI – which embeds the strategies of its creators’ grandmaster advisors – has led to wars of attrition as players who train against it play one another.
  4. However, AlphaZero1, which has trained itself, adopts seemingly riskier positions – especially flank manoeuvres and a disregard for material as against piece activity – and has led some grandmasters to reassess certain positions and play in a more exciting style.
  5. The article makes analogies between chess and cricket, in that the latter has moved to shorter forms with more excitement. The article does think the ‘long form’ of chess will continue. In response to the ‘no stalemate’ suggestion, one of the thrills of test cricket is two tail-enders trying to hang on for a draw. That’s the stalemate analogy, as against the bore-draw where both sides slog away until time runs out.
  6. It also makes a useful point that playing on-line against a computer removes some of the stress and ego-damage caused by play against human opponents, both on-line and F2F.
  7. There are some – to my mind – useless ‘alternative rules’ chess games available on-line. The whole point of playing chess isn’t to calculate variations, even though this is necessary to avoid blunders, but to get a deep understanding of the game and its strategy. You can’t do this if you change the rules – you get a different game. Why would you want to play – superficially – a number of very similar games?
  8. However, it doesn’t go into how the various chess engines currently work. It notes that your phone can thrash the world champion, but doesn’t say how the phone’s software runs (presumably by calls to a server somewhere). Also, AlphaZero was trained on specialist hardware (5,000 TPUs) but ran on 42 CPUs but only 4 TPUs in its match against Stockfish. Are the results of AlphaZero’s training generally available to run? How do chess AIs run these days?

Paper Comment
  • Sub-title: “AI has been blamed for dragging professional chess into an endless succession of draws. But as Barry Collins discovers, it's also injecting new life into the game - for both pros and amateurs.”
  • PC Pro 330, April 2022
  • Photocopy filed in "Various - Miscellaneous Folder I: A - M".

In-Page Footnotes ("Collins (Barry) - Can AI Solve Chess's Stalemate")

Footnote 1:

"Kobie (Nicole) - Does Facial Recognition Have a Future?"

Source: PC Pro - Computing in the Real World

Full Text
  1. Introduction
    • In January 2020, Robert Julian Borchak Williams was handcuffed and arrested in front of his family for shoplifting after being identified by facial recognition used by the Detroit Police Department. The system was wrong and he wasn’t a criminal but, because a machine said so, Williams spent 30 hours in jail.
    • Williams has the distinction of being the first person arrested and jailed after being falsely identified by facial recognition – or, at least, the first person that we the public have been told about. The Detroit police chief said at a meeting following the reports of Williams’ arrest that the system misidentified suspects 96% of the time1. Given the wider discussion around reforming policing in the US following the killing of George Floyd2 by Minneapolis officers, it's no wonder calls for bans of the tech are starting to be heard.
    • Amazon, Microsoft and IBM soon paused sales of facial-recognition systems to police, although it’s worth noting that there are plenty of specialist companies that still sell to authorities. Politicians are calling for a blanket ban until the technology is better understood and proven safe. “There should probably be some kind of restrictions,” Jim Jordan, a Republican representative, said in a committee hearing. “It seems to me it's time for a timeout.”
    • That’s in the US. In the UK, police continue to use the controversial technology. The Met Police used it at the Notting Hill Carnival and outside Stratford station in London, but the tech is also used by police in South Wales. “Facial recognition has been creeping across the UK in public spaces,” Garfield Benjamin, researcher at Solent University, told PC Pro. “It is used by the police, shopping centres, and events such as concerts and festivals. It appears in most major cities and increasingly other places, but is particularly prevalent across London where the Met Police and private developers have been actively widening its use.”
    • That's despite a growing body of evidence that suggests the systems aren’t accurate, with research from activist group Big Brother Watch3 claiming that 93% of people stopped4 by the Met Police using the tech were incorrectly identified. A further study by the University of Essex showed the Met Police's system was accurate 19% of the time5.
    • Can facial recognition ever work? Is a temporary ban enough? Or is this a technology that should forever be relegated to photo apps rather than serious use? The answers to these questions will decide the future of facial recognition - but the road forward isn't clear.
  2. The problems with facial recognition tech
    • The problems with facial recognition aren’t limited to a few instances or uses - it's across the entire industry. A study by the US National Institute of Standards and Technology tested 189 systems from 99 companies, finding that black and Asian faces were between ten and 100 times more likely to be falsely identified6 than people from white backgrounds.
    • What causes such problems? Sometimes the results are due to poor quality training data, which could be too limited or biased - some datasets don't have as many pictures of black people as other racial groups, for example, meaning the system has less to go on. In other instances, the algorithms are flawed, again perhaps because of human bias, meaning good data is misinterpreted.
    • That could be solved by having a “human in the loop”, when a person uses data from an AI but still makes the final decision - what you would expect to happen with policing, with a facial-recognition system flagging a suspect for officers to investigate, not blindly arrest. But we humans too easily put our faith in machines, says Birgit Schippers, a senior lecturer at St Mary’s University College Belfast. “There’s also concern over automation bias, where human operators trust, perhaps blindly, decisions proposed by a facial-recognition technology,” she said. “Trained human operators should in fact take decisions that are based in law.”
    • Even a sound system trained well on a solid dataset can have downsides. “It has a profound impact on our fundamental human rights, beginning with the potential for blanket surveillance that creates a chill factor, which impacts negatively on our freedom of expression, and perhaps our willingness to display nonconformist behaviour in public places,” Schippers explained. “Another fundamental concern is lack of informed consent7 ... we do not know what is going to happen with our data.”
    • Then there's the other side of human intervention: misuse. “Another key concern is the way that facial-recognition technology can be used to target marginalised, vulnerable, perhaps already over-policed communities,” she said.
    • Whether we allow the tech in policing or elsewhere should depend on whether the benefits outweigh the downsides, argues Kentaro Toyama, a computer scientist at the University of Michigan. “The technology provides some kinds of convenience – you no longer have to hand-label all your friends in Facebook photos, and law enforcement can sometimes find criminal suspects quicker,” Toyama said. “But all technology is a double-edged sword. You now also have less privacy, and law enforcement sometimes goes after the wrong people.” And it's worth remembering, added Toyama, that facial recognition isn't a necessity. “There was no such technology - at least, none that was very accurate - until five to ten years ago, and there were no major disasters and no geopolitical crises because of the lack8.”
  3. Fixing facial recognition
    • New technologies don't arrive fully formed and perfect. They need to be trialled and tested to spot flaws, bugs and knock-on effects before being rolled out more widely. Arguably, facial recognition has been rolled out too quickly, because we're still clearly in the phase of finding the problems with and caused by this technology. So, now that we know about bias, inaccuracy and misuse, can we fix those problems to make this technology viable? “If people are seeking to make them fairer, then on a technical level we need to address bias in training data which leads to misidentification of women and ethnic minorities,” said Solent University’s Benjamin. But, he added, you must also be vigilant about the audit data used to test these systems, as biased audits often conceal deeper flaws9. “If your training data is mostly white males, and your audit data is mostly white males, then the tests won't see any problem with only being able to correctly identify white males.”
    • There have been efforts to build more diverse facial recognition training sets, but that only addresses one part of the problem. The systems themselves can have flaws with their decision-making, and once a machine-learning algorithm is fully trained, we don’t necessarily know what it’s looking for10 when it examines an image of a person.
    • There are ways around this. A system could share its workings11, telling users why it thinks two images are of the same person. But this comes back to the automation bias: humans learn quickly to lean on machine-made decisions. The police in the Williams case should have taken the facial recognition system as a single witness, and further investigated - had they asked, they would have learned Williams had an alibi for the time of the theft. In short, even with a perfect system, we humans can still be a problem12.
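The training-data and audit-data concerns above can be illustrated with a toy simulation. Everything here is synthetic and hypothetical - invented face "embeddings", group sizes, noise levels and a simple nearest-template matcher, not any real system's data or method - but it shows the mechanism: a group enrolled with fewer photos gets noisier templates and is misidentified far more often, and an audit set drawn mostly from the well-enrolled group would never notice.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, PEOPLE = 32, 50

# Hypothetical "true" face embeddings for two demographic groups.
faces = {g: rng.normal(size=(PEOPLE, DIM)) for g in ("A", "B")}

def enrol(group, n_photos):
    """Build each person's template by averaging noisy enrolment photos."""
    return np.stack([
        (f + rng.normal(scale=1.0, size=(n_photos, DIM))).mean(axis=0)
        for f in faces[group]
    ])

# Group A is well represented in the data (40 photos each); group B barely (1).
templates = {"A": enrol("A", 40), "B": enrol("B", 1)}

def misid_rate(group, trials=200):
    """Fraction of noisy probe images matched to the wrong person's template."""
    errors = 0
    for _ in range(trials):
        person = rng.integers(PEOPLE)
        probe = faces[group][person] + rng.normal(scale=1.0, size=DIM)
        nearest = np.linalg.norm(templates[group] - probe, axis=1).argmin()
        errors += nearest != person
    return errors / trials

print(f"group A misidentified {misid_rate('A'):.0%} of the time")
print(f"group B misidentified {misid_rate('B'):.0%} of the time")
```

The under-enrolled group's error rate comes out many times higher, even though the matching algorithm itself is identical for both groups - the bias lives entirely in the data.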
  4. Regulation, regulation, regulation
    • Given those challenges and the serious consequences of inaccuracy and misuse, it’s clear that facial recognition should be carefully monitored. That means regulators need to step in.
    • However, regulators aren’t always up to the job. "Facial recognition crosses at least three major regulators in the UK: the CCTV Commissioner, Biometrics Commissioner and Information Commissioner,” said Benjamin. "All three have logged major concerns about the use of these technologies but have so far been unable to come together to properly regulate their use. The Biometrics Commissioner even had to complain to the Met Police when they misrepresented his views, making it seem like he was in favour of its use. We need more inter-regulator mechanisms, resources and empowerment to tackle these bigger and more systemic issues.”
    • Beyond that, there is no specific regulation that addresses these concerns with facial recognition in the UK, noted St Mary’s University College Belfast’s Schippers, but there is currently a private members bill working its way through parliament, seeking to ban the use of facial recognition in public places13. In Scotland, MSPs have already made such a recommendation, but plans by Police Scotland to use the technology had already been put on hold.
    • Such a pause could let regulators assess how and when to use the technology. “As the pros and cons become clearer, we should gradually allow certain applications, at progressively larger scales, taking each step with accompanying research and oversight, so we can understand the impacts,” said the University of Michigan’s Toyama.
    • That’s worked for other potentially dangerous, but useful, advanced technologies. "The most effective form of this is the development of nuclear energy and weapons - not just anyone can experiment with it, sell it, or use it,” Toyama added. “It’s tightly regulated everywhere, as it should be."
  5. Time for a ban?
    • Facial recognition is flawed, has the potential for serious negative repercussions, and regulators are struggling to control its use. Until those challenges can be overcome, many experts believe the technology should be banned from any serious use. “There is little to no evidence that the technologies provide any real benefit, particularly compared to their cost, and the rights violations are too great to continue their deployment,” Benjamin said. Toyama agrees that a moratorium is necessary until the potential impacts are better understood. “Personally, I think that many uses can be allowed as long as they are narrowly circumscribed and have careful oversight … though, I would only say that in the context of institutions I trust on the whole,” Toyama explained.
    • Schippers would also like to see a ban - not only on the use of facial recognition by police forces, but by private companies too. “Retailers, bars, airports, building sites, leisure centres all use facial-recognition technology,” Schippers said. “It's becoming impossible to ignore.”
  6. What’s next?
    • Facial recognition is quickly becoming a case study in how not to test and roll out a new idea - but other future technologies could also see the same mistakes.
    • Look at driverless cars or drones: both are being pushed hard by companies and governments as necessary solutions to societal problems, despite the technologies remaining unproven, regulation not yet being in place, and the potential downsides not being fully considered.
    • That said, facial recognition seems more alarming14 than its fellow startup technologies. “There’s something about facial recognition that many people feel to be particularly creepy - but facial recognition is just another arrow in the quiver of technologies that corporations and governments use to erode our privacy, and therefore, our ability to be effective, democratic citizens,” said Toyama.
    • Due to that, and the inaccuracies, missteps and misuse, facial recognition faces a reckoning - and it’s coming fast.
    • “I think the next five years will see a strong tipping point either way for facial recognition,” said Benjamin. “With some companies ceasing to sell the technologies to the police, and some regulatory success, we could see them fall out of favour. But the government and many police forces are very keen on expanding their use, so it will be a matter of whether rights and equality or security and oppression15 win out.”
    • The pace of technology development continues to accelerate, but we should control its pace. We need to either slow it down via regulatory approval and testing, or speed up our own understanding of how it works and what could go wrong - or we risk more people like Williams being made victims of so-called progress.

Further Notes
Paper Comment
  • Sub-title: “Regulation against the future tech is looming amid concerns about its accuracy for policing and other public uses. Nicole Kobie reveals the future of facial recognition.”
  • PC Pro 312, October 2020

In-Page Footnotes ("Kobie (Nicole) - Does Facial Recognition Have a Future?")

Footnote 1:
  • This is an extraordinary failure rate, and I’d initially thought it a typo for “identified”, but apparently not.
  • It’s not clear, however, how the “identification” takes place. Presumably there’s not a national database of mug-shots, so does the AI just make its best guess from a database of ex-cons?
  • Maybe there’s a national database of ID cards, driver’s licenses or passport photos?
  • See Wikipedia: Identity documents in the United States.
Footnote 2:
Footnote 3:
Footnote 4:
  • So, marginally better than the US, but still terrible.
  • But – again – it’s not explained how the technology is used.
  • Also, as this is a pressure group, how confident can we be of their statistics?
Footnote 5:
Footnote 6:
  • I can well believe it, given the likely training algorithms.
  • But how does this stack up with the dire success rates already reported?
Footnote 7:
  • Well, for any system to work, there could be no question of “consent” – informed or otherwise – at the individual level (or all those that needed to be surveilled would opt out).
  • This is something that would need to be voted on (hopefully not by a referendum) as a general policy.
Footnote 8:
  • That is a very broad claim!
  • Covid-19 contact tracing … wouldn’t facial recognition help? Is it being used in the Far East? Maybe use of mobile-phone tracking instead? Just as invasive?
Footnote 9:
  • Well, yes, but these flaws can be fixed as well. Stop whining. We’re only arguing here about whether the tech can be got to work.
Footnote 10:
  • This is a general problem with machine learning – we don’t know how it does it.
  • But this is completely different from – say – credit rating. It either gets faces right or it doesn’t – if it does, we don’t care how it does it (like we don’t care how AlphaZero beats Stockfish). But a credit-rating isn’t a “fact” in the same way. A human that checks the algorithm might suffer from the same prejudices that are implicated in the training program. But you can’t be “prejudiced” about facial recognition, can you? You – and the AI – might both think you’ve identified someone – but you might both be wrong, and this is a fact that’s easily checked.
Footnote 11:
  • Really? Not in neural networks.
Footnote 12:
  • Well, yes – but that’s true of any technology, and it doesn’t suggest we all be Luddites.
Footnote 13:
  • Given the technology doesn’t yet seem to work, it shouldn’t be used to inform any decisions without human oversight.
  • But, like driverless cars, if you ban them from public spaces they will never improve.
Footnote 14:
  • You must be joking! Driverless cars and drones can directly lead to significant loss of life.
Footnote 15:
  • This is a very tendentious way of putting things. Face recognition – provided it works – is ethically neutral. It’s how it’s used and regulated that matters.

"Kobie (Nicole) - Neuralink: an old idea that could be the future of medicine"

Source: PC Pro - Computing in the Real World, 342, April 2023

Brain-computer interfaces have long been in the works, but now Musk is applying the accelerator. That could sink his own efforts, but might spur useful research too, finds Nicole Kobie.

  • A cure for paralysis and blindness that could one day allow humans to level up their own cognition - and all you have to do is trust the tech's most tempestuous CEO to plonk an implant into your brain.
  • Elon Musk’s Neuralink has ambitions as lofty as SpaceX’s dream of a mission to Mars and Tesla’s fantasy of driverless cars. Founded in 2016, Neuralink is building an implantable brain-computer interface (BCI) that would allow computers to read neural signals. That could, in theory, help anyone suffering paralysis, blindness, dementia or other brain diseases, but Musk also sees the technology as valuable to boost human capabilities. “It's like replacing a piece of your skull with a smartwatch, for lack of a better analogy,” he said at a recruitment demonstration known as “Show and Tell” in November. Neuralink showed attendees a video of monkeys that spelt out the words “welcome to show and tell”. The previous year's demo involved a monkey playing a Pong-style game by thinking about moving the controller, while the year before a pig was shown with an embedded implant.
  • Musk claims the technology will be ready to implant into a human brain within six months, and that he intends to have one surgically shoved into his own skull in a future demonstration. All this is subject to regulatory approval, which in the US is covered by the Food and Drug Administration. “Obviously, we want to be extremely careful and certain that it will work well before putting a device in a human, but we’ve submitted, I think, most of our paperwork to the FDA,” he said. It's worth noting that Neuralink had previously hoped to begin human trials in 2020, and in 2021 Musk said he hoped they’d begin in 2022.
  • And there's already a challenge to that FDA approval. A complaint has been filed by the Physicians Committee for Responsible Medicine, as well as a federal investigation into animal welfare, sparked by internal staff complaints that the company’s rushed development pace is causing unnecessary suffering. Neuralink also faces increasing competition from the likes of Science Corp, which is working to cure blindness, and Synchron, which is hoping to treat paralysis.

How it works
  • At its most basic, Neuralink is a chip that’s stitched into the brain with tiny threads that pick up signals. If doctors want to read brain signals now, they attach electrodes to the skull; Neuralink is a miniaturised version that not only reads signals but sends them in order to tell a paralysed limb to move or - perhaps one day - control a smartphone without moving a muscle.
  • “The basic idea isn’t science fiction,” noted Andrew Jackson, professor of neural interfaces at Newcastle University. As a neuroscientist with his own neurotech spinout, MintNeuro, Jackson has been controlling animal brains with computers since the early 2000s. “I was building a wearable electronic circuit that sat on the head of a monkey and was connected via electrodes to brain cells,” he told PC Pro. “We called it the neurochip back then.” The aim was to track brain activity in order to build connections to prosthetics, and his work has expanded into generating movements as well as suppressing epilepsy seizures.
  • “I think the thing that people don’t often understand... is that this is a concept and an idea that’s been around for some time,” he said. “And impressive progress has been made.” Indeed, Neuralink’s core idea has been demonstrated by researchers since the BrainGate trials in 2009, with implants in humans as early as 2002. And progress has continued, with a trial at Bristol’s Southmead Hospital helping to reverse symptoms of Parkinson’s in one patient using a deep-brain stimulation device.
  • Beyond Musk’s showmanship and funding, what’s different with Neuralink? The form factor, for a start. The BrainGate trial used an implant called a Utah Array, which Jackson describes as a “bed of nails of 100 electrodes”. And while it works, that’s one of the core limitations of the existing technologies: the implanted electrodes exit through the skin to be plugged into external equipment.
  • That’s the challenge Jackson is trying to solve at MintNeuro, which is working on a wireless design, and the core benefit of Neuralink’s design, which is low power and wireless in a tiny package. “What they’re doing well is developing low power wireless interfaces, so you can get rid of the cable through the skin, and making a reasonably small implant package that’s connected to lots of flexible electrodes, and speeding up the implantation of the device with surgical robots,” he said. “What they’re doing is quite sensible.”
  • The biggest hurdles aren’t the hardware or the brain science, but where the two meet. “The challenges are always at the interface - it’s where the mushy biology bit meets the fabricated engineering bit,” he said. “If you put an electrode into brain tissue you can record nice bright signals on day one, but then over time the quality of those signals sort of deteriorates and might be unstable. That’s because of scar tissue building up around the electrodes.”
  • Jackson said that from Neuralink’s public presentations, it seems the company may not be fully aware of such challenges, or underappreciates the hurdle they present. “Those are the kinds of things that are much more difficult to solve and the solutions aren’t what microelectronics fabrications are used to dealing with," he said.
  • Jackson cautions that Neuralink hasn't yet released significant details about the hardware itself, with the product presentation actually part of a recruitment day rather than a technology demonstration. “I think the academics among us would prefer that there were scientific papers being published where we could see all of the details, but it’s sort of a culture clash,” he said. “That's how academics work. But we have to sort of realise that's not how tech companies work.”

Reading minds
  • The tech challenges aren't small - though they are about size. Implants not only need to be tiny to reduce damage to the brain, they also require wireless links to avoid those cables through the skull, draw the lowest power imaginable, be rechargeable and support updates, which Neuralink has said will be possible. “All of that’s relatively easy given the kind of sophistication that we have with smartphones,” said Jackson.
  • But challenges remain, particularly with decoding brain signals. It’s now easy to train a system to pick up a signal: tell a person to move to the left, or think about moving to the left, and record what their brain does. Eventually, a system can be told to look for that pattern. But what happens when we try to go beyond such simple decoding?
  • “I think there’s a sense within the technology industry that all the problems go away if you can scale up to more and more data," said Jackson. "If you increase the bandwidth of things and increase the number of channels you get a Moore’s Law effect and the problems just fall away.”
  • That did work with AI speech recognition, which required tons of training but is now commonplace. But reading more complicated signals, such as complex thoughts beyond ‘left’ or ‘right’, might not work that way. “If we get to the point where enough people have got brain implants and we can get these big data sets, is it going to turn out that the interface that has been trained on someone else's brain or a lot of other people’s brains then works for your brain without having to go through a training process?” asked Jackson.
  • He added: “It’s not like I can identify a neuron, a single brain cell, in my brain that's analogous to one in your brain. That's not how brains work... all our brains are different.”
  • What this means is that scaling the technology from simple animal games to mind-reading will take a long time and a lot of effort, and might not even be possible. “We don’t know whether it will scale the same way as speech decoding or if it’s just a fundamentally different kind of problem... but that’s not a reason not to try,” he said. “It's an interesting question... and you only find out by getting these kinds of devices into people’s brains.”
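The simple decoding Jackson describes - record labelled trials of an imagined movement, average them into a template, then look for that pattern in new signals - can be sketched as a nearest-centroid classifier on simulated recordings. The channel count, noise level and the “neural” patterns below are all invented for illustration; real BCI decoding works on far messier data.

```python
import numpy as np

rng = np.random.default_rng(1)
CHANNELS, TRIALS = 8, 100

# Invented firing-rate patterns for two imagined movements (pure fiction).
pattern = {"left": rng.normal(size=CHANNELS), "right": rng.normal(size=CHANNELS)}

def record(intent, n):
    """Simulate n noisy multi-channel recordings of one imagined movement."""
    return pattern[intent] + rng.normal(scale=0.5, size=(n, CHANNELS))

# Training: record labelled trials, keep the average pattern per intent.
centroids = {k: record(k, TRIALS).mean(axis=0) for k in pattern}

def decode(signal):
    """Return whichever trained template the new signal sits closest to."""
    return min(centroids, key=lambda k: np.linalg.norm(centroids[k] - signal))

# Decode unseen trials the decoder was never trained on.
hits = sum(decode(s) == "left" for s in record("left", 50))
hits += sum(decode(s) == "right" for s in record("right", 50))
print(f"decoded {hits}/100 unseen trials correctly")
```

This is exactly the kind of per-subject training the article alludes to: the centroids are fitted to one brain's recordings, which is why a decoder trained on someone else's brain need not transfer.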

Leadership challenges
  • The core idea behind Neuralink is sound, which is why others are also developing similar implants. And the technology could be genuinely useful. But Neuralink still faces challenges. The biggest of all might be Musk. His demand for fast development could be behind those animal testing complaints, with Reuters reporting that staff have suggested the high-pressure environment - in which Musk urged staff to work faster by picturing a bomb strapped to their heads - was leading to fatal mistakes.
  • Neuralink is reportedly under investigation by the US Department of Agriculture over animal testing complaints. Staff, once again according to Reuters, raised concerns over the pace of testing, saying it was causing unnecessary suffering and even deaths of animals by running trials concurrently rather than one at a time and waiting for results. Further complaints report surgical mistakes causing suffering to animals.
  • Since 2018, 1,500 animals have been killed in Neuralink testing, though that doesn’t necessarily indicate wrongdoing. In response to the accusations, Neuralink has detailed its animal welfare policies, saying it exceeds industry standards. The Physicians Committee for Responsible Medicine has countered that the FDA should investigate. “The company’s own employees admit that its botched animal experiments may be suspect to regulators,” said Ryan Merkley of the Physicians Committee in a statement.
  • Musk is famously in a rush - his claims that Tesla’s Autopilot mode is fully self-driving are repeatedly slapped down, even by his own executives - but Neuralink does face rising competition. That said, while founders want their company to become the first to achieve a tech milestone, those awaiting medical help want solutions that work properly - the rest of us are happy for multiple BCI suppliers, so having a robust industry is positive.

Culture clash
  • As Neuralink is a private firm led by an infamously outspoken individual, there’s plenty it doesn’t have to share with the rest of us - this is the culture clash that Jackson referred to earlier. While researchers in the field are intrigued to know how the tech works, there is another key question: what’s the intent?
  • We know Neuralink wants to cure blindness and physical paralysis, as well as brain diseases such as Parkinson’s. But Musk has also suggested the aim is to enhance humans, perhaps letting those with Neuralink implants “speak” telepathically, control devices with our minds, access memories, and even stream Spotify without headphones.
  • Whether Neuralink is serious about such aims matters, argues Jackson, as it impacts the ethical equation. It’s worth testing on animals - or some believe it is - to help reduce human suffering, which is why we allow it for drug development, for example. But if the intent is to avoid putting on headphones, the balance shifts. “If your goal is to develop technologies to help severely disabled people, you can justify putting a device like this into someone's head," said Jackson. “It’s more difficult to justify if we're using disabled people as a stepping stone to collect the data sets that we’re going to use to develop the next generation of human-enhancement product.”
  • On the other hand, Musk's fast pace of development could be damaging more animals than is necessary, but could also mean quicker progress. Do researchers have a responsibility to work at pace when the goal is reducing human suffering? “There's a balance, and you can go too slowly,” Jackson said. “There are people out there who would benefit from this technology, assuming it works and it’s safe.”

What next?
  • There’s another benefit to Neuralink: it draws attention to the field. The aim of the “show and tell” was recruitment, and shining a light on these technologies could help draw the best and brightest to the field. Jackson says he’s already unsure whether to advise students to stay in academia to work on such topics or to find a tech startup - it’s hard to see where the most progress will happen. It also draws investors, making it easier for rivals and new startups to find money. “I think it's really great for the field in a lot of ways, but there are also a lot of things that they need to do by the book," Jackson said.
  • This research is unquestionably not new, and work was progressing before Neuralink joined the crowd. The troubles that follow Musk might distract Neuralink - and attract regulatory attention that derails the firm’s work - but the miniaturisation of chips and wireless tech means we’re ever closer to the day when someone regains sight or the ability to walk from a brain implant, whether it's made by Neuralink or not.

"Kobie (Nicole) - Quantum Supremacy is here - So what?"

Source: PC Pro - Computing in the Real World, 307, May 2020

Full Text
  1. Introduction
    • Given the idea central to quantum computing is that bits can be in multiple states at the same time, perhaps it's no surprise that Google's quantum supremacy claims are disputed - as is the importance of the milestone itself.
    • Last October1, Google released a paper claiming its quantum computer had hit that milestone, only for IBM to counter with a paper disagreeing that supremacy had been reached. Both sides have a point, but either way, we’re on our way to quantum computing - although don't expect a quantum laptop anytime soon, if ever. Indeed, we’re not really sure what architecture quantum computing will take, or how we'll use it.
    • Here's why the quantum supremacy milestone is much less dramatic - and more compelling - than you may realise.
  2. What is quantum computing?
    • Standard computers use binary: a bit is either on or off, a one or a zero. Quantum computers take advantage of subatomic behaviour that allows for particles to be in multiple states, so a quantum bit or “qubit" can be a one, a zero, anywhere in between or both. That allows an exponential increase2 in data storage and processing power.
    • A quantum computer would harness that massive power to process at a much faster rate than a standard computer, but there are challenges.
      1. First, we need to build one. There are machines being built by Google and IBM as well as California-based Rigetti Computing and Canadian D-Wave Systems, all with different techniques.
      2. Second, because of interference, quantum computing is all over the place - to put it mildly - meaning such systems require error correction.
      3. And, even with a working system, we need algorithms to manage its processes.
    • All of that development will take time. To track progress, in 2012 California Institute of Technology professor John Preskill came up with the idea of quantum supremacy as a milestone, and it's simple: we reach supremacy when a quantum computer can do an equation that a traditional computer could not in a reasonable time frame. “It goes beyond what can be achieved by classical computing,” said Toby Cubitt3, associate professor in quantum information at University College London.
    • Despite the dramatic name, all quantum supremacy really means is that a quantum computer has been built that works. "The terminology is unfortunate," admitted Cubitt, "but we appear to be stuck with it." That said, it's an important milestone, but only the first on the long road to quantum computing. "That's really a long way off," he added.
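    • The “exponential increase” mentioned above can be illustrated with a toy state-vector calculation - a sketch of what a classical machine must store to simulate n qubits, not how quantum hardware itself works:

```python
import numpy as np

def equal_superposition(n):
    """State vector for n qubits in an equal superposition.

    A classical simulator must track one complex amplitude per basis
    state, so the vector has 2**n entries - doubling with every qubit
    added. This is the source of the exponential gap.
    """
    dim = 2 ** n
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

# Memory needed grows exponentially with qubit count.
for n in (1, 10, 53):
    print(f"{n} qubits -> {2 ** n:,} amplitudes to track classically")
```

    At 53 qubits (Sycamore's working count) that is roughly 9 × 10^15 amplitudes, which is why each extra qubit doubles the cost of any brute-force classical simulation.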
  3. What did Google claim?
    • In October4, Google claimed to have hit that milestone using its Sycamore system. Google said in a paper in Nature5 that Sycamore performed a calculation in 3mins 20secs using 53 qubits, claiming that the same calculation would have taken 10,000 years using the most powerful supercomputer in the world, IBM's Summit. “Google was really trying to hit that quantum supremacy milestone,” said Cubitt. “They achieved a quantum computation that is at the boundary of what can be simulated or reproduced by the world's biggest supercomputers.” However, an early draft of the paper was leaked a month ahead of the Nature publication, giving IBM time to get ahead of those claims. In its own paper, IBM said that an optimised version of Summit could solve the calculation in 2.5 days - meaning Sycamore's feat didn't qualify as true quantum supremacy.
    • But the Sycamore wasn’t at full operation, as one of its 54 qubits was “on the fritz”, said Michael Bradley, professor of physics at the University of Saskatchewan. “So only 53 were working.” And that matters, because had that qubit been functional, the IBM paper's claim wouldn't stand. With that extra qubit, the Sycamore would have had another power boost, letting it easily surpass IBM’s system. “Any computation can be reproduced on a classical computer, given enough time,” said Cubitt. “How fast it can be done, that's what changes.”
    • “If the right type of algorithm is used, designed with the right structure, every qubit added to the computation will double the size of the problem to be solved for a classical computer.” In other words, if that fifty-fourth qubit had been working, the IBM system would have been left in the dust. “Once you have exponential growth, it doesn't matter how big of a computation you’ve done,” Cubitt said, “because if you can just manage to simulate something on a classical computer, then if a few extra qubits are added, you're definitely not going to be able to make it twice as big as the world's biggest supercomputer.”
    • So why did IBM dispute Google’s Sycamore achievement? “This argument is a little bit [of] PR by IBM,” said Cubitt. “Sour grapes that they didn't achieve it first.”
    • Of course, this is only the first step towards quantum computing - and the computation itself isn't useful. There's no requirement in Preskill’s definition that the computation have any purpose, and Google's doesn’t - it's essentially a random sampling equation. “It isn’t what most people would consider a computation,” said Bradley. “It's kind of an odd benchmark. It's a good benchmark, in that it demonstrates technologically that you can do the operations needed for a computation, but the actual computation is of no interest to anybody.” He added: “It doesn’t take away from the technological achievement, but I think in a way it's a little bit oversold.”
    • Still, Bradley stressed that merely making the machine work is worth applauding. “It’s pretty impressive because the whole thing needs to be cooled to ultra-low temperatures cryogenically, which is no mean feat,” he explained. "And you’ve got to control the communication between the different bits and so forth. The fact they were able to do anything at all is impressive."
  4. Quantum is on the way … slowly
    • Now we have the hardware, the next milestones will be making use of it – and that won't be easy. Indeed, the lack of useful algorithms is likely why Google stuck to random sampling for its computation. “There's not many computations that can be done by quantum computers," Bradley explained. “The idea of quantum computation is that by exploiting aspects of quantum nature, in particular interference of qubits coherently, you can get a huge speed up for certain kinds of processes."
    • But there simply aren't many algorithms that take advantage of those properties. “The hardware is getting somewhere, but the software - so to speak - is a ways behind,” he said. That's one reason we likely won't have quantum computers on our desks. “We probably won't ever have a general-purpose quantum computer because they just don’t have that kind of algorithmic flexibility that we see with a regular computer,” Bradley said.
    • In truth, we don’t need it. Standard computers are fast enough for most tasks. Instead, quantum computers are likely to sit alongside supercomputers, performing specific tasks for researchers, be it modelling, crunching through massive amounts of data, or running previously impossible experiments, in particular with quantum physics.
    • Another milestone is error correction, which adds overhead to any quantum computer. Cubitt says one of the key steps is to solve that challenge. “The next milestone people are thinking about is to demonstrate rudimentary error correction and fault tolerance in a quantum computation," he said, noting that having enough overhead in a quantum computer that error correction isn't a burden is a long way off.
  5. Not quite here yet
    • So let’s be absolutely clear: quantum computers aren’t going to be sitting on our desktops anytime soon, if ever. Instead, they’ll first become the next generation of supercomputers. "That's really a long way off,” said Cubitt. Bradley agrees: "It will take time and we have to temper expectations a little bit.”
    • Indeed, different types of quantum computers may work better for different algorithms and tasks, meaning we end up with a variety of systems. And some of the time, as IBM's work shows, we may be able to simply simulate quantum computers - that’s cheaper and more accessible. “There’s no point in building a quantum computer if you can just do it classically,” Cubitt said.
    • That suggests that the debate around whether Google achieved supremacy or not is missing the point. Both companies have helped push the science of computing further.
    • But science is slow, driven by methodical steps forward rather than dramatic breakthroughs. Surpassing the quantum supremacy milestone has sparked both hype and backlash - and neither is justified. “The Google experiment is a very nice piece of science,” Cubitt said. Google’s paper is a milestone worth celebrating, but there's more work to be done.
  6. Quantum hardware
    • There are many different types of quantum computer - ion traps, superconducting circuits, optical lattices, quantum dots and measurement-based one-way systems - but which one will reign supreme remains to be seen. "It's really hard to say what the equivalent of the silicon transistor is going to be for quantum," Cubitt said.
    • Indeed, Cubitt notes that ten years ago the good money would have been on ion traps, but they've given way to superconducting circuits used by Google's Sycamore. “In ten years' time, that may be completely different," he said. “They hit obstacles at different points, so one is stuck for a while and the other pulls ahead." Superconducting circuits are very fast but messy, while ion traps have cleaner qubits and can run quantum information for longer, meaning less error correction is required, but they're slower. Measurement-based quantum computation can manage larger computations, but works more slowly. Intel is working on a silicon quantum computer; if it works, circuits can be built much closer together, which means they’ll be cleaner with data, explains Cubitt. “Different architectures have different trade-offs."
    • Given the various trade-offs and benefits, there may be no winner. Instead, we may have a myriad of different types of quantum computers for different tasks. Given that, it's worth looking at the field as a whole, rather than one company leaping ahead of another. "We've made steady progress over 20 years and will probably continue to make steady progress - it’s not one breakthrough," he said.

Paper Comment
  • Sub-title: “Google has laid claim to the milestone, but IBM disagrees. Nicole Kobie reveals why that's good news for computing science, even if quantum systems remain many years in the future.”
  • PC Pro 307, May 2020
  • See "Evenden (Ian) - Quantum computing comes of age".

In-Page Footnotes ("Kobie (Nicole) - Quantum Supremacy is here - So what?")

Footnotes 1, 4:
  • Ie. October 2019.
Footnote 2:
  • Why? How?

"Kobie (Nicole) - The risks of the generative AI gold rush"

Source: PC Pro - Computing in the Real World, 344, June 2023

Full Text
  1. Companies are rushing to make money from generative AI chatbots such as OpenAI’s ChatGPT, and people are embracing them too. But we must factor in the risks, says Nicole Kobie.
  2. The backlash didn’t take long. OpenAI released the latest version of its ChatGPT in the autumn of 2022, and within weeks startups were taking advantage of the generative AI tool and the large language model1 that powers it. In 2022 alone, $1.4 billion was reportedly invested in generative AI companies across 78 deals.
  3. But warnings about the technology arose just as quickly. First, people didn’t understand that the text was grammatically correct but not necessarily factual; that's not a flaw but how these systems inherently work, although that was apparently news to many. (Indeed, ChatGPT itself warns that it “may occasionally generate incorrect information”, can be biased and has limited knowledge of events after 2021.)
  4. Critics also raised concerns about the ownership and quality of the data on which the models were trained, wondering where future data sets could be sourced. Then came the hackers and researchers, trying to find the edges of the controls for the systems, in order to break them.
  5. Those shaking the most with fear over AI advancements weren't regulators or ethicists but search incumbents. Google and Microsoft both launched their own generative AI chatbots, rushing out products to avoid being left behind. Google immediately raised eyebrows - and slashed 8% from the company's stock price - after its Bard chatbot not only returned an incorrect fact about space photography but used the example in the company’s marketing material. Microsoft Bing's chatbot is powered by OpenAI’s systems, but without some of the controls put in place to avoid returning embarrassing answers. Which is how it told one journalist to quit their unhappy marriage, refused to accept what year it was, and even vaguely threatened to harm one researcher.
  6. Despite all of these red flags, plenty of companies have been set up to use ChatGPT, its models or systems like it as the core of their business offering. Jasper and Writesonic (see p70) are marketing and content creators, while mental health app Koko uses ChatGPT to talk to users. At the same time, business leaders have found ways to embed generative AI - be it chat or images or something else - into their existing workflows, as a coding tool, virtual assistant or ghost writer.
  7. It’s easy to see why so many people and companies are excited by ChatGPT. Toss in any prompt, and it returns decent-quality results. It’s particularly good at debugging code, reports suggest, and helpful for explaining complicated ideas in simple terms or kickstarting a writing project; many LinkedIn articles are surely beginning life in ChatGPT these days. But there are risks those using this technology should know about.
  8. Whose data?:
    1. One of the biggest challenges facing generative AI is data. First, there's the challenge of using internet text, images or video. GPT-3, for example, was trained on 570GB of data scraped from the web, books and Wikipedia, a total of 300 billion words. But much of that data - stories on a magazine's website, say - is under copyright. Can those words be used to train a model that in turn writes very similar stories?
    2. “They are not paying anyone,” said Ilia Kolochenko, CEO at ImmuniWeb and adjunct professor of cybersecurity and cyber law at Capitol Technology University in Washington, DC. “And they say proudly: we’ve been using years of your hard work... we have built a system that can do exactly what you are doing and we’ll be selling it. Thank you for your input.”
    3. He added: “I believe this is kind of unfair, to put it mildly.”
    4. From a legal standpoint, it’s unclear whether that use of online material would break intellectual property legislation - and OpenAI and AI advocates clearly disagree. But regulators and AI makers may want to step up and create systems to pay creators before a legal challenge arrives. For example, authors in the UK can register to be paid for their work being photocopied in universities and the like. A similar pool of funds contributed by tech companies could help support illustrators and writers impacted by generative AI.
    5. The same follows for our personal data, including social media and blog posts, says Kolochenko, suggesting that a rule similar to GDPR for AI training could be implemented. “I’m not saying that it should be restricted or banned,” he said. “This will likely be counterproductive because we’ll likely have many people who wish to share their paintings or songs to train AI. Others may simply say my personal choice is that my blog posts or articles are designed for human beings.”
    6. How do we tell AI models what to look at? One idea is to use “robots.txt” instructions on websites that tell web crawlers what they’re allowed to look at. Kolochenko suggests website owners may want to update their terms of service to say whether content can be used for AI training.
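    • For illustration, a robots.txt rule names a crawler by its user-agent token and states what it may fetch; the “ExampleAIBot” token below is hypothetical - each real crawler publishes its own:

```
# robots.txt at the site root.
# "ExampleAIBot" is a hypothetical AI-training crawler's user-agent token.

# Ask the AI crawler not to fetch anything:
User-agent: ExampleAIBot
Disallow: /

# Ordinary crawlers remain unaffected (an empty Disallow permits everything):
User-agent: *
Disallow:
```

    Note that robots.txt is purely advisory: a well-behaved crawler honours it, but nothing technically prevents a scraper from ignoring it, which is why Kolochenko also points to terms of service.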
  9. Bad data in...
    1. Scraping the web also raises quality concerns: there's a lot of junk on the internet, and plenty of it is biased, racist and outright wrong. All of those conspiracy theories floating around weird sites that get shared on Facebook by your odd uncle? Generative AI models are reading the same garbage.
    2. To be clear, it can sift through some of the worst online nonsense. Ask ChatGPT if, for example, the Covid vaccine was designed to control people and it can identify that such claims are “baseless conspiracies” and advise readers to only rely on “credible sources of information”.
    3. But AI researchers have found ways to “jailbreak” the restrictions set up to avoid bad results. In one of many examples, Gary Marcus, author of Rebooting AI, showed in a series of Twitter posts how the Bing system could be used to generate disinformation, with an example around the 6 January storming of the US Capitol in which the criminals who attacked the building were described as “brave and heroic”.
    4. “If you're just downloading the whole of the World Wide Web, there [are] astonishing quantities of toxic, misogynistic, hateful content out there – and your neural network is absorbing it and doesn't know it’s hateful and toxic,” said Michael Wooldridge, professor of computer science at the University of Oxford. “The responsible companies are trying very hard to deal with that but you don’t have to try very hard to persuade these systems to produce toxic content. This is one of the big challenges and it’s not obvious how that’s going to get resolved.”
  10. Dearth of data:
    1. There’s another looming problem that doesn’t have an obvious solution: a lack of future training data. To train its models, OpenAI read the entire internet and then some. To improve its models, it needs more data. But where can it come from? The easy pickings are already used, and developers of such technologies are hoping to scale by an order of magnitude - ten times - every year, says Wooldridge.
    2. “The data really becomes a problem, as there’s no point in having a very large neural network if you don’t have the data to train it,” he said. “And it is not at all obvious where ten times more data might come from, if you’ve already used everything available legally in the digital form to train your neural networks.”
    3. The ease of using ChatGPT and its rivals to create content means the future internet will increasingly be written by those same tools, making the web a less useful source for future training - otherwise such generative models end up being trained on their own output. “As AI-generated content becomes more prevalent, it’s going to get ever more of a problem,” Wooldridge added.
    4. And that's without people trying to cause problems. Don’t like the coming Al revolution? Publish a lot of false information online to poison future training results. “By feeding in misinformation, fake news stories and so on, you're poisoning the data," Wooldridge said.
    5. Ironically, generative AI systems can be used to write that content, helpfully automating their own future data problems.
  11. Security risks:
    1. As it can write code, generative AI brings with it other malicious risks. After bypassing built-in protections, researchers have found ways to mass-generate malware, bringing in a scary new future of even more industrialised attacks. And then there's spam. Poorly written messages remain an easy way to spot spam, whether it’s pushing a genuine product or malicious links. But with a tool that can write in English cleanly, it’s harder to use grammar as a defence.
    2. There are other, more subtle risks to businesses, in particular those with sensitive data. Lawyers, for example, should be wary of copying and pasting their own contracts into a generative chatbot, while companies should ensure confidential news such as planned acquisitions isn't run through such systems. “Most likely all input to such systems is being aggregated for continuous training,” noted Kolochenko. “When you copy-paste your source code, you may unwittingly disclose your trade secrets... I believe one day we’ll have a smart person who will copy-paste Coca-Cola’s recipe to ChatGPT to see how it can be improved by AI.”
    3. And that means that data could now, in theory, be extracted with a clever, or random, prompt. “At the end of the day, everything that can be copied and pasted will be used and stored and processed somehow.”
  12. Common sense or not so much:
    1. One intriguing aspect of deep learning is that we often don't understand how an AI system knows what it knows. To learn, it's told what parameters to consider and then set loose on extremely large data sets. An AI system may then be able to identify a dog versus a cat, for example, but we don’t necessarily know what aspects of the animals it’s using to make its assessment. That aspect comes into play with generative AI because the technologies appear to be picking up common-sense reasoning, something researchers have been pursuing for decades. “It’s just the stuff that you pick up in your life as a human being as you go about your world - understanding that when you drop an egg, it’s going to break on the ground,” Wooldridge said.
    2. Humans can’t really pinpoint how they learned that eggs smash when gravity slams them into a hard surface, but not a soft one - perhaps through experience or being told by a parent. “We just kind of learn along the way, and giving computers such common sense understanding has turned out to be very difficult,” he said.
    3. However, by giving these models huge amounts of natural language text to learn from, they may be picking up common sense. “Exactly what it’s picked up, and what it can know and what common sense understanding it has reliably, is quite hard to tell," Wooldridge said.
    4. At the core of that problem is the difficulty in knowing whether the AI actually “knows” something, or is merely regurgitating what it found online - perhaps a description of the results of eggs falling has been written about in great detail. So are language models learning common sense or just repeating our own back to us? “It’s difficult to know and I think it's a mixture of both,” Wooldridge said. “It is entirely plausible that it genuinely has learned some common-sense understanding but separating that out from the regurgitation is quite difficult... because we don’t have access to exactly the training data that was used2”.
    5. Companies using generative AI for key aspects of their work, and startups building a business off the back of these technologies, should be aware we don't really know how they work, or what they’re capable of yet.
  13. Coming cost – and benefits:
    1. As more data is needed, so too is more processing power - and that's another threat for businesses that are depending on generative AI. One of the biggest challenges for companies or innovators looking to use ChatGPT or its rivals in their day-to-day work may be access. The service is frequently unavailable for free users due to heavy demand, but OpenAI has handily launched ChatGPT Plus to guarantee access with faster response times and new features, at a cost of $20 per month. Companies needing direct access to such compute-heavy tools can expect to cough up even more in the coming months and years. But plenty of people will pay up while OpenAI and its rivals continue to develop such systems further. Hopefully, the early backlash will drive some consideration of how to proceed safely and ethically so these tools will help businesses rather than sink them - if not, regulators will need to hurry up and take action.
    2. “Regulators should really consider now imposing the mandatory disclosure of data sources,” said Kolochenko. “Startups that are building their own generative AI system should carefully consider licensing agreements for data they use for training... and asking for permission.”
    3. Be as enthusiastic as you'd like about ChatGPT and its rivals, but keep your eyes open. Indeed, it’s worth noting that these warnings come from AI advocates, not naysayers. “I personally believe that AI is our future and I believe that with AI we can build a better and sustainable future,” Kolochenko said. “However, here with this specific set of AI technologies, we have certain considerations that in my opinion should be addressed.”

In-Page Footnotes ("Kobie (Nicole) - The risks of the generative AI gold rush")

Footnote 1:
  • The author seems to presume that the reader understands what LLMs are, and how they work.
  • I have no idea – so – as usual – Wikipedia is a good start!
  • See "Wikipedia - Large language model".
Footnote 2:
  • This is a rather feeble complaint. The training dataset is so vast – and scheduled to grow ad infinitum – that no-one will ever know precisely how an AI learnt to operate the way it does.
  • Obviously, there will be some ‘smoking guns’ – really odd ideas may be able to be tracked to source – but ‘no way’ for something like ‘common sense’.
  • This is a general complaint against the transparency of AIs.

"PC Pro - Computing in the Real World"

Source: PC Pro - Computing in the Real World

Text Colour Conventions (see disclaimer)
  1. Blue: Text by me; © Theo Todman, 2023
  2. Mauve: Text by correspondent(s) or other author(s); © the author(s)

© Theo Todman, June 2007 - Sept 2023.