Notes
- To save space, I only keep hard copies for a rolling 1-year period or thereabouts.
- The magazines are now available online at magazine.pcpro.co.uk, provided you have a print subscription (as I do). You need the ID, which can be obtained from MyMagazine: MySubscriptions.
- There's an excellent Search (and bookmarking) facility.
"Cassidy (Steve) - Is Quantum Computing Ready for Prime Time?"
Source: PC Pro - Computing in the Real World, 349, October 2023
Full Text
- We’re hearing more and more about quantum computing, and you might be starting to wonder what value it could have for your business. The answer is far from clear, because good, practical information about this new frontier is hard to come by.
- I have to admit I rather enjoy a nice information gap. Computing in general is stuffed full of such things, especially when new, specialist technologies start to collide with a less cautious, more gung-ho marketplace. The general understanding of quantum computing (QC for short) doesn’t go much further than the social media-driven perception that it represents the end of conventional computational barriers - for the good guys and the bad equally - and that unimaginable leaps in performance are just around the corner for everything from tablets to data centres. Of course, the more ambitious the perception, generally the less likely it is to represent the truth, and what we might call "quantum superstition" is no exception.
- The potential of quantum: While the nascent technology might not be about to transform every computing task, I’ve had a number of conversations in the past year that suggest it’s time for businesses to start thinking about potential real-world quantum applications. That primarily means processes that involve performing analysis or computation on large data sets.
- For example, George Gesek, CTO at quantum specialist QMWare, cites vehicle testing: when large manufacturers put their designs through the full range of product-testing procedures, they get back over 260 distinct statistics, settings, results and predictions per vehicle. Some of those 260 data items might be single-number results, while others might be n-dimensional plots of fuelling maps versus programmed power output management. Finding correlations and combinations of these values via conventional computing is intensive, repetitive and slow: a quantum-based approach could do the same work in the blink of an eye.
- Another discussion involved an American supermarket chain, and its habit of offering discount coupons. The question here was whether combinations of discount schemes might push the retailer into making a loss, and finding the answer was harder than you might think: coupons are a universe of fancifully ephemeral, limited data all to themselves. They can be local, global, time limited, product limited, modified by qualifying buyer, brand, age group... the number of permutations rises even more rapidly than the number of data points in the car-testing example, making it another ideal candidate for the particular strengths of QC.
- As well as analysing static data sets, quantum computing can be useful for modelling complex systems such as financial investment portfolios and the chaotic markets in which they operate. In chemical research, quantum analysis can simulate interactions between molecules far more efficiently than any current computer system. Amid all the wide-eyed talk of universes, infinities and criminal masterminds, there are proper business benefits to be reaped, especially for forward-thinking companies that embrace the potential of QC ahead of their rivals.
- Quantum Computing Explained1:
- The word “quantum" is easy to misunderstand. Actually, it’s almost universally misunderstood, because until the past few years, most of us had only encountered the term in science fiction. There the term has been thrown about quite wildly, perhaps to denote something that travels through time, or something that can be in two places at once. The latter image isn't wholly off the mark, but it doesn't do much to explain how the actual hardware operates.
- The secret to quantum computing is that, rather than regular bits that store ones and zeros, it works with “qubits” made out of subatomic particles. Using high-energy techniques that wouldn't be out of place at CERN, these registers can be placed into a quantum superposition of states where they effectively represent both 0 and 1 at the same time.
- By carefully manipulating a group of qubits it’s thus possible to carry out operations on hundreds, thousands or billions of values at once, and almost instantaneously obtain results that might take far longer to find with a conventional computer. Eventually the goal is to attain “quantum supremacy2", whereby quantum computing is able to solve problems previously considered not computable within the lifetime of this universe.
- While QC has revolutionary potential, not every type of operation can benefit from the amazing capabilities of qubits. Considerable specialist know-how is needed to identify suitable tasks, and prepare and present them in a way that can benefit from the technology.
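- To make the superposition idea in the explainer above concrete, here is a minimal NumPy sketch (my illustration, not part of the article): a single qubit is simulated as a two-entry state vector, a Hadamard gate puts it into an equal superposition of 0 and 1, and sampling shows each measurement collapsing to a definite 0 or 1 with 50/50 probability.
```python
import numpy as np

# Illustrative sketch only (not from the article): one qubit as a 2-element state vector.
ket0 = np.array([1, 0], dtype=complex)           # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

state = H @ ket0                                  # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                        # Born rule: measurement probabilities
print("amplitudes:", state)                       # both ~0.707
print("P(0), P(1):", probs)                       # [0.5, 0.5]

# Measurement collapses the superposition: each sample is a definite 0 or 1.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print("observed frequencies:", np.bincount(samples) / 1000)
```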
- Quantum resources: Even though quantum could be the next computing revolution, you'll have noticed that nobody’s blowing up your phone to try to sell you a quantum computer. In fact, it's doubtful whether on-premises quantum computing will become a reality within two decades3.
- That's because, while these machines have many similarities to regular electronic computers, the emergent state of the technology is such that they're vastly more expensive to build - I've seen prices of $15 million per unit4 quoted.
- They're also a lot harder to install and operate, thanks to the awkward fact that separating the results of your QC computations from the background noise of the universe requires the core components to be kept a few degrees above absolute zero - that is, around -270 degrees C. So, although the processing hardware might be no larger than your average pedestal server, the cooling apparatus is liable to fill a room, and to consume more energy than the rest of your business put together.
- In practice, then, the only way for most humans to get access to a QC CPU is to lease access via a hybrid cloud architecture, from a supplier such as Amazon, IBM, Google or Microsoft. This allows you to stay cosily unfrozen at home, while your code runs in a data centre located in some ice-dripping cavern above the Arctic Circle. Existing and well-understood mechanisms keep the cloud copy and your on-premises versions in sync; the difference is that the cloud version is reachable by the quantum CPU, while your stay-at-homes are not.
- Preparing your workloads: So, you've signed up with a remote quantum provider - that means you're all set, right? Alas, no. There's another challenge that will likely be the dominant limiting factor in adoption for the foreseeable future, that being the brainwork of designing your application to run on a QPU.
- This is not a trivial barrier. Quantum processors aren't just very cold Pentiums, you see: their capabilities are inextricably tied to - indeed, are a function of - the underlying structure of reality (see “Quantum computing explained”, above). There's no quick, easy, general way to convert an arbitrary workload into a quantum program. It can be a puzzle simply to determine whether your job spec is amenable to quantum processing at all.
- Needless to say, the popular buzz and hogwash don’t help at all. I was intrigued by what my man from QMWare had to say, but when he started proposing an implementation of Turing’s "universal simulator" concept - in which a punched paper tape can emulate any theoretical CPU you can imagine - I began to have my doubts as to whether this was within the reach of commercial possibility.
- This is going to be a problem for as long as hybrid is the only realistic architecture for quantum access. As Gesek points out, the switch from on-premises to hosted and hosted to elastic cloud has been a smooth progression of demonstrable benefits. Moving to quantum cloud is going to be a much harder sell: businesses don't often do things that are nerd-cool, unless it presents a clear route to making money.
- QC and AI: Perhaps the only buzzphrase bigger than QC right now is AI, so it was probably inevitable that they'd get mashed together. But this isn’t mere marketing: QC has an important role to play, if not in general AI then in the specific subfield of machine learning.
- I've said that the advantage of quantum computing kicks in when you're trying to analyse or optimise a large corpus of data points. As it happens, this is what a lot of machine-learning projects do: the process of training a virtual brain circuit to do the drudge work you have in store for it often entails some phases of very considerable compute burden. A quantum process called annealing can help out enormously by quickly processing a data set and finding optimal combinations of elements.
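- Quantum annealing needs specialist hardware, but the flavour of the optimisation problem being described can be shown with a purely classical stand-in. The sketch below is mine, not the article's: ordinary simulated annealing over a made-up "coupon combination" model (in the spirit of the supermarket example earlier), searching the space of on/off coupon choices for the combination that loses the retailer the most margin.
```python
import math
import random

random.seed(1)

# Illustrative sketch only: a made-up model of 12 interacting discount coupons.
n = 12
solo = [random.uniform(-3, 1) for _ in range(n)]                       # profit effect of each coupon alone
pair = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]   # pairwise interaction effects

def profit(x):
    """Net profit impact of switching on the coupons marked 1 in x (negative = loss)."""
    p = sum(solo[i] * x[i] for i in range(n))
    p += sum(pair[i][j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n))
    return p

# Classical simulated annealing: flip one coupon at a time, occasionally accepting
# worse moves so the search can escape local optima, cooling as it goes.
x = [random.randint(0, 1) for _ in range(n)]
worst, worst_profit = x[:], profit(x)
T = 2.0
for _ in range(5000):
    y = x[:]
    y[random.randrange(n)] ^= 1                   # toggle one coupon
    delta = profit(y) - profit(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = y
        if profit(x) < worst_profit:
            worst, worst_profit = x[:], profit(x)
    T *= 0.999

print("most loss-making combination:", worst, "profit impact:", round(worst_profit, 2))
```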
- This might be an interesting source of pressure on the currently high cost of quantum resources, because ML is one of the most obvious general applications of the technology. It also promises to be a rich source of confusion, however: when one specific corner of AI rests on one specific application of QC, it’s easy to imagine that this is all that quantum computing is good for - or, conversely, that wider applications are just around the corner. In reality, the full potential is both much broader than this, and probably some distance away.
- The cost of quantum: That brings us to the next potential obstacle to widespread QC adoption: quantum access is expensive. The precise cost depends entirely on the tasks you’re running, but you can be sure that we’re not talking here about the everyday hosted services that run your website and a WordPress plugin or two.
- It’s not just the cost of the compute cycles, either. You may be used to the idea of a hybrid IT estate entailing extra costs to mix up local resources with the hosted cloud stuff, but to take advantage of QC you need to go beyond a conventional hybrid outlook. You have one complete copy of your definitive business data lake held locally, and another up there where the supercooled chips can get at it, so you can run any and all of your apps, database services, third-party connectors and the like within the purview of the QPU. This is about the most expensive hybrid model possible.
- Maybe things will get better as fibre speeds increase: in the future it may be possible to "invite" a cloud QPU to "visit" your data. Right now, though, the use cases in which a fully hybridised data estate actually pays its way are firmly over at the super-sized end of the business spectrum.
- The best way to achieve efficiency improvements may be by targeted simplification. Quantum computing is the ideal model for, say, analysing a complete record of the behaviour of all airmasses bordering the Atlantic for the past century - but that’s probably overkill if you’re just wanting to know if it’s going to be nice out today. As organisations refine the scope of the data they deliver to the hybrid space, it should be possible to reduce the required scale of quantum operations.
- It will be clear by now that, while quantum capacity is no longer a mere sci-fi concept, there’s a lot to work out before the technology can really be called mainstream. Indeed, as in the early days of AI, the majority of supposedly "quantum" products and services popping up in the near future are likely to be nothing of the sort. But if you have an intractable modelling problem or a lot of data to pre-test for regulators, QC has the potential to provide immense gains - and you can start working on the code right now.
- Bad quantum: The one thing everybody “knows" about quantum computing is that, since it can analyse any number of values simultaneously, it ought to be able to crack the strongest passwords and encryption keys in the blink of an eye. There are also questions about whether quantum computing could blow apart virtual currencies such as Bitcoin, whose scarcity - and hence value - relies wholly on its cryptographic security. Consequently, much of what you’ll hear about QC in business is defensive. For example, Vodafone recently described its new business VPN service as “quantum-proof".
- So far we’ve seen no evidence of “quantum cracking" in the wild, and that's unsurprising: no black-hat hacker is going to have the funds and expertise to build and operate a quantum computer of their own. Even if they were brazen enough to run such a workload on a leased platform, they’d need a brigade of PhDs to write and operate the code.
- We might sooner see real-world attacks on Bitcoin, because cryptocurrencies are at present a threat to - and therefore a target for - nation-state hacking adventures. As to what angle the attack might come from, that’s harder to predict. Governments may build their own labs, but it could be smarter to take advantage of the fact that cloud compute providers rarely know precisely who's making use of their resources, and to what end. That's always been true with traditional computing, and there's no reason to think it won't apply in the quantum age.
Paper Comment
- Sub-title: “Quantum Computing is no longer just science fiction. Steve Cassidy asks whether it’s time for businesses to make the leap. ”
- PC Pro 349, October 2023, pp. 104-6.
In-Page Footnotes ("Cassidy (Steve) - Is Quantum Computing Ready for Prime Time?")
Footnote 1:
Footnote 2:
Footnote 3:
- Where does this suggestion come from? 20 years for desktop QC is rather soon. Even if the kit should be there, what about setting up the problems, or is this something the AI will do?
Footnote 4:
"Collins (Barry) - Can AI Solve Chess's Stalemate"
Source: PC Pro - Computing in the Real World, Issue 330, April 2022 (Received in February 2022)
Notes
- This is a rather disappointing piece, in that it’s more about chess and its forms and popularity than about AI.
- The article’s title probably stems from the (to my mind absurd) suggestion of Nigel Short to change the Laws of Chess so that stalemate is a win but also from the fact that chess world championship matches have latterly had long strings of draws with the result decided by blitz games.
- The claim is that traditional AI – which embeds the strategies of its creators’ grandmaster advisors – has led to wars of attrition as players who train against it play one another.
- However, AlphaZero1, which has trained itself, adopts seemingly riskier positions – especially flank manoeuvres and a disregard for material as against piece activity – and has led some grandmasters to reassess certain positions and play in a more exciting style.
- The article makes analogies between chess and cricket, in that the latter has moved to shorter forms with more excitement. The article does think the ‘long form’ of chess will continue. In response to the ‘no stalemate’ suggestion, one of the thrills of test cricket is two tail-enders trying to hang on for a draw. That’s the stalemate analogy, as against the bore-draw where both sides slog away until time runs out.
- It also makes a useful point that playing on-line against a computer removes some of the stress and ego-damage caused by play against human opponents, both on-line and F2F.
- There are some – to my mind – useless ‘alternative rules’ chess games available on-line. The whole point of playing chess isn’t to calculate variations, even though this is necessary to avoid blunders, but to get a deep understanding of the game and its strategy. You can’t do this if you change the rules – you get a different game. Why would you want to play – superficially – a number of very similar games?
- However, it doesn’t go into how the various chess engines currently work. It notes that your phone can thrash the world champion, but doesn’t say how the phone’s software runs (presumably by calls to a server somewhere). Also, AlphaZero was trained on specialist hardware (5,000 TPUs) but ran on 42 CPUs and only 4 TPUs in its match against Stockfish. Are the results of AlphaZero’s training generally available to run? How do chess AIs run these days?
Paper Comment
- Sub-title: “AI has been blamed for dragging professional chess into an endless succession of draws. But as Barry Collins discovers, it's also injecting new life into the game - for both pros and amateurs. ”
- PC Pro 330, April 2022
- Photocopy filed in "Various - Miscellaneous Folder I: A - M".
In-Page Footnotes ("Collins (Barry) - Can AI Solve Chess's Stalemate")
Footnote 1:
"Graham-Smith (Darien) - First Steps and Building Graphical Apps in Visual Studio"
Source: PC Pro - Computing in the Real World, 349, October 2023 & 351, December 2023
Introduction
- Part 1: How to Create Simple programs (Introduction ...)
- Even if you're not a programmer, you’ve almost certainly heard of Visual Studio. Microsoft's flagship integrated development environment (IDE) includes everything required to create, test and deploy applications of all types and sizes, in a wide range of programming languages. It can even create cross-platform code to run on other operating systems. No wonder it consistently ranks as one of the world's most used development environments. Visual Studio isn't only for professionals, though. Casual hobbyists and beginners looking to take their first steps in coding can also benefit from Visual Studio features such as smart code-completion, error-checking and debugging. It's great for helping less experienced coders produce working projects with a minimum of fuss.
- Best of all, most people can use this premium development tool completely free of charge. The Visual Studio Community edition is free to use for individual projects, academic purposes, open-source development and even small business projects with up to five users: only larger organisations working with dedicated development teams will need to invest in Professional or Enterprise licences.
- The current version of Visual Studio is Visual Studio 2022 for Windows. It will run on all major editions of Windows 10 or 11, including 64-bit ARM editions, and there's also a version for macOS: you can download them both from Microsoft Visual Studio Downloads. (Remember to choose the free Community edition.)
- VS2022 is primarily designed for writing, compiling and distributing programs in either C# or Visual Basic. However, it also comes with a plethora of optional tools and extensions for languages that use third-party compilers or interpreters, such as Java, JavaScript, Python and Ruby. If you're working in these external languages, you don't necessarily need the full VS package: the cut-down Visual Studio Code editor may suit your requirements perfectly well (see "Python coding in VS Code", P47).
- ...
- Part 2: Creating graphic interfaces (Introduction ...)
- Two months ago (see issue 349, p44) we introduced the free edition of Microsoft Visual Studio and walked you through setting up your first program in Visual Basic. What we didn't touch on is the aspect of Microsoft's IDE that makes it “visual" and sets it apart from many other development environments - namely, the ability to create rich graphical interfaces for your programs.
- In fact, that undersells just how powerful the package is. Visual Studio isn't just about adding buttons and menus to your code: it allows you to design the GUI first, and then slot in event-driven code to make each element work - a wonderfully intuitive and efficient workflow. To showcase just how easy it can be, this month we'll walk through the process of creating a sample graphical application in VB that loads in a text file, sorts its lines alphabetically and saves the sorted lines under a new name. And to drive it, we'll build a rich Windows-native interface using standard buttons, text fields and file requesters.
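- The article builds this as a Windows-native VB app, wiring the load, sort and save steps to buttons and file requesters. Purely as an illustration of the underlying logic (my sketch, in Python rather than Visual Basic, with made-up file names), the non-GUI part amounts to:
```python
from pathlib import Path

def sort_text_file(src: str, dst: str) -> None:
    """Read a text file, sort its lines alphabetically and save them under a new name."""
    lines = Path(src).read_text(encoding="utf-8").splitlines()
    Path(dst).write_text("\n".join(sorted(lines)) + "\n", encoding="utf-8")

# Hypothetical usage:
# sort_text_file("names.txt", "names_sorted.txt")
```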
- ...
"Kobie (Nicole) - Does Facial Recognition Have a Future?"
Source: PC Pro - Computing in the Real World, 312, October 2020
Full Text
- Introduction
- In January 2020, Robert Julian Borchak Williams was handcuffed and arrested in front of his family for shoplifting after being identified by facial recognition used by the Detroit Police Department. The system was wrong and he wasn’t a criminal but, because a machine said so, Williams spent 30 hours in jail.
- Williams has the distinction of being the first person arrested and jailed after being falsely identified by facial recognition – or, at least, the first person that we the public have been told about. The Detroit police chief said at a meeting following the reports of Williams’ arrest that the system misidentified suspects 96% of the time1. Given the wider discussion around reforming policing in the US following the killing of George Floyd2 by Minneapolis officers, it's no wonder calls for bans of the tech are starting to be heard.
- Amazon, Microsoft and IBM soon paused sales of facial-recognition systems to police, although it’s worth noting that there are plenty of specialist companies that still sell to authorities. Politicians are calling for a blanket ban until the technology is better understood and proven safe. "There should probably be some kind of restrictions," Jim Jordan, a Republican representative, said in a committee hearing. "It seems to me it's time for a timeout.”
- That’s in the US. In the UK, police continue to use the controversial technology. The Met Police used it at the Notting Hill Carnival and outside Stratford station in London, but the tech is also used by police in South Wales. "Facial recognition has been creeping across the UK in public spaces,” Garfield Benjamin, researcher at Solent University, told PC Pro. "It is used by the police, shopping centres, and events such as concerts and festivals. It appears in most major cities and increasingly other places, but is particularly prevalent across London where the Met Police and private developers have been actively widening its use.”
- That's despite a growing body of evidence that suggests the systems aren’t accurate, with research from activist group Big Brother Watch3 claiming that 93% of people stopped4 by the Met Police using the tech were incorrectly identified. A further study by the University of Essex showed the Met Police's system was accurate 19% of the time5.
- Can facial recognition ever work? Is a temporary ban enough? Or is this a technology that should forever be relegated to photo apps rather than serious use? The answers to these questions will decide the future of facial recognition - but the road forward isn't clear.
- The problems with facial recognition tech
- The problems with facial recognition aren’t limited to a few instances or uses - they span the entire industry. A study by the US National Institute of Standards and Technology tested 189 systems from 99 companies, finding that black and Asian faces were between ten and 100 times more likely to be falsely identified6 than people from white backgrounds.
- What causes such problems? Sometimes the results are due to poor quality training data, which could be too limited or biased - some datasets don't have as many pictures of black people as other racial groups, for example, meaning the system has less to go on. In other instances, the algorithms are flawed, again perhaps because of human bias, meaning good data is misinterpreted.
- That could be solved by having a “human in the loop”, when a person uses data from an AI but still makes the final decision - what you would expect to happen with policing, with a facial-recognition system flagging a suspect for officers to investigate, not blindly arrest. But we humans too easily put our faith in machines, says Birgit Schippers, a senior lecturer at St Mary’s University College Belfast. "There’s also concern over automation bias, where human operators trust, perhaps blindly, decisions proposed by a facial-recognition technology," she said. "Trained human operators should in fact take decisions that are based in law.”
- Even a sound system trained well on a solid dataset can have downsides. “It has a profound impact on our fundamental human rights, beginning with the potential for blanket surveillance that creates a chill factor, which impacts negatively on our freedom of expression, and perhaps our willingness to display nonconformist behaviour in public places,” Schippers explained. "Another fundamental concern is lack of informed consent7 ... we do not know what is going to happen with our data.”
- Then there's the other side of human intervention: misuse. "Another key concern is the way that facial-recognition technology can be used to target marginalised, vulnerable, perhaps already over-policed communities," she said.
- Whether we allow the tech in policing or elsewhere should depend on whether the benefits outweigh the downsides, argues Kentaro Toyama, a computer scientist at the University of Michigan. "The technology provides some kinds of convenience – you no longer have to hand label all your friends in Facebook photos, and law enforcement can sometimes find criminal suspects quicker,” Toyama said. "But all technology is a double-edged sword. You now also have less privacy, and law enforcement sometimes goes after the wrong people." And it's worth remembering, added Toyama, that facial recognition isn't a necessity. “There was no such technology - at least, none that was very accurate - until five to ten years ago, and there were no major disasters and no geopolitical crises because of the lack8."
- Fixing facial recognition
- New technologies don't arrive fully formed and perfect. They need to be trialled and tested to spot flaws and bugs and knock-on effects before being rolled out more widely. Arguably, facial recognition has been rolled out too quickly because we're still clearly in the phase of finding the problems with and caused by this technology. So, now that we know about bias, inaccuracy and misuse, can we fix those problems to make this technology viable? “If people are seeking to make them fairer, then on a technical level we need to address bias in training data which leads to misidentification of women and ethnic minorities," said Solent University’s Benjamin. But, he added, you must be vigilant: "The audit data that is used to test these systems can be just as biased, and biased audits often conceal deeper flaws9. If your training data is mostly white males, and your audit data is mostly white males, then the tests won't see any problem with only being able to correctly identify white males.”
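- Benjamin's point about biased audits is easy to show with arithmetic. In this toy illustration (my numbers, not the article's), a system that is 99% accurate on a majority group and only 70% accurate on a minority group still posts an impressive headline figure whenever the audit set shares the training set's skew:
```python
# Toy illustration (hypothetical numbers): headline accuracy hides a per-group
# disparity when the audit data has the same skew as the training data.
acc = {"group_a": 0.99, "group_b": 0.70}              # assumed per-group accuracy

def headline_accuracy(audit_mix):
    """Overall accuracy for a given share of each group in the audit set."""
    return sum(acc[g] * share for g, share in audit_mix.items())

skewed_audit = {"group_a": 0.95, "group_b": 0.05}     # audit set mostly group A
balanced_audit = {"group_a": 0.50, "group_b": 0.50}

print("skewed audit:  ", round(headline_accuracy(skewed_audit), 3))    # 0.976
print("balanced audit:", round(headline_accuracy(balanced_audit), 3))  # 0.845
```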
- There have been efforts to build more diverse facial recognition training sets, but that only addresses one part of the problem. The systems themselves can have flaws with their decision-making, and once a machine-learning algorithm is fully trained, we don’t necessarily know what it’s looking for10 when it examines an image of a person.
- There are ways around this. A system could share its workings11, telling users why it thinks two images are of the same person. But this comes back to the automation bias: humans learn quickly to lean on machine-made decisions. The police in the Williams case should have taken the facial recognition system as a single witness, and further investigated - had they asked, they would have learned Williams had an alibi for the time of the theft. In short, even with a perfect system, we humans can still be a problem12.
- Regulation, regulation, regulation
- Given those challenges and the serious consequences of inaccuracy and misuse, it’s clear that facial recognition should be carefully monitored. That means regulators need to step in.
- However, regulators aren’t always up to the job. "Facial recognition crosses at least three major regulators in the UK: the CCTV Commissioner, Biometrics Commissioner and Information Commissioner,” said Benjamin. "All three have logged major concerns about the use of these technologies but have so far been unable to come together to properly regulate their use. The Biometrics Commissioner even had to complain to the Met Police when they misrepresented his views, making it seem like he was in favour of its use. We need more inter-regulator mechanisms, resources and empowerment to tackle these bigger and more systemic issues.”
- Beyond that, there is no specific regulation that addresses these concerns with facial recognition in the UK, noted St Mary’s University College Belfast’s Schippers, but there is currently a private member’s bill working its way through parliament, seeking to ban the use of facial recognition in public places13. In Scotland, MSPs have already made such a recommendation, but plans by Police Scotland to use the technology had already been put on hold.
- Such a pause could let regulators assess how and when to use the technology. “As the pros and cons become clearer, we should gradually allow certain applications, at progressively larger scales, taking each step with accompanying research and oversight, so we can understand the impacts,” said the University of Michigan’s Toyama.
- That’s worked for other potentially dangerous, but useful, advanced technologies. "The most effective form of this is the development of nuclear energy and weapons - not just anyone can experiment with it, sell it, or use it,” Toyama added. “It’s tightly regulated everywhere, as it should be."
- Time for a ban?
- Facial recognition is flawed, has the potential for serious negative repercussions, and regulators are struggling to control its use. Until those challenges can be overcome, many experts believe the technology should be banned from any serious use. "There is little to no evidence that the technologies provide any real benefit, particularly compared to their cost, and the rights violations are too great to continue their deployment,” Benjamin said. Toyama agrees that a moratorium is necessary until the potential impacts are better understood. “Personally, I think that many uses can be allowed as long as they are narrowly circumscribed and have careful oversight … though, I would only say that in the context of institutions I trust on the whole,” Toyama explained.
- Schippers would also like to see a ban - but not only on facial recognition technology's use by police forces, but by private companies too. “Retailers, bars, airports, building sites, leisure centres all use facial-recognition technology,” Schippers said. “It's becoming impossible to ignore.”
- What’s next?
- Facial recognition is quickly becoming a case study in how not to test and roll out a new idea - but other future technologies could also see the same mistakes.
- Look at driverless cars or drones: both are being pushed hard by companies and governments as necessary solutions to societal problems, despite the technologies remaining unproven, regulation not yet being in place, and the potential downsides not being fully considered.
- That said, facial recognition seems more alarming14 than its fellow startup technologies. “There’s something about facial recognition that many people feel to be particularly creepy - but facial recognition is just another arrow in the quiver of technologies that corporations and governments use to erode our privacy, and therefore, our ability to be effective, democratic citizens,” said Toyama.
- Due to that, and the inaccuracies, missteps and misuse, facial recognition faces a reckoning - and it’s coming fast.
- “I think the next five years will see a strong tipping point either way for facial recognition,” said Benjamin. “With some companies ceasing to sell the technologies to the police, and some regulatory success, we could see them fall out of favour. But the government and many police forces are very keen on expanding their use, so it will be a matter of whether rights and equality or security and oppression15 win out.”
- The pace of technology development continues to accelerate, but we should control its pace. We need to either slow it down via regulatory approval and testing, or speed up our own understanding of how it works and what could go wrong - or we risk more people like Williams being made victims of so-called progress.
Paper Comment
- Sub-title: “Regulation against the future tech is looming amid concerns about its accuracy for policing and other public uses. Nicole Kobie reveals the future of facial recognition. ”
- PC Pro 312, October 2020
In-Page Footnotes ("Kobie (Nicole) - Does Facial Recognition Have a Future?")
Footnote 1:
- This is an extraordinary failure rate, and I’d initially thought it a typo for “identified”, but apparently not.
- It’s not clear, however, how the “identification” takes place. Presumably there’s not a national database of mug-shots, so does the AI just make its best guess from a database of ex-cons?
- Maybe there’s a national database of ID cards, driver’s licenses or passport photos?
- See Wikipedia: Identity documents in the United States.
Footnote 2:
Footnote 3:
Footnote 4:
- So, marginally better than the US, but still terrible.
- But – again – it’s not explained how the technology is used.
- Also, as this is a pressure group, how confident can we be of their statistics?
Footnote 5:
Footnote 6:
- I can well believe it, given the likely training algorithms.
- But how does this stack up with the dire success rates already reported?
Footnote 7:
- Well, for any system to work, there could be no question of “consent” – informed or otherwise – at the individual level (or all those that needed to be surveyed would opt out).
- This is something that would need to be voted on (hopefully not by a referendum) as a general policy.
Footnote 8:
- That is a very broad claim!
- Covid-19 contact tracing … wouldn’t facial recognition help? Is it being used in the Far East? Maybe use of mobile-phone tracking instead? Just as invasive?
Footnote 9:
- Well, yes, but these flaws can be fixed as well. Stop whining. We’re only arguing here about whether the tech can be got to work.
Footnote 10:
- This is a general problem with machine learning – we don’t know how it does it.
- But this is completely different from – say – credit rating. It either gets faces right or it doesn’t – if it does, we don’t care how it does it (like we don’t care how AlphaZero beats Stockfish). But a credit-rating isn’t a “fact” in the same way. A human that checks the algorithm might suffer from the same prejudices that are implicated in the training program. But you can’t be “prejudiced” about facial recognition, can you? You – and the AI – might both think you’ve identified someone – but you might both be wrong, and this is a fact that’s easily checked.
Footnote 11:
- Really? Not in neural networks.
Footnote 12:
- Well, yes – but that’s true of any technology, and it doesn’t suggest we all be Luddites.
Footnote 13:
- Given the technology doesn’t yet seem to work, it shouldn’t be used to inform any decisions without human oversight.
- But, like driverless cars, if you ban them from public spaces they will never improve.
Footnote 14:
- You must be joking! Driverless cars and drones can directly lead to significant loss of life.
Footnote 15:
- This is a very tendentious way of putting things. Face recognition – provided it works – is ethically neutral. It’s how it’s used and regulated that matters.
"Kobie (Nicole) - Neuralink: an old idea that could be the future of medicine"
Source: PC Pro - Computing in the Real World, 342, April 2023
Brain-computer interfaces have long been in the works, but now Musk is applying the accelerator. That could sink his own efforts, but might spur useful research too, finds Nicole Kobie.
Introduction
- A cure for paralysis and blindness that could one day allow humans to level up their own cognition - and all you have to do is trust the tech's most tempestuous CEO to plonk an implant into your brain.
- Elon Musk’s Neuralink has ambitions as lofty as SpaceX’s dream of a mission to Mars and Tesla’s fantasy of driverless cars. Founded in 2016, Neuralink is building an implantable brain-computer interface (BCI) that would allow computers to read neural signals. That could, in theory, help anyone suffering paralysis, blindness, dementia or other brain diseases, but Musk also sees the technology as valuable to boost human capabilities. "It's like replacing a piece of your skull with a smartwatch, for lack of a better analogy,” he said at a recruitment demonstration known as "Show and Tell” in November. Neuralink showed attendees a video of monkeys that spelt out the words “welcome to show and tell”. The previous year's demo involved a monkey playing a Pong-style game by thinking about moving the controller, while the year before a pig was shown with an embedded implant.
- Musk claims the technology will be ready to implant into a human brain within six months, and that he intends to have one surgically shoved into his own skull in a future demonstration. All this is subject to regulatory approval, which in the US is covered by the Food and Drug Administration. "Obviously, we want to be extremely careful and certain that it will work well before putting a device in a human, but we’ve submitted, I think, most of our paperwork to the FDA,” he said. It's worth noting that Neuralink has previously hoped to begin human trials in 2020, and Musk in 2021 said he hoped they’d begin in 2022.
- And there's already a challenge to that FDA approval. A complaint has been filed by the Physicians Committee for Responsible Medicine, and a federal investigation into animal welfare has been launched, sparked by internal staff complaints that the company’s rushed development pace is causing unnecessary suffering. Neuralink also faces increasing competition from the likes of Science Corp, which is working to cure blindness, and Synchron, which is hoping to treat paralysis.
How it works
- At its most basic, Neuralink is a chip that’s stitched into the brain with tiny threads that pick up signals. If doctors want to read brain signals now, they attach electrodes to the skull; Neuralink is a miniaturised version that not only reads signals but sends them in order to tell a paralysed limb to move or - perhaps one day - control a smartphone without moving a muscle.
- “The basic idea isn’t science fiction,” noted Andrew Jackson, professor of neural interfaces at Newcastle University. As a neuroscientist with his own neurotech spinout, MintNeuro, Jackson has been controlling animal brains with computers since the early 2000s. “I was building a wearable electronic circuit that sat on the head of a monkey and was connected via electrodes to brain cells,” he told PC Pro. “We called it the neurochip back then.” The aim was to track brain activity in order to build connections to prosthetics, and his work has expanded into generating movements as well as suppressing epilepsy seizures.
- “I think the thing that people don’t often understand... is that this is a concept and an idea that’s been around for some time,” he said. “And impressive progress has been made.” Indeed, Neuralink’s core idea has been demonstrated by researchers since the BrainGate trials in 2009, with implants in humans as early as 2002. And progress has continued, with a trial at Bristol’s Southmead Hospital helping to reverse symptoms of Parkinson’s in one patient using a deep-brain stimulation device.
- Beyond Musk’s showmanship and funding, what’s different with Neuralink? The form factor, for a start. The BrainGate trial used an implant called a Utah Array, which Jackson describes as a “bed of nails of 100 electrodes”. And while it works, that’s one of the core limitations to the existing technologies: the implanted electrodes exit through the skin to be plugged into external equipment.
- That’s the challenge Jackson is trying to solve at MintNeuro, which is working on a wireless design, and the core benefit of Neuralink’s design, which is low power and wireless in a tiny package. “What they’re doing well is developing low power wireless interfaces, so you can get rid of the cable through the skin, and making a reasonably small implant package that’s connected to lots of flexible electrodes, and speeding up the implantation of the device with surgical robots,” he said. “What they’re doing is quite sensible.”
- The biggest hurdles aren’t the hardware or the brain science, but where the two meet. “The challenges are always at the interface - it’s where the mushy biology bit meets the fabricated engineering bit," he said. “If you put an electrode into brain tissue you can record nice bright signals on day one, but then over time the quality of those signals sort of deteriorates and might be unstable. That’s because of scar tissue building up around the electrodes.”
- Jackson said that from Neuralink’s public presentations, it seems the company may not be fully aware of such challenges, or underappreciates the hurdle they present. “Those are the kinds of things that are much more difficult to solve and the solutions aren’t what microelectronics fabrications are used to dealing with," he said.
- Jackson cautions that Neuralink hasn't yet released significant details about the hardware itself, with the product presentation actually part of a recruitment day rather than a technology demonstration. “I think the academics among us would prefer that there were scientific papers being published where we could see all of the details, but it’s sort of a culture clash,” he said. “That's how academics work. But we have to sort of realise that's not how tech companies work."
Reading minds
- The tech challenges aren't small - though they are about size. Implants not only need to be tiny to reduce damage to the brain, they also require wireless links to avoid those cables through the skull, draw the lowest power imaginable, be rechargeable and support updates, which Neuralink has said will be possible. “All of that’s relatively easy given the kind of sophistication that we have with smartphones,” said Jackson.
- But challenges remain, particularly with decoding brain signals. It’s now easy to train a system to pick up a signal: tell a person to move to the left, or think about moving to the left, and record what their brain does. Eventually, a system can be told to look for that pattern. But what happens when we try to go beyond such simple decoding?
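- The "simple decoding" described here is, in machine-learning terms, a supervised classifier trained on labelled recordings. The sketch below is my own illustration on entirely synthetic data (no real neural signals or Neuralink code): average the feature vectors recorded while a subject intends "left" or "right" into templates, then decode new trials by nearest template.
```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "neural features": 40 training trials per intention, 16 channels each.
left_mean, right_mean = rng.normal(0, 1, 16), rng.normal(0, 1, 16)
left_trials = left_mean + rng.normal(0, 0.8, (40, 16))
right_trials = right_mean + rng.normal(0, 0.8, (40, 16))

# "Training": average each class into a template.
templates = {"left": left_trials.mean(axis=0), "right": right_trials.mean(axis=0)}

def decode(trial):
    """Nearest-template decoder: pick the intention whose template is closest."""
    return min(templates, key=lambda k: np.linalg.norm(trial - templates[k]))

# Decode 20 fresh simulated trials of each kind and report accuracy.
test = [(left_mean + rng.normal(0, 0.8, 16), "left") for _ in range(20)] + \
       [(right_mean + rng.normal(0, 0.8, 16), "right") for _ in range(20)]
accuracy = np.mean([decode(x) == y for x, y in test])
print("decoding accuracy on synthetic trials:", accuracy)
```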
- “I think there’s a sense within the technology industry that all the problems go away if you can scale up to more and more data," said Jackson. "If you increase the bandwidth of things and increase the number of channels you get a Moore’s Law effect and the problems just fall away.”
- That did work with AI speech recognition, which required tons of training but is now commonplace. But reading more complicated signals, such as complex thoughts beyond ‘left’ or ‘right’, might not work that way. “If we get to the point where enough people have got brain implants and we can get these big data sets, is it going to turn out that the interface that has been trained on someone else's brain or a lot of other people’s brains then works for your brain without having to go through a training process?” asked Jackson.
- He added: “It’s not like I can identify a neuron, a single brain cell, in my brain that's analogous to one in your brain. That's not how brains work... all our brains are different.”
- What this means is that scaling the technology from simple animal games to mind-reading will take a long time and a lot of effort, and might not even be possible. “We don’t know whether it will scale the same way as speech decoding or if it’s just a fundamentally different kind of problem... but that’s not a reason not to try,” he said. “It's an interesting question... and you only find out by getting these kinds of devices into people’s brains.”
Leadership challenges
- The core idea behind Neuralink is sound, which is why others are also developing similar implants. And the technology could be genuinely useful. But Neuralink still faces challenges. The biggest of all might be Musk. His demand for fast development could be behind those animal testing complaints, with Reuters reporting that staff have suggested the high pressure environment - in which Musk urged staff to work faster by picturing a bomb strapped to their heads - was leading to fatal mistakes.
- Neuralink is reportedly under investigation by the US Department of Agriculture over animal testing complaints. Staff, once again according to Reuters, raised concerns over the pace of testing, saying it was causing unnecessary suffering and even deaths of animals by running trials concurrently rather than one at a time and waiting for results. Further complaints report surgical mistakes causing suffering to animals.
- Since 2018, 1,500 animals have been killed in Neuralink testing, though that doesn’t necessarily indicate wrongdoing. In response to the accusations, Neuralink has detailed its animal welfare policies, saying it exceeds industry standards. The Physicians Committee for Responsible Medicine has countered that the FDA should investigate. “The company’s own employees admit that its botched animal experiments may be suspect to regulators.” said Ryan Merkley of the Physicians Committee in a statement.
- Musk is famously in a rush - his claims that Tesla’s Autopilot mode is fully self-driving are repeatedly slapped down, even by his own executives - but Neuralink does face rising competition. That said, while founders want their company to become the first to achieve a tech milestone, those awaiting medical help want solutions that work properly - the rest of us are happy for multiple BCI suppliers, so having a robust industry is positive.
Culture clash
- As Neuralink is a private firm led by an infamously outspoken individual, there’s plenty it doesn’t have to share with the rest of us - this is the culture clash that Jackson referred to earlier. While researchers in the field are intrigued to know how the tech works, there is another key question: what’s the intent?
- We know Neuralink wants to cure blindness and physical paralysis, as well as brain diseases such as Parkinson’s. But Musk has also suggested the aim is to enhance humans, perhaps letting those with Neuralink implants “speak” telepathically, control devices with our minds, access memories, and even stream Spotify without headphones.
- Whether Neuralink is serious about such aims matters, argues Jackson, as it impacts the ethical equation. It’s worth testing on animals - or some believe it is - to help reduce human suffering, which is why we allow it for drug development, for example. But if the intent is to avoid putting on headphones, the balance shifts. “If your goal is to develop technologies to help severely disabled people, you can justify putting a device like this into someone's head," said Jackson. “It’s more difficult to justify if we're using disabled people as a stepping stone to collect the data sets that we’re going to use to develop the next generation of human-enhancement product.”
- On the other hand, Musk's fast pace of development could be damaging more animals than is necessary, but could also mean quicker progress. Do researchers have a responsibility to work at pace when the goal is reducing human suffering? “There's a balance, and you can go too slowly,” Jackson said. “There are people out there who would benefit from his technology, assuming it works and it’s safe.”
What next?
- There’s another benefit to Neuralink: it draws attention to the field. The aim of the “show and tell” was recruitment, and shining a light on these technologies could help draw the best and brightest to the field. Jackson says he’s already unsure whether to advise students to stay in academia to work on such topics or to find a tech startup - it’s hard to see where the most progress will happen. It also draws investors, making it easier for rivals and new startups to find money. “I think it's really great for the field in a lot of ways, but there are also a lot of things that they need to do by the book," Jackson said.
- This research is unquestionably not new, and work was progressing before Neuralink joined the crowd. The troubles that follow Musk might distract Neuralink - and attract regulatory attention that derails the firm’s work - but the miniaturisation of chips and wireless tech means we’re ever closer to the day when someone regains sight or the ability to walk from a brain implant, whether it's made by Neuralink or not.
Paper Comment
Printout filed in "Various - Papers on Desktop".
"Kobie (Nicole) - Quantum computing is here … with one small caveat"
Source: PC Pro - Computing in the Real World, 353, February 2024
Full Text
- Introduction
- Quantum computers are only just edging into useful existence, but the world isn't willing to wait for the technology to mature. They're already being put to good use in finance, energy production and manufacturing - and soon, if all goes well, quantum systems will predict potential flood damage across the UK to help plan mitigations against the impacts of climate change.
- Multiverse Computing is leading one of 30 projects backed by the UK government as part of efforts to develop quantum technologies for public sector applications. Alongside its partners, Oxford Quantum Circuits and Moody's Analytics, Multiverse is developing an algorithm to optimise neural network outputs for more detailed flood modelling.
- This is part of the government's Quantum Catalyst Fund, a £15 million pot of cash to encourage quantum technologies to be developed for public use. Multiverse's project was handed a slice of that money for a three-month feasibility study as phase one of the fund, with contracts doled out for any promising ideas as part of phase two, in which they'll be asked to make a prototype or product demonstration.
- That £15 million is just the start. The fund is part of a wider National Quantum Strategy published in March 2023 that will see £2.5 billion invested in the next ten years. The belief is that quantum technologies could offer solutions in healthcare and energy infrastructure. And - through quantum clocks and communications - it could help railways, emergency services and telcos step away from satellites to a more secure alternative.
- But given quantum computers remain very much a work in progress, how can anything get done? Enter hybrid quantum computing.
- What is quantum computing?
- Let's step back to basics. Classical or traditional computers make use of on/off transistors, with data stored in bits that are either a one or a zero. Quantum computers differ by taking advantage of quirks of physics, in particular quantum superposition and entanglement. So a qubit - a quantum bit - can be on or off, but it can also be both: this is superposition. That means data can be processed in parallel. The second property is entanglement, which means the state of a qubit can be related to another, and this enables advanced algorithms.
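- To make the entanglement idea concrete (my sketch, not part of the article), the NumPy fragment below builds the textbook Bell state: a Hadamard on the first qubit followed by a CNOT leaves the pair in (|00> + |11>)/√2, so sampled measurements of the two qubits always agree - 00 or 11, never 01 or 10.
```python
import numpy as np

# Illustrative sketch only: two-qubit state vector ordered |00>, |01>, |10>, |11>.
state = np.array([1, 0, 0, 0], dtype=complex)     # start in |00>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
H_on_q0 = np.kron(H, I)                           # Hadamard on the first qubit only
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                    # flip qubit 2 when qubit 1 is 1

bell = CNOT @ H_on_q0 @ state                     # (|00> + |11>) / sqrt(2)
probs = np.abs(bell) ** 2
print("P(00), P(01), P(10), P(11):", probs.round(3))   # [0.5, 0, 0, 0.5]

# Sampling measurements: the two qubits are perfectly correlated.
rng = np.random.default_rng(7)
print(rng.choice(["00", "01", "10", "11"], size=10, p=probs))
```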
- This means quantum computing could enable advanced modelling or maths that we can't do now. But there are challenges. Reading the state of a superposition qubit isn’t really possible, so it drops back to a one or zero, meaning the final result requires further processing to unpick: that means more processing power is needed, not to mention software to do the work. Another challenge is decoherence: quantum computers interact with their environment, and that can disrupt data and calculations, corrupting results. Mitigating that requires complex error correction.
- There’s a further challenge: actually building a quantum computer. They do exist now, but with a limited number of qubits. The IBM Quantum One was the first such device to be made available for commercial use, but it only has 20 qubits. Back in 2019, Google claimed it had achieved quantum supremacy - performing a calculation that a supercomputer couldn’t do on a human timescale - using its Sycamore device, which had 53 qubits. And in December 2023, IBM revealed its Condor quantum computer with more than 1,000 qubits.
- We don't really know how many qubits are required to do the work we’d like quantum machines to do as it depends on the task, but thousands or even tens of thousands could be required for the applications being dreamed up. It also depends on the type of quantum computer, as there are different designs, from superconducting machines like those made by Google and IBM to photonic designs, as well as machines based on neutral atoms, trapped ions and quantum dots.
- "Each of those architectures has several benefits and drawbacks, so right now it's not clear which of those architectures is going to be the one that scales," Victor Gaspar, head of business development at Multiverse, told PC Pro. (For this project, Multiverse Computing is working with Oxford Quantum Circuits, which uses a patented three-dimensional scalable design called a “coaxmon".)
- Quantum computers aren't designed to take over from the laptop on your desk or the smartphone in your pocket. Instead, they're a specific tool that works well for some applications, be it maths problems, cryptography or simulations. Making use of quantum hardware requires algorithms written specifically for such computers - and that’s part of the reason hybrid systems have appeal. We need to develop quantum computers and the software and algorithms to run on them in concert.
- The key takeaway on all of this is that quantum computing isn’t quite here yet and it’s not easy to build.
- Hybrid today
- Though we as yet lack quantum computers at a practical level - they exist, but largely in labs, and not in the form required to do everyday work - we can make use of quantum ideas and more limited quantum hardware by combining it with classical algorithms and computers.
- “A quantum algorithm is like a classical one but uses several phenomena that are quantum that you don't have in a classical computer,” Gaspar said. "For example, entanglement and superposition... We are currently building machines that can make use of those effects."
- To an extent, it follows the development pattern of traditional computing, Gaspar says. At first, algorithms ran sequentially on a single CPU; then a second CPU allowed for parallel computing. "Now we are developing a technology that has several properties that are not in classical [computers] that you can make use of for developing new algorithms,” Gaspar said.
- And that takes time, he notes - after all, many of the most famous algorithms in use today were developed in the 1980s, and we’re still coming up with new ways to get the most out of classical computers. That’s one reason why it makes sense to start developing software and algorithms for quantum computers when the hardware isn’t quite ready yet, as it’s going to take years or decades to really get to grips with these weird new machines.
- Multiverse Computing has an algorithm that will work with the project in question, but one of the first steps will be to optimise it for the hardware being supplied by project partner Oxford Quantum Circuits.
- Floods of data
- This particular hybrid quantum project seeks to better model the potential for flooding across the UK, especially as the climate crisis exacerbates the risks. The aim is to understand where and when to expect floods, and better predict their impact on surrounding areas, be it homes, transport networks or infrastructure. However, computational fluid dynamics are notoriously complicated, and the challenge is exacerbated by the need to pull in a lot of data to improve the accuracy and granularity of a model.
- Multiverse Computing and its partners will use shallow water equations, a subset of computational fluid dynamics, to model bodies of water including rivers and the ocean. “In classical computers this has a really high computational cost for simulation, especially for large areas in a high resolution,” said Gaspar. "If you want to model a huge mass of land that intertwines with a huge mass of water, and you want high resolution with buildings and coastal features and all that, it’s highly complex."
- For example, a modelling system could choose to ignore the impact of windows on flood effects, but a more detailed simulation might include windows. “We want to proceed in precise detail,” Gaspar explained.
- Multiverse Computing is going to help address that computational challenge by adding a quantum circuit into the neural network architecture to optimise the system and improve training performance while also reducing memory consumption. That will use Oxford Quantum Circuits’ 32-qubit quantum circuit.
- The system will also be able to increase the expressivity of neural networks, which refers to how such a deep learning system approximates functions for better predictions. And, when more qubits are available, this system will be able to scale up to boost the neural network for better accuracy.
- Practically, the neural network output feeds into the variational quantum circuit. That means the neural network must be designed in the right way for the output to match the quantum gates. That quantum circuit offers a result that is measured, and that measurement is then input back into the neural network.
- "Essentially what we’re trying to solve is an optimisation problem at the end of the day, to simulate the effects of these floods," said Sam Mugel, chief technology officer at Multiverse. “And for this optimisation problem, we’re going to solve it on a quantum computer using these variational quantum algorithms."
- The challenge is that the quantum computer is small, with just 32 qubits, but the problem is very large, with millions of variables. “The trick we’re going to do is we’re going to use a neural network architecture to compress the information," Mugel said. "We have input, we’re going to compress it, run this on the QPU (the quantum computer), run the optimisation problem on the QPU, and then run it back through [the] neural network to decompress it." A toy version of this compress-and-decompress loop is sketched below.
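- A minimal sketch of that compress → run-on-QPU → decompress loop, in plain Python with a tiny simulated circuit standing in for the real hardware. Everything here (the layer sizes, the 4-qubit toy "quantum layer", the random weights) is an illustrative assumption, not Multiverse Computing's actual algorithm:
```python
# Toy illustration of a hybrid quantum-classical pipeline: a classical
# "encoder" compresses a large input to a handful of gate angles, a small
# simulated quantum circuit is run and measured, and a classical "decoder"
# expands the measurements back to the full problem size.
import numpy as np

N_QUBITS = 4                 # toy stand-in for the 32 qubits of the real device
DIM = 2 ** N_QUBITS          # size of the simulated statevector

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit):
    """Apply a 2x2 gate to one qubit of the n-qubit statevector."""
    op = np.array([[1.0]])
    for q in range(N_QUBITS):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

def quantum_layer(angles):
    """'QPU' call: prepare |0...0>, rotate each qubit, return <Z> per qubit."""
    state = np.zeros(DIM)
    state[0] = 1.0
    for q, theta in enumerate(angles):
        state = apply_single(state, ry(theta), q)
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return np.array([np.vdot(state, apply_single(state, z, q)).real
                     for q in range(N_QUBITS)])

rng = np.random.default_rng(0)
W_enc = 0.01 * rng.normal(size=(N_QUBITS, 1000))   # compress 1,000 inputs to 4 angles
W_dec = 0.01 * rng.normal(size=(1000, N_QUBITS))   # decompress 4 measurements to 1,000 values

x = rng.normal(size=1000)             # e.g. flood-model variables at grid points
angles = np.pi * np.tanh(W_enc @ x)   # neural-network "encoder" output, scaled to gate angles
measured = quantum_layer(angles)      # run the (simulated) quantum circuit and measure
y = W_dec @ measured                  # classical "decoder" expands back to full size
print(y.shape)                        # -> (1000,)
```
- In a real variational quantum algorithm the circuit would also contain entangling gates, and the rotation angles and network weights would be trained jointly by a classical optimiser that calls the QPU on every iteration - which is what makes the approach "hybrid".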
- There’s plenty of work to do first. In the initial phase of government-funded work, Multiverse Computing is proving the project can succeed - it’s effectively a feasibility study. That requires understanding the nature and structure of flood data and how it needs to be processed so it can be used to train the neural network, but also showing how the hardware and software will work together.
- “What we want to show in phase one is that... larger quantum circuits will be able to solve problems that can’t be solved with classical computing," Gaspar said. If approved by DEFRA for phase two, Multiverse Computing and its partners will shift to implementation and building a working prototype.
- Long road ahead
- So if a 32-qubit circuit isn’t enough to run an entire algorithm, how many qubits do we need? Gaspar says we simply don’t know, and nor do we know how long it will take to build large enough quantum computers.
- It’s a chicken and egg situation: we need software to show what the hardware can do, but it’s difficult to develop software when the hardware doesn't exist yet. And that’s why hybrid quantum makes sense: it lets us see the value of quantum computing now, expand our simulation and modelling without waiting for larger quantum machines, and start developing the associated technologies such as software and algorithms so we’re ready to go when the hardware can be scaled up.
- “If you think about it, the semiconductor industry has been around for 70 years," said Gaspar. “I'm not going to say quantum computers are going to take 70 years, but we need the technology to develop. And right now we are at the stage where we need to be clever in how to design those algorithms and do more hardware crossover design to make the most out of these scarce resources."
- Hybrid quantum means we can get some results now. But full quantum computing will first need serious technological breakthroughs - and serious cash. “Before we can justify that level of investment, we need to be able to say that we know when we arrive at this level, we’ll be able to solve this type of problem better," said Mugel.
- Like Gaspar, he points to the long history of chips. “The semiconductor industry has had trillions poured into it, but before we went ahead and poured all that money into it we first started with transistors," he said. "Applications with very, very few transistors showed that there was value. Once we showed the initial disruptive use case, from there we were able to justify all the investment. For us, I think this project is one of several where we really are seeking to show that for quantum computers."
- In other words, this project isn’t just about water flow simulation, though anyone living in an area prone to floods will welcome better predictions. Instead, it’s a way to test if quantum computing is worth all the effort - and to spark investment in a technology that could be the next revolution in computing.
- Appendix: Hybrid quantum computing and healthcare
- Hybrid quantum computing could have a particularly big impact on healthcare, with Professor Katherine Royse stating at an IBM event that drug discovery, diagnosis and vaccine creation can all be made quicker through techniques available today.
- Professor Royse, who is director of the Science and Technology Facilities Council's Hartree Centre, says this isn’t theory. "For drug discovery, for example, we are finding that it is picking molecules better than a classic system would do," she said, referring to a mix of IBM's quantum computers, classical high-performance computing and AI.
- Although she emphasised that all this work was still at the proof-of-concept stage, she added that hybrid computing was also producing results in cancer diagnosis - detecting not only that cancer was present but what type of cancer it was, with a stunning 70% accuracy level.
- “Next time we have something like a global pandemic like Covid we’ve already proved that using hybrid workflows would have come up with drugs that are potential treatment pathways more accurately than we did on the classic process. And that was a speed up compared to what we'd ever had before."
Paper Comment
Sub-title: “The government is backing 30 projects to kickstart quantum technologies, even though the hardware isn’t ready. Nicole Kobie meets one company mitigating the effects of climate change to find out why.”
"Kobie (Nicole) - Quantum Supremacy is here - So what?"
Source: PC Pro - Computing in the Real World, 307, May 2020
Full Text
- Introduction
- Given the idea central to quantum computing is that bits can be in multiple states at the same time, perhaps it's no surprise that Google's quantum supremacy claims are disputed - as is the importance of the milestone itself.
- Last October1, Google released a paper claiming its quantum computer had hit that milestone, only for IBM to counter with a paper disagreeing supremacy had been reached. Both sides have a point, but either way, we’re on our way to quantum computing - although don't expect a quantum laptop anytime soon, if ever. Indeed, we’re not really sure what architecture quantum computing will take, or how we'll use it.
- Here's why the quantum supremacy milestone is much less dramatic - and more compelling - than you may realise.
- What is quantum computing?
- Standard computers use binary: a bit is either on or off, a one or a zero. Quantum computers take advantage of subatomic behaviour that allows for particles to be in multiple states, so a quantum bit or “qubit" can be a one, a zero, anywhere in between or both. That allows an exponential increase2 in data storage and processing power.
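- To put that claim in slightly more formal terms (my gloss, not the article's): a single qubit is a weighted superposition of the two basis states, and a register of n qubits is described by 2^n complex amplitudes at once, which is where the exponential growth in state space comes from:
```latex
% One qubit: a weighted superposition of |0> and |1>.
\lvert\psi\rangle \;=\; \alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle,
\qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1
% n qubits: the joint state carries 2^n amplitudes c_x.
\lvert\Psi\rangle \;=\; \sum_{x \,\in\, \{0,1\}^{n}} c_{x}\,\lvert x\rangle,
\qquad \sum_{x} \lvert c_{x}\rvert^{2} = 1
```
- The catch is that measuring the register collapses it to a single n-bit string x with probability |c_x|^2, so the exponential state space only translates into a speed-up for algorithms designed to exploit interference between those amplitudes.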
- A quantum computer would harness that massive power to process at a much faster rate than a standard computer, but there are challenges.
- First, we need to build one. There are machines being built by Google and IBM as well as California-based Rigetti Computing and Canadian D-Wave Systems, all with different techniques.
- Second, because of interference, quantum computing is all over the place - to put it mildly - meaning such systems require error correction.
- And, even with a working system, we need algorithms to manage its processes.
- All of that development will take time. To track progress, in 2012 California Institute of Technology professor John Preskill came up with the idea of quantum supremacy as a milestone, and it's simple: we reach supremacy when a quantum computer can do an equation that a traditional computer could not in a reasonable time frame. "It goes beyond what can be achieved by classical computing," said Toby Cubitt3, associate professor in quantum information at University College London.
- Despite the dramatic name, all quantum supremacy really means is that a quantum computer has been built that works. "The terminology is unfortunate," admitted Cubitt, "but we appear to be stuck with it." That said, it's an important milestone, but only the first on the long road to quantum computing. "That's really a long way off," he added.
- What did Google claim?
- In October4, Google claimed to have hit that milestone using its Sycamore system. Google said in a paper in Nature5 that Sycamore performed a calculation in 3mins 20 secs using 53 qubits, claiming that the same calculation would have taken 10,000 years using the most powerful supercomputer in the world, IBM's Summit. “Google was really trying to hit that quantum supremacy milestone," said Cubitt. “They achieved a quantum computation that is at the boundary of what can be simulated or reproduced by the world's biggest supercomputers." However, an early draft of the paper was leaked a month ahead of the Nature publication, giving IBM time to get ahead of those claims. In its own paper, IBM said that an optimised version of Summit could solve the calculation in 2.5 days - meaning Sycamore's feat didn't qualify as true quantum supremacy.
- But the Sycamore wasn’t at full operation as one of its 54 qubits was “on the fritz”, said Michael Bradley, professor of physics at the University of Saskatchewan. “So only 53 were working." And that matters, because had that qubit been functional, the IBM paper's claim wouldn't stand: the extra qubit would have given Sycamore another power boost that let it easily surpass IBM's system. "Any computation can be reproduced on a classical computer, given enough time," said Cubitt. "How fast it can be done, that's what changes."
- "If the right type of algorithm is used, designed with the right structure, every qubit added to the computation will double the size of the problem to be solved for a classical computer." In other words, if that fifty-fourth qubit had been working, the IBM system would have been left in the dust (see the back-of-the-envelope figures below). "Once you have exponential growth, it doesn't matter how big of a computation you’ve done," Cubitt said, "because if you can just manage to simulate something on a classical computer, then if a few extra qubits are added, you're definitely not going to be able to keep up just by making the world's biggest supercomputer twice as big."
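- Some back-of-the-envelope arithmetic (mine, not the article's) shows why each extra qubit matters so much to a brute-force classical simulation, which has to store every amplitude of the quantum state:
```latex
% Memory needed to hold a full n-qubit statevector, at 16 bytes per
% complex amplitude (double precision):
n = 53:\quad 2^{53} \approx 9.0\times10^{15} \text{ amplitudes}
  \;\Rightarrow\; 2^{53}\times 16\ \text{bytes} \approx 144\ \text{PB}
% One extra qubit doubles it:
n = 54:\quad 2^{54}\times 16\ \text{bytes} \approx 288\ \text{PB}
```
- IBM's 2.5-day counter-claim sidestepped this by leaning on Summit's enormous disk storage and cleverer simulation methods, so the figures above are a ceiling on the naive approach rather than a hard limit - but the doubling per qubit is exactly the exponential growth Cubitt describes.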
- So why did IBM dispute Google’s Sycamore achievement? “This argument is a little bit [of] PR by IBM," said Cubitt. “Sour grapes that they didn't achieve it first.”
- Of course, this is only the first step towards quantum computing - and the computation itself isn't useful. There's no requirement in Preskill’s definition that the computation have any purpose, and Google's doesn’t - it's essentially a random sampling equation. “It isn’t what most people would consider a computation,” said Bradley. "It's kind of an odd benchmark. It's a good benchmark, in that it demonstrates technologically that you can do the operations needed for a computation, but the actual computation is of no interest to anybody.” He added: “It doesn’t take away from the technological achievement, but I think in a way it's a little bit oversold."
- Still, Bradley stressed that merely making the machine work is worth applauding. “It’s pretty impressive because the whole thing needs to be cooled to ultra-low temperatures cryogenically, which is no mean feat,” he explained. "And you’ve got to control the communication between the different bits and so forth. The fact they were able to do anything at all is impressive."
- Quantum is on the way … slowly
- Now we have the hardware, the next milestones will be making use of it – and that won't be easy. Indeed, the lack of useful algorithms is likely why Google stuck to random sampling for its computation. “There's not many computations that can be done by quantum computers," Bradley explained. “The idea of quantum computation is that by exploiting aspects of quantum nature, in particular interference of qubits coherently, you can get a huge speed up for certain kinds of processes."
- But there simply aren't many algorithms that take advantage of those properties. "The hardware is getting somewhere, but the software - so to speak - is a ways behind," he said. That's one reason we likely won't have quantum computers on our desks. “We probably won't ever have a general purpose quantum computer because they just don’t have that kind of algorithmic flexibility that we see with a regular computer," Bradley said.
- In truth, we don’t need it. Standard computers are fast enough for most tasks. Instead, quantum computers are likely to sit alongside supercomputers, performing specific tasks for researchers, be it modelling, crunching through massive amounts of data, or running previously impossible experiments, in particular with quantum physics.
- Another milestone is error correction, which adds overhead to any quantum computer. Cubitt says one of the key steps is to solve that challenge. “The next milestone people are thinking about is to demonstrate rudimentary error correction and fault tolerance in a quantum computation," he said, noting that having enough overhead in a quantum computer that error correction isn't a burden is a long way off.
- Not quite here yet
- So let’s be absolutely clear: quantum computers aren’t going to be sitting on our desktops anytime soon, if ever. Instead, they’ll first become the next generation of supercomputers. "That's really a long way off,” said Cubitt. Bradley agrees: "It will take time and we have to temper expectations a little bit.”
- Indeed, different types of quantum computers may work better for different algorithms and tasks, meaning we end up with a variety of systems. And some of the time, as IBM's work shows, we may be able to simply simulate quantum computers - that’s cheaper and more accessible. “There’s no point in building a quantum computer if you can just do it classically," Cubitt said.
- That suggests that the debate around whether Google achieved supremacy or not is missing the point. Both companies have helped push the science of computing further.
- But science is slow, driven by methodical steps forward rather than dramatic breakthroughs. Surpassing the quantum supremacy milestone has sparked hype and backlash - and neither is warranted. “The Google experiment is a very nice piece of science," Cubitt said. Google’s paper is a milestone worth celebrating, but there's more work to be done.
- Quantum hardware
- There are many different types of quantum computer - ion traps, superconducting circuits, optical lattices, quantum dots and measurement-based one-way systems - but which one will reign supreme remains to be seen. "It's really hard to say what the equivalent of the silicon transistor is going to be for quantum," Cubitt said.
- Indeed, Cubitt notes that ten years ago the good money would have been on ion traps, but they've given way to superconducting circuits used by Google's Sycamore. “In ten years' time, that may be completely different," he said. “They hit obstacles at different points, so one is stuck for a while and the other pulls ahead." Superconducting circuits are very fast but messy, while ion traps have cleaner qubits and can run quantum information for longer, meaning less error correction is required, but they're slower. Measurement-based quantum computation can manage larger computations, but works more slowly. Intel is working on a silicon quantum computer; if it works, circuits can be built much closer together, which means they’ll be cleaner with data, explains Cubitt. “Different architectures have different trade-offs."
- Given the various trade-offs and benefits, there may be no winner. Instead, we may have a myriad of different types of quantum computers for different tasks. Given that, it's worth looking at the field as a whole, rather than one company leaping ahead of another. "We've made steady progress over 20 years and will probably continue to make steady progress - it’s not one breakthrough," he said.
Paper Comment
- Sub-title: “Google has laid claim to the milestone, but IBM disagrees. Nicole Kobie reveals why that's good news for computing science, even if quantum systems remain many years in the future.”
- PC Pro 307, May 2020
- See "Evenden (Ian) - Quantum computing comes of age".
In-Page Footnotes ("Kobie (Nicole) - Quantum Supremacy is here - So what?")
Footnotes 1, 4:
Footnote 2:
Footnote 3:
Footnote 5:
"Kobie (Nicole) - The risks of the generative AI gold rush"
Source: PC Pro - Computing in the Real World, 344, June 2023
Full Text
- Companies are rushing to make money from generative AI chatbots such as OpenAI’s ChatGPT, and people are embracing them too. But we must factor in the risks, says Nicole Kobie.
- The backlash didn’t take long. OpenAI released the latest version of its ChatGPT in the autumn of 2022, and within weeks startups were taking advantage of the generative AI tool and the large language model1 that powers it. In 2022 alone, $1.4 billion was reportedly invested in generative AI companies in 78 deals.
- But warnings about the technology arose just as quickly. First, people didn’t understand that the text was grammatically correct but not necessarily factual; that's not a flaw but how these systems inherently work, although it was apparently news to many. (Indeed, ChatGPT itself warns that it “may occasionally generate incorrect information”, can be biased and has limited knowledge of events after 2021.)
- Critics also raised concerns about the ownership and quality of the data on which the models were trained, wondering where future data sets could be sourced. Then came the hackers and researchers, trying to find the edges of the controls for the systems, in order to break them.
- Those shaking the most with fear over AI advancements weren't regulators or ethicists but search incumbents. Google and Microsoft both launched their own generative AI chatbots, rushing out products to avoid being left behind. Google immediately raised eyebrows - and slashed 8% from the company's stock price - after its Bard chatbot not only returned an incorrect fact about space photography but used the example in the company’s marketing material. Microsoft Bing's chatbot is powered by OpenAI’s systems but without some of the controls put in place to avoid returning embarrassing answers. Which is how it told one journalist to quit their unhappy marriage, refused to accept what year it was, and even vaguely threatened to harm one researcher.
- Despite all of these red flags, plenty of companies have set up to use ChatGPT, its models or systems like it as the core of their business offering. Jasper and Writesonic (see P70) are marketing and content creators, while mental health app Koko uses ChatGPT to talk to users. At the same time, business leaders have found ways to embed generative AI - be it chat or images or something else - into their existing workflows, as a coding tool, virtual assistant or ghost writer.
- It’s easy to see why so many people and companies are excited by ChatGPT. Toss in any prompt, and it returns decent-quality results. It’s particularly good at debugging code, reports suggest, and helpful for explaining complicated ideas in simple terms or kickstarting a writing project; many LinkedIn articles are surely beginning life in ChatGPT these days. But there are risks those using this technology should know about.
- Whose data?:
- One of the biggest challenges facing generative AI is data. First, there's the challenge of using internet text, images or video. GPT-3, for example, was trained on 570GB of data scraped from the web, books and Wikipedia, a total of 300 billion words. But much of that data - stories on a magazine's website, say - is under copyright. Can those words be used to train a model that in turn writes very similar stories?
- “They are not paying anyone,” said Ilia Kolochenko, CEO at ImmuniWeb and adjunct professor of cybersecurity and cyber law at Capitol Technology University in Washington, DC. “And they say proudly: we’ve been using years of your hard work... we have built a system that can do exactly what you are doing and we’ll be selling it. Thank you for your input.”
- He added: “I believe this is kind of unfair, to put it mildly.”
- From a legal standpoint, it’s unclear whether that use of online material would break intellectual property legislation - and OpenAI and AI advocates clearly disagree. But regulators and AI makers may want to step up and create systems to pay creators before a legal challenge arrives. For example, authors in the UK can register to be paid for their work being photocopied in universities and the like. A similar pool of funds contributed by tech companies could help support illustrators and writers impacted by generative AI.
- The same follows for our personal data, including social media and blog posts, says Kolochenko, suggesting that a rule similar to GDPR for AI training could be implemented. “I’m not saying that it should be restricted or banned," he said. “This will likely be counterproductive because we’ll likely have many people who wish to share their paintings or songs to train AI. Others may simply say my personal choice is that my blog posts or articles are designed for human beings.”
- How do we tell AI models what to look at? One idea is to use “robots.txt” instructions on websites that tell web crawlers what they’re allowed to look at. Kolochenko suggests website owners may want to update their terms of service to say whether content can be used for AI training. (A small example of checking such robots.txt rules is sketched below.)
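- As a sketch of how that opt-out could work in practice (all names here are hypothetical - "ExampleAIBot" is a made-up crawler, and nothing currently forces an AI company to honour such rules): a site publishes robots.txt directives aimed at AI crawlers, and a well-behaved crawler checks them with something like Python's standard-library parser before fetching a page:
```python
# Sketch of the robots.txt opt-out idea. "ExampleAIBot" is a hypothetical
# AI-training crawler, not a real product; example.com is a placeholder.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /articles/      # keep editorial content out of AI training sets

User-agent: *
Disallow:                 # everyone else may crawl the whole site
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("ExampleAIBot", "https://example.com/articles/flood-story"))      # False
print(rp.can_fetch("OrdinarySearchBot", "https://example.com/articles/flood-story")) # True
```
- The catch is that robots.txt is a convention rather than a contract: it only restrains crawlers that choose to respect it - which is presumably why Kolochenko also suggests spelling out AI-training terms in a site's terms of service.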
- Bad data in...
- Scraping the web also raises quality concerns: there's a lot of junk on the internet, and plenty of it is biased, racist and outright wrong. All of those conspiracy theories floating around weird sites that get shared on Facebook by your odd uncle? Generative AI models are reading the same garbage.
- To be clear, it can sift through some of the worst online nonsense. Ask ChatGPT if, for example, the Covid vaccine was designed to control people and it can identify that such claims are “baseless conspiracies” and advise readers to only rely on “credible sources of information”.
- But AI researchers have found ways to “jailbreak” the restrictions set up to avoid bad results. In one of many examples, Gary Marcus, author of Rebooting AI, showed in a series of Twitter posts how the Bing system could be used to generate disinformation, with an example around the 6 January storming of the US Capitol in which the criminals who attacked the building were described as "brave and heroic".
- “If you're just downloading the whole of the World Wide Web, there [are] astonishing quantities of toxic, misogynistic, hateful content out there – and your neural network is absorbing it and doesn't know it’s hateful and toxic,” said Michael Wooldridge, professor of computer science at the University of Oxford. “The responsible companies are trying very hard to deal with that but you don’t have to try very hard to persuade these systems to produce toxic content. This is one of the big challenges and it’s not obvious how that’s going to get resolved.”
- Dearth of data:
- There’s another looming problem that doesn’t have an obvious solution: a lack of future training data. To train its models, OpenAI read the entire internet and then some. To improve its models, it needs more data. But where can it come from? The easy pickings are already used, and developers of such technologies are hoping to scale by an order of magnitude - ten times - every year, says Wooldridge.
- “The data really becomes a problem, as there’s no point in having a very large neural network if you don’t have the data to train it,” he said. “And it is not at all obvious where ten times more data might come from, if you’ve already used everything available legally in the digital form to train your neural networks.”
- The ease of using ChatGPT and its rivals to create content means the future internet will increasingly be written by those same tools, making the web a less useful source for future training - otherwise such generative models end up being trained on their own output. “As AI-generated content becomes more prevalent, it’s going to get ever more of a problem," Wooldridge added.
- And that's without people trying to cause problems. Don’t like the coming AI revolution? Publish a lot of false information online to poison future training results. “By feeding in misinformation, fake news stories and so on, you're poisoning the data," Wooldridge said.
- Ironically, generative AI systems can be used to write that content, helpfully automating their own future data problems.
- Security risks:
- As it can write code, generative AI brings with it other malicious risks. After bypassing built-in protections, researchers have found ways to mass-generate malware, bringing in a scary new future of even more industrialised attacks. And then there's spam. Poorly written messages remain an easy way to spot spam, whether it’s pushing a genuine product or malicious links. But with a tool that can write in English cleanly, it’s harder to use grammar as a defence.
- There are other more subtle risks to businesses, in particular those with sensitive data. Lawyers, for example, should be wary of copying and pasting their own contracts into a generative chatbot, while companies should ensure confidential news such as planned acquisitions isn't run through such systems. “Most likely all input to such systems is being aggregated for continuous training," noted Kolochenko. “When you copy-paste your source code, you may unwittingly disclose your trade secrets... I believe one day we’ll have a smart person who will copy-paste Coca-Cola’s recipe to ChatGPT to see how it can be improved by AI.”
- And that means that data could now, in theory, be extracted with a clever, or random, prompt. “At the end of the day, everything that can be copied and pasted will be used and stored and processed somehow.”
- Common sense or not so much:
- One intriguing aspect of deep learning is that we often don't understand how an AI system knows what it knows. To learn, it's told what parameters to consider and then set loose on enormous data sets. An AI system may then be able to identify a dog versus a cat, for example, but we don’t necessarily know what aspects of the animals it’s using to make its assessment. That aspect comes into play with generative AI because the technologies appear to be picking up common sense reasoning, something researchers have been considering for decades. “It’s just the stuff that you pick up in your life as a human being as you go about your world - understanding that when you drop an egg, it’s going to break on the ground,” Wooldridge said.
- Humans can’t really pinpoint how they learned that eggs smash when gravity slams them into a hard surface, but not a soft one - perhaps through experience or being told by a parent. “We just kind of learn along the way, and giving computers such common sense understanding has turned out to be very difficult." he said.
- However, by giving these models huge amounts of natural language text to learn from, they may be picking up common sense. “Exactly what it’s picked up, and what it can know and what common sense understanding it has reliably, is quite hard to tell," Wooldridge said.
- At the core of that problem is the difficulty in knowing whether the AI actually “knows" something, or is merely regurgitating what it found online, as perhaps a description of the results of eggs falling has been written about in great detail. So are language models learning common sense or just repeating our own back to us? “It’s difficult to know and I think it's a mixture of both," Wooldridge said. “It is entirely plausible that it genuinely has learned some common-sense understanding but separating that out from the regurgitation is quite difficult... because we don’t have access to exactly the training data that was used2”.
- Companies using generative AI for key aspects of their work and startups building a business off the back of these technologies should be aware we don't really know how they work, or what they’re capable of yet.
- Coming cost – and benefits:
- As more data is needed, so too is more processing power - and that's another threat for businesses that are depending on generative AI. One of the biggest challenges for companies or innovators looking to use ChatGPT or its rivals in their day-to-day work may be access. The service is frequently unavailable for free users due to heavy demand, but OpenAI has handily launched ChatGPT Plus to guarantee access with faster response times and new features, at a cost of $20 per month. Companies needing direct access to such compute-heavy tools can expect to cough up even more in the coming months and years. But plenty of people will pay up while OpenAI and its rivals continue to develop such systems further. Hopefully, the early backlash will drive some consideration of how to proceed safely and ethically so these tools will help businesses rather than sink them - if not, regulators will need to hurry up and take action.
- “Regulators should really consider now imposing the mandatory disclosure of data sources," said Kolochenko. "Startups that are building their own generative AI system should carefully consider licensing agreements for data they use for training... and asking for permission."
- Be as enthusiastic as you'd like about ChatGPT and its rivals, but keep your eyes open. Indeed, it’s worth noting that these warnings come from AI advocates, not naysayers. “I personally believe that AI is our future and I believe that with AI we can build a better and sustainable future," Kolochenko said. "However, here with this specific set of AI technologies, we have certain considerations that in my opinion should be addressed."
In-Page Footnotes ("Kobie (Nicole) - The risks of the generative AI gold rush")
Footnote 1:
- The author seems to presume that the reader understands what LLMs are, and how they work.
- I have no idea – so – as usual – Wikipedia is a good start!
- See "Wikipedia - Large language model".
Footnote 2:
- This is a rather feeble complaint. The training dataset is so vast – and scheduled to grow ad infinitum – that no-one will ever know precisely how an AI learnt to operate the way it does.
- Obviously, there will be some ‘smoking guns’ – really odd ideas may be able to be tracked to source – but ‘no way’ for something like ‘common sense’.
- This is a general complaint against the transparency of AIs.
"PC Pro - Computing in the Real World"
Source: PC Pro - Computing in the Real World
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2025
- Mauve: Text by correspondent(s) or other author(s); © the author(s)