
Latest Biometric Surveillance Scandal in UK Reveals Another Dark Side of AI-Powered Big Brother


Where AI-powered surveillance and control technologies meet capitalism 101.

A fresh exposé by civil rights group Big Brother Watch has revealed that over the past two years eight train stations across the UK — including busy hubs such as London’s Euston and Waterloo, Manchester Piccadilly, and several smaller stations — have conducted facial and object recognition trials using AI surveillance technology. The initiative, which hooked Amazon’s AI surveillance software up to the stations’ CCTV cameras, was ostensibly meant to alert station staff to safety incidents and potentially reduce certain types of crime.

The data collected was sent to Amazon Rekognition, according to a Freedom of Information Act (FOIA) request obtained by Big Brother Watch. As WIRED magazine reports, “the extensive trials, overseen by the government-owned rail infrastructure body Network Rail, have deployed object recognition — a type of machine learning that can identify items in video feeds — to detect people trespassing on tracks, monitor and predict platform overcrowding, identify antisocial behaviour (“running, shouting, skateboarding, smoking”) and spot potential bike thieves.”

In other words, it was all intended to help keep rail passengers safe, train stations clean and tidy, and bikes in their place. A Network Rail spokesperson said:

We take the security of the rail network extremely seriously and use a range of advanced technologies across our stations to protect passengers, our colleagues, and the railway infrastructure from crime and other threats.

When we deploy technology, we work with the police and security services to ensure that we’re taking proportionate action, and we always comply with the relevant legislation regarding the use of surveillance technologies.

That is probably not as comforting as it may sound. As I will show later in this article, the (almost certainly outgoing) Sunak government has tried everything it can to gut the limited safeguards protecting the British public from the potential downsides and dangers of AI-empowered surveillance.

Measuring Passenger “Satisfaction”

A particularly “concerning” aspect of the train station trials is their focus on “passenger demographics,” says Jake Hurfurt, the head of research and investigations at Big Brother Watch. According to documents released in response to the FOIA request, the AI-powered system could use images from the cameras to produce “a statistical analysis of age range and male/female demographics,” and is also able to “analyse for emotions” such as “happy, sad and angry.”

This is where AI-powered surveillance and control technologies meet capitalism 101. From the WIRED article (emphasis my own):

The images were captured when people crossed a “virtual tripwire” near ticket barriers, and were sent to be analysed by Amazon’s Rekognition system, which allows face and object analysis. It could allow passenger “satisfaction” to be measured, the documents say, noting that “this data could be utilised to maximum advertising and retail revenue.”
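To make the pipeline concrete: Amazon Rekognition’s DetectFaces API returns per-face records containing fields such as `AgeRange`, `Gender` and a confidence-ranked list of `Emotions`. The sketch below — using invented sample records that merely mimic that response shape, not real API output — shows how such records could be reduced to the “statistical analysis of age range and male/female demographics” the FOIA documents describe.

```python
# Hypothetical sketch: aggregating the kind of per-face records Amazon
# Rekognition's DetectFaces API returns (AgeRange, Gender, Emotions).
# The sample data below is invented for illustration; a real deployment
# would receive these records from the AWS API over a live camera feed.
from collections import Counter

def summarise_faces(face_details):
    """Reduce a list of Rekognition-style FaceDetails records to
    aggregate demographic and emotion statistics."""
    genders = Counter()
    emotions = Counter()
    ages = []
    for face in face_details:
        genders[face["Gender"]["Value"]] += 1
        # Rekognition returns a confidence-ranked list of emotions;
        # take the highest-confidence one per face.
        top = max(face["Emotions"], key=lambda e: e["Confidence"])
        emotions[top["Type"]] += 1
        # Use the midpoint of the estimated age range per face.
        ages.append((face["AgeRange"]["Low"] + face["AgeRange"]["High"]) / 2)
    return {
        "faces": len(face_details),
        "genders": dict(genders),
        "emotions": dict(emotions),
        "mean_age_estimate": sum(ages) / len(ages) if ages else None,
    }

# Invented sample records mimicking the API's response shape.
sample = [
    {"Gender": {"Value": "Male"}, "AgeRange": {"Low": 30, "High": 40},
     "Emotions": [{"Type": "HAPPY", "Confidence": 85.0},
                  {"Type": "CALM", "Confidence": 10.0}]},
    {"Gender": {"Value": "Female"}, "AgeRange": {"Low": 20, "High": 30},
     "Emotions": [{"Type": "ANGRY", "Confidence": 60.0},
                  {"Type": "SAD", "Confidence": 30.0}]},
]

print(summarise_faces(sample))
```

The point of the sketch is how little is needed: once a “virtual tripwire” hands each passing face to the API, turning the results into demographic and “satisfaction” statistics for advertisers is a few lines of aggregation code.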

The article offers no indication as to how that might be achieved, but the proposal itself should hardly come as a surprise. Besides serving as an instrument of government surveillance and control, biometric systems will be used to maximise corporate revenues and profits — whether for the tech giants providing the hardware and software (in this case Amazon), the large financial institutions facilitating the transactions, or the retail companies honing their targeted advertising techniques.

It brings to mind two scenes from the 2002 sci-fi movie “Minority Report” (based loosely on a Philip K Dick short story). In the first, a camera takes a retina scan of the protagonist John Anderton and a billboard calls out to him: “John Anderton! You could use a Guinness right about now.” In the second, Anderton visits a mall where he is met by an attractive female hologram advising him what clothes to buy. Set in 2054, the film imagines that advertisers will be able to personalise messages on billboards or through holograms via retinal scans.

Apart from the occasional stillborn attempt, this particular dystopian scenario is yet to creep into most of our lives, though the widespread use of augmented-reality “wearables” like Apple Vision Pro will certainly make it more possible. As the WIRED article notes, AI researchers have frequently warned that using face analysis technology “to detect emotions is ‘unreliable’ and some say the technology should be banned due to the difficulty of working out how someone may be feeling from audio or video.”

On the other side of the English Channel, the EU Parliament has voted for a broad ban on the use of live facial recognition systems in public spaces, as have some US cities. By contrast, as we reported in October last year, the UK government is escalating its deployment of the controversial surveillance technology.

Prime Minister Rishi Sunak, the son-in-law of Indian tech billionaire N R Narayana Murthy, is determined to transform the UK into a world leader in AI governance. Said governance apparently involves gutting many of the limited safeguards protecting the public from the potential downsides and dangers of AI, of which there are many…

As we reported in early August, live facial recognition (LFR) surveillance, whereby people’s faces are biometrically scanned by cameras in real time and checked against a database, is being used by an increasing number of UK retailers amid a sharp upsurge in shoplifting — with the blessing, of course, of the UK government. Police forces are also being urged to step up their use of LFR. The technology has been deployed at the Coronation of King Charles III, at sports events including Formula 1, and at concerts, despite ongoing concerns about its accuracy as well as the huge ethical and privacy issues it raises.

In what is surely one of the most brazen and egregious examples of mission creep you’re likely to find, the government has also authorised the police to create a vast facial recognition database out of passport photos of people in the UK. The ultimate goal, it seems, is to get rid of passports altogether and replace them with facial recognition technology. In January, Phil Douglas, the director general of UK Border Force, said he wanted to create an “intelligent border” that uses “much more frictionless facial recognition than we currently do”.

From The Guardian:

Douglas has been touting the potential benefits of biometrics and data security in managing the UK’s borders in recent months. In February 2023, he suggested the paper passport was becoming largely redundant – even as some celebrated the post-Brexit return of the blue document.

He told an audience at the Airport Operators Association conference in London at the time: “I’d like to see a world of completely frictionless borders where you don’t really need a passport. The technology already exists to support that.” Douglas added: “In the future, you won’t need a passport – you’ll just need biometrics.”…

According to polling carried out by the International Air Transport Association in 2022, 75% of passengers worldwide would be happy to ditch passports for biometrics.

“Snooping Capital of the West”

This is a reminder that most of these trends — particularly the tech-enabled drift toward authoritarianism and centralised technocracy — are generalised, not only among the ostensibly democratic nations of the so-called “Free West” but across the world as a whole. But the UK is at the leading edge of most of them.

In an article earlier this year, Politico described the UK as “the snooping capital of the West,” snarkily noting that the country “is finally leading the world… on AI-powered surveillance.” The government last year passed the Online Safety Act, opening up the possibility of tech firms being forced to scan people’s mobile messages – ostensibly for child abuse content. As Open Democracy warns, this is likely to make people’s digital communications less, rather than more, secure:

The more of daily life that becomes digital, the more we rely on secure connections to ensure our data is not exploited. Encryption is the main method stopping miscreants from stealing passwords or personal information.

If firms are forced to weaken security, more attacks will ensue, just at a time that we need to boost security across society.

For example, if WhatsApp were instructed to make messages visible to law enforcement, that back door could be found by others, exposing personal messages. It is a pillar of information security theory that the more ways there are to access a system, the more likely an attacker will be to gain access.

The UK government has also granted police new powers to shut down protests, and passed legislation allowing employers to force employees to work during industrial action – or face being sacked. Police forces are also resorting to Section 60AA to require protesters to remove any item being worn for the purpose of concealing their identity, including, presumably, KN95 masks. Plus, as readers may recall, the Sunak government has granted full management of the National Health Service’s federated data platform to Palantir, a US tech giant with intimate ties to US defense and intelligence agencies.

In its Data Protection and Digital Information Bill (DPDI), the Sunak government even planned to abolish the role of the Biometrics and Surveillance Camera Commissioner (BSCC), an independent oversight office that was, to some extent, helping to hold the public sector to account for its use of AI. As we pointed out late last year, the government clearly wanted to have even freer rein to surveil and control the lives of British citizens. The proposed legislation also sought to scale back the UK GDPR and the Data Protection Act of 2018.

The former Biometrics and Surveillance Camera Commissioner, Professor Fraser Sampson, described the move as “shocking” and “tantamount to vandalism.” In the end, the DPDI was excluded from the “wash-up” process before Parliament’s dissolution in the lead-up to the UK’s general elections, leaving the BSCC intact — for now.

Another Slippery Slope 

When it comes to biometric surveillance technologies, the UK’s independent watchdogs appear to hold limited influence anyway. The two-year trials in the eight train stations all took place despite previous warnings from the UK’s Information Commissioner’s Office (ICO) against using the technology. Speaking in 2022, the ICO’s deputy commissioner Stephen Bonner said:

Developments in the biometrics and emotion AI market are immature. They may not work yet, or indeed ever. While there are opportunities present, the risks are currently greater. At the ICO, we are concerned that incorrect analysis of data could result in assumptions and judgments about a person that are inaccurate and lead to discrimination…

The only sustainable biometric deployments will be those that are fully functional, accountable and backed by science… As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area.

If there’s one silver lining to the technology used in the station trials, it is that it does not identify people, Carissa Véliz, an associate professor at the Institute for Ethics in AI at the University of Oxford, told WIRED. But there is always the risk of a slippery slope, she said, citing similar AI trials on the London Underground that had initially blurred the faces of people who may have been dodging fares, before changing tack, unblurring the photos and keeping images longer than initially planned.

Lastly, if British voters are expecting a reversal of policy on the use of digital surveillance and control technologies by a future Keir Starmer government, they are likely to be sorely disappointed, especially given the Starmer team’s cosy ties to the Tony Blair Institute for Global Change, which often touts digital technologies and biometric surveillance systems as cure-alls for many of the world’s deep-seated problems.

In a speech at the WEF’s 2020 cyber attack simulation event, “Cyber Polygon”, Blair said that digital identity would form an “inevitable” part of the digital ecosystem being constructed around us, and that governments should therefore work with technology companies to regulate its use — as the EU and Australia have already done. It is the perfect manifestation of the 21st century public-private partnership — a digital panopticon designed and built by global tech companies, paid for with taxpayer funds, so that governments, security agencies and their corporate partners can more easily track, trace and control the populace.

As a recent report by Big Brother Watch documents, the Labour Party under Jeremy Corbyn’s leadership pledged to ban facial recognition, but Labour’s 2024 manifesto includes no such commitment. There is also “no formal commitment in the manifesto to reject bank spying powers in the future”, nor to rule out the adoption of a central bank digital currency. Nor is there any commitment to prevent mandatory ID or digital identity.
