Ethical AI in Government
How do governments across the globe integrate ethical principles into AI applications and why is this necessary in serving and protecting societal values? The digital world is in a constant state of flux and has experienced exponential growth in recent years. The expanding capabilities of AI permeate everyday life from all angles; virtual assistants have infiltrated homes, whilst augmented reality is transforming the customer experience within the retail sector. The technology is a goldmine and has taken business and industry by storm, garnering popularity across a multitude of fields.
It is no surprise, then, that governments worldwide, seeing the success AI is giving rise to, have sunk their teeth into the technology. Those in power have utilised it for reasons that vary in accordance with what their government is aiming to achieve, from lowering crime rates to flexing political muscles; there are praiseworthy cases of AI implementation and, of course, the opposite. Where there is good, there is bad. So let’s take a good long look at what constitutes ethical AI in government bodies and where there might be room for improvement, about the latter of which there is a lot to be said.
The Matrix has you
Welcome to China, a land of copious delights. The birthplace of tea, porcelain, silk and other treasures. Home to a very impressive wall and the finest, most scrumptious dumplings one will ever indulge in. A country saturated with culture and history. Cue the 21st century. China has well and truly shed its ancient skin and dived headfirst into the digital future. A technological powerhouse has set up shop in the East.
The country is a melting pot of innovation. New inventions, redefining the limits of technology, pop up like daisies in the spring. But as the saying goes, great power brings great responsibility, and it may be that China is on the brink of overstepping the line (or already has). Recently, the country’s government has stirred up debate over its use of AI technology in exerting power over its citizens. It has come under scrutiny regarding its facial recognition systems, and it’s easy to understand why.
The state-of-the-art technology takes surveillance to a whole new level. It illuminates a disconcerting side to AI; one that we might expect to emerge in the far-flung future, and yet it has arrived prematurely. It brings to light an unsettling reality for Chinese citizens, that to a significant extent, their every action is monitored. It rings true of George Orwell’s Nineteen Eighty-Four maxim, ‘Big Brother Is Watching You.’ There is no room for dissent in a watched state.
Many hold the view that China has exploited AI as a way of constantly keeping tabs on individuals: their identities, their movements, their day-to-day activities. Citizens are held hostage by an oppressive regime that breaches privacy and freedom and creates a hierarchy within society. From a Western standpoint, it looks to be a totalitarian nightmare. AI is being used to reinforce the country’s autocratic government, ensuring conformity and stamping out any notion of resistance. In this capacity, the technology has conceived an omnipresent government.
Dystopia in the East
Biometric technology developed by companies such as YITU and SenseTime can identify an individual within seconds. Through these means, Chinese authorities can exercise control over the public, convert them into data and target individuals who potentially pose a threat to the establishment. Officials sport facial recognition glasses that allow them to pick out those who have defied state law and convention. The system is a disturbing portrayal of what could be a dystopian future for China. The technology is growing increasingly sophisticated, and the possibility of facial recognition learning a person’s thoughts may soon bloom; a spokeswoman from YITU stated that it is already able to detect emotion.
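At its core, identifying a face "within seconds" typically means converting a camera image into a numerical embedding and comparing it against a database of enrolled embeddings. The sketch below is purely conceptual: the names, the toy three-dimensional vectors (real systems use hundreds of dimensions) and the threshold are invented for illustration, not drawn from any vendor's actual system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.9):
    """Return the ID of the closest enrolled face, or None if no
    candidate clears the similarity threshold."""
    best_id, best_score = None, threshold
    for person_id, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy database of "embeddings" for two hypothetical enrolled faces.
db = {"alice": [0.9, 0.1, 0.0], "bob": [0.1, 0.9, 0.1]}
print(identify([0.88, 0.12, 0.01], db))  # → alice
```

The threshold is where the ethical weight sits: lowering it catches more faces but also misidentifies more innocent people, a trade-off that mass-surveillance deployments decide on the public's behalf.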
China’s integration of AI raises the question: at what point does technology impinge on our humanity? Is the time approaching when we must embrace a reality in which we are more digital data than human? Food for thought indeed. The technology is evolving at an alarming speed, so we might suggest it is only a matter of time before a machine achieves success in every dimension of the Turing Test. China has taken the first step in testing the boundaries of how far we can incorporate AI into society. The government leverages surveillance to wield control over its citizens, a power tactic that has met with disapproval in other countries.
The US Ban of Facial Recognition
States across the US have outlawed facial recognition, with major cities in California and Massachusetts banning operation of the technology for select purposes. For example, San Francisco has barred the use of it in the city’s 53 departments, including the police department and transport services. The vote was eight to one amongst the city’s Board of Supervisors. Aaron Peskin, one of the board’s members, considers the technology ‘fundamentally invasive.’
However, the law does not prohibit it from being used by residents on personal security cameras; this applies to businesses also. Furthermore, both the port of San Francisco and the international airport have been allowed leeway to deploy the technology in federally controlled facilities at these locations. Regardless, San Francisco has taken a stance against facial recognition, putting forward an argument that will inevitably spread as the technology does.
Algorithms of Repression
The Chinese government has deployed AI as ammunition for its ‘war on terror.’ The country uses its facial recognition to single out citizens who are deemed a danger to society. This has been the case with ethnic minority groups such as the Xinjiang Uighur Muslim community. Uighur Muslims in this province have been branded as likely terror suspects and suffer racial persecution. The Integrated Joint Operations Platform (IJOP) is a data system powered by AI that runs throughout the Xinjiang region. The platform pulls information from a multitude of sources: CCTV cameras, petrol stations, face scanners at security checkpoints, electricity consumption and religious practices.
Police officials use an app that is linked with IJOP to observe individuals’ activities and undertake investigations, should they stumble upon any ‘deviance.’ The app holds data such as a person’s height, blood type, and whether they have foreign contacts. It alerts the authorities when an individual has gone on hajj without express permission from the government. Cities throughout the region are rife with security checkpoints, where individuals can be flagged up as a potential threat and refused entry to services and public spaces. These include parks, shopping centres, banks and even hospitals. Discrimination to this degree is indicative of a flawed, repressive society that harnesses AI as a weapon.
IJOP’s reservoir of information enables officials to categorise individuals; if an individual behaves in a manner that strays from what is accepted as the norm, they will attract police attention and be listed as ‘unsafe.’ There are 36 ‘person types’ that specify suspicious behaviour. Examples include making donations to mosques, not socialising with neighbours and not using the front door enough. From an outside perspective, it is as clear as day that there is a severe lack of reasoning here. Everyday behaviour is twisted into something sinister, for officials to justify their hostility towards the Uighurs. An unsafe individual may find themselves detained indefinitely in a ‘transformation through education’ camp.
Xinjiang’s re-education camps specialise in indoctrination as a way of eradicating ‘extremism’ and reinforcing the country’s communist values. The majority of inmates are imprisoned without any criminal charges. This illustrates how the camps operate outside legal parameters. Detainees are forced to abandon Islam and pledge loyalty to the CPC. Their education consists of such things as learning Mandarin and singing the Chinese national anthem daily. They are degraded and their religion grossly disrespected; inmates are made to eat pork which is forbidden in Islamic law. It is social conditioning sponsored by racism and fuelled by the most unethical kind of AI.
The country’s social credit system is also a major point of controversy. The system’s primary source of information gathering is the mass surveillance network across the country. Government eyes survey the behaviour and habits of citizens, and from this, social credit scores are generated. Presently, there is no single system that runs nationwide. Provincial governing bodies and private companies have their own social credit structures in place, there being differences in how they each work. The Chinese government envisions a unified, self-contained system, running the length and breadth of the country by 2020. Given the current situation, it seems this might well materialise.
Who Let the Dogs Out
In 2017, the city of Jinan, capital of the Shandong province in Eastern China, imposed a scoring system aimed at responsible dog ownership. Registered dog owners receive a license and are allotted twelve points. Similar to a driving license, points are deducted for infractions; these include dog fouling, walking the dog without a lead, and failing to control the animal should it be causing disruption. Should an owner repeatedly reoffend, the penalties become harsher, including steep fines and the risk of having their pet temporarily removed from their care. Having your dog returned requires passing a test that deals with the codes of conduct regarding pet ownership.
What’s more, it is mandatory for dog owners to reregister their animals yearly; not doing this means temporary rehoming for the animal, until their owner fulfils their duties. On the other hand, owners can be rewarded with points for helpful and productive behaviour, such as offering one’s services at a shelter. This method of ensuring that citizens toe the line has yielded positive results. In 2018, a report was published that detailed some of the outcomes of the scheme, including a hefty 43 percent drop in complaints concerning dogs being walked off the lead. The state-owned Chinese newspaper, Legal Daily, called for it to be adopted across the country. After all, the proof is in the numbers.
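The mechanics of the scheme, as reported, boil down to a simple ledger: a twelve-point allotment, deductions for infractions, rewards for good behaviour, and restoration upon passing a test. The following is a hypothetical sketch of that ledger; the class, method names and point values are invented for illustration, not taken from the actual Jinan regulations.

```python
class DogLicence:
    """Illustrative model of a twelve-point dog-ownership licence."""
    STARTING_POINTS = 12

    def __init__(self, owner):
        self.owner = owner
        self.points = self.STARTING_POINTS
        self.dog_removed = False

    def deduct(self, points):
        """Deduct points for an infraction (fouling, no lead,
        disruption). Hitting zero means temporary removal of the dog."""
        self.points = max(0, self.points - points)
        if self.points == 0:
            self.dog_removed = True

    def reward(self, points):
        """Award points for productive behaviour, such as volunteering
        at a shelter; capped at the starting allotment."""
        self.points = min(self.STARTING_POINTS, self.points + points)

    def pass_ownership_test(self):
        """Passing the conduct test restores the licence and the dog."""
        self.points = self.STARTING_POINTS
        self.dog_removed = False

licence = DogLicence("an owner")
licence.deduct(6)
licence.deduct(6)
print(licence.dog_removed)  # → True
```

Seen this way, the scheme is just a state machine with the family pet as collateral, which is precisely what makes its gamified character unsettling.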
The scheme cracks down on what is deemed intolerable behaviour. There is an unparalleled stringency about it. By autumn 2018, 1,430 dog owners had received punishment for their carelessness. The system bears an eerie resemblance to The Sims 2 Pets video game: pet points, owner competency, bonuses for quality pet handling. Like the game, a citizen faces repercussions if they do not adhere to the rules of what is acceptable when owning a pet. It is unnerving to think that an aspect of life in China can be likened to that of a video game.
Look before you Cross
Like other cities across China, Jinan uses facial recognition to identify jaywalkers. The perpetrators are then subject to public shaming, with their faces posted on social media and screens at road junctions. This happens at lightning speed; facial recognition cameras snap pedestrians crossing when they shouldn’t and also capture a brief video. The individual’s identity is then sourced via a police database. This is then relayed onto a public screen in around twenty minutes. Personal information is also displayed, such as their home address and ID number. It is a humiliating and intrusive form of law enforcement and one that has knock-on effects. It can impact a person’s social credit score, meaning further ramifications for the individual.
Carrot and Stick
Pilot projects in different municipalities and regions have varying punishment and reward plans in place. Being banned from bullet trains, barred from purchasing plane tickets, having your child rejected by a prestigious school: these are a handful of the disciplinary measures. It is a game where points mean prizes, with AI umpiring. It extends even to time spent playing video games; too much of it will mean a deduction from your social credit score. The system is even being trialled within healthcare services. Those with a high score will be fast-tracked when waiting for a doctor. A reputable citizen deserves quicker treatment; the morality of this is, without a doubt, questionable.
Moreover, the algorithms make presumptions, spawning further complications. For instance, buying baby products indicates attentive and responsible parenting. This could boost a person’s score, the presumption here being that the person purchasing these items is a parent. Therefore, if the same person buys alcohol, their score might drop. Algorithms dictate acceptable lifestyles. AI fashions a default existence for citizens that grants them trouble-free lives, abundant with benefits. This is on the basis that they abide by state ideology.
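The context-blindness described above can be made concrete with a toy rule table. Everything here is invented for illustration (the categories, the point adjustments, the starting score); the real systems' rules are not public. The point is structural: a lookup table cannot know whether the alcohol buyer is a negligent parent or a sommelier.

```python
# Hypothetical purchase categories mapped to score adjustments.
RULES = {
    "baby_products": +5,   # presumed responsible parenting
    "alcohol": -3,         # presumed irresponsible lifestyle
    "video_games": -2,     # presumed idleness
    "study_books": +2,     # presumed self-improvement
}

def adjust_score(score, purchases):
    """Apply each purchase's rule to a running score. The adjustment
    is context-blind: it sees the category, never the circumstances."""
    for item in purchases:
        score += RULES.get(item, 0)  # unknown items leave the score unchanged
    return score

print(adjust_score(600, ["baby_products", "alcohol"]))  # → 602
```

The same parent whose nappy purchases raised their score is docked for the wine bought for a dinner party; the algorithm has dictated a lifestyle without ever understanding one.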
The decision-making process for what should be added to and subtracted from a person’s score is arbitrary and illogical. The process is premised on what is considered virtuous behaviour, and in China, virtue is manned by a fairly odd crew. Acts such as selling coal that does not meet certain specifications are wrongful and punishable deeds. And all of this sails on an expansive sea of data, accumulated by the government and by private technology and financial companies.
An example is Sesame Credit, owned by the Chinese fintech company Ant Financial. Sesame Credit is an opt-in system and runs in conjunction with Alipay, a payment app ubiquitous across China. It works by utilising the information gathered by Alipay to calculate a credit score for an individual. A person’s score is determined by a number of factors, including but not limited to: purchases made, online behaviour, relationships and the scores of those they are connected with, and personal details, e.g. whether they live at the address they are registered to.
Sesame Credit has a close partnership with the government, granting it access to a wealth of data; public records such as an individual’s financial history are obtainable. Ant Financial is one of eight companies that in 2015 were authorised by the People’s Bank of China to take the idea of social credit and run with it. Laying the foundation stones meant collaborating with the government to build software designed exclusively for this mission. Sesame Credit is a precursor of what is to arrive in 2020, should everything run smoothly. Although signing up is voluntary, the scheme emits the same message underpinning the compulsory system that will flood the nation next year: correct conduct merits a hassle-free life, as well as everyday perks and advantages.
Playing the Game of Life
The system provokes competition, constructing a social ranking system rooted in data. Some of the lifestyle bonuses a citizen with a high score can enjoy include easier procedures when applying for apartments, loans and deposit-free bike rentals. Those with lower scores are denied such treatment and can face consequences, such as being barred from dating apps. The appeal of Sesame Credit lies in the fact that it promotes transparency, urging people to achieve and retain their status as a ‘trustworthy’ individual. In turn, their compliance will be recognised with privileges.
However, the system poses a threat to how individuals view their worth: can their value as a person be translated into three digits? Accomplishing a harmonious society through this kind of social engineering can, to some extent, strip people of their sense of self. It reduces individuals to numbers dancing in an invisible vault and could lead to social isolation. Once the official state social credit system is underway, those with low scores may become subject to segregation and a poorer quality of life.
It’s Not All Doom and Gloom: Ethical AI in Government in Australia, Denmark and the UK
Contrary to the above discussion, technological developments and AI specifically can allow modern society to prosper, when used responsibly, ethically and with the public’s best interests at heart. The universal applications of AI have secured it a position within national governments, where it is employed for a variety of purposes, including public engagement. For example, in February 2016, the Australian Taxation Office introduced ‘Alex’, a virtual assistant for their website. By 30th June 2017, Alex was capable of answering over 500 questions and had dealt successfully with 81% of queries at first contact. As of this date, the chatbot had interacted with circa 1.5 million users. This is a prime example of how AI can be extremely beneficial when it comes to saving time and resources and providing a public service.
The UK government also uses AI; in their case, to combat fraud. Ministers within the Department for Work and Pensions have deployed AI algorithms fit for detecting fraudulent claims. The technology shields the welfare and benefits programme from criminal enterprises attempting to swindle the system. It safeguards taxpayers’ money and ensures benefit payments reach those who require and are eligible to receive them. The department fell on hard times in 2016, suffering losses to the tune of approximately £2.1 billion; this was an increase of £200 million on the previous year.
Departmental debt as a result of fraud is equivalent to the Foreign Office’s annual expenditure, a staggering £1.1 billion. The implementation of AI is proving to be a fruitful strategy, in terms of ensnaring criminal gangs and tackling the problem on a wider scale. It is set to slice losses in half or more, which should in turn boost the UK economy. It is estimated that the UK economy could gain an extra £630 billion by 2035, by virtue of AI across various sectors.
How AI operates in terms of fraud detection is a fascinating realm to explore. It is far more efficient, sparing human investigators the job of probing every claim that arouses suspicion and reducing the manpower needed at that stage of the process. The technology uses algorithms that can sift through large quantities of data quickly and pick up on patterns; these can include anomalies and recurring information in the data.
Furthermore, machines can identify techniques used frequently by scammers, such as fake identity cloning. A human investigator handles the case once a claim is exposed as potentially fraudulent. Additionally, enquiries are made into social media accounts; algorithms can unearth information that does not correlate with the claim in question. For instance, if an individual posts a photo of a recently purchased sports car, this is at odds with their entitlement to Jobseeker’s Allowance.
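The pattern-spotting step described above, surfacing anomalous claims so that only those reach a human investigator, can be illustrated with a deliberately simple stand-in: a z-score test on claim amounts. The DWP's actual models are not public; this sketch only shows the triage principle, and the data, function name and threshold are invented.

```python
import statistics

def flag_suspicious_claims(claims, z_threshold=2.0):
    """Flag claims whose amount deviates sharply from the rest, so a
    human investigator examines only the outliers rather than every
    claim. A toy z-score stand-in for real anomaly detection."""
    amounts = [c["amount"] for c in claims]
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:  # all claims identical: nothing stands out
        return []
    return [c for c in claims
            if abs(c["amount"] - mean) / stdev > z_threshold]

# Twenty ordinary claims and one wildly larger outlier.
claims = [{"id": i, "amount": 100} for i in range(20)]
claims.append({"id": 20, "amount": 5000})
print([c["id"] for c in flag_suspicious_claims(claims)])  # → [20]
```

Production systems replace the z-score with models trained on many features at once, but the division of labour is the same: the algorithm narrows the haystack and the human investigator judges the needle.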
AI mitigates the risk of criminals slipping through the net and stealing from those who are in genuine need and meet the criteria for financial support. In this sense, the technology serves and protects integrity, preventing the actions of those with skewed moral compasses. To put it plainly, the UK government has hopped on board the AI bandwagon and is reaping the benefits. It is embracing innovation, replacing laborious jobs with highly advanced technology that surpasses the work rate of a human. These machines have the ability to reshape the government’s approach to societal issues, performing tasks at an optimal level and attaining results with zero human input. The technology is vastly reducing the need for human effort, and it’s catching on fast.
The Great Danes
On a final note, the Danish government has set the benchmark for the responsible implementation of AI through its National Strategy for Artificial Intelligence, which made its debut earlier this year. The focal point of the strategy is that the development of AI within the country’s infrastructure will be set on an ethical framework, a far cry from the goings-on in China. This will guarantee the defence of fundamental human rights and make certain that organisations in both the private and public sector respect the boundaries of ethical AI.
For the technology to be successfully integrated into society without encroachment on human rights, it is imperative that those involved in research and development can differentiate between the ethical and the unethical in this field. Six ethical principles within the strategy constitute the framework: self-determination, equality and justice, responsibility, explainability, development and dignity. These will be the foundation stones for building AI that works in alliance with the core values of Danish society, these being equality, security, liberty and freedom. The ambition behind the strategy is that Denmark will be in the AI vanguard. The technology will enhance public services and businesses will flourish. This will be achieved by humans and science working hand in hand, with neither taking precedence over the other.
Handle with Care
AI is fertile ground when venturing beyond the frontiers of science and technology. However, like any discovery, it is vital that progress in this field does not come at the expense of humans. Scientific developments must go hand in hand with relevant legislation and liability, to defend against malicious and harmful intent. Only then can society thrive on the creations that knowledge and research spawn.