AI is here: what is the role of government?
You ask your smartphone virtual assistant to make an appointment for you. You receive a message alert from your bank enquiring whether you made a certain transaction. You receive recommendations for music or movies or online purchases based on your past behaviour. These are all examples of Artificial Intelligence (AI) entering your daily life.
Making sense of the terminology
What is AI?
There is no widely accepted definition of AI or of what constitutes AI. Definitions are usually based on some variation of computerized systems or computers exhibiting behaviour or thought that is normally demonstrated by humans or requires intelligence (which is itself hard to define). It could involve rationally solving complex problems or taking appropriate actions to achieve objectives in real-world circumstances.
There is a distinction to be made between Narrow AI, which focuses on specific, narrowly defined tasks such as autonomous vehicles and image recognition, and Strong AI, which refers to general intelligence and is closer to what most people imagine when they think of AI; it is close to the sentient AI of science fiction. Strong AI appears to be some way off, but the narrow version is already here.
Many instances of AI could just as easily be interpreted as applications of Big Data analytics. A problem is considered to require AI before it has been solved; once a solution is well known, it is treated as routine data processing and predictive or prescriptive analytics. The final product could be viewed as part of the exploding Internet-of-Things (IoT) network, or it could feed into Smart cities and Industry 4.0 by combining automation and prediction with human expertise.
AI, Machine learning and deep learning
When talking of AI, there are frequent mentions of machine learning and deep learning. Machine learning is one of the technical approaches to AI development, and deep learning is one of its subsets. It is the driver of many recent advances and applications. Machine learning is reliant on data: it starts with a body of data and then tries to derive rules that explain the data or predict future data. A definition by Tom M. Mitchell says, "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." It is no longer a question of whether the computer can think, but whether it can act as if it is thinking.
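Mitchell's definition can be made concrete with a minimal, self-contained sketch (the task and data here are hypothetical, chosen purely for illustration): the program's performance P (prediction error) on a task T (predicting y from x) improves with experience E (more training examples).

```python
import random

random.seed(0)

def make_example():
    # True relationship y = 2x plus noise; the program must discover the 2.
    x = random.uniform(1, 10)
    return x, 2 * x + random.gauss(0, 1)

def fit_slope(pairs):
    # Experience E -> a rule: least-squares slope through the origin.
    return sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

def error(slope, test_set):
    # Performance measure P: mean absolute prediction error on task T.
    return sum(abs(y - slope * x) for x, y in test_set) / len(test_set)

test_set = [make_example() for _ in range(200)]
for n in (5, 50, 500):
    slope = fit_slope([make_example() for _ in range(n)])
    print(n, "examples -> test error", round(error(slope, test_set), 3))
```

With noiseless data the rule is recovered exactly; with noisy data the estimated slope typically approaches 2 as n grows, which is the sense in which the program "learns from experience".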
Then there is the distinction between 'supervised' and 'unsupervised' learning. Unsupervised learning is open-ended: it presents a learning algorithm with an unlabelled set of data and asks it to find structure in the data. There are no right or wrong answers. Supervised learning uses a labelled data set to train a model, which can then be used to classify or sort a new set of data.
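A minimal sketch of the contrast, using made-up one-dimensional data: the unsupervised algorithm is only asked to find structure, while the supervised one learns from labelled examples with right and wrong answers.

```python
# Toy 1-D data: two clusters, around 1.0 and around 10.0.
data = [0.8, 1.1, 1.3, 9.7, 10.2, 10.4]

# Unsupervised: no labels -- a two-means clustering sketch that discovers
# the two groups by alternately assigning points and updating the centres.
centres = [data[0], data[-1]]
for _ in range(10):
    groups = [[], []]
    for x in data:
        groups[abs(x - centres[0]) > abs(x - centres[1])].append(x)
    centres = [sum(g) / len(g) for g in groups]
# centres converge near the two cluster means (about 1.07 and 10.1)

# Supervised: a labelled training set defines the right answers.
labelled = [(0.9, "low"), (1.2, "low"), (9.9, "high"), (10.3, "high")]

def classify(x):
    # Nearest labelled neighbour decides the class of a new point.
    return min(labelled, key=lambda pair: abs(x - pair[0]))[1]

print(classify(0.5))   # prints "low"
print(classify(11.0))  # prints "high"
```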
Deep learning does machine learning using structures inspired by the human brain: networks of units or "neurons". Each unit combines a set of input values to produce an output value, which is in turn passed on to other neurons downstream. Networks frequently have more than 100 layers, often with a large number of units in each, enabling the recognition of extremely complex, precise patterns in data.
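The unit just described is simple to write down. Below is a minimal sketch (the weights are hand-picked for illustration, not learned): each unit takes a weighted sum of its inputs plus a bias and passes it through an activation function, and one layer's outputs become the next layer's inputs.

```python
import math

def neuron(inputs, weights, bias):
    # One unit: combine a set of input values into a single output value.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation, output in (0, 1)

# A tiny two-layer pass with hypothetical weights.
x = [0.5, -1.0]
hidden = [
    neuron(x, [1.0, -1.0], 0.0),   # unit 1 of the hidden layer
    neuron(x, [-0.5, 0.5], 0.1),   # unit 2 of the hidden layer
]
# The hidden layer's outputs feed the downstream output neuron.
output = neuron(hidden, [2.0, -2.0], -0.5)
print(output)
```

Real deep networks stack many such layers and learn the weights from data rather than setting them by hand.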
What does it mean for governments?
The availability of unprecedented amounts of data, further augmented by the data deluge from IoT sensors, along with relatively inexpensive massively parallel computational capabilities and improved learning techniques, has led to significant leaps in AI capabilities and will continue to do so for the foreseeable future.
The pace is accelerating, and governments need to figure out how to deal with it. If they do it right, harnessing the opportunities and mitigating the threats, then AI could help us overcome many of the world's biggest challenges and improve people's lives. Using technology to save the world might have become a utopian cliché. But AI, with its vast range of potential applications, could be truly transformative if used and regulated with thoughtful foresight.
Since October, the White House has released three papers* providing in-depth analysis of the implications of AI, the outline of a government strategy, and AI's potential impact on the economy and how to deal with it. Also in October, the UK House of Commons Science and Technology Committee called on the government to establish a commission on artificial intelligence to provide global leadership on the social, legal and ethical implications of AI. We drew on these papers, other reports, and our interactions with governments in the Asia-Pacific region for the analysis that follows.
Direct applications for government
AI can improve the design and delivery of essential government services in areas such as health, social care and emergency services by:
- Enhancing efficiency by predicting demand and tailoring services to requirements, making it easier for officials to make informed decisions and creating responsive services (Example- In August, the Land Transport Authority in Singapore launched Autonomous Mobility-On-Demand Trials, complementing existing public transport with shared mobility-on-demand services powered by fleets of self-driving vehicles.)
- Automating interaction where possible to make responses faster, as close to real-time as possible, and providing more intuitive interfaces (Example- In November 2016, at the World Cities Summit, Dr Vivian Balakrishnan, Minister for Foreign Affairs and Minister-In-Charge of the Smart Nation Initiative, talked about developing a conversational computing platform using intelligent software programmes known as chatbots. It will be rolled out in three stages. In the first phase, chatbots will draw on a stored database to answer simple factual questions from users, whether spoken or via text input, about selected public services. In phase two, the bots will be able to help the public complete simple tasks and transactions within selected government websites. In the final phase, chatbots are expected to respond to even more personalised queries from users.)
AI will also play an essential role in security in three ways:
- Autonomous offence and defence- Some degree of autonomy has been present in weapon systems for a long time, for example in precision-guided munitions. In recent years, unmanned aerial vehicles (UAVs), more commonly known as drones, have been a centrepiece of the American war on terror; they have AI-based autopilots. Further automation of weapon systems, which appears inevitable, raises legal and ethical issues. International standards for such weapons need to be arrived at by consensus.
AI can also play a role in defence. At the Defence Technology Prizes for 2016 in Singapore, two of the winning projects involved the collation of information from multiple sources, real-time processing and seamless dissemination to facilitate decision-making.
- Local law enforcement- Local law enforcement can use pattern detection to spot anomalous behaviour in individual actors, or to predict dangerous crowd behaviour. Intelligent perception systems can protect critical infrastructure, such as airports and power plants. In the US, the criminal justice system is already using data-based decision-making through projects such as Data Driven Justice and the Police Data Initiative.
- Cybersecurity- AI could anticipate cyberattacks by generating dynamic threat models from the voluminous, ever-changing data available from multiple sources. It could help in understanding the behaviour of users and in dealing with insider threats. Advanced AI systems could detect, evaluate, and patch software vulnerabilities before adversaries have a chance to exploit them.
Understanding diverse applications: promoting innovation and regulating
AI has the potential to improve social well-being by transforming a number of areas. A few examples could be:
- Education- Cognitive virtual tutors can provide a customised learning experience for students, depending on their objectives and requirements. They can enable lifelong learning by thoroughly understanding a person's learning process and facilitating the acquisition of new skills.
- Medicine- AI can help identify genetic risks based on large-scale genomic studies. It can help in diagnostics and prescribe personalised treatments. It can predict the safety and efficacy of new pharmaceuticals. Earlier this year, Dr. Tan Tin Wee spoke to us about his vision of a seamless bench-to-bedside flow, creating personalised medical treatments dynamically and with great precision in the not too distant future.
- Finance- AI can enable early detection of unusual financial risk, and automation in financial systems can reduce opportunities for malicious behaviour, such as market manipulation, fraud and anomalous trading. AI systems can increase efficiency and reduce volatility and trading costs, while helping to prevent systemic failures such as pricing bubbles and the undervaluing of credit risk.
Teaming humans and machines together can be more effective than either alone and can reduce error rates. In a recent study (Deep Learning for Identifying Metastatic Breast Cancer), given images of lymph node cells and asked to determine whether or not the cells contained cancer, an AI-based approach had a 7.5 percent error rate, whereas a human pathologist had a 3.5 percent error rate; a combined approach, using both AI and human input, lowered the error rate to 0.5 percent, representing an 85 percent reduction in error.
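The arithmetic behind the quoted reduction is straightforward: the combined system removes most of the error the pathologist alone would make.

```python
human_error = 3.5      # percent error, pathologist alone
combined_error = 0.5   # percent error, AI plus pathologist combined

# Fraction of the human error eliminated by the combined approach.
reduction = (human_error - combined_error) / human_error
print(f"{reduction:.1%}")  # prints 85.7%, reported as roughly 85 percent
```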
AI’s potential is limited only by our imagination. The University of Southern California launched the Center for Artificial Intelligence in Society, an institute dedicated to solving socially relevant problems in areas such as climate change, security, health and homelessness using computational game theory, machine learning, automated planning and multi-agent reasoning techniques. The Sustainability and Artificial Intelligence Lab at Stanford University combines machine learning with high-resolution satellite imagery to provide new data on socioeconomic indicators of poverty and wealth.
Impact on the economy and jobs
Increasing penetration of AI would almost certainly lead to overall productivity gains for the global and national economies. A 2015 study based on data from industries in 17 countries between 1993 and 2007 found a 0.37 percentage point increase in countries' average annual growth rates. However, concerns have been raised regarding the distribution of the benefits and the effects on the job market.
During the past 250 years, since the onset of the industrial revolutions, there have been anxieties that technology would render jobs obsolete. But repeatedly, the productivity improvements have resulted in average wage increases for the overall economy. Some jobs disappear and better, higher paying new jobs are created. Workers have been able to devote more time to leisure and are able to afford to consume more goods and services.
Will history repeat itself this time or will it be different? Predictions can only go so far. But as a UN report discusses, it is about substitution and complementarity and susceptibility to automation. Routine tasks, which can easily be broken down into detailed steps, can be codified and automated. But jobs involving judgement, creativity and persuasion, or requiring adaptability and in-person interaction, are much less vulnerable, at least for the time being. In the latter, AI can complement humans, resulting in improved outcomes. In fact, it can free up workers so that they can focus on more critical tasks.
Some governments are beginning to look into the implications and to plan for the future. If they do not do so fast enough, there could be serious repercussions for the stability of political, economic and social systems.
So, what is the government’s role in this?*
- Investing in and promoting research and development- At the moment, most AI research is driven by the private sector and academia (the former is snapping up much of the talent from the latter). For the Manhattan Project and the Apollo program in the US, peak-year funding reached 0.4% of GDP. Here too, governments that are able to can invest in AI research, in partnership with the private sector, in areas of strategic national interest or with direct effects on the public good. They can set ambitious but achievable goals, and encourage and incentivise investment in the sector. Smaller companies might not be in a position to invest in AI research because they would have to wait too long for returns, yet some of the best, most innovative ideas might come from them. Public platforms can facilitate the participation of SMEs.
- Building and driving partnerships- Continuing from the previous point, governments might be best placed to build the requisite coalitions, bringing together industry and academia and steering applications in the right direction. International collaboration is also essential to realise the full potential of AI, and governments have to start taking the initiative there.
- Early adoption of AI technologies and their applications and initiating pilots for public use- Governments can integrate AI into the delivery of services, as mentioned previously, to improve the lives of citizens. However, not all agencies will have the budget to invest in R&D, even though they might benefit greatly from AI applications. Systems therefore have to be established for knowledge exchange and the sharing of best practices across government. Centralised platforms (like the ones adopted by the UK government) can play an important role once the technology reaches a higher maturity level.
- Ensuring data availability- The greater the volume and the better the quality of the available data, the stronger the AI. Public sector agencies often hold some of the best, most valuable data. Governments should galvanise agencies to release this data without compromising privacy and security. Open data standards have to be set where they do not yet exist, and platforms provided for releasing the data. The private sector can also be encouraged to share data to accelerate AI research.
- Funding rigorous evaluations of AI applications to measure their impact and cost-effectiveness- One of the biggest challenges in AI is to transition safely from the "closed world" of the laboratory to the outside "open world", with its unpredictable conditions. A 2016 study identifies five problems: having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable oversight"), and undesirable behaviour during the learning process ("safe exploration", and "distributional shift", i.e. operating in an environment different from the training environment). Driven by competition, the private sector might rush to market without adequate testing (an issue also seen in cybersecurity). Governments can play a crucial role there. They can also drive the inclusion of ethics in training and education programmes, so that practitioners understand their responsibilities to all stakeholders. Some of the ethical issues at hand can be seen in this article at IMDA Singapore.
- Creating a policy, legal, and regulatory environment that achieves the right balance: Drafting regulation is tricky: it must not stifle innovation while a technology is still evolving and yet to achieve widespread adoption, but it must also avoid negative impacts on society. Effective regulation requires staff knowledgeable about the existing regulatory framework and regulatory practice generally, as well as technical experts with knowledge of AI. The necessary technical talent must be recruited or identified among existing agency staff, and involved in regulatory policy discussions. New regulations should be created only when required. For instance, for autonomous vehicles, current regulation can provide the structure, with only the necessary additions made.
- Ensuring education and training for jobs of the future: Governments have to invest heavily in high-quality education, with stress on STEM areas (science, technology, engineering and mathematics) and familiarity with computers and computer science. There will be some element of computer science in many more areas going forward, and the process has to start early, from primary school education. Universities can modify courses or introduce new ones to meet market demands and make their students employable. Governments also have to assume responsibility for re-training workers, so that they can find a place in the brave new world. (Governments in Singapore and Australia are already planning for this future.)
Preparing For The Future Of Artificial Intelligence, Executive Office Of The President, National Science And Technology Council, USA
The National Artificial Intelligence Research And Development Strategic Plan, Executive Office Of The President, National Science And Technology Council, USA
Artificial Intelligence, Automation, And The Economy, Executive Office Of The President, USA
Automation and artificial intelligence – what could it mean for sustainable development?, United Nations Department of Economic and Social Affairs
Artificial intelligence: opportunities and implications for the future of decision making, Government Office for Science, UK