Dr Janil Puthucheary, Senior Minister of State, Ministry of Education and Ministry of Communications and Information, and Minister-in-charge of the Government Technology Agency. Credit: Lee Kuan Yew School of Public Policy

Minister-in-charge of GovTech on Singapore’s approach to AI governance

“The hyperbole, the speculation, the breathless commentary on artificial intelligence (AI) can potentially generate quite a lot of confusion, quite a lot of anxiety, but that process doesn’t necessarily lead us any closer to a solution.”

Dr Janil Puthucheary, Senior Minister of State, Ministry of Education and Ministry of Communications and Information, and Minister-in-charge of the Government Technology Agency (GovTech), opened his public lecture on the ‘Societal implications of AI’ at the Lee Kuan Yew School of Public Policy (LKYSPP) on 6 April with these words. The session was moderated by Eduardo Araral, Associate Professor and Co-Director of the Institute of Water Policy.

In his address, the Senior Minister of State attempted to cut through the hyperbole. He discussed the role of government policy and regulation and outlined Singapore’s approach to AI.

He said that a healthy exchange of considered views among regulators, academics, practitioners and technical professionals is needed to work out a practical approach to the socio-economic issues and concerns posed by AI.

A pragmatic policy response

Dr Puthucheary pointed out that the hyperbole suggests AI is already transformative to the point of causing huge disruption. In reality, many of the technologies mature enough for deployment today, such as image recognition and speech processing, are narrow and function-specific. They are focused on existing business opportunities and processes, and they augment and assist human decisions.

Also, as the technological revolution progresses, what was previously seen as ground-breaking technology within the remit of AI researchers becomes commonplace, accepted computational capability.

“It means that from a policy perspective, we need to focus on the risks and issues that can be foreseen on the basis of what is happening today and perhaps accept that there will be unknown unknowns and we will deal with them when the time comes,” Dr Puthucheary said.

This requires a pragmatic policy response that allows the technology to develop and mature without hindrance, while protecting public interests.

The speed, the scale and the pocket-sized nature of today’s computing power might be new, but the approach is not. Singapore has taken this approach through previous technological revolutions and has always recognised that risks are inherent in all systems, just as they are in human-controlled systems. The question, then, is how potential risks can be mitigated while the benefits are maximised. The idea is to develop a risk-based approach that builds accountability and trust.

Key features of the Singapore approach

Being proactive and providing early regulatory clarity

Dr Puthucheary said, “On a national basis, we need to, we want to, and we will be pro-active. We need to be an active player and influence and shape the development of this space.”

This complements Singapore’s investment of hundreds of millions of dollars in developing AI capability, through initiatives such as the Smart Systems Strategic Research Programme and AI Singapore, along with support for industry to adopt AI and the use of AI to improve public service delivery.

Alongside these initiatives, Singapore has to develop governance frameworks to support the trustworthy and acceptable use of AI. Doing so early, and thereby providing regulatory clarity, will encourage businesses to invest in Singapore and create jobs, and will also help in the development of international norms.

Moving towards an industry-based approach

Dr Puthucheary said that Singapore has started putting in place structures and programmes to address AI risks and governance approaches.

There is a regulators’ roundtable, bringing together regulators from domains such as transport, health and finance. This horizontal approach of looking across domains is useful in areas where AI development is at a nascent stage and it is not yet clear where AI’s impact might require a regulatory response.

“Where things are still developing, we are looking horizontally across domains, trying to bring the skills together, making sure we produce the platforms, such that when the knowledge is there, when the opportunity is there, they will be rapidly propagated across our system,” Dr Puthucheary explained.

But in sectors where disruption is already happening, Singapore is taking an industry-specific approach. One example is finance. The Monetary Authority of Singapore (MAS) is bringing together thought leaders, practitioners of data analytics and various professionals in the financial sector to develop a guide for promoting the responsible and ethical use of AI and data analytics by financial institutions.

While setting up broad horizontal structures, Singapore aims to move towards an industry-specific approach as far as possible. Dr Puthucheary emphasised, “When it becomes clear that there is the need, the capability and the regulatory opportunity to do so, go for an industry-specific approach.”

Adoption of cross-industry principles for Explainable, Fair and Safe AI

Though there is no one-size-fits-all solution or generic AI risk governance framework, some broad principles are required for public accountability and business confidence.

1. The first is to ensure, as far as possible, that algorithmic decisions, the output of AI-driven processes, are explainable and transparent to a reasonable degree.[1]

“But from the point of view of the relationship between the industry and the regulators, we need to develop the maturity around that process to expect that we have to explain ourselves to the public about these AI-driven algorithmic decisions in a way that the public accepts is sufficiently transparent and fair,” the Senior Minister of State said.

2. The second principle is about fairness, in the sense of the removal of human bias. 

Depending on how we engineer AI-driven solutions, there is a possibility that existing human biases will get hardcoded into the system, and that they will be amplified and institutionalised.

So, it needs to be objectively demonstrable that the deployment of an AI-based or algorithm-based solution is not amplifying, institutionalising or coding in human bias. This includes bias that may have been present before or new bias that has been allowed to creep in (one simple way to check this is sketched after the three principles below).

In areas such as healthcare, defence and security, this is of critical importance.

3. The third and final principle is safety. 

As increasingly consequential decisions are entrusted to these support technologies, the public has to be repeatedly reassured that its safety and well-being are protected.

The tolerance of harms and risks needs to vary with industry. But as Singapore goes down this development path, there has to be high regard for the role of safety. Both industry and regulatory organisations have to assume responsibility for risk and impact assessments, adequate testing, and necessary limits to mitigate harm.

Dr Puthucheary cited the Land Transport Authority (LTA)’s approach to autonomous vehicles as an example. LTA has imposed a staged testing process that progressively moves up to more challenging conditions and wider spaces, and holds autonomous vehicles to a safety bar much higher than that applied to a human driver.

This seems to go against the grain of the argument that autonomous vehicles will ultimately be safer than human drivers. But on the journey to that ultimate objective, Dr Puthucheary explained, the argument has to be reversed: the regulator must ensure that the higher safety standard is delivered and objectively demonstrated.
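
To make the idea of an objectively demonstrable fairness check concrete, here is a minimal sketch of one such check, a demographic-parity comparison of a model’s approval rates across groups. The data, group labels and tolerance are hypothetical and for illustration only; they are not drawn from the lecture or from any Singapore framework.

```python
# Minimal sketch: comparing a model's approval rates across two groups
# (demographic parity). The outcomes, group labels and the tolerance
# value below are hypothetical, used purely for illustration.

def approval_rate(decisions):
    """Share of positive (approve) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest difference in approval rate between any two groups."""
    rates = {group: approval_rate(d) for group, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical model outputs (1 = approved, 0 = rejected) per group.
    outcomes = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
    }
    gap, rates = parity_gap(outcomes)
    print("Approval rates:", rates)
    print(f"Parity gap: {gap:.2f}")
    # A deployment-specific tolerance would be set by the regulator or industry.
    TOLERANCE = 0.05
    print("Within tolerance" if gap <= TOLERANCE else "Potential bias flag")
```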

Conclusion

The Government needs to be agile in responding to a rapidly evolving technology such as AI. All of the above might need to change six months from now.

The Government must also continue to invest in people and ensure that Singapore has the right educational base; develop infrastructure to support the connectivity that is a key requirement for the development of these types of technologies and their deployment; and develop public sector capabilities to ensure that these tools are used maximally for the public good.

The Government wants to partner with industry every step of the way, from thinking about development and research, to regulation, and finally to providing these solutions.

“We are only at the start of what could potentially be a very exciting and transformative journey around deployment and development of AI,” Dr Puthucheary concluded.

“We should move away from the hyperbole, think about it with some degree of pragmatism, but also a bold vision. We have to believe that this is a space which we can get into and do things with, manage the risks, reduce the harm, and ultimately exploit the benefits to help our society.”

[1] Earlier algorithmic decision-making systems relied on rules-based, “if/then” reasoning. Machine learning, and its subset deep learning, instead create more complex models in which it is difficult to understand why and how decisions were made. That is why deep learning systems are often referred to as black boxes.
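
As a rough illustration of that contrast (not drawn from the lecture), the sketch below places a transparent if/then rule next to a small stand-in for a learned model whose numeric weights carry no human-readable meaning. The rule, feature values and weights are invented for the example.

```python
# Illustrative contrast between a transparent rules-based decision and an
# opaque learned scoring function. The rule, features and weights are invented.

def rules_based_decision(income, debt):
    # Every step of the reasoning can be read and audited directly.
    if income > 50_000 and debt < 10_000:
        return "approve"
    return "reject"

# Stand-in for a trained model: weights learned from data, with no
# human-readable meaning attached to any individual number.
LEARNED_WEIGHTS = [0.83, -1.47, 0.05, 2.31]

def learned_model_decision(features):
    score = sum(w * x for w, x in zip(LEARNED_WEIGHTS, features))
    return "approve" if score > 0 else "reject"

if __name__ == "__main__":
    print(rules_based_decision(income=60_000, debt=5_000))  # reasoning is explicit
    print(learned_model_decision([0.2, 0.1, 3.0, 0.4]))     # score ~1.09 -> approve, but why?
```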
