Regulation of AI in Singapore amidst a World of AI Proliferation

Abstract

The article "Regulation of AI in Singapore amidst a World of AI Proliferation" examines Singapore's regulatory response to the rapid global spread of artificial intelligence (AI). Although Singapore lacks comprehensive AI laws, it has implemented frameworks like the Personal Data Protection Act and the Model AI Governance Framework to ensure transparency, accountability, and ethical AI use, particularly in sectors like finance and healthcare. This article will compare Singapore's approach to AI governance with that of its regional counterparts in ASEAN and Asia and with international standards such as the EU's AI Act. The analysis will evaluate whether Singapore's existing light-touch regulatory model is sufficient in an environment of accelerating AI deployment. It will also explore how Singapore's National AI Strategy aims to build an AI-ready workforce and foster international collaboration while emphasising the need to balance innovation and regulatory oversight in an evolving technological landscape.

I. Introduction

Amidst the rapid proliferation of AI globally, Singapore has taken a distinct, light-touch approach to AI governance. Existing frameworks such as the Model AI Governance Framework and the Personal Data Protection Act have served as the foundation of this approach in place of stringent AI-specific legislation. These frameworks are crucial in addressing challenges in transparency, accountability, and ethical AI use in critical sectors such as finance and healthcare, where Singapore hopes to achieve a delicate balance between innovation and oversight. This article explores the efficacy of Singapore's AI governance by benchmarking it against global and regional standards. Firstly, the article will evaluate whether the current model adequately regulates AI in an era of accelerating AI development. Secondly, the article will analyse Singapore's National AI Strategy, elucidating Singapore's focus on workforce development and international collaboration. Finally, the article will consider the necessity for Singapore to continuously refine its regulatory framework to stay ahead of AI advancement and remain a global leader in ethical AI adoption.

II. Singapore’s AI governance frameworks

A. Model AI Governance Framework (“MAGF”)

The MAGF provides organisations with practical guidelines to deploy AI responsibly. The framework addresses areas such as internal governance, human involvement in AI decisions, operations management, and stakeholder communication1. The MAGF has also been expanded to address generative AI concerns by introducing nine key dimensions: accountability, data quality, trusted development, incident reporting, testing and assurance, security, content provenance, safety and alignment research, and AI for public good2. Through this, the MAGF is intended to comprehensively guide the ethical use of emerging AI technologies.

In the finance and healthcare sectors, the MAGF has excelled in ensuring transparency, fairness, and explainability for AI systems. In the finance sector, the MAGF has strengthened traditional credit scoring by promoting fairness audits and transparency in decision-making, helping to ensure equitable access to credit3. As such, public confidence in financial institutions has improved, and systemic biases in lending practices have been reduced.

In the healthcare sector, bias in diagnostic tools is mitigated by ensuring training datasets are balanced and tools are explainable4. In Singapore, AI systems used for early detection of diabetic retinopathy are implemented under MAGF guidelines. These systems improve detection rates and address demographic variations, which is apparent from their deployment in community health settings5. Furthermore, the MAGF aids clinicians in interpreting AI-driven diagnostic outcomes, thereby establishing a connection between AI and medical practice6.

The MAGF has achieved success because it is flexible and allows for integration across industries without imposing rigid constraints. Through its sector-agnostic profile, organisations are able to adapt MAGF’s principles incrementally, fostering innovation while maintaining ethical standards. However, as adherence to MAGF is currently voluntary, its adoption is limited, especially in less regulated sectors and organisations lacking incentives to prioritise ethical AI practices. Moreover, smaller organisations with resource challenges often struggle to implement the framework.

B. Personal Data Protection Act 2012 (“PDPA”)

Similarly, the PDPA safeguards privacy and security within data-intensive AI systems. Its key provisions impose strict controls on data management, requiring organisations to obtain consent before collecting data, to process it transparently, and to secure personal information effectively. The framework is enforced through significant penalties, promoting accountability and reinforcing trust in those responsible for handling data7. This is evident in the PDPA's requirement for AI-driven customer analytics tools to anonymise user data, thereby balancing innovation and privacy protection8. As such, the PDPA provides clear, structured guidelines that help organisations navigate complex data protection requirements while mandating ethical data use, fostering responsible practices across industries. This has allowed the PDPA to enhance public confidence in AI systems.

Nevertheless, the PDPA faces challenges posed by emerging risks such as cross-border data sharing in machine learning processes. These risks highlight gaps in the framework that require revision to address identified vulnerabilities9. Furthermore, the rapid evolution of AI technologies requires the PDPA to be dynamic and evolve through periodic revisions to ensure the PDPA remains effective in governing AI systems in Singapore.

III. Comparison of Singapore’s approach to global standards

A. European Union Artificial Intelligence Act (“EU AI Act”)

The EU AI Act operates on a prescriptive, risk-based framework that categorises AI systems into four risk categories: unacceptable, high, limited, and minimal. High-risk systems such as medical diagnostics and biometric identification tools face more stringent compliance requirements, including accountability measures, risk assessments, and transparency obligations10. In contrast, minimal-risk systems like spam filters and AI-powered email categorisation tools are largely exempt from regulatory scrutiny, but must adhere to ethical guidelines and transparency principles11. Hence, the EU AI Act prioritises accountability and public safety, while ensuring adequate and robust safeguards against potential AI risks.

However, significant challenges arise due to the EU AI Act’s strict requirements. In particular, small and medium enterprises (“SMEs”) may be disproportionately affected by the high compliance costs and administrative complexities, thereby stifling innovation12. Singapore’s MAGF addresses these concerns by emphasising flexibility and voluntary compliance. This light-touch approach lowers barriers to entry for SMEs, balancing innovation and ethical oversight. The adaptability of this framework provides sector-specific guidance that allows organisations of different profiles to implement AI responsibly without the concerns that arise from the rigidity of a one-size-fits-all regulatory framework.

B. China’s AI regulatory framework

China’s approach to AI governance involves algorithmic transparency and content moderation. Recommendation algorithms are mandated to disclose their operational mechanisms to the government to address critical concerns such as misinformation and national security13. China prioritises accountability, requiring platforms to ensure the responsible use of AI, especially in sensitive areas. For example, there is significant emphasis on the regulation of AI-driven news recommendation systems utilised by platforms that are intended to minimise bias and safeguard against the spread of misinformation14.

Although China’s stringent regulations have effectively addressed pressing issues, they may inadvertently hinder creativity and innovation. Developers are limited in their algorithmic experimentation, which can stifle progress in emerging AI applications such as personalised content delivery15. Conversely, Singapore’s voluntary compliance approach avoids restrictive mandates to foster meaningful collaboration and innovation. Hence, Singapore is able to advocate for businesses to integrate responsible practices without compromising creativity.

C. AI regulatory framework of ASEAN counterparts

Singapore's regional partners in ASEAN, such as Malaysia and Indonesia, have adopted a policy of voluntary compliance for AI regulation in line with the ASEAN Guide on AI Governance and Ethics ("ASEAN Guide"). The regional framework's primary focus is on flexibility, cultural sensitivity, and collaboration, prioritising growth by avoiding rigid mandates16. The guide accommodates each member nation's diverse economic and cultural nuances, allowing tailored AI adoption strategies to be conceptualised in line with local priorities.

In Malaysia, the AI governance framework prioritises responsible AI use in sectors such as financial services. For instance, banks in Malaysia that utilise AI for fraud detection must conduct regular audits of their algorithms and maintain clear documentation records on the fraud identification process for provision to regulators17. This is pertinent as, without proper oversight, such algorithms can potentially reinforce biases in training data and mistakenly flag transactions disproportionately for specific demographics or socioeconomic groups. As such, the framework does well in requiring financial institutions to validate their AI models to ensure their AI systems are unbiased and explainable.

Singapore's approach to AI regulation closely aligns with the ASEAN Guide's flexible governance model, reinforcing its regional leadership. By implementing ethical AI practices rooted in shared philosophies, Singapore sets a gold standard for striking a balance between innovation and accountability in the ASEAN region. Singapore's proactiveness in shaping regional policies is indicative of its commitment to promoting further collaboration within ASEAN and addressing global AI challenges18.

IV. Emerging challenges in AI governance

A. Risks posed by Generative AI

Generative AI introduces several risks ranging from hallucinations to copyright violations. Hallucinations occur when AI generates false or misleading content for users. This is problematic, particularly in crucial sectors such as education, where Generative AI can misinform students with fabricated information, compromising their learning integrity. This is seen when AI tools produce incorrect historical facts or scientific concepts. In healthcare, hallucinations can have dire consequences, which include erroneous diagnoses or treatment recommendations. Thus, patient safety may be endangered, eroding trust in AI systems19.

Although the Model AI Governance Framework for Generative AI adopts guidelines that focus on voluntary compliance to address the identified risks, its inherent optional nature reduces its enforceability and impact, particularly in critical sectors20. The risks posed by Generative AI hallucinations may be mitigated by mandating certain safeguards, such as requiring AI systems utilised in the healthcare sector to validate outputs against clinical standards.

B. Risks posed by algorithmic bias

A core issue in AI governance is algorithmic bias, which arises from imperfect training datasets that tend to perpetuate discrimination. For example, in hiring, AI algorithms have favoured specific demographics, exacerbating inequalities in the recruitment processes21. The MAGF has proven useful in providing tools to identify and mitigate bias through regular audits of algorithms and fairness assessments. Thus, organisations have been able to address bias in hiring processes better.

However, the voluntary nature of MAGF guidelines reduces their overall impact. Mandating compliance in areas such as employment and credit scoring could do better to ensure fair outcomes and rally public support for AI-driven decisions. In particular, requiring regular audits on hiring algorithms to determine and root out bias could promote diversity and reduce discriminatory outcomes.
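As an illustration, a fairness audit of the kind described above often begins with a simple comparison of selection rates across demographic groups. The Python sketch below uses invented group labels and the widely used "four-fifths" heuristic (each group's selection rate should be at least 80% of the most-favoured group's); it is a minimal assumption-laden example, not a prescribed MAGF procedure.

```python
from collections import Counter

def selection_rates(decisions):
    """Compute each group's selection rate from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, picked in decisions:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths_rule(rates):
    """Audit heuristic: every group's selection rate should be at least
    80% of the most-favoured group's rate."""
    best = max(rates.values())
    return all(r >= 0.8 * best for r in rates.values())

# Hypothetical audit sample: group "B" is selected far less often
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(passes_four_fifths_rule(rates))  # False: B's rate is under 80% of A's
```

A real audit would examine multiple metrics (for example, equalised odds or calibration by group), but even this basic check surfaces the kind of disparity a mandated review could catch.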

C. Data privacy and security

Due to AI’s reliance on extensive datasets, data breaches and misuse risks are prominent, especially in cross-border data sharing. The PDPA includes provisions for robust protection by enforcing stringent controls for data collection, processing and storage22. One way organisations meet this requirement is by anonymising sensitive data and securing it against unauthorised access through two-factor authentication.
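The anonymisation step mentioned above can be sketched in code. This is a minimal pseudonymisation example using keyed hashing; the key, field names, and record shape are hypothetical, and true anonymisation under the PDPA may call for stronger techniques such as aggregation or k-anonymity.

```python
import hashlib
import hmac

# Hypothetical key; in practice this would live in a secrets manager
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash, so records can still
    be linked for analytics without exposing the raw value."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Hypothetical analytics record: only the hashed identifier is retained
record = {"customer_id": "S1234567A", "age_band": "30-39", "spend": 420.50}
anonymised = {**record, "customer_id": pseudonymise(record["customer_id"])}
assert anonymised["customer_id"] != record["customer_id"]
```

Keyed hashing (rather than a plain hash) matters here: without the secret key, an attacker could re-identify individuals by hashing candidate identifiers and matching the outputs.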

Nevertheless, the PDPA remains too static. Hence, its provisions may not adequately address vulnerabilities that continue to evolve and arise, such as from international data transfers involving machine learning models23. By being cognisant of the advancements in AI systems and their risks, the Personal Data Protection Commission may assess gaps within the PDPA that require updates. Subsequently, measures focusing on revising the PDPA may be adopted, such as introducing standards for secure cross-border data sharing that could enhance data protection.

V. Focus of Singapore’s National AI Strategy

A. Workforce development

Singapore's National AI Strategy prioritises workforce development, aiming to equip Singaporeans to thrive in an AI-driven economy. For instance, Singapore has introduced the TechSkills Accelerator (TeSA), spearheaded by the Infocomm Media Development Authority (IMDA). TeSA provides targeted training in key areas relevant to AI, such as AI ethics, algorithmic governance, and data analytics24. This is evident in TeSA programs training financial services professionals to address algorithmic bias in credit-scoring systems. In particular, to ensure adequate algorithmic governance, TeSA's programs prepare professionals to develop and audit governance frameworks so that AI operates ethically and within regulatory guidelines.

The Singapore University of Technology and Design (SUTD) Academy offers the "Mastering AI Ethics and Governance" course. The course focuses on AI governance frameworks within the Singapore context and places a strong emphasis on global best practices25. Participants are equipped with the relevant knowledge and skillsets to build AI systems that prioritise fairness, explainability, and human-centricity, addressing critical risks such as discrimination in automated decision-making processes. Thus, participants would be better placed to design, implement and manage AI systems responsibly, enhancing public confidence and achieving Singapore’s Smart Nation goals through sustainable AI adoption26.

B. International collaboration

International collaboration serves to enhance the alignment of Singapore’s strategy with global standards. For example, through global partnerships with organisations such as the Organisation for Economic Cooperation and Development (OECD), Singapore contributes to developing global AI governance principles, including the OECD’s AI Principles27. These collaborations facilitate regulatory interoperability, allowing Singapore businesses to navigate international markets more confidently while ensuring their AI systems adhere to shared ethical norms.

Global partnerships also pave the way for knowledge-sharing to mitigate emerging risks such as cross-border data sharing. The robust PDPA benefits from the flow of insights between countries through international collaboration, enabling it to maintain its dynamism and evolve to address vulnerabilities associated with AI technologies28. For instance, Singapore engaged with the OECD and other international bodies to introduce provisions regarding data portability, aimed at allowing individuals to hold more significant control over their data while ensuring adherence to global standards like the EU’s General Data Protection Regulation (GDPR)29. Therefore, international collaborations ensure Singapore’s frameworks remain relevant and adaptable in an increasingly interconnected world.

VI. Future developments

A. Introduction of risk-based legislation

Singapore's approach to AI regulation may be enhanced by incorporating the risk-based framework employed by the EU AI Act. Although the MAGF advocates principles of accountability and transparency, it neither classifies AI systems according to their risk level nor stipulates binding measures for AI applications30. Consequently, introducing a tiered system could enable targeted oversight and foster innovation.

For instance, AI systems used in hiring may be classified as high-risk due to potential risks from systematic biases, necessitating mandated fairness audits and heightened transparency measures. Conversely, low-risk applications like spam filters could be allowed greater flexibility. This proportional approach to AI governance balances the government’s concerns about efficient resource allocation in critical AI applications while avoiding unnecessary regulatory burdens on low-risk systems. Additionally, this refined approach would enable Singapore-based businesses to align with global best practices, enhancing Singapore’s regulatory interoperability with international markets. As a result, compliance complexities would be significantly reduced for companies operating across different jurisdictions, fostering more substantial international confidence in Singapore’s AI governance frameworks31.
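The tiered system described above can be sketched as a simple mapping from use cases to obligations. The tier assignments and obligation names below are illustrative assumptions, loosely mirroring the EU AI Act's four categories, not an actual Singapore classification scheme.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = 4
    HIGH = 3
    LIMITED = 2
    MINIMAL = 1

# Illustrative tier assignments, loosely mirroring the EU AI Act's categories
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "medical_diagnostics": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

# Illustrative obligations attached to each tier
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["fairness_audit", "transparency_report", "human_oversight"],
    RiskTier.LIMITED: ["disclosure_to_users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(use_case: str):
    # Unknown use cases default conservatively to the high-risk tier
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return OBLIGATIONS[tier]

print(obligations_for("hiring"))       # audits, transparency, human oversight
print(obligations_for("spam_filter"))  # no mandated obligations
```

The conservative default for unclassified use cases reflects the proportionality logic in the text: regulatory attention concentrates on high-risk systems while low-risk applications face minimal burden.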

However, even under risk-based legislation, state-backed entities involved in high-risk applications like biometric surveillance or automated medical diagnostics may receive exemptions from stringent AI compliance measures, reducing the effectiveness of such legislation.

Several approaches may be adopted to remedy this32. Firstly, legislation should be comprehensive to encompass all entities, including government agencies, by clearly defining the scope of AI regulations, thereby preventing automatic exemptions. Secondly, creating and authorising autonomous agencies to monitor AI implementations across public and private sectors is crucial to maintaining consistent standards and accountability. Finally, mandating the disclosure of AI deployment practices for all entities, including state-backed organisations, would promote transparency and enhance public trust in AI systems. By adopting these measures, Singapore can ensure that AI accountability requirements are uniformly applied, fostering a trustworthy and ethical AI ecosystem across all sectors.

B. Expansion of Generative AI governance

Although the Generative AI Framework's voluntary compliance guidelines are a step forward, addressing practical applications is key to enhancing their relevance and impact. For example, for Generative AI used in public healthcare, inputs used for information generation should be derived only from official medical databases to prevent misdiagnoses caused by hallucinated outputs33. Similarly, in content creation industries such as journalism, generative AI must incorporate transparency features like source attribution and veracity checks to combat disinformation34.
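The grounding requirement for healthcare Generative AI can be illustrated with a retrieval filter that admits only approved sources before generation. The allowlisted domains and document fields here are hypothetical; they stand in for whatever official medical databases a deployment would actually designate.

```python
# Hypothetical allowlist of official medical sources
APPROVED_SOURCES = {"moh.gov.sg", "hsa.gov.sg"}

def filter_retrieved(documents):
    """Keep only passages retrieved from approved databases before they are
    passed to the generator, so outputs are grounded in vetted material."""
    return [d for d in documents if d["source_domain"] in APPROVED_SOURCES]

docs = [
    {"source_domain": "moh.gov.sg", "text": "Clinical guideline excerpt"},
    {"source_domain": "random-blog.example", "text": "Unverified claim"},
]
grounded = filter_retrieved(docs)
assert [d["source_domain"] for d in grounded] == ["moh.gov.sg"]
```

Filtering at the retrieval stage, before generation, is what distinguishes this grounding approach from merely fact-checking outputs after the fact.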

Furthermore, Singapore can draw on global best practices to refine its AI regulatory frameworks. Insights from the OECD AI Principles draw attention to the pertinence of integrating trustworthiness measures into Generative AI systems to prevent unethical use35. Collaborations with academia and the private sector may also provide benefits by establishing robust evaluation criteria for Generative AI systems that would uphold accountability and foster innovation. Consequently, Generative AI could contribute positively to Singapore’s digital landscape without compromising public confidence or safety.

C. Greater investment in workforce development

To address emerging challenges posed by Generative AI, TeSA has recently expanded its curriculum to include training in fairness audits, AI transparency, and mitigating misinformation36. For instance, dedicated modules now focus on helping data scientists understand the nuances of Generative AI, such as identifying and correcting bias in AI-generated content37. Consequently, professionals across sectors can be equipped to implement AI systems responsibly in line with the MAGF and PDPA.

With the pervasiveness of AI and the integration of AI into workflows on the rise, certain jobs have inevitably become obsolete. As such, mid-career reskilling initiatives are paramount and have become increasingly inclusive. There has been a rise in programs targeting industries heavily affected by AI advancements, such as healthcare and finance. These programs help educate employees to integrate AI tools into their workflows and help them stay relevant in an increasingly dynamic work environment38. Healthcare workers, for example, have been trained to utilise AI diagnostic tools in line with clinical guidelines, leveraging the efficiency of AI while not compromising on accountability. Similarly, financial analysts have cultivated the necessary skills to take advantage of Generative AI for risk assessment and introduce safeguards to preclude bias or inaccuracies in decision-making systems39.

VII. Conclusion

Singapore's AI regulatory framework balances innovation with ethical oversight through its voluntary and sector-specific guidelines. Thus, Singapore is well positioned as a burgeoning AI hub, attracting global tech firms and talent. However, compared to more prescriptive models like the EU's AI Act, Singapore's framework can be further enhanced to address the lack of enforceability in high-risk applications. To remain competitive while ensuring comprehensive oversight, Singapore should consider integrating mandatory compliance measures for high-risk sectors to address accountability concerns.

Footnotes:
1 Personal Data Protection Commission, Model AI Governance Framework (2nd edn, 2020) <https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework> accessed 14 February 2025.

2 AI Verify Foundation, Model AI Governance Framework for Generative AI (May 2024) <https://aiverifyfoundation.sg/wp-content/uploads/2024/05/Model-AI-Governance-Framework-for-Generative-AI-May-2024-1-1.pdf> accessed 14 February 2025.

3 Kayla Goode, Heeu Millie Kim, and Melissa Deng, Examining Singapore's AI Progress (2023).

4 Warren Chik and Haran Sugumaran, 'Legal Governance of Artificial Intelligence in Healthcare' in Handbook on Health, AI, and the Law (Edward Elgar 2024) <https://www.elgaronline.com/edcollchap-oa/book/9781802205657/book-part-9781802205657-30.xml> accessed 8 November 2024.

5 Jing Li and others, 'Artificial Intelligence in Screening for Diabetic Retinopathy in Community Settings: A Success Story in Singapore' (2023) Journal of Medical Systems <https://pmc.ncbi.nlm.nih.gov/articles/PMC10591058/> accessed 8 November 2024.

6 Nurhadhinah Ridzuan and others, 'AI in the Financial Sector: The Line Between Innovation and Ethical Responsibility' (2024) Information.

7 Carol Soon and Beverly Tan, 'Regulating Artificial Intelligence: Maximising Benefits and Minimising Harms' (2023) Institute of Policy Studies Working Papers No 52 <https://lkyspp.nus.edu.sg/docs/default-source/ips/ips-working-paper-no-52_regulating-artificial-intelligence-maximising-benefits-and-minimising-harms.pdf> accessed 8 November 2024.

8 Jason Allen, 'Comparing Smart City Data Protection Approaches: Digital Consent and the Accountability Framework in Singapore' (2023) SMU Centre for AI & Data Governance <https://ai.smu.edu.sg> accessed 8 November 2024.

9 Gregory Corning, 'The Diffusion of Data Privacy Laws in Southeast Asia: Learning and the Extraterritorial Reach of the EU's GDPR' (2024) 30(5) Contemporary Politics 656.

10 Asress Gikay, 'Risks, Innovation, and Adaptability in AI Regulation' (2024) International Journal of Law and Information Technology <https://academic.oup.com/ijlit/article-pdf/doi/10.1093/ijlit/eaae013/58364253/eaae013.pdf> accessed 8 November 2024.

11 Pietro Dunn and Giovanni Gregorio, 'The Ambiguous Risk-Based Approach of the Artificial Intelligence Act: Links and Discrepancies with Other Union Strategies' (2022) CEUR Workshop Proceedings <https://ceur-ws.org/Vol-3221/IAIL_paper7.pdf> accessed 18 November 2024.

12 Luka Grozdanovski and Jef Cooman, 'Risk-Based Approach in EU AI Regulation' (2022) 49 Rutgers Computer and Technology Law Journal <https://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/rutcomt49&section=11> accessed 18 November 2024.

13 Warren Chik, 'China's Algorithm Governance Regulations: Emerging Lessons for AI Transparency' (2023) Singapore Management University Centre for AI & Data Governance.

14 Araz Taeihagh, 'Governance of Artificial Intelligence in Asia' (2021) 40(2) Policy and Society 137 <https://academic.oup.com/policyandsociety/article-abstract/40/2/137/6509315> accessed 8 November 2024.

15 Robert Walters and Marko Novak, Cyber Security, Artificial Intelligence, Data Protection and the Law (Springer 2021).

16 ASEAN Secretariat, ASEAN Guide on AI Governance and Ethics (2022) <https://asean.org/book/asean-guide-on-ai-governance-and-ethics/> accessed 18 November 2024.

17 Malaysia Digital Economy Corporation, Malaysia's National AI Framework: Ethics and Practices (2023).

18 Ibid.

19 Ziyan Guo, Li Xu, and Jun Liu, 'Trustworthy Large Models in Vision: A Survey' (2024) arXiv preprint <https://arxiv.org/pdf/2311.09680> accessed 18 November 2024.

20 Allen & Gledhill, 'Singapore Launches Model AI Governance Framework for Generative AI' (30 May 2024) <https://www.allenandgledhill.com/sg/publication/articles/28262/launches-model-ai-governance-framework-for-generative-ai>.

21 Sung Lee and others, 'Responsible AI Question Bank: A Comprehensive Tool for AI Risk Assessment' (2024) arXiv preprint <https://arxiv.org/abs/2408.11820> accessed 18 November 2024.

22 Personal Data Protection Act 2012 (Act No 26 of 2012), ss 24–25.

23 Greg Corning, Cross-Border Data Privacy and AI Governance in Southeast Asia (Taylor & Francis 2024).

24 Infocomm Media Development Authority, Singapore Digital Economy Framework for Action (2018) <https://www.imda.gov.sg/-/media/imda/files/sg-digital/sgd-framework-for-action.pdf> accessed 8 November 2024.

25 Singapore University of Technology and Design, 'Mastering AI Ethics & Governance' <https://www.sutd.edu.sg/Admissions/Academy/Our-Offerings/Short-Courses/Data-Analytics/Master-AI-ethics-and-governance> accessed 18 November 2024.

26 Smart Nation and Digital Government Office, National AI Strategy (2019) <https://www.smartnation.gov.sg/nais/> accessed 18 November 2024.

27 Singapore Ministry of Communications and Information, 'OECD and AI Governance Collaboration' (2022).

28 Nydia Leon and Josephine Seah, 'How to Address the AI Governance Discussion? Lessons from Singapore' (2019) Singapore Management University Centre for AI and Data Governance Working Paper No 2019/03 <https://ink.library.smu.edu.sg/caidg/1/> accessed 18 November 2024.

29 OECD, 'The Impact of Data Portability on User Empowerment, Innovation, and Competition' (2022) <https://one.oecd.org/document/DSTI/CDEP/DGP(2022)12/FINAL/en/pdf> accessed 18 November 2024.

30 Rohit Nayak, 'Singapore's Forward-Thinking Approach to AI Regulation' (20 August 2024) Diligent <https://www.diligent.com/resources/blog/Singapore-AI-regulation>.

31 Singapore Ministry of Communications and Information, 'Regulatory Interoperability in AI Governance' (2023).

32 Cyber Security Agency of Singapore, 'Guidelines on Securing AI Systems' (15 October 2024) <https://isomer-user-content.by.gov.sg/36/42140c27-030f-4bb9-b4c7-b543ad2ddad4/guidelines-on-securing-ai-systems_2024-10-15.pdf>.

33 Ivanna Tan and others, 'Navigating Governance Paradigms: Comparative Study of Generative AI Processes' (2024) Singapore Management University.

34 Ibid.

35 OECD, AI Principles and Governance in Emerging Technologies (2023).

36 IMDA, TechSkills Accelerator (TeSA): AI Training Initiatives (2023).

37 Ibid.

38 Singapore Ministry of Manpower, 'Future Economy Council Workforce Transformation Report' (2024).

39 Carol Soon and Beverly Tan, 'Responsible AI in Finance: Training Frameworks' (2023) Lee Kuan Yew School of Public Policy.
