In recent years, the rapid advancement of artificial intelligence (AI) technologies has given rise to AI-generated deepfake media that is often indistinguishable from genuine content to the naked eye. While deepfake technologies offer potential socioeconomic benefits, they are frequently weaponised for malicious purposes that threaten privacy, security, and public trust. The exposure of an epidemic of digital sex crimes in South Korea illustrates the alarming regulatory challenge now facing legal systems worldwide.
In South Korea, thousands of women were targeted through a Telegram channel that allowed approximately 227,000 active members to generate and customise nude deepfake photographs of women within seconds. The most blood-curdling detail is that members were required to serve up victims on a silver platter, complete with snapshots and real personal information such as their ages, phone numbers, and even addresses. No one is safe: women in South Korea, ranging from middle school students to international K-pop stars, are all at risk of being sexually degraded by people around them, including their own family members.
The South Korean incident is neither the first nor the last of its kind. As deepfake technologies become more accessible, their misuse has become widespread: an estimated 90-95% of deepfakes online are non-consensual pornography. As Caroline Quirk notes, this technology poses a significant threat by blurring the boundaries between fact and fiction if left unchecked. While countries worldwide grapple with AI regulation, Malaysia finds itself at a crucial juncture in developing its own approach to AI governance. Hence, this article examines the current governance of AI-generated deepfake technology in Malaysia within the global landscape.
What is Deepfake?
Deepfake technology refers to artificial images or videos created using AI, typically through neural networks that mimic how the human brain processes information. These systems learn from people's voices, facial expressions, and behaviours to produce hyper-realistic media that is difficult to distinguish from reality.
Globally, deepfakes are increasingly used to spread misinformation, commit fraud, and manipulate public opinion. Therefore, lawmakers and governments are called upon to address these rapidly evolving issues while balancing regulation with ethics and fundamental freedoms.
Overview of International Legislation on Deepfake Regulation
European Union (EU)
The European Union has taken a leading role in AI regulation with the EU Artificial Intelligence Act (EU AI Act), which came into force in August 2024. This Act aims to specify comprehensive obligations for AI developers and users while safeguarding fundamental rights. In addition to providing a clear definition of AI, the Act adopts a risk-based approach that classifies AI applications into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. AI practices posing unacceptable risks, such as those threatening fundamental human rights, are banned, while stricter requirements are imposed on higher-risk AI applications.
According to Article 50(4) of the EU AI Act, deepfake content creators must explicitly inform the public of the artificial nature of their work by labelling it and disclosing its AI origin. Exceptions exist for law enforcement agencies investigating crimes or gathering evidence. Similarly, AI disclosure in artistic works, such as films or art exhibits, is limited to a degree that does not infringe upon freedom of expression and artistic creation, as protected by Articles 11 and 13 of the EU Charter of Fundamental Rights. Nevertheless, the effectiveness of this Act in tackling deepfakes remains questionable, as it stops short of explicitly designating deceptive deepfakes as high-risk.
China
China has moved decisively to roll out AI regulation, passing several policies and regulations governing deepfake technologies. These include the Interim Measures for the Administration of Generative AI Services, the Administrative Provisions on Deep Synthesis of Internet-based Information Services, and the Administrative Provisions on Algorithm Recommendation for Internet Information Services. These regulations have a global impact due to their extraterritorial reach: they impose strong obligations on both local and foreign generative AI service providers intending to conduct related business activities in China.
Echoing the transparency requirements stipulated in the EU AI Act, China has also enforced labelling requirements for published generative AI content. Furthermore, a new regulation titled "Cybersecurity Technology – Basic Security Requirements for Generative Artificial Intelligence (AI) Service" is currently being drafted. This regulation aims to further delineate critical security requirements for generative AI services, including transparency of training data and other security measures.
The United States (US)
Recently, the Content Origin Protection and Integrity from Edited and Deepfaked Media Bill (COPIED Act) was proposed to ensure the source of AI synthetic content is traceable and easier to detect. This development indicates that the US is taking steps towards more comprehensive AI regulation. However, Congress has yet to pass any federal legislation specifically regulating AI-generated deepfake technology.
While the US lacks federal legislation specifically addressing deepfakes, individual states have taken action. According to the State Legislation Tracker published by Public Citizen, over 20 state laws have been enacted to limit the use of deepfake technologies in future elections, including Texas Senate Bill 751 and California Assembly Bill 730.
Moreover, around 23 US states have passed laws combating non-consensual deepfake pornography and the transmission of falsely created media depicting nudity, such as Virginia Senate Bill 1736 and Georgia Senate Bill 337.
Malaysia Responding to Global Calls
Despite being a fast-growing hub for AI technologies and investments from international technology partners, Malaysia currently lacks specific legislation regulating the ethics of AI-generated technologies. Basic protection against the misuse of AI primarily relies on existing legal frameworks. For instance, Section 211 of the Communications and Multimedia Act 1998 prohibits the dissemination of indecent, obscene, false, or offensive online content, and could be interpreted to cover certain misuses of deepfake technologies. Moreover, blackmail and extortion involving deepfake-generated videos or images are criminalised under Section 383 of the Penal Code.
It is also noteworthy that Malaysia reached a new milestone in cyber regulation when the Cyber Security Act 2024 came into force on 26 August 2024. This Act introduces the National Cyber Security Committee and establishes provisions to regulate cybersecurity service providers through licensing. Although the Act does not specifically address the misuse of deepfake technology, it establishes an essential framework for strengthening cybersecurity. Meanwhile, the Personal Data Protection Act 2010 is currently under review and may be expanded to provide wider protection for data sharing.
Despite the lack of specific deepfake legislation, Malaysia is making strides towards more comprehensive AI governance. The Ministry of Science, Technology, and Innovation (MOSTI) has launched the National Artificial Intelligence Roadmap 2021-2025, which aims to foster a regulated and ethical ecosystem for AI development. The Roadmap targets five national priority areas: agriculture, healthcare, smart cities, education, and public service. Notably, it lays down several principles to guide the ethics of AI innovation, such as justice, reliability, transparency, and accountability.
In addition, the Artificial Intelligence Governance and Ethics Guidelines (AIGE) were launched in September 2024. These guidelines set out a clear framework and allocate responsibilities for the use of AI technology among multiple parties, such as the public, policymakers, developers, and technology providers. Although the AIGE is not legally binding, it demonstrates the Malaysian government's commitment to adapting the legal system to changing global trends. Meanwhile, the Government has announced its plan to create a national cloud policy, drafted to encourage the ethical use of AI in public service innovation, promote economic competitiveness and growth, strengthen user trust and data security, and empower citizens through digital inclusivity.
Conclusion
The rapid advancement of AI technology is a double-edged sword that offers powerful tools for innovation and development while continually challenging the boundaries of ethics and law. Considering AI's applications across multiple sectors, the path forward will require ongoing collaboration among policymakers, technologists, legal experts, and civil society in pursuit of effective regulation. The Malaysian legislature might draw insights from the proactive approaches of other jurisdictions, such as the EU's risk-based approach, and could impose stricter scrutiny over data transparency.
That being said, Malaysia is strongly encouraged to prioritise the development of its AI governance legislation to ensure that Malaysian law can balance innovation with ethical considerations and public safety as AI technology evolves. There is no one-size-fits-all approach; it is important for the legislature to tailor legal regulation to local needs and values.
Written by: Kee Yi Gin
Edited by: Benjamin Chung