China and the US Introduce Measures to Regulate Generative AI


Despite being competitors in the technology trade, the United States and China share a common concern about accountability and the potential misuse of AI. Recently, both countries announced measures to regulate AI development. The National Telecommunications and Information Administration (NTIA), a branch of the US Department of Commerce, has requested public input on policies that could shape an AI accountability ecosystem, including access to data, how to measure accountability, and how approaches to AI should differ across industry sectors.


The Cyberspace Administration of China (CAC) released draft measures for managing generative AI services on the same day. The proposed rules require providers to ensure the validity of the data used to train AI tools and to prevent discrimination in algorithm design and training data sets. Providers must also submit security assessments to the government before launching AI tools to the public, and if a platform generates inappropriate content, the provider must update the technology within three months to prevent it from recurring. Non-compliance can result in fines, service suspension, or criminal investigation. The CAC also emphasized that any content produced by generative AI must align with China's core socialist values.


China's tech giants, such as Alibaba and Baidu, have already made significant progress in AI development. Alibaba Cloud has announced a new large language model called Tongyi Qianwen, while Baidu has introduced its own Chinese-language ChatGPT alternative, Ernie Bot.


AI regulation vs. innovation

While China has established regulatory guidelines for AI, other governments have taken a different approach. The UK government recently announced that it would not delegate AI governance to a single regulator, to avoid stifling innovation with heavy-handed legislation. Instead, existing regulators were urged to develop approaches suited to the way AI is used in their industries. Industry experts criticized this approach, however, arguing that existing frameworks may be insufficient to regulate complex, multilayered AI tools and could produce overlapping regulation.


Additionally, the UK's data regulator warned tech companies to safeguard personal information when developing and deploying large language models and other generative AI. Meanwhile, Italy's data privacy regulator banned ChatGPT over alleged privacy violations, and a group of 1,100 technology leaders and scientists called for a six-month halt in developing systems more powerful than OpenAI's newly launched GPT-4.


Frank Buytendijk, an analyst at Gartner, commented on the natural course governments and legislators follow when regulating new technology. After a technology is introduced, he explained, mistakes are made as people learn to use it responsibly. That is the stage at which AI regulation begins to emerge, giving developers, users, and the legal system the opportunity to learn about responsible use through the interpretation of law and case law. In the final phase, responsible use is built into the technologies themselves, and people learn it through those embedded best practices.

