China Clamping Down on AI with Stringent Government Reviews


The Chinese government will require stringent reviews of AI services operating within the country. The move comes after Chinese tech companies rolled out ChatGPT-like services.

According to guidelines released by the Cyberspace Administration of China, any company providing an AI bot service must ensure the content it generates is accurate, at least by the standards of the Chinese Communist Party's fact-checkers. Additionally, the bot must not infringe on intellectual property, discriminate, or endanger security, and operators must clearly label AI-generated content.

Companies like SenseTime, Alibaba, and Baidu are rolling out AI platforms for the largest internet market in the world. Though it has not done so yet, China is likely to bar foreign AI services like OpenAI’s ChatGPT and Google’s Bard. No major American social media services are available in China, as Beijing maintains tight control over what is discussed online on Chinese-controlled platforms.

“There’s real potential there to affect how the models are trained and that stands out to me as really quite important here,” Tom Nunlist, a senior analyst at Trivium China, told Bloomberg. AI models in China will have to follow rules established by regulators to ensure the content they produce complies with these new guidelines. So how will the United States government respond?

Back in the States, the Biden administration is seeking public comments on accountability measures for AI and how it could impact national security and education. The National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, is leading the effort.

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms,” said NTIA Administrator Alan Davidson. “For these systems to reach their full potential, companies and consumers need to be able to trust them.” The NTIA will draft a report as it examines efforts to ensure these AI systems work as claimed without causing harm.
