Japan's central government is moving to ensure the safety of artificial intelligence by discussing new laws and regulations. The AI Strategy Council, chaired by Yutaka Matsuo, a professor at the University of Tokyo, met on May 22 to address the risks associated with AI technology.
Risks highlighted at the meeting included the development of AI weapons, privacy violations, and criminal misuse. The council stressed that legal regulations should apply to high-risk AI, particularly targeting developers such as OpenAI, the company behind ChatGPT.
While the council acknowledged the importance of leaving responsibility to the voluntary efforts of private businesses and industry associations, it also suggested penalties for companies that violate the regulations. The move aligns with global trends, as many countries have already begun establishing AI laws and regulations.
In April, Japan released non-binding guidelines on AI for businesses, marking a shift from its previous focus on promoting AI development without strict regulation. The government's decision to consider legal regulation follows concerns about the risks posed by generative AI.
The council will closely examine regulations passed in Europe and the United States to determine appropriate laws for Japan, with the goal of submitting a bill to an ordinary Diet session next year. With the EU having passed the world's first comprehensive Artificial Intelligence Act and President Joe Biden having issued an executive order on AI in the US, Japan is following suit to ensure the safe and responsible development of AI technology.