New Australian laws require search engines to remove AI-generated child abuse images

 Exclusive: eSafety Commissioner says companies must develop tools to improve online safety, including detecting deepfake images


Australia's eSafety Commissioner has warned that artificial intelligence tools could be misused to generate child abuse images and terrorist propaganda, and has announced a new industry standard that will compel tech giants such as Google, Microsoft's Bing and DuckDuckGo to eradicate such content from AI-powered search engines.

The new industry code, due to be announced shortly, obliges these companies to remove child exploitation material from their search results and to take proactive steps to ensure generative AI technologies cannot be used to produce deepfake versions of such material.

Julie Inman Grant, the eSafety Commissioner, emphasized the need for tech companies to lead the charge in mitigating the harmful consequences of their products. She expressed concerns about the emergence of "synthetic" child abuse material and the exploitation of generative AI by terrorist organizations for propaganda purposes, highlighting that this alarming trend is already underway.

Microsoft and Google have recently announced plans to integrate the AI tools ChatGPT and Bard, respectively, into their widely used consumer search engines. Inman Grant said the swift evolution of AI necessitated a rethink of the search code governing these platforms.

Previously, the code covered only content that search engines returned in response to queries, not material generated by the services themselves. The revised code requires search engines to continually assess and improve their AI tools to ensure that “class 1A” content, which includes child sexual exploitation, pro-terrorism and extreme violence material, does not surface in search results, including by delisting and blocking such results.

The companies must also explore technologies that would help users detect and identify deepfake images accessible through their services. The eSafety Commissioner describes the framework as one of the first of its kind in the world.

Inman Grant likened the rapid development of AI to an “arms race” and stressed the need for regulators to take a proactive approach by incorporating further regulation into the design and deployment phases. She emphasized that this proactive approach is more effective than reactive measures once issues arise, drawing parallels with car manufacturers' mandatory installation of seatbelts.

The eSafety Commissioner acknowledged that some bad actors were exploiting new AI tools for illicit purposes, including the creation of child exploitation material. The new rules aim to compel tech companies not only to reduce harm on their platforms but also to develop tools that enhance safety, particularly in the detection of deepfake images.

Australia's attorney general, Mark Dreyfus, also pointed to separate work by the Australian federal police to use AI to detect child abuse material, reducing the need for officers to examine images manually. As part of that effort, adults are being encouraged to submit their own childhood photos to help train the AI models, an initiative Dreyfus said he supported.

Taken together, the measures mark a significant step by Australia's eSafety Commissioner to curb the misuse of AI for generating harmful content, placing the onus on tech giants to combat these risks proactively. They reflect both the fast-moving AI landscape and the need for comprehensive regulation to ensure the technology is used responsibly.
