According to the Irish watchdog DPC, Google has not provided enough information about Bard, specifically about how its generative AI tool ensures Europeans’ privacy.
Tech giant Google LLC (NASDAQ: GOOGL) has been forced to postpone the launch of its Google Bard chatbot in the European Union following a ban from the Irish Data Protection Commission (DPC). The regulator cited privacy concerns to explain its decision. According to the DPC, Google has not provided enough information about Bard, specifically about how its generative artificial intelligence (AI) tool ensures Europeans’ privacy. In other words, Google has not yet justified launching Bard in the EU.
Deputy Commissioner Graham Doyle commented:
“Google recently informed the Data Protection Commission of its intention to launch Bard in the EU this week. The DPC had not had any detailed briefing nor sight of a DPIA [data protection impact assessment] or any supporting documentation at this point. It has since sought this information as a matter of urgency and has raised a number of additional data protection questions with Google to which it awaits a response and Bard will not now launch this week.”
Meanwhile, a Google representative stated that the company had been in talks with the Irish Data Protection Commission and had made the date of Bard’s planned EU launch clear to regulators.
A Google spokesperson explained:
“We said in May that we wanted to make Bard more widely available, including in the European Union, and that we would do so responsibly, after engagement with experts, regulators and policymakers. As part of that process, we’ve been talking with privacy regulators to address their questions and hear feedback.”
Before Bard can get approval to go live in Europe, Google must provide detailed answers to the Irish Data Protection Commission’s list of questions.
Google launched Bard in the US back in February. The chatbot quickly gained traction, expanding to the United Kingdom, Australia, India, Argentina, and more. It is available in over 180 countries but has not reached the EU yet.
EU’s Regulation of AI
The EU’s approach to regulating the development and use of artificial intelligence is so far the strictest in the world. The European AI Strategy aims to make the EU a world-class hub for AI while ensuring that AI is human-centric and trustworthy. In pursuit of this goal, EU regulators have drawn up an AI legal framework that includes the AI Act, the AI Liability Directive, and a revised Product Liability Directive.
Under the AI Act, AI systems are categorized into four levels of risk: minimal, limited, high, and unacceptable, with specific regulations applied at each level. While minimal-risk AI is subject to minimal regulation, unacceptable-risk applications are banned outright. These include systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring.
The proposed rules have raised concerns in the tech industry. Some experts believe the scope of the AI Act has been broadened too much and may affect forms of AI that are harmless.