Record GDPR fine by the Hungarian Data Protection Authority for the unlawful use of artificial intelligence
The Hungarian Data Protection Authority (Nemzeti Adatvédelmi és Információszabadság Hatóság, NAIH) recently published its annual report, in which it presented a case where it imposed its highest fine to date, approximately EUR 670,000 (HUF 250 million).
The case involved the personal data processing of a bank (acting as data controller) that automatically analysed recorded audio of customer service calls. Using artificial intelligence-based speech signal processing software, the bank assessed each call against a list of keywords and the emotional state of the caller. The software then ranked the calls, producing a recommendation as to which customers should be called back as a priority.
The bank determined the purposes of the processing as quality control based on variable parameters, the prevention of complaints and customer migration, and the improvement of its customer support's efficiency. According to the Authority, however, the bank's privacy notice referred to these processing activities only in general terms, and no material information was made available about the voice analysis itself. Furthermore, the privacy notice indicated only quality control and complaint prevention as purposes of the data processing.
The bank based the processing on its legitimate interests in retaining its clients and enhancing the efficiency of its internal operations. However, the processing activities connected to these interests were not separated in the privacy notice or in the legitimate interest tests, so the two became blurred.
In the course of the procedure before the Authority, it became evident from the bank's own statements that for years it had failed to provide data subjects with proper notice and the right to object, because it had determined that it was not able to do so. The Authority emphasised that the only lawful legal basis for emotion-based voice analysis is the freely given, informed consent of the data subjects.
Additionally, the Authority highlighted that although the bank had carried out a data protection impact assessment (DPIA) and identified that the processing posed a high risk to data subjects and was capable of profiling and scoring, the DPIA failed to present substantial solutions to address those risks. Furthermore, the bank's legitimate interest test failed to take into account proportionality and the interests of the data subjects; it merely established that the processing was necessary to achieve the purposes pursued. The Authority further emphasised that legitimate interest cannot serve as a 'last resort' legal basis when all other legal bases are inapplicable, and data controllers therefore cannot rely on it at any time and for any reason. Consequently, in addition to imposing the record fine, the Authority ordered the bank to cease the analysis of emotions in the course of voice analysis.
In conclusion, the Authority highlighted that “artificial intelligence is generally difficult to understand and monitor due to the way it works, and even new technologies pose particular privacy risks. This is one of the reasons why the use of artificial intelligence in data management requires special attention, not only on paper but also in practice.”