In the era of digital health and artificial intelligence, we are witnessing an unprecedented ability to collect and analyze personal data for health insights. One emerging area of interest is the use of voice data as a health indicator. While this technology holds promise for early detection and monitoring of various health conditions, it also raises significant privacy concerns. In this article, we will explore the potential of voice analysis in healthcare and the challenges it presents for data protection.

Voice as an indicator of health

Recent research has shown that our voices can reveal much more about our health than we might realize. Subtle changes in vocal patterns, tone, pitch, and rhythm can be indicative of various physical and mental health conditions. Research has explored, for example, vocal indicators of respiratory illness, neurological disorders such as Parkinson's disease, and depression.

Advanced AI and machine learning algorithms can detect these subtle changes, potentially allowing for early diagnosis or continuous monitoring of health conditions. This technology is already being explored for practical applications, with several start-ups building their business models around voice-based health analysis.
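
To make this more concrete, the minimal Python sketch below extracts a few of the acoustic features such systems typically build on (a pitch contour and MFCCs). It is purely illustrative: the file name is a placeholder, the feature set is far simpler than what real diagnostic models use, and no actual health inference is performed.

```python
# Illustrative only: extract basic acoustic features of the kind voice-analysis
# systems build on. "call.wav" is a placeholder file name.
import librosa
import numpy as np

y, sr = librosa.load("call.wav", sr=None)  # load audio at its native sample rate

# Fundamental frequency (pitch) contour via probabilistic YIN
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Mel-frequency cepstral coefficients, a common timbre descriptor
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

features = {
    "pitch_mean_hz": float(np.nanmean(f0)),   # average pitch
    "pitch_std_hz": float(np.nanstd(f0)),     # pitch variability (intonation, rhythm)
    "mfcc_means": np.mean(mfcc, axis=1).round(2).tolist(),
}
print(features)  # features like these could feed a trained classifier
```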

Voice data as health data

The General Data Protection Regulation (GDPR) considers data concerning health as a special category of personal data and therefore grants it a higher level of protection. While the GDPR does not provide an exhaustive list of what constitutes “health data”, a landmark case has shed light on its interpretation.

In the 2003 Bodil Lindqvist case, the Court of Justice of the European Communities (now the Court of Justice of the European Union, CJEU) addressed the meaning of “data concerning health” within the context of data protection law. The Court interpreted the expression widely, as including “information concerning all aspects, both physical and mental, of the health of an individual”.

Given ongoing research into deducing physical and mental health status from voice and speech patterns, voice data has the potential to qualify as health data under this broad interpretation when it is processed with technologies capable of drawing such conclusions.

Implications for your business

Even if your company processes voice data for non-diagnostic purposes, such as recording calls for customer service or call center operations, it is important to understand that such data can inadvertently reveal intimate information. A data breach that allows unauthorized parties to access voice data could therefore pose a greater risk to the rights and freedoms of data subjects than is initially apparent. This risk should be factored into your risk assessments when determining the necessary security measures. Practices to consider may include:

  • Consent: With some exceptions, consent of the data subject is the generally recommended legal basis for call recording.
  • Data minimization: Ensuring that only necessary data is collected and processed is critical. This can be achieved in different ways: controllers can consider starting the recording at a point where identifying information is no longer being shared (e.g., after callers have introduced themselves); a minimal sketch of this approach follows this list. In some cases, recording may be avoided altogether: if the purpose of the recording is to train employees, for example, an organization may consider using transcriptions or listening to live calls instead.
  • Technical and Organizational Security Measures: Voice data requires robust security measures to prevent unauthorized access or breaches. Encryption and strict access rights are paramount. Depending on the specific case, controllers could also consider creative strategies such as storing recordings with an altered pitch to make vocal biomarkers harder to extract; a second sketch after this list illustrates this idea.
  • Purpose limitation: Speech data collected for a specific purpose (e.g., training employees) should be used only for that purpose. “Mission creep”, where data is used for purposes beyond those originally disclosed, should be avoided. For example, the voice of a person calling to inquire about the status of their health insurance should not be analyzed for undisclosed health conditions.
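
For illustration, the sketch below shows one way to implement the data-minimization idea above: discarding the opening segment of a call recording before it is stored. It assumes WAV files and a fixed 30-second identification window, both of which are illustrative assumptions; in practice, the cut-off point would have to be determined per workflow.

```python
# Illustrative sketch: drop the opening (identification) segment of a call
# recording before storage. File names and the 30-second window are placeholders.
from pydub import AudioSegment

INTRO_MS = 30_000  # assumed length of the identification phase, in milliseconds

recording = AudioSegment.from_wav("call.wav")
minimized = recording[INTRO_MS:]                      # keep only the post-introduction part
minimized.export("call_minimized.wav", format="wav")  # store the minimized version only
```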
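
As a rough illustration of the security measures above, the following sketch pitch-shifts a recording before storage and encrypts the resulting file at rest. Pitch shifting obscures rather than removes vocal characteristics, and the libraries, file names, and key handling shown are illustrative assumptions, not a recommended implementation.

```python
# Illustrative sketch: pitch-shift a recording (to obscure vocal characteristics)
# and encrypt it at rest. File names, shift amount, and key handling are placeholders.
import librosa
import soundfile as sf
from cryptography.fernet import Fernet

y, sr = librosa.load("call.wav", sr=None)

# Shift pitch down by three semitones; this hinders, but does not prevent,
# recovery of vocal biomarkers
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-3)
sf.write("call_shifted.wav", y_shifted, sr)

# Encrypt the altered recording before storage; in production the key would be
# kept in a key management system, not next to the data
key = Fernet.generate_key()
fernet = Fernet(key)
with open("call_shifted.wav", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("call_shifted.wav.enc", "wb") as f:
    f.write(ciphertext)
```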

Conclusion

As voice analysis technology continues to advance, privacy laws and organizational practices will need to evolve to address these challenges. Striking a balance between the potential health benefits of voice analysis and the fundamental right to privacy will be crucial in shaping the future of this technology.