Recent developments have led to speculation that certain artificial intelligence (AI) systems have achieved ‘sentience’. Sentient AI systems, to paraphrase the philosopher Nick Bostrom, are those with the capacity to experience ‘qualia’, a term that encompasses feelings, sensations, and thoughts. The claim is contested, but the news has nonetheless left a trail of excitement in its wake.
Much is being written about the desirability of such sentience, with discourse revolving around themes such as the value sentient AI adds to society and its role in shaping our understanding of consciousness. Commentators have also attempted to theorise the tenets of responsible sentience by articulating the risks associated with such systems.
One such risk concerns individual privacy. Theoretically, sentient systems may act as remarkably patient listeners, capable of roving conversations with customers. It is this characterisation that animates their interaction with privacy law and merits closer reflection.
While such systems may span a multitude of uses, the use case that has attracted recent scrutiny is the AI-enabled chatbot. This use case offers a glimpse into a future purpose of sentience: making human-machine interactions more personable and meaningful.
Naturally, these interactions contain personal data and therefore attract the application of privacy law. However, coding such systems with sentience would make the operation of that law uncertain and open to disruption. Unlike the average bot, which redirects difficult queries to human operators, sentient systems may engage deeply with their interlocutors, without human intervention.
Such engagement may include prompts to share deeply held beliefs, health information, or financial data. Prompts may also encourage individuals to discuss others, such as friends or family. It is this likely consequence of sentience that may frustrate the robust application of privacy law.
At the heart of such frustration lies the dilemma of consent. Sentient AI systems are likely to alter conversational patterns, to the detriment of the notice-and-consent provisions contained in privacy law. In India, for instance, rules prescribed under the Information Technology Act, 2000, require entities collecting sensitive personal information (which includes information on finances, medical history, and sexual orientation) to obtain prior consent before collecting it.
Entities must also communicate to customers the purpose behind collecting such information. Effectively, this purpose partly ring-fences an entity’s data processing activity. Under the above-mentioned rules, data collection must be limited to this stated purpose, or to such other lawful purposes connected to the entity’s functions.
Communicating a robust, well-defined purpose to users will, however, be difficult for entities deploying sentient AI. Novel or meandering conversational patterns may introduce new themes mid-conversation, rendering the original consent moot. Consequently, convoluted consent tokens and vague purpose statements may come to dominate the machine-human relationship, breeding anxiety among customers and businesses alike. Infinitely curious, sentient AI can create situations in which businesses and regulators are obliged to respond with ceaseless vigilance. Accordingly, its consent-and-purpose-bending tryst with privacy law requires carefully thought-out solutions.
The challenges discussed portend only a sliver of the regulatory gaze to which sentient systems will be subject. Addressing this gaze requires adopting two entity-level attitudes. First, since compliance is a given, entities should consider investing in processing techniques that keep sentient systems data-light.
Second, entities must acknowledge that privacy is not coextensive with privacy law. Privacy is an interdisciplinary objective; entities must empower their engineers to deduce its technical limits.
For a start, entities deploying such systems may articulate, within their privacy policies, a processing pipeline for them. The pipeline should set out the role of the system, the location of its servers, the analytics and third-party tracking tools the system may utilise, and the harms that can result from its data processing activities.
Concurrently, businesses must deepen efforts to recognise the gamut of purposes that their sentient system may execute. This information can be used to set bright-line limits on data processing. It can also be used to identify safe-harbour use cases; illustratively, sentient systems processing data to revive languages may be exempted from certain provisions of privacy law.
The theme that ties these recommendations together is transparency. A commitment to openness by sentient systems is likely to serve as an antidote to the concern that they ‘monitor’ individuals by collecting their personal data. Embracing a framework that operationalises openness and fairness in personal data processing may help entities navigate privacy law effectively.
This article was originally published in The Hindu Business Line on 2 September 2022. Written by KS Roshan Menon, Research Fellow.