Nowadays, important decisions previously left to humans are delegated to algorithms, which may advise, if not decide, how personal data should be interpreted and what actions should be taken as a result. Business models driven by personal data, or at least supported by the processing of such data, have become the rule rather than the exception. The automation of individual decision-making is an everyday reality in banking and finance, insurance, employment, healthcare, taxation, as well as broadly understood marketing and advertising. This is because it considerably improves the efficiency and accuracy of decisions, especially when large amounts of data must be analysed in a limited time, while also reducing their cost. In short, algorithms increasingly regulate our lives. This already strong trend has only intensified due to the COVID-19 pandemic.
At the same time, in recent years, society’s deferential attitude toward algorithmic objectivity has weakened. It is widely recognised that the shift toward automation can significantly affect the rights and freedoms of natural persons, which means that appropriate safeguards must be provided. To the extent that automated individual decision-making is based on personal data, in the European Union it is subject to the General Data Protection Regulation (GDPR). However, it is still unclear whether and how the numerous challenges can be addressed within the existing framework of data protection law, or whether that framework is even the best candidate to tackle them. At first glance, the GDPR seems to give data subjects control over automated individual decision-making, as it contains dedicated provisions in this regard. A closer look, however, shows that such protection is in fact curtailed.
The key provision of the GDPR aimed at the protection of natural persons with regard to automated individual decision-making, that is, the right not to be subject to solely automated decision-making set out in Article 22, is narrow, vague and full of potential loopholes that controllers may abuse. Even the legal nature of this right is uncertain: it may be read either as a right to opt out that the data subject has to exercise actively or as a general prohibition that requires no action on their part. Moreover, Article 22 envisages three exceptions that are so broad and so prone to abuse by controllers that they erode the data subject’s right not to be subject to such processing, to the point that the exceptions become the rule.
In addition, the special rights of the data subject that aim to address concerns regarding automated individual decision-making, notably the dedicated right to be informed, the dedicated right of access, the right to express one’s point of view, the right to human intervention and the right to contest the decision, apply only to the very narrow range of cases meeting all of the requirements set out in Article 22 of the GDPR. The latter three safeguards apply only if the processing is necessary for entering into or the performance of a contract, or is based on the data subject’s explicit consent. Thus, if solely automated individual decision-making is authorised by European Union or Member State law to which the controller is subject, the data subject does not have those rights. Although the law in question must lay down suitable measures to safeguard the data subject’s rights, freedoms and legitimate interests, in such cases the level of protection may diverge significantly between Member States.
The scope of the abovementioned rights is also very vague, and the data subject may not even be informed of them, as the controller is under no explicit obligation to do so. Despite much debate, no consensus has yet emerged in the legal doctrine concerning the supposed right to an explanation of a particular solely automated individual decision after it has been taken. In addition, some of these special rights, in particular the dedicated right to be informed and the dedicated right of access, may be limited by the intellectual property rights of the respective holders. Aside from that, the special rights of the data subject are further weakened by technical obstacles to their effective implementation, particularly when applied to complex solely automated individual decision-making systems.
Given that the GDPR provisions dedicated to solely automated individual decision-making may not provide sufficient protection for the data subject, the general provisions of the regulation relevant to such processing may supplementarily play a major role. The provisions in question are the fundamental data protection principles most relevant to such processing, that is, the principles of lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy and storage limitation, as well as the provisions on data protection by design and by default and on data protection impact assessments. Certainly, they can aid the controller in deploying better automated individual decision-making systems. Importantly, they apply to all cases of automated individual decision-making and are not restricted by the narrow scope of Article 22(1) of the GDPR. Moreover, they do not require any action on the part of individuals, which relieves the latter of an undue burden. These principles may address some of the key concerns regarding automated individual decision-making that are difficult, if not impossible, to tackle with the specific provisions dedicated to solely automated individual decision-making. Meanwhile, the provisions on data protection by design and by default and on data protection impact assessments compel the controller to engage in the design of less privacy-invasive systems.
For further information on the subject, you might be interested in my newly published book, Protection of Natural Persons with Regard to Automated Individual Decision-Making in the GDPR (Wolters Kluwer, 2020).