Commoditization of Data

  • #209
    Ashith
    Keymaster

    Friend and foe agree that we are undergoing a digital revolution that is transforming society as we know it. In addition to economic and social progress, every technological revolution also brings disruption and friction.[2] The new digital technologies (and, in particular, artificial intelligence, or AI) are fueled by huge volumes of data, leading to the common saying that “data is the new oil.” These data-driven technologies transform existing business models and present new privacy issues and ethical dilemmas.[3] Social resistance to the excesses of the new data economy is becoming increasingly visible and is leading to calls for new legislation.[4]

    Commentators argue that a relatively small number of companies are disproportionately profiting from consumers’ data, and that the economic gap continues to grow between technology companies and the consumers whose data drives those companies’ profits.[5] Consumers are also becoming more aware that free online services come at a cost to their privacy; the modern adage is that consumers are not the recipients of free online services but the product itself.[6]

    U.S. legislators are responding by proposing prescriptive notice and choice requirements intended to serve a dual purpose: giving consumers greater control over the use of their personal information while enabling them to profit from that use.

    An illustrative example is California Governor Gavin Newsom’s proposal that consumers should “share the wealth” that technology companies generate from their data, potentially in the form of a “data dividend” to be paid to Californians for the use of their data.[7] The California Consumer Privacy Act (CCPA) also combines consumers’ right to opt out of the sale of their data with a requirement that any financial incentive companies offer consumers for the sale of their personal information be reasonably related to the value of the consumer’s data.[8]

    These are not isolated examples. The academic community is also proposing alternative ways to address wealth inequality. Illustrative here is Lanier and Weyl’s proposal for the creation of data unions that would negotiate payment terms for the user-generated content and personal information supplied by their members, which we will discuss in further detail below.

    Though these attempts to protect, empower, and compensate consumers are commendable, the proposals to achieve these goals are actually counterproductive. Here, the remedy is worse than the ailment.

    To illustrate the underlying issue, let’s take the example of misleading advertising and unfair trade practices. If an advertisement is misleading or a trade practice unfair, it is intuitively understood that companies should not be able to remedy the situation by obtaining the consumer’s consent to the practice. In the same vein, if companies generate large revenues from misleading and unfair practices, the solution is not to ensure that consumers get their share of the illicitly obtained revenues. If anything would provide an incentive to continue misleading and unfair practices, this would be it.

    As always with data protection in the digital environment, the issues are far less straightforward than their offline equivalents and therefore more difficult to understand and address. History shows that whenever a new technology is introduced, society needs time to adjust. As a consequence, the data economy is still driven by the possibilities of technology rather than by social and legal norms.[9] This inevitably leads to social unrest and calls for new rules, such as the call by Microsoft’s CEO, Satya Nadella, for the U.S., China, and Europe to come together and establish a global privacy standard based on the EU General Data Protection Regulation (GDPR).[10]

    From “privacy is dead” to “privacy is the future”: the point here is not only that technical developments are moving fast, but also that social standards and customer expectations are evolving.[11]

    To begin to understand how our social norms should be translated to the new digital reality, we will need to take the time to understand the underlying rationales of the existing rules and translate them to that new reality. Our main point here is that the two concepts of consumer control and wealth distribution are separate but intertwined. The current proposals seek to empower consumers to take control of their data, but they also treat privacy protection as a right that can be traded or sold. These purposes are equally worthy, but they cannot be combined; they need to be regulated separately and in a different manner. Adopting a commercial trade approach to privacy protection will ultimately undermine rather than protect consumer privacy. To complicate matters further, experience with the consent-based model for privacy protection in other countries (and especially under the GDPR) shows that this model is flawed and fails to achieve privacy protection in the first place. We will therefore first discuss why consent is not the panacea for privacy protection.

     

    Why Should We Be Skeptical of Consent as a Solution for Consumer Privacy?

    On the surface, consent may appear to be the best option for privacy protection because it allows consumers to choose how they will allow companies to use their personal information. Consent tended to be the default approach under the EU’s Data Protection Directive, and the GDPR still lists consent first among the potential grounds for processing of personal data.[12] Over time, however, confidence in consent as a tool for privacy protection has waned.

    Before the GDPR, many believed that the lack of material privacy compliance was mostly due to a lack of enforcement under the Directive, and that all would be well once the European supervisory authorities had higher fining and broader enforcement powers. However, now that these powers have been granted under the GDPR, not much has changed, and privacy violations still feature in newspaper headlines.

    By now the realization is setting in that non-compliance with privacy laws may also stem from a fundamental flaw in consent-based data protection. These laws are based on the assumption that as long as people are informed about which data are collected, by whom, and for which purposes, they can make an informed decision. The laws seek to ensure people’s autonomy by providing choices. In a world driven by AI, however, we can no longer fully understand what is happening to our data. The underlying logic of data-processing operations and the purposes for which they are used have become so complex that they can only be described by means of intricate privacy policies that are simply not comprehensible to the average citizen. It is an illusion to suppose that by better informing individuals about which data are processed and for which purposes, we can enable them to make more rational choices and to better exercise their rights. In a world of too many choices, the autonomy of the individual is reduced rather than increased. We cannot phrase it better than Cass Sunstein in his book, The Ethics of Influence (2016):

    [A]utonomy does not require choices everywhere; it does not justify an insistence on active choosing in all contexts. (…) People should be allowed to devote their attention to the questions that, in their view, deserve attention. If people have to make choices everywhere, their autonomy is reduced, if only because they cannot focus on those activities that seem to them most worthy of their time.[13]

    More fundamental is the point that a regulatory system that relies on the concept of free choice to protect people against the consequences of AI is undermined by the very technology the system aims to protect us against. If AI knows us better than we know ourselves, it can manipulate us, and strengthening the information and consent requirements will not help.

    Yuval Harari explains it well:

    What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket?[14]

    The reality is that organizations find inscrutable ways of meeting information and consent requirements that discourage individuals from specifying their true preferences and often make them feel forced to click “OK” to obtain access to services.[15] The commercial interest in collecting as much data as possible is so great that in practice every available trick is used to entice website visitors and app users to opt in (or to make it difficult for them to opt out). The design thereby exploits the predictably irrational behavior of people so that they make choices that are not necessarily in their best interests.[16] A very simple example is that consumers are more likely to click on a blue button than a gray one, even if the blue button is the least favorable option. Tellingly, Google once tested 41 shades of blue to measure user response.[17] Established companies, too, deliberately make it difficult for consumers to register their actual choice, and they seem to have little awareness of doing something wrong. By comparison, if you were to deliberately mislead someone in the offline world, everyone would immediately recognize this as unacceptable behavior.[18] Part of the explanation is that the digital newcomers have deliberately and systematically pushed the limits of their digital services in order to get their users accustomed to certain processing practices.[19] Although many of these privacy practices are now under investigation by privacy and antitrust authorities around the world,[20] we still see that these practices have obscured the view of what is or is not an ethical use of data.

    Consent-based data protection laws have resulted in what has been coined mechanical proceduralism,[21] whereby organizations go through the mechanics of notice and consent without any reflection on whether the relevant use of data is legitimate in the first place. In other words, the current preoccupation with what is legal distracts us from asking what is legitimate to do with data. We see this reflected in the fact that even the EU’s highest court has had to decide whether a pre-ticked box constitutes consent (surprise: it does not) and that the EDPB felt compelled to update its earlier guidance by spelling out whether cookie walls constitute “freely given” consent (surprise: they do not).[22]

    Privacy legislation needs to regain its role of determining what is and is not permissible. Instead of a legal system based on consent, we need to rethink the social contract for our digital society by having the difficult discussion about where the red lines for data use should lie, rather than passing responsibility for a fair digital society on to individuals, asking them to make choices they cannot oversee.[23]

    #930
    aa
    Participant

    reply 1
