Andrew Wheatley | AI and data protection conundrum - Part 1
If you have an iPhone, iPad, Android phone or tablet, Fitbit, Apple Watch, Google Home, or Alexa device, your personal data have probably been shared, in some cases without your knowledge, with myriad business entities.
Not only have your data been shared, but sophisticated systems analyse those data to make decisions about you, decisions of which you may then become the victim.
This article is the first of a two-part series that I hope will put the importance of data protection laws squarely in the sights of our citizens, especially with the advent of emerging technologies like artificial intelligence and machine learning.
Over the past 18 months, I have been able to participate in, and contribute to, robust discussion on the Data Protection Bill currently before a joint select committee of Parliament - a bill that will eventually lead to a law that will, for the first time, give a Jamaican citizen control over his or her own personal data.
Data in today's age are largely digital, very portable, and extremely valuable. Even more important is that data, without the right laws, in the wrong hands, can lead to significant damage to a person's ability to go about his daily life.
The effect that your personal data have on the seemingly mundane exercise of going about your daily life is often grossly misunderstood, or simply taken for granted, because many of the operations performed on your data would shock you.
We have seen many instances of data breaches in recent months on popular social-media platforms. Only when this happens do the users of these platforms and services become concerned about the state of their data and the redress that they have or do not have under the law.
Perhaps one of the most striking examples of a data-privacy concern involves sophisticated, biometrically equipped smartphones. Many smartphone users, including me, use the biometric features on these devices to secure access to them conveniently.
Strangely, though, a look of utter shock and puzzlement overcomes almost anyone who is asked, "Do you know where that fingerprint of yours is being stored?" Perhaps the same question is going through your mind as you read this.
It has always been curious to me that many people who question, or are deeply concerned about, providing their biometric information to the Government, an entity they know and can query, are perfectly at ease providing the same information to an entity like Apple or Google, or perhaps a foreign power, with no knowledge of how that information is then used or stored. Bear in mind that these entities are not bound by any law in Jamaica, and the laws of the jurisdictions in which they are established do not apply to, or provide sound redress for, citizens of Jamaica.
I give these examples because this has been the modus operandi for many of us for many years, and I want to now provide an insight into a much scarier reality that has existed for a few years but that will become mainstream sooner than we think.
I am talking about artificial intelligence and machine learning and the effect that these technologies will have on our lives when we consider how they will be able to treat with our personal and private data. Hopefully, this discussion will make clear the urgent need not only for data protection laws, but also for a more efficient and quicker process for creating, reviewing, and changing laws in a digital age.
AI and Machine Learning
Techopedia defines AI as an area of computer science that emphasises the creation of intelligent machines that work and react like humans. AI has been around for quite some time, but with the exponential advances in computing power in recent years, it has seen a rise in use and popularity.
Machine learning is a field of artificial intelligence that uses statistical techniques to give computer systems the ability to 'learn' from data without being explicitly programmed.
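To make that definition concrete, here is a toy sketch, with invented data, of what "learning from data without being explicitly programmed" means: instead of a programmer hand-coding a rule, the system derives one (here, a straight line) from example data and then uses it to predict.

```python
# Toy illustration of machine 'learning': fit a line y = a*x + b to example
# data points by ordinary least squares, then use the fitted line to predict.
# The data below are invented purely for illustration.

def fit_line(points):
    """Return slope a and intercept b of the least-squares line."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# 'Training data': pairs of (hours of daily phone use, ads clicked).
data = [(1, 2), (2, 4), (3, 6), (4, 8)]
a, b = fit_line(data)

def predict(hours):
    """Apply the learned rule to a new, unseen input."""
    return a * hours + b
```

No human wrote the rule "ads clicked is twice the hours of use"; the system inferred it from the data, which is exactly why the quality and provenance of the data matter so much.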
Some of the most popular AI/ML systems include IBM's Deep Blue, IBM Watson, and Google Brain, and companies like Amazon, Google, Netflix, Salesforce, Facebook, and Twitter all use artificial intelligence and machine learning to deliver better, more efficient services and user experiences.
Why should this concern you? Why would this be a bad thing? After all, if AI and ML help us to get what we want quickly in a convenient and efficient manner, then count me in! In fact, that might be a really good idea when it comes to provisioning government services and making that more efficient, too.
It turns out that the issue lies not so much with the benefits the technology delivers as with the way in which it arrives at them.
Machine-learning systems utilise huge amounts of data to come to conclusions about the subjects of the data they are interrogating. In many instances, the data do not come from a single source, and as such, a composite and extremely detailed data set, or profile, of a single individual can be created by a machine-learning system from seemingly unrelated data. Beyond those inferences, conclusions can be drawn, and actions taken, in relation to an individual based on this composite profile.
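The mechanics of building such a composite profile are simple. The sketch below, using invented records and a made-up email address, shows how three services that seem unrelated can be linked on a single shared identifier into one detailed picture of a person.

```python
# Minimal sketch of profile linkage: three 'unrelated' data sets, all keyed
# by the same identifier (an email address), merged into one composite profile.
# All names and records here are invented for illustration.

fitness_app = {"jdoe@mail.com": {"avg_steps": 2100, "sleep_hrs": 5.2}}
shopping_site = {"jdoe@mail.com": {"recent_buys": ["energy drinks", "snacks"]}}
map_history = {"jdoe@mail.com": {"frequent_area": "Half-Way Tree"}}

def build_profile(email, *sources):
    """Merge every record that shares the same identifier into one profile."""
    profile = {"email": email}
    for source in sources:
        profile.update(source.get(email, {}))
    return profile

profile = build_profile("jdoe@mail.com", fitness_app, shopping_site, map_history)
# One join later, a single record describes this person's sleep, spending
# habits, and whereabouts: raw material for inferences none of the three
# services could have made alone.
```

Real systems do this at vastly greater scale and can link records even without a shared identifier, by matching patterns across data sets, which is precisely what makes the legal treatment of derived data so important.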
The important question now becomes: How will personal data that are derived from assimilation and analysis using methods like machine learning be treated by the proposed legislation, and what will the effects be for all of us and for the associated technologies?
I will take this up in Part 2.
- Dr Andrew Wheatley is a research scientist and senior lecturer in basic medicine and biotechnology and is the holder of two international and two local patents and has published extensively in peer-reviewed international journals.