The Path From Enigma To ChatGPT: Artificial Intelligence And Data Protection Law

If Alan Turing had not posed the question “Could machines think?” in the early 1940s, an era we could fairly describe as dreadful, we would probably be discussing very different topics today. In today’s world, which develops constantly and where the pace of globalization is almost four times what it was in 1990, it is not easy to stay away from technology, but it is easy to organize our workflows around its requirements. And while it is amazing to see the face and hear the voice of someone several time zones away, order food with one click, or buy a plane ticket in seconds, it may already be too late to ask what the artificial intelligence performing these operations wants from us.

Still, it is time to add a companion to Alan Turing’s question.

“Machines can think. So, can we protect our data from the speed at which machines think?”


Artificial intelligence is defined as the ability of machines, unlike the natural intelligence found in humans and animals, to work through complex reasoning tasks and answer questions by applying mathematical algorithms. Its starting point is the Second World War, which everyone remembers with sadness.

During World War II, the Nazis’ Enigma machine carried the most sensitive communications of the period behind codes that were extremely difficult to break. Although many mathematicians and scientists worked day and night to decipher Enigma, they managed to read only unimportant messages, while the complex Kriegsmarine traffic resisted them. News of German victories kept landing on newspaper front pages and sapped the morale of soldiers in the thick of the fighting, even as people from almost all over the world tried to crack the machine. The new decryption center at Bletchley Park, meanwhile, recruited Cambridge’s young talent, Turing, who broke the code and, according to some historians, helped end the war two years early.

The first computer prototypes that Turing built on hypotheses and Boolean algebra sketched the main lines of today’s concept of machine intelligence. The Turing Test, which he devised in 1950, helped John McCarthy and Marvin Minsky coin the term “artificial intelligence” for a conference proposal in mid-1955. Later, Terry Winograd’s SHRDLU, a program for natural-language understanding; Raj Reddy’s pioneering work on speech and natural language processing; and Judea Pearl’s “Probabilistic Reasoning in Intelligent Systems” became the milestones that paved the way to the artificial intelligence chatbots we talk to today.

By the time we reached the new millennium, artificial intelligence had begun to evolve into a different dimension with the technology giants’ new robotics race. Today we know that machines can think and even carry out automated operations for us. This rapid development, however, has brought with it a bigger problem than we ever imagined: protecting our data. The unpredictable growth in the volume of our transactions on online platforms, transmission times shrinking to milliseconds, and the disappearance of many lines of business and their replacement by systems built with artificial intelligence all show, in the end, that our data matters more than we think. The Personal Data Protection Law was born of precisely this understanding: in such a world, data is the new oil.


Personal data protection laws emerged as an inevitable necessity as the number of computer users grew. The “Hessen State Data Protection Law,” prepared and put into effect in the German state of Hesse in 1970, has served as the pioneer of the many regulations created since.

In Turkish law, the right to the protection of personal data, recognized as a fundamental right through the 2010 addition to Article 20 of the Constitution, encompasses many entitlements: requesting the protection of one’s personal data, being informed about that data, being able to access it, and requesting its correction or deletion.

Law No. 6698 on the Protection of Personal Data, which entered into force in 2016, both reinforced these rights and introduced specific obligations for every kind of organization that processes, stores, and uses data by automatic or non-automatic means. The Law emerged as a reflection of Directive 95/46/EC of October 24, 1995, on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Although that Directive was shelved once the General Data Protection Regulation entered into force in Europe, it remains the foundation of the framework we discuss today.

Artificial intelligence and machine learning, for their part, are the principal consumers of the very data these regulations try to protect. They are the fields in which applications are built with data-analytics methods, realized through algorithms, and, as a result of the related work, designed to make our lives easier. As artificial intelligence technology develops in every ecosystem you can think of, from our advertising preferences to wearable devices, the amount of personal data collected by the organizations that use, produce, or develop it will increase in direct proportion, because the working principle of these algorithms, in its simplest form, is “the more data, the more accurate the results.” These data sets will multiply, form big data, and open the door to an era in the near future in which you will be worth exactly as much as the information you provide.
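The “more data, more accurate results” principle can be seen even in the simplest statistical setting. The toy sketch below (plain Python with made-up numbers, not any production algorithm) estimates a single unknown proportion and shows how the estimation error tends to shrink as the sample grows:

```python
import random

def estimation_error(n_samples: int, true_mean: float = 0.7, seed: int = 42) -> float:
    """Estimate an unknown proportion from n samples; return the absolute error."""
    rng = random.Random(seed)
    # Draw n yes/no observations whose true rate is `true_mean`.
    draws = [1 if rng.random() < true_mean else 0 for _ in range(n_samples)]
    estimate = sum(draws) / n_samples
    return abs(estimate - true_mean)

# The estimate generally tightens as the dataset grows.
for n in (10, 1_000, 100_000):
    print(n, round(estimation_error(n), 4))
```

The same logic scales up to model training: each extra data point narrows the gap between what the algorithm believes and what is actually true, which is exactly why data-hungry organizations keep collecting.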

The exponential growth of data collection brings a great risk with it. For this reason, data scientists around the world argue that the raw data feeding these artificial intelligence algorithms must be used without prejudice, in ways that do not produce discrimination and that preserve fairness, equality, and privacy.


The biggest problem that artificial intelligence technologies create in the context of data protection law comes down to one question: how do we ensure privacy and confidentiality? Technology’s revolutionary development has created an age of its own, one that makes the resources pulled from the ground look old, and the requirements of this new age rest on unfamiliar dynamics.

Chatbots that interpret contracts in place of a lawyer, legal technologies that pass judgment with artificial intelligence models, systems that propose treatment as if they were doctors, and more all need only one thing to perform these operations: our personal data!

We have already noted that personal data is protected under many national and international laws. Yet regulation alone cannot prevent the data breaches that keep emerging, or the interpretation of the data accumulating in databases by unauthorized persons. In any application we download to our phones for free, the permissions we click through without reading can grant access to our camera, our contacts, and far more confidential information. Because, as the film The Social Dilemma puts it, “If you don’t pay for a product, you are the product.”

 Whether we pay or not, it is our most basic right to demand protection from all these unauthorized transactions, data breaches, the sale of our information on the Dark Web, and more. We can only do this with privacy and confidentiality procedures.

Privacy and confidentiality are the starting points of every data protection regulation in the world, above all the General Data Protection Regulation. The privacy of private life is a fundamental human right, one our Constitution also acknowledges, and the ability to decide who may collect and use our personal data is bound up with individual autonomy. For all these reasons, artificial intelligence systems that process personal data must mitigate the risks they pose to the rights and freedoms of individuals and must deliver privacy and security. That will only be possible through the correct and effective application of the principles of lawfulness, fairness, transparency, accountability, facilitation of individual rights, security, and data minimization.


Artificial intelligence systems, and automation systems generally, must have a legal basis in order to process personal data. The legal grounds for processing under the KVKK (Law No. 6698) are listed in the second paragraph of Article 5. In some cases, however, the grounds set out in the same Law’s exception provisions may also apply.

Many views in the doctrine propose different legal grounds depending on how artificial intelligence is integrated into a given system and kept running. For example, a system a lawyer uses to revise contracts and a system into which a doctor enters a patient’s data for consultation will not fall within the scope of the same legal bases.

Just recently, a system used by police in the United States led to the arrest of a person who had nothing to do with the crime, all because of a wrong decision the system made. When it is this obvious that artificial intelligence, one of the most debated technologies of our time, cannot always be fair, will we continue to use it?

Yes, of course! Because if we can keep the data that artificial intelligence draws on statistically accurate and up to date, ensure that in a corridor as deep as the internet people reach the truth rather than an echo of their own ideas, and ultimately bring the technology to a level that meets reasonable human expectations, we can avoid the negative consequences to a large extent.

Finally, let us touch on the importance of keeping data processing transparent in this context. Corporate transparency is also what lets us hold companies accountable for their actions. In the case of artificial intelligence, transparency means being able to ask: “Where did you get this information? What sources did you draw on? Can its accuracy be verified?” It is a sine qua non of transparency that the answers be short, concise, understandable, and easily accessible.


Accountability is a principle that applies not only to personal data protection but to almost every economic field. In Turkish law, for example, legal entities operating under the Capital Markets Board (“Sermaye Piyasası Kurulu”) must be accountable for their information-society services. Under the personal data protection legislation, artificial intelligence automation systems must likewise account for many matters: which data were used in performing a transaction, and whether those data were necessary, sufficient, or excessive.

When calculating risk, organizations should consider whether they can account for the data they process, first to the individuals concerned and then to the authorized supervisory bodies. Otherwise, results incompatible with the data minimization principle will follow.


Article 11 of the Law enumerates the rights of data subjects and requires that these rights be set out in the texts prepared for them (the clarification text). The data subject’s rights are likewise addressed in Articles 13 and 21 of the General Data Protection Regulation.

It is an obvious fact that artificial intelligence automation systems make it difficult to protect data, to maintain its continuity, and for the persons concerned to exercise their rights. Even so, individuals can obtain clear and transparent information about their processed data; learn the third parties, at home or abroad, to whom their personal data has been transferred; request the correction of records they believe to be inaccurate; and, where the data is no longer necessary for any activity, request its deletion, destruction, or anonymization.

In Europe especially, the right to be forgotten (deletion, destruction, anonymization, rendering inaccessible) is invoked more often than the others, and when the courts find an individual’s request reasonable, the relevant data must be erased from the databases.
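To make the anonymization idea above concrete, here is a minimal Python sketch of pseudonymization: replacing a direct identifier with a salted hash and dropping free-text fields. The field names and salt are hypothetical, invented purely for illustration, not drawn from any statute:

```python
import hashlib

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace the direct identifier with a salted hash and drop free-text fields.

    All field names here are hypothetical, for illustration only.
    """
    out = dict(record)
    digest = hashlib.sha256((salt + record["name"]).encode()).hexdigest()
    out["subject_id"] = digest[:16]  # stable pseudonym for the same salt + name
    del out["name"]                  # direct identifier removed
    out.pop("notes", None)           # free text can leak identity, so drop it
    return out

record = {"name": "Ada Lovelace", "age": 36, "notes": "contract review client"}
print(pseudonymize(record, salt="s3cret"))
```

Note that under the GDPR, pseudonymized data is still personal data, because whoever holds the salt can restore the link to the individual; true anonymization requires that the link be irreversible.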


Data security is where all of these stories begin. The fact that artificial intelligence is fed from many different sources within milliseconds makes risk management difficult and compliance with the security principle nearly impossible. We know that machines learn from us quickly; we have rarely asked whether our permission was obtained for the data collected while those machines were being trained.

Using data that has been received, recorded, and stored to train artificial intelligence automation systems is only possible once appropriate security measures are in place. These measures are the totality of precautions taken against data being accessed by unauthorized persons, lost, or damaged.

Establishing the information security systems specified in the ISO standards is an important component of data security in artificial intelligence systems. The administrative and technical measures to be taken under the Law in this process cover the systems and services involved and the confidentiality, integrity, and availability of the personal data processed on them.
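One small, concrete slice of the integrity leg of these measures can be illustrated with a keyed hash: store a tag alongside each record so that later tampering is detectable. This is a sketch only; the key and the record are hypothetical, and a real system would keep the key in a key-management service rather than in the source code:

```python
import hashlib
import hmac

KEY = b"replace-with-a-managed-secret"  # hypothetical; never hard-code in practice

def sign(payload: bytes) -> str:
    """Compute an integrity tag so later tampering can be detected."""
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    """Constant-time comparison of the recomputed tag against the stored one."""
    return hmac.compare_digest(sign(payload), tag)

data = b'{"patient_id": 17, "diagnosis_code": "J45"}'
tag = sign(data)
print(verify(data, tag))                # True: record is intact
print(verify(data + b"tampered", tag))  # False: modification detected
```

Confidentiality and availability need their own measures (encryption at rest and in transit, access control, backups); an integrity tag only tells you that stored data has not been silently altered.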

While discussing the principles, we said that personal data must be processed for a specific purpose and on the necessary legal grounds. That context also explains data minimization: the use of processed data must be related to, and limited to, the purpose for which it is processed.

In terms of the data minimization principle, it is necessary to evaluate which data are actually suitable for the purpose of training the artificial intelligence, and only those data should be processed. If sufficient accuracy can be achieved with smaller datasets or by including fewer individuals, that option should always be preferred.
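As a sketch of how data minimization might be enforced in code: declare, for each processing purpose, which fields are allowed, and strip everything else before the data ever reaches the model. Every name below (the purposes, the fields, the sample record) is hypothetical:

```python
# Allowed fields per processing purpose -- hypothetical names, for illustration.
PURPOSE_FIELDS = {
    "appointment_reminder": {"patient_id", "phone"},
    "model_training": {"age", "diagnosis_code"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields needed for the stated purpose; drop everything else."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {"patient_id": 17, "phone": "555-0199", "age": 44,
          "diagnosis_code": "J45", "address": "Ankara"}
print(minimize(record, "model_training"))  # → {'age': 44, 'diagnosis_code': 'J45'}
```

The design point is that the allow-list is explicit and reviewable: anyone auditing the system can see exactly which fields each purpose justifies, which is the question the accountability principle asks.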


Although the development of the internet makes it easier to reach information, it makes it harder to protect our data. The process that began with World War II continues to unfold without pause. This globalization introduced us to the concept of big data and proved that every change brings pain with it. Although artificial intelligence technologies are still in their infancy, they can already analyze enormous datasets and have advanced far enough to shake people’s sense of their own professions. Yet because this development can, so far, only predict the ideas already in our minds, the risk factor rises and, in parallel, our sensitivity to privacy falls.

Within the framework of data protection law, we can shield ourselves from the problems of artificial intelligence technologies by complying with the relevant principles and taking the necessary technical and administrative measures. It is extremely important to start by accepting that automation systems execute transactions far faster than a human can think and act, to include lawyers in the rapid development of these technologies, to keep the process controllable, and to take the relevant privacy measures without exception. Because, as Garry Kasparov says in Deep Thinking:

“When you program a machine, you know what it can do. If the machine is programming itself, who knows what it might do?”
