Ensuring human rights is a necessary step, not only from a moral but also from a legal standpoint, when working with data in the digital world. Before launching a product based on artificial intelligence (AI), the security of personal data and the privacy of users must be tested. Failing that, the best way to ensure the protection of personal data is not to record it in databases at all, says artificial intelligence developer Armir Celiku in an interview with Portalb.mk.
With the development of technology and the creation of products based on artificial intelligence, human rights, such as privacy and protection of personal data, are being violated. To learn more about how developers of artificial intelligence products ensure the privacy of their users, Portalb.mk spoke with Artificial Intelligence Developer Armir Celiku.
In 2023, Armir and his colleagues, working as a team called “Ai4Good”, won first place in a hackathon with their chatbot “Sakhi”.
How does the entire process of creating a product based on artificial intelligence work? Walk us through it step by step, from the need to the idea and the implementation.
– Artificial intelligence is a very broad field, and I can’t speak for most of it. When it comes to large language models (LLMs), though, the first step is the idea. You usually need an idea for an application where such technology can do the job and where traditional options are not competitive in either functionality or price.
These are cases where you have a large body of text and need a solution that retrieves specific details quickly and accurately. For example, a company with many products that wants to give customers quick access to specific information about them. After that, you usually choose a particular language model based on the product requirements: whether you need a specific language, an advanced model, or a free model that meets your needs.
All these things are taken into account. Finally, you find a framework or a specific code library that you can adapt to your requirements to implement the application around the model.
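The model-selection step Celiku describes can be sketched in code. This is a purely illustrative example: the candidate models, their attributes, and the quality scores are hypothetical placeholders, not real products or benchmarks.

```python
# Hypothetical sketch of choosing an LLM by product requirements:
# language support, cost, and quality, as described in the interview.
# All model names and attribute values are invented for illustration.

CANDIDATES = [
    {"name": "large-proprietary-model", "languages": {"en", "sq", "mk"},
     "free": False, "quality": 9},
    {"name": "small-open-model", "languages": {"en"},
     "free": True, "quality": 6},
    {"name": "multilingual-open-model", "languages": {"en", "sq"},
     "free": True, "quality": 7},
]

def choose_model(required_language: str, free_only: bool) -> str:
    """Return the highest-quality candidate meeting the requirements."""
    viable = [m for m in CANDIDATES
              if required_language in m["languages"]
              and (m["free"] or not free_only)]
    if not viable:
        raise ValueError("no model satisfies the requirements")
    return max(viable, key=lambda m: m["quality"])["name"]
```

For instance, requiring Albanian ("sq") on a zero budget rules out both the paid model and the English-only one, leaving the multilingual open model.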
In this process, do you take human rights into account and consider whether the product may violate any of them, such as privacy, protection of personal data, or non-discrimination on various grounds?
– Yes, of course. In fact, it is a necessary step, both from an ethical and a legal perspective. When you pursue training in data analytics, data science, or any field that deals with data, these considerations are part of the curriculum.
How do you ensure that the systems you develop based on artificial intelligence do not violate any of the fundamental human rights such as privacy, freedom of expression, and non-discrimination?
– Fortunately, the engineers and large companies that create the base models we use invest effort and capital in this work on a scale an individual cannot afford. The idea is that safety and the protection of basic human rights are built into the model’s initial training, which is not something a single person does. However, while building the application, a lot of preliminary testing is still needed before the product goes into use, to verify its performance, security, and capabilities.

What measures are you taking to prevent bias and discrimination in AI algorithms like Medchat, especially when it comes to sensitive areas like healthcare?
– “Medchat” is simply a demonstration of the technology, with no intention of being released to the market as a standalone product. If such a product were released, the steps would include obtaining the necessary permits and licenses, consulting lawyers, testing with universities and medical institutions, and adding a disclaimer stating that the application cannot serve as a final advisor and that its advice should not be followed without consulting a doctor. That covers safety.
As for discrimination and bias, the app will not store data on individuals, precisely to avoid this problem. Ultimately, no model is 100% accurate, because the knowledge in it is human knowledge, with human bias built in. Whether and how these issues can be solved remains an open question, both technically and philosophically.
People who are not well informed sometimes, in trying to explain a situation, unintentionally reveal their personal data: name, surname, social security number, place of residence, sensitive health information. Given the data-driven nature of artificial intelligence, how do you manage and protect user data to prevent unauthorized access, misuse, or unintended consequences that might violate people’s right to privacy when they use your products?
– For a start-up that lacks the capital to employ a cybersecurity specialist, it is very difficult to protect user data and prevent unauthorized access. The only way is not to record user data in local databases at all.
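The approach described here, keeping conversation data only in process memory and stripping obvious identifiers before they go anywhere, can be illustrated minimally. A hedged sketch: the redaction patterns below are simplistic examples (a US-style ID number and an e-mail address), not a complete PII filter, and the class names are invented for illustration.

```python
import re

# Simplistic example patterns for obvious identifiers; a real product
# would need a far more thorough PII-detection approach.
ID_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")        # US-style SSN format
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before any processing."""
    text = ID_PATTERN.sub("[REDACTED-ID]", text)
    return EMAIL_PATTERN.sub("[REDACTED-EMAIL]", text)

class EphemeralSession:
    """Chat history kept only in process memory; nothing is written to disk
    or to a database, so there is no stored record to breach."""

    def __init__(self) -> None:
        self._history = []  # lives and dies with the process

    def add_user_message(self, text: str) -> str:
        clean = redact(text)
        self._history.append(("user", clean))
        return clean
```

The design choice matches the interview’s point: if no user data is ever persisted, a small team without a security staff has far less to protect.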
What efforts have you taken to include diverse perspectives in the design and development of AI-based products, including marginalized communities, to avoid increasing existing social prejudices and inequalities?
– This was the central theme of the Goethe-Institut scholarship that our team won. Democratizing AI and including the perspectives of marginalized communities is only the first step, and it is impossible without the direct involvement of those communities themselves, i.e., without their contribution to the model’s training data. This is clearly visible in ChatGPT’s capabilities in English compared to Albanian: there is simply far more material written in English. In other words, to be included, you have to get involved.
What strategies do you use to keep up with the latest ethical challenges and human rights implications when developing AI-based solutions, and how do you adapt your practices to address them?
– It is a field that changes not from year to year or month to month, but often from day to day. It requires general knowledge of the challenges, along with sustained focus and thought about how these technological changes affect humanity, as well as ourselves. In short, it is difficult to have a template or framework with a fixed strategy; you have to adapt as much as possible.
Author: Afije Sherifi
Link to the original post: Celiku: The safest way to protect personal data is not to register in local databases | Meta.mk
