
The success of ChatGPT and other intelligent applications has created a public impression that human work will soon be significantly eased and that, in the future, everything will be automated, with little awareness of the problems these programs have and how easily they can make wrong decisions. Given that artificial intelligence will undoubtedly remain part of our everyday lives, it is important to be informed about its benefits, but also prepared for the risks it brings, writes Portalb.mk.
Cases of malfunctioning intelligent programs have become more frequent in recent years, and in response, the EU and the Council of Europe have adopted a large number of documents and proposals to raise awareness of the threat these technological products pose to human rights.
Research by Dr. Igor Kambovski and M.A. Elena Stojanovska on the effects of new technologies, with a special focus on artificial intelligence, states that using AI in decision-making carries a serious risk of discrimination, especially given that it often cannot be explained how an intelligent program reached a certain conclusion.
“For example, banking customers are often unaware that decisions about their credit applications are made by AI systems, not humans. Moreover, even when customers know that their applications are being decided by an AI system, they do not receive an adequate explanation of why an application was rejected, so it is understandably difficult for them to determine whether an algorithmic decision is discriminatory or not,” the research states.
A similar discussion took place last year in Skopje, with representatives from FITR, the Association for Technology and Internet of Romania, Partners for Democratic Change from Serbia, and European Digital Rights from Belgium, where some of the most notorious cases of malfunctioning intelligent software were presented.
“Let’s not forget the scandal that happened in the Netherlands with social assistance, which involved a very simple artificial intelligence system. The system flagged people as lying when applying for social assistance. It analyzed things like residence, ethnicity, and so on. As a result, some people were unlawfully deprived of their right to social assistance, others had their children taken away, and some even committed suicide,” said Ella Jakubowska, Senior Policy Advisor at European Digital Rights from Belgium.
Of course, Balkan countries have also noted the growing popularity of artificial intelligence products, and with it the increased risks of their use. In 2021, North Macedonia formed a working group tasked with developing a National Strategy for Artificial Intelligence, to help people who want to learn about and work with artificial intelligence. Within the framework of this strategy, many local start-ups are to be given the opportunity to realize their ideas and projects through appropriate training and access to modern equipment. The working group includes representatives of Aspigel, Web Factory, Masit and Piksel, the Ministry of Economy, Association Konekt, the Metamorphosis Foundation, the Faculty of Electrical Engineering and Information Technologies, as well as the Ministry of Information Society.
Why is it difficult to create an intelligent program that does not discriminate?
Contrary to various theories circulating among the general public, the developers of artificial intelligence do not deliberately “set up” intelligent programs to work to the detriment of society, nor do they secretly steer their decisions; that would amount to a conspiracy against humanity. The truth is that the logic of an intelligent program is not “written”: there are no lines of code that say “if X is Macedonian, ignore them.” Most of the work lies in how the data is prepared.
If we wanted to train an intelligent program that selects future employees for a company, then in the simplest scenario we would need enough data to create at least one statistical table (70% of managers are over 30 years old, 13.7% are from Europe, 23% are blondes…) and offer it to the AI. Such a table, created after collecting data from, say, the CVs of 10,000 employees, allows the artificial intelligence to “learn” relationships such as: if the candidate is over 30 years old, they are suitable to be a manager.
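The statistics-then-rule idea can be sketched in a few lines of Python. This is a toy illustration only; the records and the 30-year threshold are invented for the example:

```python
# Toy sketch: derive a naive screening "rule" from aggregate statistics.
# The records below are invented for illustration.
records = [
    {"age": 45, "is_manager": True},
    {"age": 29, "is_manager": False},
    {"age": 38, "is_manager": True},
    {"age": 52, "is_manager": True},
    {"age": 24, "is_manager": False},
]

managers = [r for r in records if r["is_manager"]]
share_over_30 = sum(r["age"] > 30 for r in managers) / len(managers)
print(f"{share_over_30:.0%} of managers are over 30")  # prints "100% of managers are over 30"

# A naive system turns that statistic straight into a decision rule:
def looks_like_a_manager(candidate_age: int) -> bool:
    return candidate_age > 30
```

The danger is visible even at this scale: the rule encodes a correlation in past hiring, not a fact about managerial ability.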
The first obstacle in such a situation is the prejudice that society already holds, which is, of course, reflected in the data. For example, if only 14% of the managers in the table are women, the artificial intelligence will learn the bias that women are unsuitable for managerial roles, and such prejudices are difficult to eliminate, because we cannot simply fabricate balanced tables.
So, one way AI developers deal with these problems is by attaching an importance indicator (a weight) to each factor. In our hypothetical case, regarding age and gender, we can “teach” the AI to give more importance to age by assigning it a higher importance indicator.
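In code, an importance indicator is simply a numeric weight multiplied into each factor. A minimal sketch, where the factor names, encodings, and weight values are all invented for illustration:

```python
# Toy sketch: "importance indicators" modelled as numeric weights on factors.
# Factor names, encodings (1 = factor present), and weights are invented.
weights = {"age_over_30": 0.9, "gender_female": 0.1}  # age weighted far higher

def score(candidate: dict) -> float:
    # Weighted sum of the candidate's factors.
    return sum(weights[f] * candidate.get(f, 0) for f in weights)

# Because age carries most of the weight, gender barely moves the score:
print(score({"age_over_30": 1, "gender_female": 1}))  # ≈ 1.0
print(score({"age_over_30": 1, "gender_female": 0}))  # ≈ 0.9
```

Shrinking the weight on a sensitive factor toward zero is one crude way to limit its influence on the final decision.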

However, a neural network contains millions of such factors, all interconnected. Can we predict how changing one of them, or its importance indicator, will affect the final result?
This “butterfly effect” inside a neural network is the second obstacle that complicates the development of an effective intelligent application.

Even for AI developers, the machine learning process is random
What actually happens when an intelligent program is trained is that importance indicators are randomly changed and factors are randomly ignored. Age, for example, may be dropped at random, or its importance indicator reduced, and after each random configuration the program is tested to see whether it works well. This means its programmers do not directly control what the intelligent program learns and cannot predict it.
Of the data needed to train the AI, a portion is set aside to test what it has learned and how well it performs, but even this does not mean the program is ready for the market. The AI looks for similarities and differences in the data, but no one can predict which similarities and differences it will find. For example, from a large number of applications it may learn that most applicants who became successful managers began their descriptions with the words “I am,” and consequently rate any applicant who opens their CV that way as highly suitable. From photos of the company’s lawyers, it may learn not facial features but the lighting, the contrast, or the length of hair. This is why very smart programs can be 100% effective at making decisions on the data prepared for them, yet still fail in the market.
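The holdout idea (reserving part of the data to test what was learned) and its blind spot can be sketched together. The CVs and labels below are invented; the “starts with ‘I am’” cue mirrors the spurious pattern described above:

```python
# Toy sketch of a holdout split: part of the data is reserved for testing.
# CVs and labels are invented for illustration.
cvs = [
    ("I am a results-driven leader", 1),
    ("I am an experienced organiser", 1),
    ("Worked five years in logistics", 0),
    ("Graduated in economics", 0),
    ("I am passionate about teams", 1),
    ("Seeking a managerial role", 0),
]

train, test = cvs[:4], cvs[4:]   # simple 4/2 holdout split

# "Training": the system notices that, in `train`, successful managers
# tend to start their CVs with "I am", and adopts that as its rule.
def predict(text: str) -> int:
    return 1 if text.startswith("I am") else 0

test_accuracy = sum(predict(t) == y for t, y in test) / len(test)
print(test_accuracy)  # prints 1.0: perfect on the holdout set...
# ...yet the rule has learned nothing about management ability,
# so it can still fail badly once deployed.
```

Because the holdout data comes from the same source as the training data, it shares the same quirks, so a spurious rule can pass every internal test and still collapse on real-world applicants.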
Jashari: The challenge lies not in artificial intelligence systems, but in people themselves
Bardhyl Jashari, Director of the Metamorphosis Foundation and also part of the National AI Strategy working group, stated that the database with which the program is trained is much more important than the learning model. According to him, data is the foundation on which artificial intelligence systems are built and directly influences and shapes the capabilities and results of these systems.
“Imagine a house: you can have a great architect and a skilled construction team, but if the foundation is weak, the entire structure is at risk. Data quality plays the same role: even the most sophisticated artificial intelligence algorithms will not be able to overcome the shortcomings of incomplete data,” says Jashari.

Jashari adds that the internet is full of discriminatory content, prejudice and hatred, and if, when designing programs that use artificial intelligence, we allow the system to learn from this data uncontrollably, we will get problematic results. Therefore, the data used to train an artificial intelligence-based system must be comprehensive and unbiased.
According to him, this can be done by collecting data from various sources and using techniques to detect and remove biases within the data set used. Then, the AI algorithms themselves can be modified to be fairer and remove discriminatory biases in the decision-making process.
“In this process, transparency is key. Making AI systems more transparent allows for scrutiny and identification of potential biases. Regular audits and reviews can help identify and address discriminatory issues in AI systems before they cause harm,” he says.
Jashari stressed that AI should not completely replace human judgment and assessment, especially on sensitive topics. However, according to him, even human judgment is not always free from bias, which makes this mission difficult, but not impossible.
“The challenge lies not in artificial intelligence systems,” Jashari said, “but in people themselves with their prejudices and values.”