For years, Kristijan Lazarev used outdated technology to navigate and orient himself in the world around him, relying on clumsy, robotic voices and limited, often inaccurate apps.
All this has changed with the latest technological inventions and the advancement of artificial intelligence.
“I’m going to take a picture of the laptop now, and the app takes 5-10 seconds to send the image. The advancement this year is that I can ask it specific questions about what’s in the photo. For example, I’m interested in what brand the laptop is, or what color it is,” says Kristijan as he listens to the screen reader on his phone describing the laptop he photographed ten seconds earlier.
Kristijan holds a degree as a professor of Macedonian language and literature, and is currently a student at the Faculty of Electrical Engineering and Information Technologies (FEIT). He is 29 years old and completely blind.
“If there is a piece of paper with text written on it, it will read it to me. I can tell it, ‘I don’t want the whole text, I want you to make me a short summary of what’s on the piece of paper.’ And it will do that,” he says.
In the conversation with Kristijan, the transformative potential of artificial intelligence for his life was evident. He described moving from what was once a “terrible” and annoying “robotic” voice to modern applications such as “Seeing AI” and “Be My Eyes,” which rely on AI to provide real-time assistance to people with visual impairments.

“It’s not perfect yet, but it’s progressing very quickly. For example, last year there was no option to be able to take a picture and interact with the picture like this. What it has now is literally interacting with the picture. I can ask it to explain every part of the photo to me,” says Kristijan, who himself writes articles and reviews about technological devices and aids for the blind.
The tools he uses are an example of the progress being made in terms of opportunities for the visually impaired. “Seeing AI,” developed by Microsoft, offers a variety of functions, including image recognition and description, text translation, and answering additional questions that users may have.
“Be My Eyes”, a volunteer-run platform, demonstrates the power of community in accessibility, connecting blind and visually impaired people with volunteers who help them perform everyday tasks.
With the capabilities brought by the latest version of one of the most popular AI chatbots, ChatGPT (the version built on the “GPT-4o” model), “Be My Eyes” recently introduced a virtual volunteer that uses GPT-4 technology. This virtual volunteer can offer almost the same level of context and understanding as a human volunteer.
Artificial intelligence for everyone
With the increasing popularity and complexity of artificial intelligence systems, more and more solutions are emerging that can help people with mental, physical, visual or hearing impairments.
A 2022 report by the World Health Organization (WHO) and UNICEF reveals that, by the end of the decade, more than 2.5 billion people worldwide will need one or more assistive devices such as mobility aids, hearing aids, or communication support applications.
In addition to text-to-speech technologies, which with the latest developments can change the tone or intonation of the voice based on what is being said and the user’s mood, artificial intelligence is also expected to make navigation and orientation applications better and more accessible.
The latest tools can also interpret sign language into text or speech, which will facilitate communication between people with and without hearing and speech impairments. There is actually such an example in North Macedonia–last year, three high school graduates presented their project–a smart glove, which translates sign language, i.e. hand movements, into text, allowing deaf and mute people to communicate with hearing and speaking people.
If you talk to large language models like OpenAI’s ChatGPT or Google’s Gemini, they will give you a series of examples of how the technology they are based on can make daily life easier for people with physical disabilities.
They will tell you (or rather, write) that this technology will also significantly improve warning systems: audio signals (such as fire alarms) can be converted into visual or vibrating notifications on smart devices, ensuring that people with hearing impairments are aware of the sounds in their environment, which is important for their safety.
Voice assistants like Siri, Google Assistant, and Alexa allow users with mobility impairments to control smart home devices, computers, or phones hands-free. While for some people this is just another feature of modern technology, for people with quadriplegia it is life-changing.
The artificial intelligence available today also offers personalized learning and can adapt content to the pace and learning style of people with cognitive disabilities. Thus, platforms like DreamBox and Lexia customize lessons based on the user’s performance.
“Things will change a lot for people with dyslexia. For them, for example, it is very tiring to read an entire book, and there are already applications that will help them. With the latest version of ChatGPT, you can upload an entire book and ask to extract the key points from the book in one or two pages and have the system read it out loud, which will give these people access to much more information than they have had before,” says Kristijan Lazarev in a conversation about the possibilities of AI.
Virtual therapists are another benefit of AI. Carefully trained chatbots already offer cognitive behavioral therapy and crisis intervention techniques. Apps like Woebot Health and Wysa already exist for this purpose. AI can also monitor users’ behavior and detect signs of distress (through changes in their typing patterns or social media posts) and alert assistants, caregivers, or professionals, which can be helpful for people with depression, anxiety, or post-traumatic stress disorder (PTSD).
Apps based on the latest technology also monitor users’ moods, recognizing visual cues and signs of anxiety in their voices. Combined with smartwatches that measure heart rate, such apps can recognize users’ emotions before they even become aware of how they feel.
Bias in AI development and the impact on accessibility and inclusion
And while artificial intelligence, obviously, will significantly improve accessibility for many, it is important to recognize that, without careful implementation, it can (un)intentionally exclude or discriminate against people with disabilities.
“Now I’m going to show you what accessibility and inaccessibility mean through a single app,” says Kristijan as he explains the functions of a mobile app for learning foreign languages.
“OK,” “Listen,” “Speak”–the screen reader announced the buttons in the app, before saying “Unlabeled” several times. Several of the buttons that Kristijan pressed did not have any labels added to them, which makes it difficult for visually impaired people like him to use the app smoothly.
“The application buttons themselves are unlabeled. And now, I don’t know what this button is for, because it’s unlabeled. The phone tries to recognize it, it tells me it’s some kind of icon with something for voice, but I don’t know what it is and what it’s for. That means an inaccessible application on the phone, the same thing with a computer. I’ll go into an application, for example a banking one, and the reader just tells me it’s some kind of button, but not what it’s for. If the developers don’t put a “labeled” tag on it, I won’t know what it’s for,” says Kristijan.
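The behavior Kristijan describes can be sketched as a simple fallback rule. This is an illustrative model only, not the actual implementation of any real screen reader such as TalkBack or VoiceOver, which compute an element’s accessible name from several sources defined by W3C specifications:

```python
def announce(control: dict) -> str:
    """Pick what a screen reader speaks for a UI control.

    Illustrative model: if the developer supplied a label, it is read
    as-is; otherwise the reader can only guess from the control's role,
    which is the "Unlabeled" experience Kristijan describes.
    """
    label = (control.get("label") or "").strip()
    if label:
        return label  # a developer-provided label is read as-is
    # No label: fall back to the control's generic role
    return f"Unlabeled, {control.get('role', 'button')}"

# A labeled button reads naturally; an unlabeled one is a mystery:
print(announce({"label": "Listen", "role": "button"}))  # Listen
print(announce({"label": "", "role": "button"}))        # Unlabeled, button
```

The fix on the developer side is correspondingly small: supplying the missing label string is usually a one-line change per control.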
AI algorithms can in some cases reinforce prejudices or stereotypes, leading to discriminatory outcomes for people with disabilities. One such example is the Hackney family from Pittsburgh, USA, whose daughter was taken away last year by child protective services, who accused the parents of neglecting their child.
The girl’s parents, Andrew and Lauren, both have developmental disabilities and suspect that an AI-based tool that is part of the Allegheny County child welfare system flagged them as at-risk because of their disabilities. They believe the system discriminated against them because of the way the algorithm assessed risk factors, namely Lauren’s ADHD and Andrew’s nervous system disorders, likely leading to higher risk scores. Access to mental health services also increased the overall risk score in the system, and the family, due to their disabilities, likely used such services.
Andrew and Lauren filed a lawsuit against the agency and several employees, and the US Department of Justice is investigating whether the algorithm discriminated against people with disabilities.

But artificial intelligence, by itself, does not discriminate, because AI-based systems learn and behave according to data created and input by humans. Accordingly, prejudice and discrimination do not originate from AI, but from the data on which these systems were trained, says university professor Ivan Chorbev.
“Artificial intelligence algorithms are now created by learning from datasets. Huge datasets from the past are collected from various systems; the algorithms learn from them and then behave according to what they have learned. The problem is that there are biases inherent in the datasets from which the artificial intelligence learns. When a dataset carries such inherent biases, the AI trained on it will have the same biases that were already built in by the people who made the decisions before. All the biases that existed in that dataset will also be present in the artificial intelligence,” says Professor Chorbev.
According to him, this is a complex problem, so humans need to intervene to correct the entrenched biases.
“Artificial intelligence should not be based strictly on learning from datasets that contain inherent biases; additional programmed components should be incorporated into the decision-making model to overcome those prejudices,” says the professor.
Regarding the discrimination embedded in AI systems, Kristijan Lazarev says:
“It’s the same with building sidewalks. If, while pouring the concrete, someone makes a small ramp, no one will have a problem–not parents with strollers, not travelers with suitcases, not people in wheelchairs. The problem, and the way to solve it, are similar.”
Ethical artificial intelligence
In addition to the inaccessibility of some applications, artificial intelligence also raises questions about the protection of fundamental human rights and ethical standards and postulates, which also affect people with disabilities, as a marginalized group.
In the document “Research on the Impact of New Technologies, With a Particular Focus on Artificial Intelligence, on Human Rights Online, and the Development of Ethical Standards for Protecting Human Rights on the Internet in the Context of Automated Decision-Making” by the Metamorphosis Foundation for Internet and Society, several ethical principles are highlighted, which are said to be “recognized as a starting point for the creation, application and use of artificial intelligence systems, and which should represent applicable, worthy and inviolable rules for preserving human dignity due to their safety, confidentiality and responsibility towards people.”
Among the ethical principles listed are: explainability and verifiability, dignity, prohibition of causing harm, fairness, and ethical and socially responsible use of data.
Regarding the prohibition of causing harm, the document states:
“The artificial intelligence system must comply with security standards, i.e. contain appropriate mechanisms that will prevent causing damage to people and their property. In the event that damage occurs, it must be repaired as soon as possible, and the injured party must be compensated in a manner determined by law.”
This section adds that special attention should be paid to the protection of sensitive categories such as the elderly, people with disabilities or children.
Privacy and consent regarding the collection and use of sensitive personal data are additional ethical dilemmas raised by the operation of AI systems. To function effectively, these systems require access to vast amounts of data, which, for people with disabilities, may mean collecting sensitive data such as health information, which may be vulnerable to misuse.
“All institutions, organizations, and companies involved in the development and use of artificial intelligence must ensure transparency in the use of artificial intelligence, that is, provide clear and concise information to users about whether and how their personal data is collected and processed when using a system that uses artificial intelligence. Additionally, users should be clearly informed for what specific purpose their personal data will be collected, stored, or otherwise processed, and be offered an explicit opportunity to give or withhold their consent,” the Metamorphosis document states.
Kristijan says he is not “overly” concerned about how information about him is collected, but adds that the competent institutions need to ensure that companies manage data responsibly.
The answer to the challenges lies in regulation, but companies and organizations also have responsibility
Regarding the problem of inaccessibility he faces, Kristijan says the responsibility lies with application developers, who should take care to make their products accessible to everyone by “adding a few more lines to their code” to make applications inclusive.
“Developers themselves need to develop the awareness that sometimes, their few seconds of work will save someone with a disability two or three hours and mean easier access,” says Kristijan.
He adds that developers should adhere to W3C standards, which are designed for interoperability, security, privacy, and accessibility.
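The “few more lines of code” Kristijan mentions often amount to one attribute per control. A minimal sketch of a developer-side check, using only Python’s standard library, that flags icon-only buttons with no accessible name in a hypothetical markup snippet (the real W3C accessible-name computation also considers `aria-labelledby`, image `alt` text, `title` attributes, and more):

```python
from html.parser import HTMLParser

class UnlabeledButtonFinder(HTMLParser):
    """Collects ids of <button> elements that have no accessible name.

    Simplified sketch: a button counts as named if it has an aria-label
    or visible text content; everything else the W3C "accname"
    algorithm considers is ignored here.
    """
    def __init__(self):
        super().__init__()
        self.unlabeled = []
        self._button_attrs = None  # attrs of the button being parsed
        self._button_text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "button":
            self._button_attrs = dict(attrs)
            self._button_text = ""

    def handle_data(self, data):
        if self._button_attrs is not None:
            self._button_text += data  # visible text inside the button

    def handle_endtag(self, tag):
        if tag == "button" and self._button_attrs is not None:
            named = (self._button_attrs.get("aria-label", "").strip()
                     or self._button_text.strip())
            if not named:
                self.unlabeled.append(self._button_attrs.get("id", "<no id>"))
            self._button_attrs = None

def find_unlabeled_buttons(html: str) -> list:
    finder = UnlabeledButtonFinder()
    finder.feed(html)
    return finder.unlabeled

# Hypothetical markup: two buttons are named, the icon-only one is not.
html_snippet = """
<button id="speak" aria-label="Start voice input"><img src="mic.png"></button>
<button id="listen">Listen</button>
<button id="mystery"><img src="speaker.png"></button>
"""
print(find_unlabeled_buttons(html_snippet))  # ['mystery']
```

Fixing the flagged button is exactly the kind of seconds-long change Kristijan describes: adding `aria-label="Play sound"` gives the screen reader something to announce.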
“If they respect those standards, their applications will be as accessible as possible, according to the capabilities of current technologies. Some technologies have not yet reached the point where everything is accessible, so we are talking about as accessible as possible, as the moment allows,” he says.
Furthermore, Kristijan believes that IT companies should employ people with disabilities in their application testing departments to ensure that their products are accessible to everyone.
“IT companies have QA [Quality Assurance] testers in their teams who are young and capable people, skilled with technology. These are people who do not have any disabilities and can easily navigate what their programmers have done. Instead of testing only with such people, they should include a broader, diversified category of testers, including people with disabilities and people from older generations, to ultimately ensure that their product can be used by everyone,” he says.
Regarding bias and discrimination, the Metamorphosis document states:
“AI systems can produce various forms of discrimination thanks to the fact that they learn from previous bad examples or because they are programmed to place individuals in a discriminatory position. To eliminate these forms of discrimination, it is necessary to correct AI systems based on examples of ideal behavior without discrimination. Stereotypical behaviors that lead to discrimination should not be repeated by AI systems, but must be corrected to abolish injustices and respect the rules of equality for all. All cases in which the functioning of the algorithm leads to discrimination, on any basis, must be effectively sanctioned.”
Kristijan says that no one should fear the rapid advancement of artificial intelligence, if its development is done carefully and responsibly. He believes that the benefits of its progress will be visible to everyone, and among those who will benefit the most will be people with disabilities.
“One of the things that blind people are most looking forward to is autonomous vehicles. And they wouldn’t be able to exist if AI didn’t get even better than it is now–if it didn’t start thinking by watching videos and training on videos. So, in the near to medium-near future, I think that will happen, those autonomous vehicles will become a reality, and we won’t have to depend on JSP [Skopje’s public transport company], which doesn’t work properly, with buses that don’t arrive on time and inaccessible public transport. That will be solved,” says Kristijan.
The European Union began working on regulating the new technology even before the emergence of ChatGPT, which marked a turning point in the development of generative artificial intelligence. In April 2021, the European Commission proposed the first EU regulatory framework for artificial intelligence. As part of it, artificial intelligence systems would be analyzed and classified according to the risk they pose to users–different levels of risk would mean more or less regulation.
After being approved last month, these rules became the first of their kind for artificial intelligence in the world.
The European Parliament, meanwhile, is prioritizing ensuring that AI systems in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. They should be supervised by humans to prevent negative outcomes. Parliament also wants to establish a uniform, technology-neutral definition of AI that can be applied to future AI systems.
The transformative power of artificial intelligence for people with disabilities is undeniable and offers unprecedented opportunities for inclusion. However, as the integration of these technologies into people’s daily lives continues, it is crucial to prioritize respect for human rights. Hence, it is important to strike a balance between the need for technological advancements and the need to respect rules, guidelines, and laws.
Respect for human rights must not be seen simply as a legal obligation that must be complied with by those working to develop artificial intelligence, but as one of the essential components for its wise use. AI systems that are developed with accessibility and inclusivity at their core will usher in a future in which technology truly serves everyone, leaving no one behind.
Link to the original text: Револуција во пристапноста: Трансформативната моќ на вештачката интелигенција за лицата со попреченост | Meta.mk
