In today’s rapidly evolving digital landscape, artificial intelligence (AI) has emerged as a transformative force, revolutionising various industries. AI systems are now integral components in industries such as health care, finance, marketing, and customer service. They enhance efficiency, improve decision-making processes, and provide valuable insights from vast datasets.
However, the rise of AI also raises crucial questions about ethical considerations and the need for regulations to ensure responsible and lawful commercial use. Ethical AI deployment is essential to protect consumers, uphold human rights, and ensure the technology’s benefits are accessible to all. Regulatory frameworks not only provide guidance but also instil trust among stakeholders, fostering innovation while safeguarding against potential abuses.
At the forefront of AI ethics is the European Union (EU). In 2023, the EU moved closer to adopting the EU AI Act, which is intended to hold AI systems to defined ethical and safety standards. What does this act mean for commercial AI? How could it affect your industry? DALIM SOFTWARE has shared an informative guide to help you understand the effects of the EU’s AI Act. Keep reading to learn more.
ChatGPT is perhaps the most popular chatbot used to generate content and code for various applications. However, it and other AI systems go beyond content generation: some are used for data analytics, fraud detection, and quality control. Thanks to these applications, AI users can enjoy the following advantages and more:
Of course, AI is still in its infancy and needs further refinement. ChatGPT and other generative AI tools have had their share of issues, resulting in disadvantages such as:
These disadvantages are driving governments and organisations around the world to regulate the use of AI, with consumer protection in the AI era as their main goal.
In the European Union, the development and deployment of AI for commercial use are subject to certain laws and guidelines. EU AI regulations aim to strike a balance between fostering innovation and protecting individuals’ rights and interests.
AI systems used in commercial settings must be transparent, meaning they should provide clear information about their capabilities and limitations. Companies are accountable for the AI’s actions, and they must be able to explain the rationale behind AI-driven decisions.
To avoid biases and discrimination, AI applications must be designed and tested to ensure fairness, especially regarding protected groups. Algorithms should not perpetuate or amplify existing societal biases.
One specific area under these regulations is protection for vulnerable populations, especially the elderly and those with physical disabilities. Special care and considerations must be taken to ensure that AI applications used in health care, support services, or other contexts do not discriminate against or harm these groups.
Businesses must adhere to data protection regulations like the General Data Protection Regulation (GDPR) when using AI to process personal data. Consent, data minimisation, and security are paramount.
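To make the data-minimisation principle concrete, here is a minimal sketch of how a company might strip a customer record down to only the fields an AI task actually needs before processing it. The record, the field names, and the allowed-field list are all hypothetical illustrations, not part of any real system or the GDPR text itself.

```python
# Hypothetical sketch of data minimisation: before handing a customer
# record to an AI service, keep only the fields the task needs.
# All field names here are illustrative assumptions.

ALLOWED_FIELDS = {"age_range", "purchase_category"}  # assumed task needs

def minimise(record: dict) -> dict:
    """Return a copy of the record stripped to the allowed fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

customer = {
    "name": "Jane Doe",           # personal data: excluded
    "email": "jane@example.com",  # personal data: excluded
    "age_range": "25-34",
    "purchase_category": "sportswear",
}

print(minimise(customer))
# {'age_range': '25-34', 'purchase_category': 'sportswear'}
```

Filtering at the boundary like this means downstream AI components never see personal identifiers they have no lawful basis to process.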
Human oversight is essential, especially in high-risk AI applications: there must be mechanisms for humans to review and intervene in AI-driven decisions. Oversight ensures that a responsible party can be held accountable for the actions and decisions of AI systems. When things go wrong or ethical issues arise, a human in the loop can take responsibility and make the necessary corrections.
The EU AI Act is a proposed regulation governing the development and use of artificial intelligence in the EU. It is the first comprehensive AI law in the world, designed to ensure that AI systems are safe, trustworthy, and used in a way that respects fundamental rights.
The AI Act classifies AI systems into four risk categories.
This category includes AI systems that pose a serious threat to fundamental rights or safety, such as AI systems that can be used for social scoring or mass surveillance. These AI systems are prohibited under the AI Act.
AI systems that could pose a significant risk to fundamental rights or safety, such as AI systems used in health care, education, or law enforcement, qualify as high risk. These AI systems must comply with a number of strict requirements, such as human oversight, transparency, and accountability measures.
This tier includes AI systems that pose a low or minimal risk to fundamental rights or safety, such as AI systems in chatbots or video games. These AI systems must comply with certain general requirements, such as transparency and fairness.
These AI systems pose no or negligible risk to fundamental rights or safety, such as those used in spam filters or simple calculators. These AI systems are subject to very few requirements.
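The four-tier scheme above can be pictured as a simple lookup from example systems to risk tiers. The tier names and example systems below mirror this article's summary, not the legal text of the AI Act, and the function is purely an illustration.

```python
# Illustrative-only sketch of the AI Act's four risk tiers as summarised
# above. Tier names and example systems follow the article, not the law.

RISK_TIERS = {
    "unacceptable": ["social scoring", "real-time mass surveillance"],
    "high": ["health care", "education", "law enforcement"],
    "limited": ["chatbots", "video games"],
    "minimal": ["spam filters", "simple calculators"],
}

def tier_of(system: str) -> str:
    """Look up which tier an example system falls under (illustration only)."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    return "unclassified"

print(tier_of("chatbots"))       # limited
print(tier_of("spam filters"))   # minimal
```

In practice, classification under the Act depends on the system's intended purpose and context of use, not a fixed list, so real compliance work requires a legal assessment rather than a lookup table.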
The AI Act also establishes several other requirements for AI systems, such as the following.
The European Commission proposed the initial regulatory framework for AI in April 2021. As of this writing, the European Parliament and the Council of the European Union are negotiating the AI Act, with finalisation and adoption anticipated in late 2023 or early 2024. A transition period of 18 months will follow before the EU fully enforces the new law.
One area of focus for the EU AI regulations is biometrics. Biometric identification uses unique physical or behavioural characteristics to identify individuals. One prominent method is facial recognition, which identifies people by their facial features.
Facial recognition technology is particularly controversial due to its potential for mass surveillance. Facial recognition systems can be used to track and monitor people in public places, such as streets, parks, and airports. This usage raises serious concerns about privacy and freedom of movement.
The EU has taken a strong stance on the regulation of biometric identification, especially facial recognition. The proposed EU AI Act would ban the use of facial recognition for real-time mass surveillance in public places. EU lawmakers cited reasons such as:
The EU’s stance on biometric identification is in line with its broader commitment to protecting human rights and fundamental freedoms. The EU believes that new technologies, such as facial recognition, should be used in a way that respects human rights and does not harm individuals or society.
Most AI programs in marketing and communication fall into the limited risk category of the EU AI Act. Companies that rely on such systems won’t feel the impact of the AI Act as much as others. Still, it’s one thing to use AI and another to use it to its full potential.
As businesses begin to incorporate AI into their marketing campaigns, some will inevitably stumble over the public’s current reception of AI. One example is the UNDIZ lingerie billboard ads, the first AI-driven marketing campaign in France. The company used AI-generated images, later retouched by human artists to correct flaws. Even so, French law required the company to add disclaimers that the images were AI-generated.
The initial response to the AI ads was negative. Users on Twitter, Facebook, and other social media platforms described the images as “unnatural” and creepy. Others pointed out how such applications could upend the careers of professionals such as models, graphic designers, and creative directors. Amid this negative publicity for AI, the UNDIZ campaign drew mixed reactions overall.
What does this public reaction mean for AI-driven marketing? What can companies do to reach an audience effectively using AI?
The public still mistrusts AI for many reasons, including:
Companies and regulators can take several steps to build public trust in AI and communicate about it effectively. These include:
There has also been a move to introduce a distinct symbol for AI-generated images. Such images can be very realistic, making it difficult for people to distinguish them from genuine photographs. A recognised symbol would help protect consumers from misleading imagery.
Such a symbol would also promote transparency and accountability. By labelling AI-generated images, companies show that they are open about their use of AI, which helps build trust with both consumers and regulators.
Balancing the power of AI in commercial use with ethical considerations is a complex challenge. The EU AI Act is a strong first step that other countries and organisations around the world can emulate. Only through a global effort can AI become a tool the general public need not fear. Through efforts like the EU AI Act, both AI and public sentiment about it may finally improve.
For companies, we encourage you to stay up to date with regulatory changes and to make ethical considerations a top priority when developing and releasing AI solutions or products. These actions can help improve the public perception of AI, safeguard vulnerable populations, and minimise risks.