In today’s rapidly evolving digital landscape, artificial intelligence (AI) has emerged as a transformative force. AI systems are now integral to industries such as health care, finance, marketing, and customer service, where they enhance efficiency, improve decision-making, and extract valuable insights from vast datasets.
However, the rise of AI also raises crucial questions about ethical considerations and the need for regulations to ensure responsible and lawful commercial use. Ethical AI deployment is essential to protect consumers, uphold human rights, and ensure the technology’s benefits are accessible to all. Regulatory frameworks not only provide guidance but also instil trust among stakeholders, fostering innovation while safeguarding against potential abuses.
At the forefront of AI ethics is the European Union (EU). In 2023, the EU advanced its AI Act, a proposed regulation meant to ensure that AI systems meet certain ethical standards. What does this act mean for commercial AI? How could it affect your industry? DALIM SOFTWARE has shared an informative guide to help you understand the effects of the EU’s AI Act. Keep reading to learn more.
Using ChatGPT and Other AI Systems for Commercial Use
ChatGPT is currently the most popular chatbot for generating content and code across various applications. However, it and other AI systems go beyond content generation: some are used for data analytics, fraud detection, and quality control. Thanks to these applications, AI users can enjoy the following advantages and more:
• Enhanced productivity and efficiency
• Improved decision-making through data-driven insights
• Cost reduction and resource optimisation
• Improved customer experiences through personalised interactions
• Automation of repetitive tasks
Of course, AI is still in its infancy and needs further polishing. There have been some issues with ChatGPT and other generative AI software, resulting in disadvantages such as:
• Privacy concerns and data security risks
• Potential job displacement
• Bias and fairness issues in AI algorithms
• Ethical dilemmas in decision-making processes
• Regulatory compliance challenges
These disadvantages are driving governments and organisations around the world to regulate the use of AI. Protecting consumers in the AI era is the main goal for these groups.
General Rules for AI’s Commercial Use in the EU
In the European Union, the development and deployment of AI for commercial use are subject to certain laws and guidelines. EU AI regulations aim to strike a balance between fostering innovation and protecting individuals’ rights and interests.
Transparency and Accountability
AI systems used in commercial settings must be transparent, meaning they should provide clear information about their capabilities and limitations. Companies are accountable for the AI’s actions, and they must be able to explain the rationale behind AI-driven decisions.
Non-Discrimination and Fairness
To avoid biases and discrimination, AI applications must be designed and tested to ensure fairness, especially regarding protected groups. Algorithms should not perpetuate or amplify existing societal biases.
One specific area under these regulations is protection for vulnerable populations, especially the elderly and those with physical disabilities. Special care and considerations must be taken to ensure that AI applications used in health care, support services, or other contexts do not discriminate against or harm these groups.
Human intervention is essential, especially in high-risk AI applications: there must be mechanisms for human review of and intervention in AI-driven decisions. Human oversight ensures that a responsible party can be held accountable for the actions and decisions of AI systems. When things go wrong or ethical issues arise, a human in the loop can take responsibility and make the necessary corrections.
The AI Act in the EU
The EU AI Act is a proposed regulation that aims to govern the development and use of artificial intelligence in the EU. It would be the first comprehensive AI law in the world, designed to ensure that AI systems are safe, trustworthy, and used in a way that respects fundamental rights. The act classifies AI systems into four tiers based on risk.
Unacceptable Risk
This category includes AI systems that pose a serious threat to fundamental rights or safety, such as AI systems that can be used for social scoring or mass surveillance. These AI systems are prohibited under the AI Act.
High Risk
AI systems that could pose a significant risk to fundamental rights or safety, such as AI systems used in health care, education, or law enforcement, qualify as high risk. These AI systems must comply with a number of strict requirements, such as human oversight, transparency, and accountability measures.
Limited Risk
This tier includes AI systems that pose a low or limited risk to fundamental rights or safety, such as AI systems in chatbots or video games. These AI systems must comply with certain general requirements, such as transparency and fairness.
Minimal Risk
These AI systems pose no or negligible risk to fundamental rights or safety, such as those used in spam filters or simple calculators. These AI systems are subject to very few requirements.
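Purely as an illustrative sketch (and not legal guidance), the four-tier structure described above can be modelled as a simple lookup. The tier names, example mappings, and the `obligations` helper below are paraphrased from this article, not taken from the legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, summarised informally."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements (human oversight, transparency, accountability)"
    LIMITED = "general requirements (transparency, fairness)"
    MINIMAL = "very few requirements"

# Hypothetical mapping of example use cases to tiers, drawn from the
# examples in this article; real classification depends on the final
# legal text, not on the application category alone.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "mass surveillance": RiskTier.UNACCEPTABLE,
    "health care": RiskTier.HIGH,
    "law enforcement": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Summarise the obligations for a known example use case."""
    tier = EXAMPLES.get(use_case, RiskTier.HIGH)  # default conservatively
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations("chatbot"))
```

The conservative default in `obligations` reflects the act’s general posture: when in doubt, a system should be treated as if the stricter obligations apply.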
The AI Act also establishes several other requirements for AI systems, such as the following:
• AI systems must be designed and developed in a way that minimises the risk of bias and discrimination.
• AI systems must be transparent and accountable, and users must be able to understand how they work and why they make the decisions they do.
• AI systems must be subject to human oversight, and users must have the ability to override their decisions.
• AI systems must be used in a way that respects fundamental rights and values, such as the right to privacy, the right to non-discrimination, and the right to freedom of expression.
The European Commission first proposed a regulatory framework for AI in April 2021. As of this writing, the European Parliament and the Council of the European Union are negotiating the AI Act, with finalisation and adoption anticipated in late 2023 or early 2024. A transition period of 18 months will follow before the EU fully enforces the new law.
Prohibition of Biometrics
One area of focus for the EU AI regulations is biometrics. Biometric identification is the use of unique physical or behavioural characteristics to identify individuals. One specific method is facial recognition, a type of biometric identification that uses facial features to identify individuals.
Facial recognition technology is particularly controversial due to its potential for mass surveillance. Facial recognition systems can be used to track and monitor people in public places, such as streets, parks, and airports. This usage raises serious concerns about privacy and freedom of movement.
The EU has taken a strong stance on the regulation of biometric identification, especially facial recognition. The proposed EU AI Act would ban the use of facial recognition for real-time mass surveillance in public places, citing reasons such as:
• Privacy concerns: Mass surveillance using facial recognition constitutes a serious invasion of privacy.
• Discrimination concerns: Facial recognition surveillance could be used to discriminate against certain groups of people.
• Potential for abuse: Governments and corporations could abuse facial recognition surveillance.
The EU’s stance on biometric identification is in line with its broader commitment to protecting human rights and fundamental freedoms. The EU believes that new technologies, such as facial recognition, should be used in a way that respects human rights and does not harm individuals or society.
The Challenges of Effective Communication Through AI
Most AI programs in marketing and communication belong to the limited-risk category of the EU AI Act. Companies that rely on this category of systems won’t feel the impact of the act as much as others. Still, it’s one thing to use AI and another to use it to its full potential.
As businesses begin to incorporate AI into their marketing campaigns, some will inevitably stumble over the public’s current reception of AI. One example is the UNDIZ lingerie billboard ads, the first AI-driven marketing campaign in France. The company used AI-generated images, later touched up by real people to correct flaws. Even so, French law required it to add disclaimers that the images were AI-generated.
The initial response to the AI ads was negative. People on Twitter, Facebook, and other social media platforms described the images as “unnatural” and creepy. Others pointed out how such applications could upend the careers of professionals such as models, graphic designers, and creative directors. Between this backlash and AI’s generally poor public image, the UNDIZ campaign ultimately drew mixed reactions.
What does this public reaction mean for AI-driven marketing? What can companies do to reach an audience effectively using AI?
AI Is Not Resonating With Most Audiences
The public still mistrusts AI for many reasons, including:
• Lack of transparency: AI systems are often complex and opaque, making it difficult for people to understand how they work and make decisions. This lack of transparency can lead to distrust and suspicion.
• Bias and discrimination: AI systems have been shown to be biased against certain groups of people, such as women and people of colour. This bias can lead to unfair and discriminatory outcomes.
• Potential for misuse and abuse: Governments, corporations, and criminals can misuse and abuse AI systems. For example, malicious groups could use AI to track and monitor people without public consent or develop autonomous weapons that could kill without human intervention.
• Too many flaws: Systems that create AI-generated images are still in their infancy, so many have flaws that even untrained human eyes can spot. Companies pushing low-quality output (for the sake of cost efficiency or to ride the bandwagon) only deepen customer disapproval. Most people want authenticity and the human touch, which AI has yet to achieve.
What Can Be Done to Build Public Trust in AI?
There are a number of things that can be done to build public trust in AI and achieve effective AI communication. These include:
• Pushing for AI transparency
• Ensuring accountability for the outcome of the systems
• Avoiding bias and discrimination in data collection and output
• Always incorporating human oversight
There has also been a move to introduce a distinct symbol for AI-generated images. Such images can be very realistic, making it difficult for people to tell a genuine image from an AI-generated one, so a specific symbol would help protect consumers from misleading imagery.
Such a symbol would also promote transparency and accountability. By labelling AI-generated images, companies show that they are being open about their use of AI, which helps build trust with both consumers and regulators.
Final Thoughts: How Do We Make AI Less of a Spooky Threat?
Balancing the commercial power of AI with ethical considerations is a complex challenge. The EU AI Act is a strong first step that other countries and organisations around the world can emulate. Only through a global effort can we make AI a tool the general public need not fear. Through efforts like the EU AI Act, both AI itself and public sentiment towards it may finally improve.
For companies, we encourage you to stay up to date on regulatory changes and to make ethical considerations your top priority when developing and releasing AI solutions. These actions can help improve the public perception of AI, safeguard vulnerable populations, and minimise risks.