
Is AI stealing our jobs, data, and art? – Ethical concerns about the use of generative AI

With the spread of AI technology across the industry, concerns about its safety and its impact on our lives are increasingly voiced. These can stem from instinctive fear caused by the lack of transparency in how AI operates, but also from very recent examples of AI misusing personal data, infringing copyright, or simply producing flawed output, since it was never created to be perfect. In this article series on AI, we start by summarizing the most common ethical concerns around its use.

Bias

Ethical concerns arise because AI systems are trained on biased data, which can lead to discriminatory practices and even legal consequences. AI can perpetuate gender bias in search results, for example by favouring male figures in searches for influential leaders, or by generating sexualized depictions of schoolgirls but not schoolboys. Search engines, driven by user clicks and location, create echo chambers that reinforce (often harmful) real-world biases online.

Privacy

Generative AI models, such as large language models (LLMs), may include personally identifiable information (PII) in their training data, making it difficult for consumers to locate and remove their data. Privacy concerns have been raised about the collection, processing, and storage of the massive datasets used for AI training. Furthermore, the use of AI in law-enforcement surveillance raises worries about potential misuse.

Distribution of harmful content

AI’s automated content creation brings productivity gains but also raises risks of unintentional harm on many levels, from offensive language in generated emails to more concerning content. Deepfakes, capable of fooling voice and facial recognition, pose serious challenges: impersonations can influence public opinion in politics, potentially even enabling market or election manipulation. The lack of oversight is a significant threat, as illustrated by the recent, widely circulated deepfake video of the Ukrainian president Volodymyr Zelenskyy.

Transparency

Transparency in AI algorithms is much needed and expected: deep learning processes are often opaque, with little disclosure of their “mechanics”. Generative AI like ChatGPT may hide the data associations behind its output, raising concerns about trustworthiness. Understanding AI decision-making, and making it transparent, is vital, especially in fields like healthcare and law enforcement, where human lives may be at stake.

Accountability

The growing use of AI in daily decision-making has led to accountability challenges. Identifying responsibility for negative outcomes is complex: does it lie with the companies validating purchased algorithms, or with the creators of the AI tools?

Fallibility

AI-based decisions are prone to inaccuracies due to the inherent imperfections of software. Just like databases, games, and websites, AI systems, including generative chatbots like ChatGPT, can produce false information; their primary task is to mimic human conversation, not to be precise. Reliance on AI can therefore be problematic for critical business models and analytics.

Job displacement

AI solutions have already sparked fears among employees of losing their jobs, as automation may replace certain roles. Ethical responses involve investing in preparing the workforce for the new roles created by generative AI applications. For now, weCAN partner agencies put it this way: copywriters using AI will replace copywriters not using AI. Current AI is far from being able to replace humans, but those who can use it effectively can gain a massive advantage.

Copyright ambiguities

A lawsuit against OpenAI highlights AI’s impact on intellectual property: writers including George R. R. Martin and Jodi Picoult sued the company for allegedly using their work to train AI without permission. The case embodies fears about the livelihood of authors and the exploitation of intellectual property, alongside a broader issue of AI’s influence on creativity, as seen in the creation of an AI-generated Rembrandt painting in 2016. The question of authorship arises, heavily challenging traditional definitions.

Where are we heading?

Even though we don’t have definitive answers to all of these concerns, the urgency for companies to prioritize ethical AI practices is obvious. What we can do is scrutinize AI projects and demand transparency and action from the companies behind them.
