Ethical Considerations in Requirements Engineering
By Andrey Shtanov - anshtanov@edu.hse.ru
Introduction
Requirements Engineering (RE) is a foundational process in the development of systems and software products, focused on defining stakeholder needs to ensure that products meet functional and user requirements. As technology's role in society grows, ethical considerations within RE are becoming increasingly critical, shaping how technology affects privacy, learning processes, fairness, accessibility, and trustworthiness. This is especially true with the rise of Artificial Intelligence (AI) and, more specifically, of Generative AI services, which have significant potential to transform industries while also raising ethical concerns.
Generative AI services have begun influencing sectors ranging from creative content to education and healthcare. However, they present a unique set of ethical challenges in RE. These include safeguarding data privacy, ensuring fair and unbiased outputs, and controlling the spread of misinformation. For instance, Generative AI models trained on biased datasets can inadvertently perpetuate stereotypes, while their use in education raises questions about intellectual property and academic integrity. Data privacy is another pressing issue, as these models are often trained on vast amounts of personal data, demanding stringent requirements for data management and user consent.
As the societal impact of AI expands, developers face additional pressure to address these ethical problems in the engineering process. This essay argues that integrating ethical considerations into RE is essential to prevent unintended harm, safeguard stakeholder interests, and promote responsible technological progress.
Literature Review
In recent years, various organizations and academic journals have published research on the ethical considerations associated with AI. UNESCO, for example, has highlighted ethical concerns related to the use of AI in education. According to their report, AI tools in education are often used without any regulation, which harms the educational process [1].
Some universities, however, do address the need for ethical guidelines in AI usage and development. For instance, the Higher School of Economics has issued a set of recommendations in this regard. These guidelines stress transparency, accountability, and respect for user privacy as foundational elements that should be embedded in RE processes to ensure AI systems align with ethical standards [2]. This is an example of a practical approach to dealing with ethical concerns before they escalate.
Ethical considerations extend beyond educational institutions and into broader societal use. Industry experts and organizations have also contributed recommendations. For example, a blog post from Random Trees addresses the ethical implications of generative AI in fields beyond education, raising concerns regarding misinformation, intellectual property infringement, and bias. The article argues that by resolving issues such as bias, abuse, misuse, ownership, and responsibility, we can harness the advantages of generative AI [3].
Public generative AI platforms, such as Character.AI, have taken significant steps in this direction by updating their safety protocols, aiming to improve ethical compliance by embedding safeguards against harmful or misleading content [4]. While these updates attempt to align with user expectations, they have also generated backlash within the community: users express concerns that the updates have restricted creative freedom or limited the AI's capabilities and usefulness for diverse applications.
These findings illustrate a shared commitment to addressing ethical challenges in AI and underscore the importance of integrating ethical guidelines into RE. The growing consensus highlights the critical role of ethics in AI development. The evidence from academic and industry sources supports the idea that ethical RE is essential to foster trust, prevent harm, and ensure responsible AI adoption across diverse fields. However, user feedback underscores the importance of continuous ethical assessment and stakeholder engagement in the RE phase, ensuring that safety measures do not dramatically lower user satisfaction.
Discussion
Balancing stakeholder needs is a central ethical challenge in RE, particularly for AI technologies, where multiple interests, such as user experience, innovation, safety, and regulatory compliance, often conflict. The Character.AI platform, initially praised for its creative potential and interactive capabilities, recently implemented community safety updates designed to protect users from potentially harmful or misleading content. However, these updates sparked significant user backlash, particularly on community forums such as Reddit, where users expressed concerns that the restrictions limited their creative freedom and reduced the platform's flexibility. This case highlights the tension that often arises when RE incorporates new ethical safeguards.
The challenge in RE for AI is balancing the safety expectations of regulatory bodies and societal values against user expectations of functionality and freedom. In the Character.AI case, the developers had to prioritize safety and compliance to reduce risks associated with harmful or inappropriate content. This involved implementing restrictions that limited certain types of interactions and content generation, which clashed with users' desire for unrestricted engagement with the platform's AI. Balancing these needs requires a continuous, iterative approach in RE, in which developers regularly assess and integrate feedback from diverse stakeholders. In the case of Character.AI, user feedback on Reddit serves as an invaluable resource for understanding the impact of these ethical restrictions on the user experience. While the platform's developers aimed to prioritize safety, the backlash reveals that users felt their needs were inadequately represented in the final requirements. This points to a gap in the RE process: insufficient consideration of the user's perspective when establishing ethical safeguards. This gap can be narrowed through more active stakeholder engagement practices, such as surveys or beta-testing phases that capture diverse viewpoints before restrictive changes are implemented.
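One way to make such survey-based engagement concrete is to aggregate stakeholder scores per proposed requirement and flag those that fall below a satisfaction threshold, so contested safeguards are revisited before release. The sketch below is purely illustrative: the requirement names, scores, and threshold are hypothetical and not drawn from any real platform's process.

```python
from statistics import mean

def flag_low_satisfaction(survey_scores, threshold=3.0):
    """Return requirement IDs whose average stakeholder score (1-5 scale)
    falls below the given threshold. survey_scores maps an ID to a list
    of individual survey responses."""
    return sorted(
        req_id
        for req_id, scores in survey_scores.items()
        if scores and mean(scores) < threshold
    )

# Hypothetical scores collected during a beta-testing phase.
scores = {
    "REQ-12 content filter": [2, 3, 2, 1],   # perceived as too restrictive
    "REQ-07 data consent":   [5, 4, 4, 5],   # broadly accepted
}
print(flag_low_satisfaction(scores))  # → ['REQ-12 content filter']
```

Even a simple aggregation like this gives the RE process an explicit checkpoint where low-scoring safeguards trigger further stakeholder consultation rather than shipping by default.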
Furthermore, this balancing act between safety and creative freedom reveals the importance of transparency in the requirements engineering process. When stakeholders understand the rationale behind specific requirements, such as safety restrictions, they may be more accepting of changes, even if these changes limit functionality. The example case highlights an opportunity for platforms to communicate the ethical reasoning behind their design decisions more effectively, ensuring users feel informed and respected. Transparency also builds trust, an essential factor in maintaining user engagement and loyalty, especially when ethical updates may impact the user experience.
Conclusion
In conclusion, ethical considerations are essential in requirements engineering, especially in the development of AI systems that impact a diverse range of stakeholders. The example illustrates the complexities of balancing user freedom with the need for safety and regulatory compliance. Integrating ethical safeguards into Requirements Engineering can protect users and foster trust, yet it must be done thoughtfully to avoid alienating the very users it aims to serve. Future efforts in ethical RE should emphasize transparency, continuous stakeholder engagement, and adaptable guidelines, ensuring that AI technologies evolve responsibly and align with societal values.
References
- [1] UNESCO, "Use of AI in education: Deciding on the future we want," https://www.unesco.org/en/articles/use-ai-education-deciding-future-we-want.
- [2] Higher School of Economics, "HSE guidelines for AI in education," https://www.hse.ru/docs/894045460.html.
- [3] Random Trees, "Ethical Considerations in Generative AI," https://randomtrees.medium.com/ethical-considerations-in-generative-ai-112004eef101.
- [4] Character.AI blog, "Community Safety Updates," https://blog.character.ai/community-safety-updates/.
- [5] ChatGPT-4o: some parts were reviewed by ChatGPT-4o with the prompt "Proofread the text and suggest improvements (this is for an academic essay): <text>", https://chatgpt.com.