Towards responsible AI: our framework for ethical development

Paul Kirikov,

Head of Business Development at Modsen

Many claim to prioritize AI ethics but miss the mark. Establishing an ethical framework goes beyond buzzwords like “transparency.” It requires understanding ethics and aligning business goals with responsible development.

Waiting for an external framework isn’t an option, as we’re actively working with AI right now. Our aim is to ensure our work aligns with future regulations.

The solution? Embrace accepted standards, internal guidelines, and company values. At Modsen, we’ve developed a practical, adaptable AI ethics framework that integrates seamlessly with our technical strategies. Our goal is to find solutions that benefit all stakeholders while upholding our values and principles. We’re eager to share our approach with you.

Why AI and data ethics matter

In the fast-moving field of AI, ethical challenges arise that mirror the “three-body problem” of Liu Cixin’s renowned novel. Companies grapple with bias, environmental impact, and privacy concerns while pursuing a responsible AI future. Yet, as TechCrunch notes, ethical considerations often “fall by the wayside”. A staggering 56% of executives, per Deloitte’s 2023 report, question their organizations’ adherence to AI ethical standards.

To address this, it’s crucial to deeply understand ethics and align business goals with responsible development. A comprehensive AI and data ethics framework is essential for navigating complex issues, mitigating risks, and promoting responsible innovation.

Several potential ethical dilemmas and risks associated with AI and data usage include:

Bias and discrimination: AI systems can perpetuate and even amplify existing biases in data, leading to unfair outcomes.

Privacy and transparency: Inadequate data handling and opaque AI decisions undermine privacy, security, trust, and accountability.

Environmental impact: AI technologies can contribute to pollution, resource depletion, and other negative environmental effects if not used responsibly.

There have been numerous incidents in the past where AI and data usage led to negative consequences:

Microsoft’s Tay chatbot, launched in 2016, aimed to engage Twitter users in friendly conversations. However, it was quickly exploited, transforming from a harmless conversationalist to a platform for offensive content, resulting in controversy.

The 2018 Cambridge Analytica scandal unveiled unethical data harvesting practices from millions of Facebook users, breaching privacy boundaries and triggering a global dialogue on data ethics and user consent.

In 2019, racial bias was discovered in a widely used healthcare algorithm, leading to unequal treatment recommendations for certain categories of patients. The incident emphasized the crucial need for diversity and inclusivity in AI development teams and datasets to mitigate bias and ensure equitable outcomes.

To steer clear of the pitfalls and protect against potential harm, we must take proactive steps. Embracing a strong AI and data ethics framework empowers businesses involved in AI software development to make ethically sound decisions, prioritizing transparency, fairness, and accountability in AI systems. It’s about safeguarding individual rights and well-being while navigating the complexities of technological advancement.

Key components of an AI and data ethics framework

As mentioned, merely uttering terms like “transparency” and “commitment,” or establishing oversight committees, isn’t enough to enable AI ethics integration in software development. While these actions may appear promising, they only scratch the surface without meaningful implementation.

True AI ethics integration demands a deeper examination of the concepts and their practical application. Let’s dig deeper, shall we?

We say: Bias and discrimination

We do:

  • Data auditing: Regularly reviewing training data helps identify biases and ensure inclusivity across diverse demographics.

  • Bias detection algorithms: Incorporating algorithms capable of identifying and quantifying biases in datasets and models enables targeted corrective actions.

  • Fairness-aware ML: Integrating fairness constraints into machine learning models promotes equitable outcomes for all demographic groups.

  • Diverse datasets: Collecting a wide range of datasets better reflects societal diversity and minimizes bias propagation in AI systems.
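
The bias-detection and fairness points above can be made concrete in code. The sketch below computes a demographic parity gap, the difference in positive-prediction rates between groups, using plain pandas; the column names and the 0.1 review threshold are illustrative assumptions rather than part of our framework, and dedicated libraries such as Fairlearn or AIF360 offer far richer fairness metrics.

```python
# Minimal, illustrative bias check: demographic parity gap between groups.
# Column names and the 0.1 threshold are assumptions made for this sketch.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Largest gap in positive-prediction rates between any two groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    data = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B"],
        "prediction": [1, 1, 0, 0, 0, 1],  # 1 = positive outcome
    })
    gap = demographic_parity_gap(data, "group", "prediction")
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.1:  # arbitrary review threshold for this sketch
        print("Gap exceeds threshold - flag the dataset and model for review.")
```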

 

We say: Privacy and transparency

We do:

  • GDPR and CCPA compliance: Adhering to regulations like GDPR and CCPA helps safeguard user data and privacy rights.

  • Transparency measures: Providing clear explanations of AI decision-making processes fosters trust and accountability.

  • Data encryption: Employing encryption techniques protects sensitive data from unauthorized access, maintaining confidentiality.

  • User consent mechanisms: Providing robust mechanisms for obtaining user consent empowers individuals to control their personal data usage.
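
As a small, hedged illustration of the data-encryption point above, the snippet below encrypts a sensitive record at rest with symmetric encryption. It assumes the widely used Python cryptography package; key management (rotation, storage in a secrets manager, access control) is deliberately out of scope, and this is a sketch rather than a description of our production stack.

```python
# Illustrative sketch: symmetric encryption of personal data at rest.
# Assumes the third-party "cryptography" package; key handling is simplified.
from cryptography.fernet import Fernet

def encrypt_record(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a sensitive record before persisting it."""
    return Fernet(key).encrypt(plaintext)

def decrypt_record(token: bytes, key: bytes) -> bytes:
    """Decrypt a record for an authorized, consented use."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()        # in practice: load from a secrets manager
    token = encrypt_record(b"user@example.com", key)
    print(decrypt_record(token, key))  # b'user@example.com'
```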

 

We say: Environmental impact

We do:

  • Energy-efficient algorithms: Developing and utilizing energy-efficient algorithms reduces computational resource consumption.

  • Renewable energy usage: Powering AI infrastructure and data centers with renewable energy sources minimizes environmental impact.

  • Lifecycle assessments: Assessing AI systems across their full lifecycle identifies environmental impacts and informs sustainability strategies.

  • Responsible AI usage: Promoting responsible AI practices prioritizes environmental sustainability and considers the ecological footprint of AI applications.
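
The environmental points above can also be made measurable. Below is a rough, self-contained sketch that estimates the energy use and CO2 footprint of a compute job from its wall-clock time; the 300 W power draw and 400 gCO2/kWh grid intensity are placeholder assumptions, and dedicated tools such as CodeCarbon produce far better estimates from real hardware measurements.

```python
# Rough, illustrative footprint estimate for a compute job.
# Power draw and grid carbon intensity below are placeholder assumptions.
import time

AVG_POWER_WATTS = 300.0      # assumed GPU + host draw during the job
GRID_G_CO2_PER_KWH = 400.0   # assumed grid carbon intensity

def estimate_footprint(runtime_seconds: float) -> tuple[float, float]:
    """Return (energy_kwh, co2_grams) for a job of the given duration."""
    energy_kwh = AVG_POWER_WATTS * runtime_seconds / 3_600_000  # W*s -> kWh
    return energy_kwh, energy_kwh * GRID_G_CO2_PER_KWH

if __name__ == "__main__":
    start = time.perf_counter()
    sum(i * i for i in range(10_000_000))  # stand-in for a training step
    kwh, grams = estimate_footprint(time.perf_counter() - start)
    print(f"~{kwh:.6f} kWh, ~{grams:.3f} g CO2 (illustrative only)")
```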

 

Now that we know what needs to be done, the next step is to understand why and how to apply these practices in a particular business situation.

Ethics and efficiency: Harmonizing business objectives in AI

In 2022, an IBM study revealed that only 40% of people trusted companies to use new technologies like AI ethically (a similar percentage to 2018). Now, in 2024, has anything changed?

The crux of the issue lies in the delicate balance between business goals and ethics, a dilemma discussed at the recent 2nd Global Forum on the Ethics of Artificial Intelligence. It highlighted a crucial point: ethics must always serve a purpose.

Ethics discussions often center around fundamental business principles. Businesses aim to deliver quality products and services, free from flaws. Racially biased facial recognition, extremist chatbots, or discriminatory CV analyzers not only fail ethically but also violate these principles.

Aligning company goals with ethics lays the foundation for a responsible approach to AI development. However, defining these principles within the AI context poses challenges. For instance, how does transparency impact the ownership of AI intellectual property? How should responsibility for AI be distributed within companies? What specific measures can be implemented to uphold ethical standards throughout the development process?

In pursuit of answers to these questions, companies have responded differently. Some major players have established oversight committees consisting of executives, legal experts, data scientists, and even philosophers. These committees review AI projects at different stages, wielding the power to halt those with ethical concerns. However, their detachment from the development process raises questions about responsiveness and effective solutions.

The alternative is a multi-disciplinary “on the ground” approach. Imagine a scenario where everyone involved in an AI project, from developers to data scientists to project managers, huddles together to address ethical concerns head-on. This method champions teamwork, ensuring ethical considerations are woven into the development process. It’s about equipping those closest to the action with the tools to tackle ethical challenges in real time.

While oversight committees have their place in the process, with some big-name companies swearing by them, here at Modsen, we prefer the “on the ground” approach. And here is why.

More about Modsen’s AI and data ethics framework

At Modsen, we believe that an effective AI ethics framework requires three main components: a clear goal, core principles that align stakeholders’ interests, and a guide outlining specific actions. Our overarching goal for ethical development is to prevent any harm caused by AI, encompassing procurement, broader technological outcomes, and the development process itself. Harm, to us, refers to any unjustified tangible negative impact on our clients’ businesses or end users.

AFAS: Our guide to responsible AI development and beyond

With this goal in mind, we focus on four core principles: accountability, fairness, accessibility, and sustainability (the team calls it AFAS for short). Accountability guarantees human oversight and responsibility at every stage, emphasizing compliance, transparency, and data confidentiality. Fairness addresses discrimination risks, ensuring diverse datasets and unbiased outcomes. Accessibility enables technology usability for all users. Sustainability commits us to minimizing our environmental footprint and promoting long-term viability in all aspects of our operations and technologies.

Ethical principles for AI development: accountability, fairness, accessibility, and sustainability.

Each principle prompts specific inquiries:

Accountability – Are regulations adhered to? Who holds responsibility at each project stage and for its outcomes? What level of transparency does the solution offer, and how secure is the data?

Fairness – Does the data include human attributes that may lead to discrimination? Is the dataset sufficiently diverse to be considered representative? Does stress testing reveal biased outcomes? Is there evidence of AI sentience?

Accessibility – Is the solution output comprehensible? How accessible is the technology to all end users? How thorough is the documentation of the code and data?

Sustainability – How energy-efficient is the programming approach? Are there plans to optimize resource usage? Does the development process consider the environmental impact of hosting, storage, and other resources? Is there a plan to reduce the software’s energy consumption and carbon footprint over time?

By answering these questions for a specific project, we gain a clear understanding of how to proceed with its development ethically. From there, we can confidently apply this understanding to our technical standards and our clients’ interests, making sure ethical considerations are integrated throughout the process.
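
To show how a question list like this can become something a project team actually runs, here is a minimal sketch that encodes the AFAS questions above as data and flags any that remain unanswered. The structure and names are illustrative assumptions, not the checklist we use internally.

```python
# Illustrative sketch: the AFAS review questions as a runnable checklist.
# Question wording is adapted from the article; the structure is an assumption.
AFAS_QUESTIONS = {
    "Accountability": [
        "Are regulations adhered to?",
        "Who holds responsibility at each project stage and for its outcomes?",
        "What level of transparency does the solution offer, and how secure is the data?",
    ],
    "Fairness": [
        "Does the data include human attributes that may lead to discrimination?",
        "Is the dataset sufficiently diverse to be considered representative?",
        "Does stress testing reveal biased outcomes?",
    ],
    "Accessibility": [
        "Is the solution output comprehensible?",
        "How accessible is the technology to all end users?",
        "How thorough is the documentation of the code and data?",
    ],
    "Sustainability": [
        "How energy-efficient is the programming approach?",
        "Are there plans to optimize resource usage and reduce the carbon footprint?",
    ],
}

def open_items(answers: dict[str, str]) -> list[str]:
    """Return every AFAS question that has no recorded answer yet."""
    return [
        question
        for questions in AFAS_QUESTIONS.values()
        for question in questions
        if not answers.get(question, "").strip()
    ]

if __name__ == "__main__":
    answers = {"Are regulations adhered to?": "Yes - GDPR and CCPA reviewed at kickoff."}
    for question in open_items(answers):
        print("OPEN:", question)
```

Running the sketch with only one answer recorded prints every remaining question, making open ethical items as visible as any other backlog item.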

Ethical AI in focus

Navigate development with ethics at the forefront. Connect with our AI experts for guidance.

Dr. Gleb Basalyga, PhD, Senior Machine Learning Engineer at Modsen

From theory to practice: Actionable checklist

While principles provide guidance, a specific, actionable checklist is crucial. Our framework incorporates measures across multiple stages of development, ensuring stakeholders tackle the issues relevant to each of them. We’ve outlined initial steps for the procurement, data analysis, and outcomes stages, while the checklist for the project stage is still being developed. Here’s a sneak peek into how we do it:

Our checklist for ethical AI development

Our commitment at Modsen is clear: we only decline projects, including those involving AI, if they entail breaking the law. In all other scenarios, we actively encourage dialogue, positioning ourselves not merely as developers, but as trusted consultants.

We deeply value the integrity of our partnerships and projects and prioritize upholding our ethical standards and values. While we maintain a procurement checklist, it serves not as a barrier but as a starting point for constructive conversations with clients. Through this dialogue, we identify areas where collaboration can be enhanced, ensuring that our partnership is mutually beneficial and aligned with our ethical principles.

Instead of conclusion

We’ve all heard the critiques of the tech industry’s self-regulation, and it’s time we took the lead in shaping our ethical landscape. While governments are grappling with the complexities of AI regulation, let’s seize the opportunity and set the bar higher.

The idea isn’t just to stay ahead of the curve. Sure, getting your ethical practices in order now will save you headaches down the road when new laws inevitably roll in. But beyond that, it’s about who you are as a company. As you transition from AI tester or even early adopter to an industry player, you’re defining and reinforcing your values. And Modsen is here not just to build AI solutions for you; we’re here to build the right ones – ones you can be proud of.
