AI Company Principles: how to govern AI in your company.

Before employees start using AI on their own, management must set the foundations and guidelines.

As part of my Oxford AI programme, we covered ethics and AI in depth. In short, society should not repeat the mistakes of the social media era. Social media apps and their algorithms had a major social, political and health impact, and they were regulated too late, if at all. With AI, regulators are being more proactive; the EU, for example, with its AI Act.

Clear rules are needed especially where algorithms have a major impact on people: education, credit and finance, recruitment, etc. Algorithms already influence recruitment decisions today.

The law is one thing; what is ethically right is another. Every company must decide which guidelines and principles apply to its use of AI. Which data may we use? Is it biased, outdated or truly representative? Which AI decisions must be approved by a human? Are they traceable?

I have written a few lines for a hypothetical company. Feel free to use them as a basis.

As Switzerland’s leading condom and sexual pleasure brand, we are committed to diversity, sexual health and the privacy of our customers. These principles also apply to our marketing and our use of AI. The following rules must be observed by internal marketing, tools and agencies. All of our AI and machine learning activities must support these principles; AI must never be used to deceive, manipulate or harm users.

Data Privacy

  • The revDSG (the revised Swiss Federal Act on Data Protection, Switzerland's counterpart to the GDPR) applies to all our activities. Using personal data to train machine learning algorithms or to target customer segments or groups requires users' consent.

  • Whenever possible, we train with non-personalised data and exclude personal information such as surnames, email addresses and the like.

  • We ensure that training and analysis take place in internal systems or closed cloud systems. The use of personal user data in open systems like ChatGPT is strictly prohibited.

  • Whenever users request deletion of personal data or ask for transparency, we respond in a timely manner. The use of data and algorithms must be documented in clear, non-technical language before an algorithm is actively used in a project or production environment.
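The rule on training with non-personalised data can be sketched in code. This is a minimal illustration, not an implementation: the field names (`surname`, `email`, etc.) and the record structure are hypothetical examples.

```python
# Minimal sketch: remove known personal-identifier fields from records
# before they are used for training or analysis.
# Field names here are hypothetical examples.

PII_FIELDS = {"surname", "first_name", "email", "phone", "address"}

def strip_pii(records):
    """Return copies of the records with known PII fields removed."""
    return [
        {key: value for key, value in record.items() if key not in PII_FIELDS}
        for record in records
    ]

records = [
    {"surname": "Muster", "email": "a@example.com",
     "age_group": "25-34", "region": "ZH"},
]
print(strip_pii(records))
# → [{'age_group': '25-34', 'region': 'ZH'}]
```

In a real pipeline this step would run before any data leaves the internal system, and the list of excluded fields would be part of the documented, reviewed project concept.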

Accuracy and Diversity

  • When developing machine learning algorithms or training third-party AI tools, we ensure that representative data sets are used. Ceylor wants to foster diversity and support all genders, races, ethnic groups, age groups and sexual orientations.

  • When developing AI, we ensure that real data is used and that algorithms are updated frequently.

  • When using algorithms in communication, for example for targeting personalised ads, personalised emails, personalised shop recommendations, etc., we have to ensure that our diversity principle remains in place despite personalisation.

Explainability

  • When developing or implementing AI measures and tools, explainability of automated decisions is key to Ceylor, as already stated under Data Privacy. Any AI system used by our company must be able to explain its decisions in a manner understandable to stakeholders. If this is not possible, the system may not make fully automated decisions; its outputs must be supervised by a Ceylor employee.

Safety

  • When using algorithms in communication, for example for targeting personalised ads, personalised emails, etc., we have to make sure that no conclusions can be drawn about individual persons, such as sexual orientation, fertility and pregnancy. Whenever possible and reasonable, we work with generic and diverse segments and not 1:1 personalisation.
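The preference for generic segments over 1:1 personalisation can be enforced with a simple size check, in the spirit of k-anonymity: a segment is only used for targeting if it contains enough people that no conclusions can be drawn about individuals. A minimal sketch, assuming segment labels per user; the threshold of 1000 is a hypothetical example.

```python
# Minimal sketch: only target audience segments above a minimum size,
# so no conclusions can be drawn about individual persons.
# The threshold is a hypothetical example, not a company figure.
from collections import Counter

MIN_SEGMENT_SIZE = 1000

def usable_segments(user_segments):
    """Given one segment label per user, return the segments that are
    large enough to target without risking re-identification."""
    counts = Counter(user_segments)
    return {segment for segment, n in counts.items() if n >= MIN_SEGMENT_SIZE}

labels = ["wellness"] * 1500 + ["niche-interest"] * 12
print(usable_segments(labels))
# → {'wellness'}
```

Segments that fall below the threshold would be merged into broader, more diverse groups rather than targeted directly.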

Security and compliance

  • All internal and external staff members in an AI project (developers, data scientists, etc.) have to carefully read and sign this document of AI principles before starting work on an AI project.

  • All team members of an AI project have access to only as much personal data as is necessary to achieve the project goals.

  • These project goals have to be defined in a detailed project concept whenever an AI project involves personal data or personalisation. This concept has to be reviewed and approved by the internal AI committee.

  • AI and machine learning measures that have a significant impact on our company or our clients must be reviewed by an external auditor and submitted to the internal AI committee. What counts as significant impact is defined by the committee.

Have you already established policies in your company? Which topics do you cover, and which do you deliberately leave out? We are happy to support you with AI strategies, workshops and guidelines.

Ready to get serious about AI?

30-minute initial consultation – free and non-binding. We will review together where you stand and what the right first step is.

