SCANDIC GROUP AI Ethics Statement

Introduction & Objective

The SCANDIC GROUP uses artificial intelligence (AI) and algorithmic systems in almost all business areas: editorial recommendation systems and automated translation in the media; high-frequency and algorithmic trading in financial trading; fraud detection and scoring in payment transactions; smart contract platforms in crowdfunding; route optimization in mobility; diagnostic support in healthcare; and personalized experiences in leisure and lifestyle services. As an internationally operating group of companies, represented by SCANDIC ASSETS FZCO and SCANDIC TRUST GROUP LLC in cooperation with Legier Beteiligungs mbH, we are committed to using AI responsibly, transparently and in accordance with fundamental rights. This AI ethics statement formulates core values, processes and control mechanisms for the development, procurement, operation and use of AI in the SCANDIC GROUP. It builds on our existing guidelines on compliance, corporate governance, data protection, supply chain and human rights policy, and it is binding for all business areas.

1. Core Values & Guiding Principles

We are guided by international best practices for trustworthy AI. Our systems should serve people, respect their rights and strengthen society. Seven basic principles are central:

  • Human agency and oversight: People make the final decisions; AI supports, but does not replace, human responsibility.

  • Technical robustness and security: Systems must be reliable, resilient and fault-tolerant. Security mechanisms such as backup solutions, redundancies and emergency protocols are mandatory.

  • Privacy and data sovereignty: Data collection and processing are carried out according to the principles of data minimization, purpose limitation and integrity.

  • Transparency: Decisions and processes are documented in a comprehensible manner. Users should know when they are interacting with AI.

  • Diversity, non‑discrimination and fairness: Models are checked for bias and designed to treat all populations fairly.

  • Social and environmental well-being: AI applications must contribute to sustainable growth, social progress and environmental protection.

  • Accountability: Responsibilities for the entire AI life cycle are defined; violations are sanctioned and findings are incorporated into improvements.

2. Governance & Responsibilities

Clear governance structures apply to all AI projects. A group-wide AI committee ensures strategic control and sets standards. Project teams must conduct risk assessments, create documentation and obtain approvals. The data protection officer and the compliance team are involved at an early stage. The four-eyes principle applies; critical decisions are reviewed by interdisciplinary teams.

3. Legal & Regulatory framework

We comply with the EU AI Act, the General Data Protection Regulation (GDPR), industry-specific regulations (e.g. medical device and aviation law) and the national laws of the countries in which we operate. AI applications are classified by risk level in line with the EU AI Act (minimal, limited, high or unacceptable risk) and assigned the corresponding compliance requirements. For high-risk applications, risk and impact assessments, quality management and external audits are carried out. We also observe international frameworks such as the OECD and UNESCO recommendations on AI ethics.

4. Data Ethics & Data protection

AI is based on data. We only collect data that is necessary for the intended purpose and anonymize or pseudonymize it whenever possible. Training, validation and test data sets are checked for quality, representativeness and bias. We document data sources, ownership rights and license conditions. Personal data is processed in accordance with the GDPR, including deletion concepts and access controls.
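To illustrate the principles of data minimization and pseudonymization described above, here is a minimal, hypothetical sketch (not the group's actual tooling): direct identifiers are replaced with a keyed hash (HMAC-SHA256), and all fields not required for the stated purpose are dropped before further processing. The field names and the secret-key handling are assumptions for illustration only.

```python
import hashlib
import hmac

# Assumption for illustration: in production this key would live in a
# managed secret store and be rotated, never hard-coded.
SECRET_KEY = b"example-key-store-in-a-vault"

def pseudonymize(record: dict, allowed_fields: set) -> dict:
    """Keep only purpose-bound fields (data minimization) and replace the
    direct identifier with a keyed-hash pseudonym."""
    minimized = {k: v for k, v in record.items() if k in allowed_fields}
    digest = hmac.new(SECRET_KEY, record["customer_id"].encode(), hashlib.sha256)
    minimized["customer_pseudonym"] = digest.hexdigest()[:16]
    return minimized

record = {"customer_id": "C-1001", "name": "Jane Doe", "amount": 250.0, "country": "DE"}
safe = pseudonymize(record, allowed_fields={"amount", "country"})
# The name and the raw customer_id never leave the minimization step.
```

A keyed hash (rather than a plain hash) prevents re-identification by simply hashing candidate IDs, as long as the key stays confidential.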

5. Transparency & Explainability

Where technically possible and legally required, we ensure transparent models and explainable results. We provide users with information about whether and how AI is used. For critical applications (e.g. lending, medicine) comprehensible explanations of the decision logic are generated. Documentation contains information on data, models, parameters, tests and application limits.

6. Fairness, Bias & Inclusion

We identify and reduce systematic biases through measures such as diverse data sets, algorithm audits, fairness-aware modeling and continuous impact measurement. We promote accessibility by offering designs suitable for different user groups (language, culture, abilities). Discriminatory or harmful content is removed.
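One simple example of the algorithm audits mentioned above is a selection-rate check per group: a hedged sketch, assuming binary approve/reject decisions and a known group attribute. It computes the disparate-impact ratio (lowest group selection rate divided by the highest); a common heuristic flags ratios below 0.8 for closer review. The 0.8 threshold and the data shape are illustrative assumptions, not a group standard.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs.
    Returns the fraction of positive decisions per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """min group rate / max group rate; values near 1.0 indicate parity."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative sample: group A approved 3/4, group B approved 1/4.
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]
ratio = disparate_impact_ratio(sample)  # 0.25 / 0.75 = 1/3 -> flag for review
```

Such a metric is only one signal; a ratio near 1.0 does not prove fairness, and audits should combine several metrics with qualitative review.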

7. Human Oversight & Responsibility

Decisions that have a significant impact on individuals (e.g. in the areas of finance, healthcare, personnel selection) must not be made exclusively by AI. We anchor human control in every critical process: employees can correct entries, override decisions and switch off systems. We train employees in the safe use and understanding of AI systems.

8. Robustness & Security

Our AI systems are protected against attacks and manipulation. Measures include secure programming, regular testing, monitoring, access controls and incident response plans. Models are tested for adversarial behavior. In the event of failures or malfunctions, emergency processes are implemented to limit damage. We document known vulnerabilities and provide patches.

9. Sustainability & social impact

AI projects from the SCANDIC GROUP are intended to make a positive contribution to the climate and society. We pay attention to energy consumption when training and operating models and rely on efficient algorithms and renewable energy sources. We assess potential impacts on employment, education and social structures and promote responsible innovation that enables inclusive growth.

10. Monitoring & continuous improvement

AI systems are monitored throughout their entire life cycle. We measure performance, fairness, security and compliance with ethical requirements. Deviations or undesirable effects lead to adjustments or shutdown. Findings from audits, incidents and feedback from users are incorporated into further development. The AI ethics statement is regularly reviewed and adapted to new technologies, legal requirements and social expectations.
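The escalation logic described above (adjust or shut down when monitored metrics deviate) can be sketched as a simple threshold review. This is an illustrative model only; the metric names and threshold values are assumptions, and in practice such thresholds would be set by the AI committee and any shutdown would involve human review.

```python
# Assumed thresholds for illustration; real values are governance decisions.
THRESHOLDS = {"accuracy_min": 0.90, "fairness_ratio_min": 0.80}

def review(metrics: dict) -> str:
    """Check recent monitoring metrics against agreed thresholds and
    recommend an action: 'continue', 'adjust', or 'shutdown'."""
    breaches = []
    if metrics["accuracy"] < THRESHOLDS["accuracy_min"]:
        breaches.append("accuracy")
    if metrics["fairness_ratio"] < THRESHOLDS["fairness_ratio_min"]:
        breaches.append("fairness")
    if not breaches:
        return "continue"
    # Multiple simultaneous breaches escalate to shutdown pending human review.
    return "shutdown" if len(breaches) > 1 else "adjust"
```

The point of the sketch is that monitoring outputs feed a predefined decision rule, so shutdown is a planned lifecycle outcome rather than an ad-hoc reaction.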