AI Policy

1. Introduction    

The purpose of this AI Policy is to establish a comprehensive framework for the responsible, secure, ethical, transparent, and compliant use, development, deployment, operation, and retirement of Artificial Intelligence (AI) systems within ADACOM (hereafter “Organization”). This policy ensures adherence to ISO/IEC 42001 and ISO/IEC 23053 requirements, alignment with the EU AI Act, and conformity with additional applicable regulations (e.g., the General Data Protection Regulation). It sets forth mandatory organizational expectations for the management of traditional machine-learning systems and Generative AI capabilities, including text, image, audio, and code-based models, to ensure secure and ethically aligned outcomes.

This policy promotes consistent governance practices across the AI system lifecycle, preserves individual rights, safeguards data integrity, and mitigates risks associated with autonomy, bias, misinformation, cybersecurity threats, and model misuse. It further cultivates a culture of transparency, responsible innovation, and continuous improvement.

2. Scope

This policy applies to all organizational units, departments, subsidiaries, partners, and third parties engaged in the design, training, procurement, deployment, integration, operation, maintenance, monitoring, or decommissioning of AI systems. It governs all AI use cases, whether developed internally, sourced from vendors, embedded in platforms, or acquired as cloud services (SaaS, PaaS, or API-based models).

The scope includes:
a.    Machine learning–based predictive systems;
b.    Generative AI models (text, image, audio, video, and code generation);
c.    Algorithmic decision-support and automation solutions;
d.    AI-enabled analytics platforms;
e.    High-risk AI systems; and
f.    Foundation models and Large Language Models (LLMs) designated under the EU AI Act.

This scope extends to all employees, contractors, consultants, and external partners who handle organization data or provide model outputs.
Excluded from this scope are non-algorithmic automated processes that do not involve machine learning or statistical inference. However, when such systems integrate AI modules, they immediately fall within the scope of this policy.

3. Policy 

3.1 AI System Lifecycle 

The organization manages AI systems using a structured lifecycle aligned with ISO/IEC 23053 and ISO/IEC 42001:
a.    Business Need Identification: This phase determines whether an AI solution is appropriate by assessing operational requirements, evaluating available data, and considering potential ethical and regulatory impacts. Stakeholders define clear objectives, expected outcomes, and risk appetite to guide decision-making. Strategic alignment ensures the AI initiative supports organizational goals before development begins.
b.    Design and Development: During this phase, system architecture is defined, relevant datasets are collected, and models are trained with documented methodologies. Developers assess and mitigate risks related to bias, data quality, and robustness, while ensuring transparency and accountability in model behavior. Ethical and privacy considerations are embedded to support responsible innovation and traceability.
c.    Testing and Validation: AI systems undergo structured evaluations in controlled environments to confirm accuracy, stability, and resilience against misuse or adversarial interference. Performance metrics are compared against defined success criteria to determine readiness for production. Validation results are documented to demonstrate compliance and support future audits.
d.    Deployment and Integration: Approved AI systems are introduced into production environments with appropriate controls, access rights, and monitoring tools enabled. Users receive training on system capabilities, data interpretations, and operational limitations to ensure responsible usage. Deployment includes communication to impacted stakeholders and confirmation that regulatory obligations are satisfied.
e.    Operation and Monitoring: Continuous supervision ensures that AI systems remain effective, secure, and aligned with intended outcomes. Logging mechanisms capture activity data to detect anomalies, model drift, and unintended consequences. Documented feedback informs maintenance decisions, retraining needs, and risk mitigation measures.
f.    Retirement and Decommissioning: AI systems are formally retired when they are no longer needed, become obsolete, or fail to meet compliance requirements. Access permissions are revoked, data and models are archived securely, and remaining components are removed to prevent unauthorized reuse. Lessons learned are captured and fed back into governance processes to improve future deployments.
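
For illustration, the lifecycle above behaves like a gated state machine: a system may not skip a phase, and monitoring findings can send it back to design or on to retirement. The following Python sketch is an assumed representation of that idea; the stage names and transition rules are illustrative, not a mandated implementation.

```python
# Illustrative only: the six lifecycle phases as a governed state machine.
from enum import Enum, auto

class LifecycleStage(Enum):
    BUSINESS_NEED = auto()
    DESIGN_DEVELOPMENT = auto()
    TESTING_VALIDATION = auto()
    DEPLOYMENT_INTEGRATION = auto()
    OPERATION_MONITORING = auto()
    RETIREMENT = auto()

# Each stage may only advance along an approved path; failed validation returns
# to design, and monitoring findings may trigger retraining or retirement.
ALLOWED_TRANSITIONS = {
    LifecycleStage.BUSINESS_NEED: {LifecycleStage.DESIGN_DEVELOPMENT},
    LifecycleStage.DESIGN_DEVELOPMENT: {LifecycleStage.TESTING_VALIDATION},
    LifecycleStage.TESTING_VALIDATION: {LifecycleStage.DEPLOYMENT_INTEGRATION,
                                        LifecycleStage.DESIGN_DEVELOPMENT},
    LifecycleStage.DEPLOYMENT_INTEGRATION: {LifecycleStage.OPERATION_MONITORING},
    LifecycleStage.OPERATION_MONITORING: {LifecycleStage.DESIGN_DEVELOPMENT,
                                          LifecycleStage.RETIREMENT},
    LifecycleStage.RETIREMENT: set(),
}

def advance(current: LifecycleStage, target: LifecycleStage) -> LifecycleStage:
    """Permit a stage change only along an approved transition."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Transition {current.name} -> {target.name} is not permitted")
    return target
```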

3.2 Risk Management

The organization maintains an AI Risk Management Framework (AI-RMF) consistent with ISO/IEC 23894 and the EU AI Act. Risk management activities apply across the entire AI lifecycle, including design, data collection, development, testing, deployment, monitoring, maintenance, and decommissioning. The process is systematic, traceable, and proportionate to the risks presented by the intended purpose and use context of each AI system.

All AI systems undergo an initial risk classification to determine whether they fall within prohibited practices, high-risk categories, or lower-risk applications. For high-risk AI systems, a formal and iterative risk management process is applied, incorporating hazard identification, risk estimation, risk evaluation, and implementation of mitigation measures. Special consideration is given to risks affecting fundamental rights, privacy, cybersecurity, and interoperability, as well as to risks of discrimination and adverse societal impacts.

Risks arising from data quality, bias, representativeness, and potential data poisoning are systematically assessed. Technical robustness and resilience against manipulation, adversarial attacks, or model drift are validated through testing and documented evidence. 

When risks cannot be reduced to acceptable levels through mitigation measures, the AI system shall not be deployed. Risk management outcomes are reviewed at least annually or when significant modifications, regulatory changes, or new hazards are identified. This approach ensures safe, trustworthy, and lawful AI operations in compliance with the EU AI Act.
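
As a minimal sketch of the initial risk classification described above, the triage below maps a use case onto EU AI Act risk tiers. The attribute names and simplified decision logic are assumptions for illustration; the formal assessment applies the full criteria of the Act.

```python
# Illustrative triage of an AI use case into EU AI Act risk tiers.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    prohibited_practice: bool    # e.g. social scoring, manipulative techniques
    annex_iii_domain: bool       # e.g. employment, credit, essential services
    interacts_with_humans: bool  # triggers transparency obligations

def classify(uc: UseCase) -> str:
    if uc.prohibited_practice:
        return "PROHIBITED: must not be developed or deployed"
    if uc.annex_iii_domain:
        return "HIGH-RISK: formal iterative risk management process required"
    if uc.interacts_with_humans:
        return "LIMITED-RISK: transparency obligations apply"
    return "MINIMAL-RISK: baseline controls apply"

print(classify(UseCase("CV screening assistant", False, True, True)))
# -> HIGH-RISK: formal iterative risk management process required
```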

3.3 Data Management & Quality

The organization ensures that all data used to train, test, validate, and operate AI systems is lawful, relevant, accurate, and fit for its intended purpose. Data used in AI development reflects the contextual environment within which the system operates and undergoes quality checks to detect and remediate historical bias, sampling issues, or anomalies. Data lineage is documented comprehensively, including original sources, transformation steps, access controls, and retention policies, with special care taken to preserve the integrity and confidentiality of personal information.

All high-risk and generative AI systems maintain dataset documentation artifacts that provide transparency into the characteristics of the underlying data. These artifacts describe intended use, known limitations, collection methodologies, and ethical considerations. Dataset review processes are conducted to verify that data is appropriately representative and that any sensitive attributes are used only with explicit authorization and legitimate justification. Synthetic data, if employed, is clearly labeled, and privacy leakage testing is conducted to prevent re-identification risks.
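
Dataset documentation artifacts of this kind are often realized as machine-readable datasheets. The following sketch shows one possible schema; the field names are assumptions rather than a prescribed format.

```python
# Hypothetical schema for a dataset documentation artifact ("datasheet").
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    name: str
    intended_use: str
    collection_method: str
    known_limitations: list[str] = field(default_factory=list)
    sensitive_attributes: list[str] = field(default_factory=list)  # need explicit authorization
    contains_synthetic_data: bool = False   # synthetic records must be clearly labeled
    lineage: dict[str, str] = field(default_factory=dict)  # sources, transformations, retention

sheet = DatasetDatasheet(
    name="loan_applications_2024",
    intended_use="Training a credit decision-support model",
    collection_method="Internal CRM export, pseudonymized",
    known_limitations=["Under-represents applicants younger than 25"],
    sensitive_attributes=["age"],
)
```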

To preserve data integrity, only approved datasets are used for training, and periodic audits are conducted to detect drift, contamination, or emerging bias. When personal data is processed, retention is aligned with GDPR principles of data minimization and storage limitation. Violations of data quality expectations are immediately remediated, and updated documentation reflects corrective actions taken.

3.4 Transparency & Explainability

Transparency and explainability are fundamental principles governing the responsible use of AI within the organization. Users interacting with AI systems are informed that they are engaging with automated functionality, and the purposes, limitations, and expected level of autonomy of the system are communicated clearly. For high-risk systems, additional transparency obligations apply, including documented reasoning pathways and auditable decision-making processes. Explanations are meaningful, tailored to stakeholder needs, and sufficiently detailed to support challenges or appeals.

The organization implements mechanisms enabling users to understand how outputs are generated, particularly in contexts involving risk to rights, safety, or access to essential services. Generative AI systems disclose when content is synthetically produced. Where explainability is technically constrained, compensating safeguards, such as heightened supervision, are implemented. Documentation includes known failure modes, uncertainty ranges, and conditions under which the model’s outputs should not be relied upon.
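
As one hypothetical illustration of the synthetic-content disclosure requirement, a generation service might wrap each output with a human-readable notice and machine-readable provenance metadata; the field names below are assumptions.

```python
# Hypothetical disclosure wrapper for generated content.
def label_generated_content(text: str, model_name: str) -> dict:
    """Pair the output with a human-readable notice and machine-readable provenance."""
    return {
        "content": text,
        "notice": "This content was generated by an AI system.",
        "provenance": {"generator": model_name, "synthetic": True},
    }
```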

3.5 Security & Robustness

AI systems are designed, developed, and operated in a manner that preserves their integrity, availability, and resilience against accidental failure, malicious interference, or adversarial exploitation. Security by design principles are embedded throughout the lifecycle, incorporating access controls, versioning safeguards, and cryptographically signed model artifacts to detect tampering. Training pipelines are protected from data poisoning attacks, unauthorized parameter modifications, and model extraction attempts.
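
A minimal sketch of the tamper-detection idea follows, assuming a SHA-256 digest recorded at release time is verified before a model artifact is loaded. Production deployments would typically rely on full digital signatures rather than this bare hash comparison.

```python
# Sketch: compare a model artifact's SHA-256 digest against the value recorded
# at release time before loading it.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_digest: str) -> None:
    """Refuse to load an artifact whose digest does not match the recorded value."""
    if sha256_of(path) != expected_digest:
        raise RuntimeError(f"{path}: integrity check failed, possible tampering")
```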

Upon deployment, the organization maintains a comprehensive set of controls and associated documentation demonstrating their enforcement. These include, but are not limited to, technical files, model documentation, dataset datasheets, risk assessment reports, human supervision logs, incident records, ethical review reports, security logs, and compliance attestations. Each AI system is assigned a risk classification, and documentation is tailored proportionally to the system’s risk level. Controls and evidence shall be periodically reviewed and updated to reflect operational changes, audit findings, and regulatory guidance, ensuring continued readiness for inspection, supervision, and certification.

Continuous monitoring mechanisms detect performance anomalies or suspicious behaviours indicating compromise. Security logs are retained to support forensic investigations, and rollback capabilities are maintained to restore operations following disruptive events. Robustness testing, including adversarial simulations and stress testing, is conducted to identify vulnerabilities prior to deployment. Controls are reviewed regularly, adjusting protection levels based on evolving threat landscapes and system criticality.
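
One common way to operationalize drift detection, offered here purely as a sketch, is the Population Stability Index (PSI) between the training-time distribution of a feature and live traffic. The thresholds in the closing comment are conventional rules of thumb, not policy-mandated values.

```python
# Population Stability Index between a baseline (training) sample and live data.
import math

def psi(baseline: list[float], live: list[float], bins: int = 10) -> float:
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def share(data: list[float], i: int) -> float:
        # Fraction of the sample falling in bin i; the floor avoids log(0).
        hits = sum(1 for x in data
                   if edges[i] <= x < edges[i + 1] or (i == bins - 1 and x == hi))
        return max(hits / len(data), 1e-6)

    return sum((share(live, i) - share(baseline, i))
               * math.log(share(live, i) / share(baseline, i))
               for i in range(bins))

# Rule of thumb: PSI < 0.1 stable; 0.1-0.25 investigate; > 0.25 raise a drift alert.
```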

3.6 Human Supervision

Human supervision ensures that AI-driven decisions remain aligned with ethical expectations, legal requirements, and organizational values. Supervision mechanisms are enforced in proportion to system risk, with high-risk applications requiring human-in-the-loop review before consequential decisions are enacted. Human operators are trained to detect anomalous outputs, recognize uncertainty, and escalate concerns when system behaviour deviates from expected parameters. Operators retain authority to override or suspend AI activity if security or compliance concerns arise.

To mitigate automation bias, decision support tools are presented with contextual cues indicating uncertainty and potential error ranges. Users have access to channels for contesting decisions influenced by AI, and appeals are investigated promptly. Supervision records, including override events and decision rationales, are documented to support accountability, audits, and continual improvement.
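
Supervision records such as the override events mentioned above could be captured as immutable audit entries. The schema below is a hypothetical sketch, not a prescribed format.

```python
# Hypothetical schema for a supervision/override audit record.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideEvent:
    system_id: str           # which AI system was overridden
    operator_id: str         # who exercised the override authority
    ai_recommendation: str   # what the system proposed
    human_decision: str      # what was actually enacted
    rationale: str           # documented reason, supporting audits and appeals
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
```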

3.7 Incident Response

The Organization’s Incident Response Policy & Procedure apply to AI systems, ensuring prompt detection, assessment, containment, and remediation of incidents that impact AI systems’ integrity, security, or compliance. The Incident Response Policy addresses all AI-related failures, including operational anomalies, model drift, unintended outputs, data breaches, and malicious exploitation. In accordance with the Organization’s Incident Response Policy & Procedure, all incidents are reported through defined channels, assessed for severity, and escalated to responsible internal teams and, where required by law, to relevant regulatory authorities. The Incident Response Procedure includes root-cause analysis, impact assessment, communication protocols, and documentation of corrective and preventive actions. Post-incident reviews feed back into the AI system lifecycle to prevent recurrence and improve organizational resilience.
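
For illustration only, severity-based escalation might be encoded as a simple routing table; the severity labels and recipients below are assumptions, and actual routing is governed by the Incident Response Procedure.

```python
# Hypothetical severity-based routing for AI incident escalation.
SEVERITY_ROUTING = {
    "low":      ["AI operations team"],
    "medium":   ["AI operations team", "Information Security Officer"],
    "high":     ["Information Security Officer", "Information Security Steering Committee"],
    "critical": ["Information Security Officer", "Information Security Steering Committee",
                 "relevant regulatory authority"],
}

def escalate(severity: str) -> list[str]:
    """Return the notification list for a severity; unknown values are treated as high."""
    return SEVERITY_ROUTING.get(severity, SEVERITY_ROUTING["high"])
```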

3.8 Ethical Considerations

AI systems deployed by the organization uphold ethical principles that protect human rights, promote fairness, and prevent harm. Ethical governance requires careful consideration of bias, discrimination, and unintended consequences in all AI models, including generative AI. High-risk AI applications undergo periodic ethical reviews to assess potential societal impacts, fairness, and transparency. Developers and operators are responsible for identifying ethical risks, implementing mitigation measures, and escalating complex ethical dilemmas to the relevant business and system owners. The organization fosters a culture of accountability, responsibility, and proactive ethical vigilance, ensuring AI systems contribute positively to society while minimizing risks.

3.9 Continuous Improvement

The organization embeds continuous improvement principles into the governance of AI systems, ensuring that lessons learned from incidents, audits, operational monitoring, and user feedback are systematically incorporated into system design. Controls are reviewed periodically to remain aligned with evolving industry standards, regulatory requirements, and emerging best practices. Feedback loops capture performance metrics, risk indicators, and compliance outcomes, enabling informed updates to operational practices and risk mitigation strategies.

Training and awareness programs are maintained to ensure that personnel involved in AI development, deployment, monitoring, and supervision remain knowledgeable about the latest ethical standards, regulatory obligations, and organizational expectations. AI systems themselves undergo performance evaluations and model recalibrations based on these insights, maintaining both operational efficiency and alignment with compliance obligations. Continuous improvement processes are documented and audited, ensuring accountability and traceability of changes over time. The organization commits to fostering a culture of proactive governance, transparency, and learning, reinforcing ethical AI deployment and reducing the likelihood of systemic errors or regulatory non-compliance.

3.10 Acceptable Use of AI Systems

Organization users must use AI systems responsibly, ethically, and in accordance with applicable laws, regulations, and organizational policies. AI tools must be accessed only for authorized business purposes and must not be used for activities that compromise privacy, fairness, or security. Organization users shall respect data confidentiality and handle personal, sensitive, or proprietary information in AI systems in line with data protection requirements.

Organization users shall avoid using AI systems to produce, disseminate, or act upon content that is misleading, discriminatory, harmful, or unlawful. They shall not bypass or disable safeguards, monitoring, or human supervision mechanisms embedded in AI systems. They must report anomalies, unexpected behaviours, or potential misuse of AI systems to designated governance or security teams immediately.

Users shall interact with AI systems transparently, clearly distinguishing human decisions from automated outputs, and shall exercise appropriate judgment when interpreting AI recommendations. They must participate in training and awareness programs to maintain the skills and understanding necessary for the secure and effective use of AI systems.

They shall collaborate with colleagues and governance teams to continuously improve AI system usage, feedback mechanisms, and operational practices. AI interactions must be documented when required for accountability, compliance, or audit purposes. 

4. Document Review

To ensure the continued suitability, adequacy, and effectiveness of this policy, the Organization shall ensure that reviews of this document are performed at appropriate intervals and whenever significant changes occur to the Organization or its information assets.

Reviews and updates shall be discussed during the Information Security Steering Committee’s meetings and communicated to the Organization’s Management for approval and sign-off.

5. Policy Compliance

Compliance with this policy is mandatory for all internal and external Users. Compliance checks will be performed on a regular basis by the Information Security Officer of the Organization.

Any breaches or alleged breaches of this policy will be investigated by the Information Security Officer and reported directly to the Information Security Steering Committee, which will determine the appropriate disciplinary actions.
