SCALIBIT AI & GPU SERVER USAGE POLICY
Effective Date: March 1, 2025
SCALIBIT (“SCALIBIT”) provides global hosting and infrastructure services, including Virtual Machines (VMs), Cloud Servers, Dedicated Servers or Bare Metal, and GPU Servers, with available server locations across North America, South America, Europe, Asia-Pacific, and Africa.
This document outlines SCALIBIT’s rules, compliance obligations, and permitted boundaries for using its infrastructure to deploy artificial intelligence (AI), GPU hosting, machine learning (ML) training, automation, and other specialized compute workloads, including data handling, ethical usage, regulatory compliance, and misuse prevention.
The related policies are provided for transparency, compliance readiness, and future service enablement purposes only.
This AI & GPU Server Usage Policy (“Policy”) is an integral part of SCALIBIT’s Terms of Service (ToS), Acceptable Use Policy (AUP), Privacy Notice, and, where applicable, the Data Processing Agreement (DPA). In the event of any conflict regarding AI, GPU, or ML-related usage, this Policy shall prevail for those specific activities.
Quick navigation to key parts of this policy
- 1. Purpose and Scope
- 2. Definitions
- 3. Authorized, Restricted, and Prohibited AI Uses
- 4. Data Protection, Privacy, and Regulatory Compliance
- 5. Responsibility for AI Models, Outputs, and Decisions
- 6. Export Controls, Sanctions, and Sensitive Use Cases
- 7. Monitoring, Abuse Detection, and Fraud Prevention
- 8. Violations, Enforcement, and Remedies
- 9. Relationship to Other SCALIBIT Policies
- 10. Contact and Compliance Inquiries
- 11. AI Research, Academic Use, and Educational Exemptions
1. PURPOSE AND SCOPE
This Policy governs the use of SCALIBIT Services for artificial intelligence (AI) model development, GPU hosting, machine learning (ML) training, large language models (LLMs), automated decision-making, and other specialized compute workloads.
It applies to all Customers and End Users who use SCALIBIT’s infrastructure — including Virtual Machines (VMs), Cloud Servers, Dedicated Servers or Bare Metal, and GPU Servers — for AI or AI-adjacent workloads, regardless of the geographical location of the servers.
By deploying AI or specialized compute workloads on SCALIBIT infrastructure, you agree to comply with this Policy, as well as SCALIBIT's Terms of Service (ToS), Acceptable Use Policy (AUP), Privacy Notice, and, where applicable, the Data Processing Agreement (DPA).
2. DEFINITIONS
For the purposes of this Policy:
- AI Hosting means using SCALIBIT Services to develop, train, fine-tune, deploy, or serve artificial intelligence or machine learning systems, including but not limited to LLMs, diffusion models, and predictive analytics engines.
- GPU Hosting means use of dedicated, bare metal, or virtualized GPU-capable servers (including Cloud Servers) for intensive computational workloads, such as AI/ML training, rendering, or simulation.
- ML Training means any process that uses datasets — including structured, unstructured, or synthetic data — to train a model that can infer patterns, make predictions, or automate decisions.
- Regulated AI means AI systems that may materially affect the rights, freedoms, safety, finances, privacy, or opportunities of individuals or groups (for example, hiring models, credit scoring, law enforcement tools, or health-related predictions) and are subject to special regulatory frameworks.
- Prohibited AI Usage means any AI, ML, or GPU workload that violates this Policy, the AUP, applicable laws, export controls, sanctions regimes, or recognized ethical and privacy standards.
3. AUTHORIZED, RESTRICTED, AND PROHIBITED AI USES
3.1 Permitted AI and GPU Uses
The following uses of SCALIBIT Services for AI and GPU workloads are generally permitted, provided they comply with all other SCALIBIT policies and applicable law:
- Standard machine learning training with non-sensitive datasets.
- Predictive analytics, forecasting, optimization, and statistical modeling in business, research, or operations.
- AI-based automation, including chatbots, customer support assistants, workflow automation, or anomaly detection for legitimate security operations.
- GPU-based rendering workloads, such as 3D rendering, VFX, animation, or benign image processing.
- Scientific and academic research that does not involve prohibited categories of data or use cases.
Customers are solely responsible for ensuring that all permitted AI and GPU workloads comply with data protection, privacy, and sector-specific regulations.
3.2 Restricted (Pre-Approval) AI Uses
The following categories of AI activities are considered restricted and require prior written authorization from SCALIBIT before deployment:
- Biometric Processing: Face recognition, voiceprint analysis, fingerprint matching, gait recognition, or any processing of biometric identifiers used for identification, authentication, or tracking.
- Emotion or Behavioural Analysis: Inferring emotions, mental state, or behavioural risk assessments from images, voice, text, or sensor data.
- Regulated AI: AI systems used in healthcare, medical diagnostics, legal or regulatory decision-making, or systems affecting fundamental rights under GDPR, UK GDPR, KVKK, or similar frameworks.
- Automated Decision Systems: AI used for hiring, employment screening, lending, credit scoring, eligibility assessment, or fraud risk scoring concerning individuals.
- Any AI system subject to special regulation (for example, the EU AI Act, sector-specific AI guidance, or emerging AI safety laws).
SCALIBIT may deny, condition, or revoke authorization at its sole discretion where a risk to rights, safety, compliance, or infrastructure integrity is identified.
3.3 Strictly Prohibited AI Uses (No Exceptions)
The following activities are strictly prohibited on SCALIBIT infrastructure. No approvals, exemptions, or reseller arrangements may override these prohibitions:
- Deepfake and Impersonation Abuse: Training or serving models that generate highly realistic synthetic media (audio, video, images) to impersonate real individuals without consent, including political figures, public officials, or private individuals.
- Non-Consensual or Exploitative Content: AI models used to create, enhance, or distribute pornographic, erotic, or sexually explicit content involving real individuals, minors, or any form of abuse (including synthetic or deepfake child exploitation material).
- Mass Surveillance or Oppressive Monitoring: AI used for unlawful mass surveillance, tracking of protected groups, coercive monitoring of individuals, or invasive profiling in violation of privacy and human rights standards.
- Dark Web, Criminal, or Illicit Finance AI: AI models designed for darknet operations, ransomware automation, cybercrime tool development, sanctions evasion, illicit finance, or coordination of criminal activity.
- Political Manipulation and Disinformation: Use of AI to generate, amplify, or coordinate disinformation campaigns, astroturfing, bot farms, or manipulation of democratic processes or elections.
- Military, Warfare, and Weapons Development: AI models or simulations used to design, control, or optimize weapons systems, targeting, battlefield communications, drone warfare, or other military/intelligence offensive operations.
- Harassment, Hate, or Abuse: AI-generated content used to harass, threaten, or attack individuals or groups on the basis of race, religion, ethnicity, gender, sexual orientation, or other protected characteristics.
Any use of SCALIBIT Services for these purposes constitutes a material breach of this Policy, the ToS, and the AUP, and may result in immediate termination without refund and reporting to competent authorities.
3.4 AI Ethics & Responsible Use Requirements
Customers deploying AI workloads on SCALIBIT infrastructure must ensure responsible and ethical use of AI in compliance with internationally recognized frameworks, including (but not limited to) the OECD AI Principles, the EU AI Act, the NIST AI Risk Management Framework, and ISO/IEC 23894 (AI Risk Management).
- AI systems must not be intentionally designed to deceive, impersonate, or manipulate individuals without their informed consent.
- AI outputs must not include discriminatory, harassing, hateful, or rights-infringing content based on protected characteristics under applicable law.
- High-impact AI systems (employment screening, credit scoring, eligibility determination, biometric identity verification) require human review and must not rely on AI-only outputs for binding decisions.
- Where synthetic data is used for AI training, it must be generated and managed in a legally compliant and non-exploitative manner.
3.5 Model Abuse & Misuse Prevention (Deepfake, Malicious AI, Political Manipulation)
The following AI use cases are strictly prohibited due to legal, ethical, and public safety risks — including when performed using commercially licensed, open-source, or fine-tuned models:
- Generation or deployment of AI models intended to impersonate real individuals (including voice cloning, facial synthesis, or identity spoofing) without their explicit consent.
- Development or distribution of AI tools used to automate hacking, exploit generation, vulnerability scanning, or malware design.
- Training or serving models designed for political manipulation, influence operations, targeted disinformation, bot-driven propaganda, or election interference.
- AI used for biometric spoofing attacks (e.g., defeating facial recognition, voice authentication, CAPTCHAs, or KYC verification tools).
3.6 Automated AI-Based Mass Email, Phishing, or Fraud
SCALIBIT strictly prohibits any use of its infrastructure (including GPU servers, Cloud Servers, or AI modules) to generate, promote, or automate:
- AI-generated phishing templates or impersonation of financial, governmental, or service providers.
- Mass email or SMS campaigns that are AI-assisted for fraudulent, deceptive, or unsolicited communications (spam).
- AI-based social engineering, fraud optimization, or cyber-extortion activities.
- Automated generation of illegal or unauthorized marketing databases using AI.
4. DATA PROTECTION, PRIVACY, AND REGULATORY COMPLIANCE
4.1 General Information Governance
Customers remain solely responsible for ensuring that their AI and GPU workloads comply with all applicable data protection, privacy, consumer protection, and sector-specific regulations, including but not limited to:
- CCPA/CPRA and other U.S. state privacy laws.
- GDPR and national data protection laws in the European Union.
- UK GDPR and the UK Data Protection Act.
- KVKK in Türkiye.
- HIPAA and related U.S. health data regulations (where applicable).
- FERPA, BIPA, and similar niche or regional privacy frameworks.
- Any applicable AI-specific regulation (including the EU AI Act and national AI statutes).
SCALIBIT does not provide “HIPAA-compliant” or “BIPA-compliant” AI hosting, nor does it act as a Business Associate under HIPAA for customer-managed AI data. Customers must not use SCALIBIT Services for the storage, processing, or training of AI models on Protected Health Information (PHI) or similarly regulated health data.
Customers must obtain all necessary consents, permissions, and legal bases for processing personal data, especially when training models on personal, biometric, or behavioural information. SCALIBIT does not vet or approve datasets and assumes no liability for their legality or appropriateness.
4.2 AML/KYC Requirements for Specialized AI Workloads
For certain AI-related workloads that present elevated financial, regulatory, or misuse risks (including GPU clustering, AI acceleration, crypto-related AI development, or automation of financial decisioning), SCALIBIT may require identity verification (KYC) and/or enhanced due diligence (EDD) prior to service activation.
- Verification may involve confirmation of identity, organization, or country of origin for sanctions compliance purposes.
- Services involving AI training on financial, biometric, health, or identity-based data may require additional legal documentation.
- Anonymous, unverifiable, or proxy-operated accounts may be restricted or denied access to specialized GPU or AI workloads.
Failure to comply with AML/KYC requests may result in suspension, refusal, or termination of AI-related Services.
For certain specialized AI workloads — including large GPU clusters, automated financial or credit decisioning, crypto-related AI tooling, or workloads with elevated abuse or sanctions exposure — SCALIBIT may require completion of a separate AI KYC / Enhanced Due Diligence (EDD) Verification Form.
This form may request additional information about the Customer’s organization, ownership structure, intended AI use cases, and data categories, and must be completed accurately as a condition of service activation or continuation.
The AI Hosting Application Form is currently available at: https://scalibit.com/ai-hosting-application.
4.3 AI Data Residency, Localization, and Sovereignty Rules
Customers remain responsible for ensuring that any personal data, training datasets, logs, and AI outputs hosted or processed on SCALIBIT infrastructure comply with applicable data residency, localization, and data sovereignty requirements in the relevant jurisdictions.
Without limiting the generality of Section 4, Customers must in particular:
- Ensure that any personal data subject to GDPR is stored, processed, or transferred in accordance with lawful transfer mechanisms (for example, adequacy decisions, Standard Contractual Clauses, or other recognized safeguards).
- Comply with KVKK and any applicable local data localization rules in Türkiye, including restrictions on transferring personal data abroad and obligations to maintain certain data within the country.
- Respect regional or sector-specific data residency requirements applicable to financial, telecom, governmental, or critical infrastructure data, where such regulations mandate storage in specific countries or regions.
- Clearly designate, within their own systems and documentation, which SCALIBIT locations (for example, EU-only, US-only, TR-only) are used for datasets subject to residency constraints.
SCALIBIT does not guarantee that a given server location satisfies any specific data residency law. Customers are solely responsible for selecting appropriate regions and for implementing any necessary technical or contractual safeguards to meet data residency and sovereignty obligations.
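By way of illustration only, and without forming part of this Policy, a Customer might record residency constraints alongside each dataset and verify a proposed deployment region against them before provisioning. The sketch below is a hypothetical example; the region labels and dataset tags are assumptions, not SCALIBIT product identifiers.

```python
# Hypothetical residency check a Customer might run in their own tooling
# before selecting a SCALIBIT server location for a given dataset.
# Region labels ("EU", "US", "TR") are illustrative only.
ALLOWED_REGIONS = {
    "gdpr_personal_data": {"EU"},             # keep within EU-only locations
    "kvkk_personal_data": {"TR"},             # Türkiye localization constraints
    "non_personal_logs": {"EU", "US", "TR"},  # no residency restriction assumed
}

def region_is_permitted(dataset_tag: str, target_region: str) -> bool:
    """Return True if the tagged dataset may be hosted in the chosen region."""
    return target_region in ALLOWED_REGIONS.get(dataset_tag, set())

# Example usage: a KVKK-constrained dataset must not be provisioned in the US.
assert region_is_permitted("gdpr_personal_data", "EU")
assert not region_is_permitted("kvkk_personal_data", "US")
```

Any such mapping is maintained at the Customer’s own responsibility and does not replace the legal analysis required under applicable residency and transfer rules.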
4.4 AI Model Training – Data Classification & Legal Processing Categories
To reduce legal, ethical, and security risks, Customers must classify datasets used for AI model training, fine-tuning, and inference according to their sensitivity and regulatory impact.
At a minimum, the following categories apply:
- Class 0 – Synthetic or Non-Personal Data: Fully synthetic or anonymized datasets with no direct or indirect link to an identifiable individual. Generally permitted, subject to this Policy and the AUP.
- Class 1 – Business / Operational Data: Non-sensitive business metrics, logs, or operational data (for example, inventory levels, generic IoT data, or aggregated analytics) that do not identify individuals. Permitted, provided applicable confidentiality obligations are observed.
- Class 2 – Personal Data (GDPR Art. 4 / Similar Definitions): Information relating to an identified or identifiable natural person (for example, names, emails, IP addresses, account identifiers). Use of Class 2 data requires a valid legal basis, appropriate transparency measures, and, where applicable, a DPA with SCALIBIT.
- Class 3 – Special / Sensitive Personal Data (GDPR Art. 9, KVKK Art. 6, etc.): Data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, health data, or data concerning a natural person’s sex life or sexual orientation. Customers must not use SCALIBIT Services to train or deploy AI models on Class 3 data, unless expressly and separately agreed in writing by SCALIBIT and permitted under applicable law.
- Class 4 – Biometric, Genetic, and Identity-Linked Data: Biometric identifiers (face templates, fingerprints, voiceprints, gait patterns), genetic data, or other data used for unique identification or authentication. Restricted and generally prohibited for AI training on SCALIBIT unless explicitly approved under Section 3.2 as a restricted use.
- Class 5 – Financial, Payment, and PCI-Related Data: Payment card numbers, security codes, full PAN, or any data subject to PCI-DSS or similar frameworks. SCALIBIT does not provide PCI-certified AI hosting, and Customers must not train or deploy AI models on raw cardholder or sensitive payment data.
Customers must document their data classification and ensure that AI models trained on mixed datasets inherit the strictest applicable category. Where there is doubt, the dataset should be treated as the highest-risk class involved.
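As a purely illustrative aid (not part of this Policy), the “strictest class wins” rule for mixed datasets can be expressed as a maximum over the numeric class levels above. The following sketch assumes hypothetical names (DataClass, governing_class) that a Customer might use in their own classification tooling; it is not a SCALIBIT API.

```python
from enum import IntEnum

class DataClass(IntEnum):
    """Classification levels from Section 4.4; a higher value is stricter."""
    SYNTHETIC_NON_PERSONAL = 0   # Class 0
    BUSINESS_OPERATIONAL = 1     # Class 1
    PERSONAL = 2                 # Class 2
    SPECIAL_SENSITIVE = 3        # Class 3
    BIOMETRIC_GENETIC = 4        # Class 4
    FINANCIAL_PCI = 5            # Class 5

def governing_class(dataset_classes: list[DataClass]) -> DataClass:
    """A model trained on mixed datasets inherits the strictest class involved."""
    if not dataset_classes:
        raise ValueError("At least one dataset classification is required")
    return max(dataset_classes)

# Example: mixing operational logs with customer emails yields Class 2 overall.
assert governing_class([DataClass.BUSINESS_OPERATIONAL,
                        DataClass.PERSONAL]) == DataClass.PERSONAL
```

A result of Class 3 or above signals that the workload falls into the categories this section treats as restricted or prohibited absent SCALIBIT’s express written agreement.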
5. RESPONSIBILITY FOR AI MODELS, OUTPUTS, AND DECISIONS
All AI models, training datasets, fine-tuned weights, inference outputs, predictions, recommendations, and decisions made using SCALIBIT-hosted infrastructure are the sole responsibility of the Customer.
SCALIBIT does not review, monitor, audit, or validate AI models or outputs and does not provide any warranty as to their:
- Accuracy, reliability, or fitness for a particular purpose;
- Compliance with legal, ethical, or regulatory standards;
- Absence of bias, discrimination, or unfair impact;
- Non-infringement of third-party intellectual property rights.
Customers must implement appropriate validation, human oversight, and risk management processes before relying on AI outputs in any critical, sensitive, or high-impact context.
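For illustration only, one common way to satisfy the human-oversight expectation in Sections 3.4 and 5 is to treat model output as a non-binding recommendation until a named human reviewer approves it. The sketch below is a hypothetical pattern, not SCALIBIT tooling; the Recommendation type and the finalize/is_binding helpers are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """An AI output held as non-binding until a human reviewer signs off."""
    subject_id: str
    model_output: str              # e.g. a score or a suggested outcome
    reviewer_id: Optional[str] = None
    approved: bool = False

def finalize(rec: Recommendation, reviewer_id: str, approve: bool) -> Recommendation:
    """Record the human decision; only a reviewed recommendation can take effect."""
    rec.reviewer_id = reviewer_id
    rec.approved = approve
    return rec

def is_binding(rec: Recommendation) -> bool:
    # Without a named human reviewer, the AI output never becomes a binding decision.
    return rec.reviewer_id is not None and rec.approved
```

The validation, logging, and escalation steps surrounding such a gate remain the Customer’s responsibility.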
6. EXPORT CONTROLS, SANCTIONS, AND SENSITIVE USE CASES
Use of SCALIBIT infrastructure for AI and GPU workloads remains subject to all applicable U.S. Export Controls, Sanctions Regulations, and National Security Laws, including but not limited to OFAC sanctions programs, BIS export controls under the EAR, and related frameworks, as described in ToS §43 (U.S. Export Laws) and AUP §22.
Customers may not use SCALIBIT Services to:
- Develop, train, or deploy AI systems for the benefit of any sanctioned country, region, or OFAC-listed party.
- Support AI activities that facilitate sanctions evasion, money laundering, terrorist financing, or illicit finance.
- Provide AI capabilities to military, weapons, or dual-use applications requiring export licenses.
SCALIBIT may suspend or terminate AI-related services where there is a reasonable suspicion of violations and may report such activity to relevant authorities as required by law.
7. MONITORING, ABUSE DETECTION, AND FRAUD PREVENTION
SCALIBIT does not monitor the content of AI models or datasets. However, SCALIBIT may use automated and manual methods to identify anomalous, abusive, or high-risk activity patterns.
SCALIBIT reserves the right to:
- Temporarily suspend or throttle workloads that threaten infrastructure stability.
- Request clarification regarding AI workloads, datasets, and intended use cases.
- Investigate abuse reports submitted through Abuse Reporting or the SCALIBIT SOC & Abuse Portal.
Where violations are identified, SCALIBIT may take immediate action as described in Section 8.
8. VIOLATIONS, ENFORCEMENT, AND REMEDIES
Any violation of this Policy constitutes a material breach of SCALIBIT’s Terms of Service and Acceptable Use Policy. Depending on the nature and severity of the violation, SCALIBIT may, at its sole discretion:
- Issue warnings or corrective requests;
- Suspend or restrict access to services or resources;
- Terminate Services with or without refund;
- Report illegal activity to competent authorities;
- Cooperate with lawful investigations.
SCALIBIT shall not be liable for damages arising from good-faith enforcement actions.
9. RELATIONSHIP TO OTHER SCALIBIT POLICIES
This Policy forms part of, and should be read together with, the following SCALIBIT documents:
- Terms of Service
- Acceptable Use Policy
- Privacy Notice
- Data Processing Agreement
- Law Enforcement Requests
10. CONTACT AND COMPLIANCE INQUIRIES
Questions, authorization requests, and compliance inquiries regarding this Policy may be directed to:
- Compliance: compliance@scalibit.com
- Legal: legal@scalibit.com
11. AI RESEARCH, ACADEMIC USE, AND EDUCATIONAL EXEMPTIONS
SCALIBIT may grant limited exemptions for bona fide academic or non-profit research, subject to prior written approval and strict compliance conditions, including the following:
- Institution and project details must be disclosed.
- All prohibited use cases remain fully applicable.
- Ethics approvals and privacy compliance are mandatory.
- Additional safeguards may be required.
Last Reviewed: March 1, 2025
SCALIBIT Legal & Compliance Department — Wyoming, United States.