Usage Policy for Tendem
Effective date: 07/11/2025
1. Scope and Applicability
1.1. This Usage Policy (also referred to as the “Acceptable Use Policy” or “Policy”) governs all use of the Tendem platform, products and services, including any tools, models, data pipelines, interfaces and related documentation made available by Tendem (“Tendem”, “we”, “us”, “our”) (together, the “Services”).
1.2. This Policy applies to all users of the Services, including individuals, organizations and businesses (“you”, “Users”), as well as to:
content and data uploaded to, processed by, generated with or derived from the Services;
systems, models and products developed, trained and/or improved using the Services; and
any interactions, tasks or activities conducted on or through the Services.
1.3. This Policy forms part of, and must be read together with, the Toloka Terms of Use and any applicable agreements (including any Data Processing Agreement). In the event of conflict, the Terms of Use prevail, unless otherwise expressly agreed in writing.
1.4. Roles under AI regulation. Depending on how you use the Services, you may qualify as a “provider”, “deployer”, “importer”, “distributor” or other actor under applicable AI regulation (including the EU AI Act). You are solely responsible for determining your role(s), classifying your AI system(s), and complying with all corresponding legal obligations. Unless explicitly agreed otherwise in a written agreement with Tendem, Tendem does not act as the “provider” of any AI system that you develop, train or deploy using the Services, and Tendem does not perform conformity assessments, CE marking, declarations of conformity or registrations for your AI systems.
2. General Legal and Compliance Obligations
2.1. Compliance with law. You must comply with all applicable local, national and international laws and regulations in connection with your use of the Services. Failure to do so may result in immediate suspension or termination of access.
2.2. Regulated activities. Where your use of the Services relates to regulated activities (for example in finance, healthcare, employment, consumer protection or data protection), you remain solely responsible for obtaining and maintaining all required licences, approvals and compliance controls.
2.3. Responsibility for output. You are responsible for how you use, rely on or distribute output generated with the Services. You must not present automated output as human-generated content where doing so would mislead others or violate applicable law.
2.4. AI literacy and staff training. You must ensure that personnel who design, configure, supervise or rely on AI systems built or operated using the Services have an appropriate level of AI literacy and receive suitable training on: (a) the capabilities and limitations of such systems; (b) the relevant obligations under applicable AI and data-protection laws; and (c) internal procedures for identifying, escalating and mitigating potential risks, malfunctions or serious incidents.
3. Prohibited and Restricted Uses
You must not use the Services in any of the ways described below.
3.1. Illegal, harmful, fraudulent or abusive activity
You must not use the Services to:
3.1.1. Engage in illegal activity, including but not limited to:
the development, distribution or use of illegal substances, goods or services;
money laundering, fraud, scams or other financial crimes;
unlawful surveillance, stalking, harassment or doxxing; or
creating, distributing or promoting child sexual abuse material (CSAM) or any content that sexualizes minors. Tendem reports actual or suspected CSAM to relevant law-enforcement authorities and will terminate access.
3.1.2. Cause or threaten harm to yourself or others, including by:
promoting, encouraging or instructing on suicide, self-harm or eating disorders;
generating graphic or detailed depictions of self-harm intended to encourage harmful behaviour; or
glorifying or inciting violence, terrorism or violent extremism.
3.1.3. Develop, improve or operate weapons, explosives, hazardous materials or systems intended to cause serious harm.
3.1.4. Conduct or facilitate fraud, scams, abuse or predatory behaviours, including:
impersonation or misrepresentation of identity, credentials or affiliations;
phishing, social engineering or prompt injection;
facilitating forgery, acquisition or distribution of counterfeit or illegally obtained goods;
generating or facilitating large-scale spam or unsolicited communications; or
manipulating markets, prices, reviews, ratings or clicks.
3.1.5. Engage in academic dishonesty, including plagiarism, cheating, falsifying research data, or completing academic assignments or examinations on behalf of others.
3.1.6. Generate content primarily intended to mislead, deceive or defraud others, including fabricated evidence, forged documents or synthetic media presented as authentic.
3.1.7. Interfere with or compromise the technical integrity of the Services or third-party systems, including denial-of-service attacks, exploitation of vulnerabilities, system intrusion, or the distribution of malware, spyware or unauthorized surveillance tools.
3.2. Hate, discrimination, harassment and abusive content
You must not use the Services to generate or disseminate content that:
3.2.1. Promotes, incites, glorifies or celebrates hate, discrimination or harassment based on an individual’s or group’s protected characteristics (such as race, ethnicity, nationality, religion, gender, gender identity, sexual orientation, disability status, caste or similar identity attributes).
3.2.2. Advocates or encourages violence, intimidation or discrimination against individuals, groups, animals or property.
3.2.3. Targets individuals or groups with bullying, humiliation, shaming, threats or abusive language, including coordinated harassment campaigns.
3.3. High-risk and prohibited AI practices
3.3.1. Prohibited AI practices. In particular, you must not use the Services for any AI practices that are prohibited under Article 5 of the EU AI Act (Regulation (EU) 2024/1689), as amended from time to time. This includes, without limitation, using the Services to develop, train, deploy or operate systems that:
use subliminal, purposefully manipulative or deceptive techniques aimed at materially distorting a person’s behaviour in a manner that impairs their ability to make informed decisions where such distortion is likely to cause harm;
exploit vulnerabilities related to age, disability or social/economic circumstances in a way that is likely to cause harm;
implement social scoring or profiling of individuals leading to detrimental or unfair treatment;
conduct criminal risk assessments or predictions based solely on profiling or personality traits;
create or expand facial-recognition databases through untargeted scraping of images from the internet or CCTV footage;
infer emotions in workplaces or educational institutions (except where strictly necessary for safety or medical reasons and permitted by law);
perform biometric categorization or inference of sensitive attributes (such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation); or
perform real-time biometric identification in publicly accessible spaces by or on behalf of law enforcement.
3.3.2. High-stakes automated decisions and critical domains. Without Tendem’s prior express written approval, you must not use the Services to develop, train, deploy or operate systems that make or support high-stakes automated decisions in domains that significantly affect an individual’s safety, fundamental rights or livelihood, including:
safety components of products or other “high-risk” AI systems under applicable law;
management or operation of critical infrastructure (such as energy grids, water supplies, transportation systems, autonomous vehicles, drones or robotics) without rigorous safety validation and appropriate human oversight;
education and vocational training, including determining access to educational institutions or evaluating performance where decisions have significant impact and lack appropriate human review;
employment and workforce management, including automated screening, ranking or rejection of job applicants, AI-based behavioural monitoring of employees, or workplace profiling that may lead to discrimination;
access to essential private or public services, including automated creditworthiness assessments, insurance or mortgage decisions, welfare benefits, asylum or immigration decisions, without transparent criteria and meaningful human review;
law enforcement, criminal justice, predictive crime detection, mass surveillance or tracking of individuals or communities without legal authorization and human oversight;
migration, asylum and border control management; or
administration of justice and democratic processes.
3.4. Minors and vulnerable individuals
3.4.1. You must not use the Services for purposes that specifically target minors, or that are designed to be directly accessible to minors, in a manner that does not comply with applicable child-protection, data-protection or consumer-protection laws.
3.4.2. You must not use the Services to sexualize children, generate or distribute CSAM, facilitate grooming, or depict or promote any form of child abuse or exploitation.
3.4.3. You must not use the Services to develop or deploy tools specifically targeting, or accessible to, minors where such tools present heightened risk and lack appropriate safeguards and legal-compliance measures.
3.5. Deepfakes, misinformation and media integrity
You must not use the Services to intentionally generate, disseminate or deploy:
3.5.1. Deepfakes or manipulated media that are likely to deceive the public, including fabricated audio, images or video presented as authentic.
3.5.2. Disinformation or misinformation about current events, health, science or civic processes, where such content is deliberately misleading or likely to cause harm, or where it undermines the integrity of elections or civic processes (including false information about voting procedures or results).
3.5.3. Automated content-moderation or filtering systems that unlawfully censor lawful speech or enforce discriminatory or biased decisions.
3.5.4. Mass-scale personalized manipulation campaigns, including micro-targeting designed to influence political views or voting behaviour in a deceptive or exploitative manner.
3.5.5. Where required by applicable law, you must clearly and prominently disclose when image, audio or video content has been artificially generated or manipulated (for example, deepfakes).
3.5.6. You must not remove, alter or obscure any machine-readable markers, watermarks or similar technical measures attached to synthetic content that are designed to signal that such content is artificially generated or manipulated.
3.5.7. Human–AI interaction transparency. Where you use the Services to develop or operate systems that interact directly with natural persons (such as chatbots, virtual assistants, conversational agents or synthetic-media generators), you must ensure that such persons are informed, where required by law, that they are interacting with or viewing content generated or assisted by an AI system, and you must provide appropriate mechanisms for them to obtain human intervention or escalation where legally required (see the illustrative sketch at the end of this Section 3.5).
3.5.8. Limited safety-testing exception. Tendem may be used in limited ways to generate content otherwise prohibited under this section where, and only where, the sole purpose is to improve the safety, robustness, responsibility or ethicality of AI systems (for example, red-teaming, benchmarking, adversarial testing) in accordance with industry standards and applicable law. You remain responsible for implementing strict safeguards, secure handling and non-production use of such content.
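By way of illustration only, the sketch below shows one way a conversational system might surface the disclosure and human-escalation mechanisms referred to in clause 3.5.7. All names, keywords and messages in the sketch are hypothetical assumptions for the example and are not part of the Services; whether and how such mechanisms are legally required depends on the laws applicable to your system.

```python
# Minimal, illustrative sketch of the disclosure and escalation pattern
# described in clause 3.5.7. All names here are hypothetical; they are not
# part of the Tendem Services or any specific framework.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."
ESCALATION_KEYWORDS = {"human", "agent", "representative"}


def wants_human(message: str) -> bool:
    """Very rough check for a request to speak with a human."""
    return any(word in message.lower() for word in ESCALATION_KEYWORDS)


def generate_ai_reply(message: str) -> str:
    # Placeholder: substitute your own model or service call here.
    return "Thanks for your message. How can I help?"


def handle_message(message: str, is_first_turn: bool) -> str:
    """Return the assistant reply, with disclosure and escalation hooks."""
    if wants_human(message):
        # Route to a human operator where the law or your own policy requires it.
        return "Connecting you with a human representative."
    reply = generate_ai_reply(message)
    if is_first_turn:
        # Disclose the AI nature of the interaction up front where required.
        reply = f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```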
3.6. Political activity and election interference
You must not use the Services to:
3.6.1. Advocate for or against specific political candidates, parties or ballot initiatives in a deceptive, non-transparent manner or in violation of applicable election laws.
3.6.2. Conduct political lobbying or influence government decisions using automated content at scale without complying with applicable transparency, lobbying and campaign-finance rules.
3.6.3. Incite, glorify or promote disruption of elections or civic processes.
3.6.4. Conduct mass-scale political micro-targeting or manipulation based on sensitive attributes or inferred vulnerabilities.
3.7. Pornographic content and non-consensual intimate imagery
You must not use the Services to:
3.7.1. Generate, distribute or promote pornographic or sexually explicit content, including depictions of sexual intercourse or explicit sexual activities, sexual fetishes or fantasies, incest or bestiality.
3.7.2. Create, distribute or manipulate intimate images of any identifiable person without their explicit, informed consent, including “revenge porn”, synthetic nudity or other non-consensual intimate imagery.
3.7.3. Sexualize minors or generate any content involving minors in sexual contexts (including implied, stylized or “age-played” scenarios).
3.8. Professional advice and high-stakes decisions
3.8.1. You must not use the Services as a substitute for professional advice where specialized qualifications are required. In particular, you must not rely solely on Tendem outputs to:
provide investment, financial, insurance, tax or similar professional advice;
provide legal counsel, legal opinions or recommendations on legal actions; or
make or communicate medical diagnoses, treatment recommendations or patient-monitoring decisions without appropriate medical supervision.
3.8.2. You must not rely solely on the output of the Services for any healthcare-related decision, including genetic risk prediction, mental-health assessment or treatment decisions that could affect insurance, employment or personal rights.
3.8.3. You must not use the Services as the sole basis for life-altering decisions in public services (e.g. welfare benefits, asylum requests), criminal justice, employment, credit, insurance or housing, without transparent criteria and appropriate human review.
3.9. Financial services and advertising
You must not use the Services to:
3.9.1. Conduct or facilitate money laundering, scams or fraudulent payment/bonus schemes.
3.9.2. Automate creditworthiness assessments or customer risk scoring in opaque or discriminatory ways, or deny financial services (loans, insurance, mortgages) without human review.
3.9.3. Circumvent advertising rules and regulations or engage in advertising fraud, including manipulation of reviews, ratings, clicks or impressions.
3.10. Military and weapons use
The use of the Services for any military or weapons-related purposes is strictly prohibited. This includes, but is not limited to, using Tendem to develop, improve, operate or test:
weapons systems or military equipment;
military software, control or targeting systems;
ammunition or explosives;
military vehicles or transport systems (ground, air, naval or space); or
any other products or services intended primarily for military applications.
3.11. Platform integrity, safeguards and account abuse
You must not:
3.11.1. Harm or attempt to harm the Tendem and Mindrift platforms, other users or third parties, including by:
prompt-injection attacks, data exfiltration or model extraction;
attempts to degrade performance, availability or security of the Services; or
unauthorized monitoring or communications surveillance of individuals.
3.11.2. Circumvent or attempt to circumvent any safeguards, safety mitigations or access controls implemented in the Services, including by:
coordinating malicious activity across multiple accounts;
using automation to create accounts or to engage in spam or abusive behaviour; or
using prompts, completions or other outputs to train competing AI models without authorization where this is restricted by your agreement with Tendem.
3.11.3. Encourage users to download or install software (including malware) on their devices in a deceptive or unauthorized manner.
3.12. High-risk AI systems and customer obligations
3.12.1. If you use the Services in the lifecycle of an AI system that qualifies as “high-risk” under applicable AI regulation (including Annex III of the EU AI Act), you are solely responsible for:
(a) implementing and maintaining a documented risk-management system and quality-management system for such high-risk AI system;
(b) ensuring appropriate data-governance and data-quality measures, including documentation of the origin, relevance and representativeness of data processed via the Services;
(c) ensuring that logging and traceability obligations are met, including retention of logs for at least the minimum period required by law (see the illustrative sketch following this clause 3.12.1);
(d) providing and enforcing human oversight, including meaningful human review of high-stakes decisions, where required by law; and
(e) preparing and keeping technical documentation and records sufficient to demonstrate compliance with applicable AI regulation, including a clear description of how the Services are used within the design, development, training, validation, testing or monitoring of the AI system.
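The sketch below is a minimal, non-authoritative illustration of the kind of structured decision logging contemplated by clause 3.12.1(c). The field names, log file location and retention handling are assumptions for the example only; they are not prescribed by Tendem or by any regulation, and the applicable retention period must be determined under the law governing your system.

```python
# Minimal sketch of structured decision logging to support traceability, as one
# possible approach to the obligations summarised in clause 3.12.1(c). Field
# names and the log destination are illustrative assumptions only.

import json
import logging
from datetime import datetime, timezone
from typing import Optional
from uuid import uuid4

audit_logger = logging.getLogger("ai_decision_audit")
audit_logger.setLevel(logging.INFO)
# Retain this file for at least the minimum period required by applicable law.
audit_logger.addHandler(logging.FileHandler("ai_decisions.log"))


def log_decision(system_id: str, input_reference: str, output_summary: str,
                 human_reviewer: Optional[str]) -> str:
    """Append one traceable record for an automated decision and return its ID."""
    record_id = str(uuid4())
    audit_logger.info(json.dumps({
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_reference": input_reference,   # a reference, not raw personal data
        "output_summary": output_summary,
        "human_reviewer": human_reviewer,     # None if no human review occurred
    }))
    return record_id
```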
3.12.2. Unless explicitly agreed in writing, Tendem does not assume the role of “provider” of your high-risk AI system and is not responsible for your conformity assessment, CE marking, declaration of conformity or registration obligations for such system.
3.12.3. Where you act as a deployer of a high-risk AI system under applicable AI regulation, and the Services are used in that system’s lifecycle, you are responsible for:
(a) using the system in accordance with the instructions of the system’s provider;
(b) ensuring that input data you supply is relevant and sufficiently representative for the intended use;
(c) informing affected workers or end-users about the use of high-risk AI systems where required by law; and
(d) performing any required fundamental-rights impact assessments or similar assessments (for example where mandated for public authorities or providers of public services). Tendem does not conduct such assessments on your behalf.
3.13. General-purpose AI (GPAI)
3.13.1. If you use the Services to develop, train, fine-tune or otherwise contribute to a general-purpose AI (GPAI) model within the meaning of applicable AI regulation, you are responsible for complying with all obligations applicable to such models, including, where required:
(a) preparing and publishing summaries of training data (at least at an appropriate category level);
(b) conducting and documenting risk assessments and mitigation measures, particularly where the model may pose systemic risks; and
(c) complying with any applicable copyright-related obligations, including respecting opt-outs and providing appropriate information about training-data sources.
3.13.2. You are responsible for ensuring that data collected, generated or annotated via the Services can be described and documented at the level of granularity required by applicable transparency obligations. Tendem does not warrant that user-provided data or your use of the Services will, by itself, satisfy any training-data transparency or copyright-compliance obligations for your GPAI models.
4. Personally Identifiable Information (PII) and Data Protection
4.1. Legal basis for processing PII
4.1.1. Before processing Personally Identifiable Information (PII) within the Services, you must ensure compliance with all applicable data-protection laws and regulations. You are solely responsible for:
determining, documenting and maintaining a valid legal basis for uploading, storing or otherwise processing PII within the Services;
providing all required notices and obtaining any consents from data subjects where required by law; and
ensuring that your use of the Services is consistent with applicable privacy frameworks and your own internal policies.
4.1.2. Tendem does not validate, monitor or assume responsibility for the lawfulness of PII uploaded by you.
4.2. Data minimisation and anonymisation
4.2.1. You should anonymise or pseudonymise PII before uploading it to the Services whenever feasible (see the illustrative sketch at the end of this subsection).
4.2.2. You must limit processing to data that is accurate, relevant and strictly necessary for the intended purpose, and implement appropriate technical and organisational measures to protect PII, including access controls, retention limits and secure deletion policies.
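As a minimal sketch of the pseudonymisation approach referred to in clause 4.2.1, the example below replaces direct identifiers with keyed hashes before any upload. The field names and the HMAC-based technique are illustrative assumptions only; pseudonymised data may still constitute personal data under applicable law, and the secret key must be stored separately from anything supplied to the Services.

```python
# Minimal sketch of pseudonymising direct identifiers before upload, as
# discussed in clause 4.2.1. The field names and HMAC-based approach are
# illustrative assumptions, not a prescribed or sufficient compliance measure.

import hashlib
import hmac

DIRECT_IDENTIFIERS = {"name", "email", "phone"}  # adapt to your own schema


def pseudonymise_record(record: dict, secret_key: bytes) -> dict:
    """Replace direct identifiers with keyed hashes; leave other fields as-is."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS and value is not None:
            digest = hmac.new(secret_key, str(value).encode("utf-8"), hashlib.sha256)
            out[field] = digest.hexdigest()
        else:
            out[field] = value
    return out


# Example usage with made-up data; keep the key outside any uploaded dataset.
record = {"name": "Jane Doe", "email": "jane@example.com", "task_result": "label_a"}
print(pseudonymise_record(record, secret_key=b"store-this-key-separately"))
```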
4.3. Data Processing Agreement (DPA)
4.3.1. Where required by data-protection law, the Data Processing Agreement (DPA) forms an integral part of your agreement with Tendem and governs the processing of PII via the Services.
4.3.2. Under the DPA:
you act as Data Controller or Data Processor (on behalf of a controller); and
Tendem acts as Data Processor or Sub-Processor, as applicable.
4.3.3. The DPA sets out the respective roles, responsibilities and safeguards (technical, organisational and contractual) for PII processing. You must review, accept and adhere to the DPA before processing PII via the Services.
4.3.4. Failure to comply with the DPA and applicable data-protection law may result in limitations on, or termination of, your ability to process PII using Tendem.
5. Enforcement
5.1. Investigations and actions. We may investigate any suspected violation of this Policy or misuse of the Services. We reserve the right, at our sole discretion and without prior notice to the extent permitted by law, to:
issue warnings or require remedial actions;
suspend or restrict your access to all or part of the Services;
terminate your account or contract;
remove or disable access to content; and
report behaviour and share relevant information with law-enforcement or other competent authorities, particularly in cases involving child safety, threats of violence or other serious harm.
5.2. Reservation of rights. Our enforcement actions are without prejudice to any other rights or remedies available to Tendem under contract or applicable law.
5.3. Incident notification by customers. If you become aware of a serious incident, malfunction or risk relating to an AI system for which the Services were used in its design, development, training, validation, deployment or monitoring, and which may reasonably impact the security, integrity or legal compliance of the Services, you must notify Tendem without undue delay and provide reasonable cooperation to assess and, where appropriate, mitigate the issue.
5.4. Cooperation with authorities. Tendem may cooperate with competent supervisory authorities, regulatory bodies and law-enforcement agencies as required by law, including by sharing information reasonably necessary to investigate or address suspected violations of this Policy, serious incidents or systemic risks, while taking into account applicable confidentiality and data-protection obligations.
6. Reporting Concerns and Violations
If you become aware of any content or activity on the Services that you believe violates this Policy or applicable law, you should promptly report it to us via the support channels or contact details provided on the Tendem website or in your agreement. We take all reports seriously and will review them in accordance with our internal procedures and applicable law.
7. Changes to this Policy
7.1. We may update or modify this Policy from time to time (for example, to reflect changes in law, technology or our Services). The updated version will be published on our website and will indicate the effective date.
7.2. Your continued use of the Services after any update takes effect constitutes your acceptance of the updated Policy. If you do not agree with the updated Policy, you must stop using the Services.
7.3. AI regulation and territorial scope. The obligations arising from applicable AI regulation, including the EU AI Act, may apply to your AI systems based on factors such as where the system is placed on the market or put into service, where it is used, and where affected individuals are located, regardless of your place of establishment. You are responsible for monitoring when such obligations become applicable to you and for complying with them. Tendem may update this Policy and related contractual documentation to reflect changes in AI regulation and supervisory guidance.