Inworld AI Acceptable Use Policy

(last updated June 11, 2025)

This Acceptable Use Policy (“Policy”) supplements the Inworld AI Terms of Service (currently located at www.inworld.ai/terms) or the Master Services Agreement you signed with Inworld AI (the “Agreement”) and applies to your access to and use of our Services, including any Inputs you provide and Outputs you create. It also applies to your use within and outside our Website and our Services, whether direct or indirect, as well as any attempts to engage in such use. Capitalized but undefined terms used herein have the meanings set forth in the Agreement.

General. These rules apply to all uses of the Inworld AI platform and services:

  1. Comply with applicable laws. For example, do not:
     a) Compromise the privacy, safety, or legal rights of others.
     b) Engage in regulated activity without complying with applicable regulations.
     c) Promote or engage in any illegal activity, including the exploitation or harm of children and the development or distribution of illegal substances, goods, or services.
     d) Use subliminal, manipulative, or deceptive techniques that distort a person’s behavior so that they are unable to make informed decisions, in a way that is likely to cause harm.
     e) Exploit any vulnerabilities related to age, disability, or socio-economic circumstances.
     f) Create or expand facial, voice, or other biometric recognition databases without consent.
     g) Evaluate or classify individuals based on their social behavior or personal traits (including social scoring or predictive profiling) in a way that leads to detrimental or unfavorable treatment.
     h) Assess or predict the risk of an individual committing a criminal offense based solely on their personal traits or on profiling.
     i) Infer an individual’s emotions in workplace or educational settings, except when necessary for medical or safety reasons.
     j) Categorize individuals based on their biometric data to deduce or infer sensitive attributes such as their race, political opinions, religious beliefs, or sexual orientation.
     k) Make any use of our Services or their Output in ways that would be classified as “prohibited,” “high-risk,” or by a similar description under applicable law, including Applicable AI Laws. “Applicable AI Laws” means applicable legislation or regulations related to artificial intelligence and/or automated decision-making, including the European Union’s Artificial Intelligence Act, Regulation (EU) 2024/1689. While Inworld AI may review a customer’s proposed use case for the Services, customers and users remain responsible for: (i) as a deployer, independently conducting an analysis of their classifications under Applicable AI Laws; and (ii) otherwise ensuring compliance with all other laws to which they are subject.

  2. Do not use our Services to harm yourself or others. For example, don’t use our Services to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system.
  3. Do not repurpose or distribute Output from our Services to harm others. For example, don’t share Output from our Services to defraud, scam, spam, mislead, bully, harass, defame, discriminate based on protected attributes, sexualize children, or promote violence, hatred, or the suffering of others.
  4. Do not create violent, hateful, or harassing material outside of fictional contexts. For example, this includes accessing or using our Services to:

a) Create, distribute, or engage in violent threats, extremism, or terrorism, including material that threatens, incites, or promotes violence against an individual or group.

b) Engage in, promote, or facilitate human trafficking, sexual violence, or other exploitation.

c) Discriminate based on protected characteristics, including race, national or ethnic origin, religion, age, sex, gender, sexual orientation, or physical ability.

d) Promote or facilitate harassment, including material that promotes harassing, threatening, intimidating, predatory, or stalking conduct or that otherwise promotes or celebrates the suffering of individuals or groups of individuals.

e) Promote or facilitate self-harm, including suicide or eating disorders.

f) Create, promote, or facilitate the spread of misinformation, including denying the existence of specific health conditions and other medical misinformation.

g) Promote or facilitate the use of harassing debt collection practices.

This section does not apply to activity in purely fictional contexts (e.g., violent speech by a character in a book, video game, or movie) or to reporting on newsworthy activity by third parties (e.g., a news anchor reporting on terrorist activities).

  5. Do not use the Services or Output to train AI or to compete with Inworld AI. This includes:
     a) Using any part of our Services or their Output to research or develop products, models, or services that compete with Inworld AI, or otherwise to compete with Inworld AI.
     b) Using any part of our Services or their Output as input for any machine learning or training of artificial intelligence models.
     c) Using any part of our Services or their Output as part of a dataset that may be used for training, fine-tuning, developing, testing, or improving any machine learning or artificial intelligence technology.

  6. Mandatory AI disclosure. Always clearly and prominently disclose to users that they are interacting with AI rather than with a human.

Developers. These rules apply to uses of the Inworld AI Framework and APIs:

The Inworld AI services allow you to build custom applications. As the developer of your application, you are responsible for designing and implementing how your users interact with our technology. When building with these services, the following rules apply in addition to the general acceptable use policy:

  1. Don’t compromise the privacy of others, including by:
     a) Collecting, processing, disclosing, inferring, or generating personal data without complying with applicable legal requirements.
     b) Using biometric systems for identification or assessment, including facial recognition.
     c) Facilitating spyware, communications surveillance, or unauthorized monitoring of individuals.
  2. Don’t perform or facilitate activities that may significantly impair the safety, wellbeing, or rights of others, including:
     a) Providing tailored legal, medical/health, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations.
     b) Making high-stakes automated decisions in domains that affect an individual’s safety, rights, or well-being (e.g., law enforcement, migration, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance).
     c) Facilitating real-money gambling or payday lending.
     d) Engaging in political campaigning or lobbying, including generating campaign materials personalized to or targeted at specific demographics.
     e) Deterring people from participation in democratic processes, including misrepresenting voting processes or qualifications and discouraging voting.
  3. Don’t misuse our platform to cause harm by intentionally deceiving or misleading others, including by:
     a) Generating or promoting disinformation, misinformation, or false online engagement (e.g., fake comments or reviews).
     b) Impersonating another individual or organization without consent or legal right.
     c) Engaging in or promoting academic dishonesty.
     d) Failing to ensure that automated systems (e.g., chatbots) disclose to people that they are interacting with AI, unless it is obvious from the context.
  4. Don’t build tools that may be inappropriate for minors, including, but not limited to, tools that provide sexually explicit or suggestive content, graphic violence, obscenity, or other mature themes.

Enforcement

Enforcement of this Policy is at Inworld AI’s sole discretion, and any failure of Inworld AI to enforce this Policy in every instance does not constitute a waiver of our right to enforce it in other instances. This Policy does not create any right or private right of action on the part of any third party, or any reasonable expectation that our Services will not contain any material that is prohibited by this Policy or that objectionable material will be promptly removed after it has been posted. We use a combination of automated systems, user reports, and human review to assess material and usage that may violate this Policy. If you violate this Policy, we may remove the violating material and/or suspend your access to and use of our Services. For certain material that poses a real-world risk of harm, we reserve the right to contact or cooperate with relevant law enforcement authorities.

We encourage you to report any suspected abuse or misuse to Inworld AI by following the prompts in the Services or by emailing legal@inworld.ai.

Complaints Handling

If you believe your account has been incorrectly banned, or your material has been incorrectly removed, you can let us know by contacting support@inworld.ai.

Updates to this Policy

We may periodically update this Policy. We will notify you of any significant changes in advance or as required by law. You can check the “last updated” date at the top of this Policy to see when it was most recently revised.