Leveleen Child Safety Policy

Last Updated: January 21, 2025

1. Introduction

At Leveleen, we are committed to creating a safe and supportive environment for all users, with special attention to the safety of children and minors who may use our real-life RPG platform. This Child Safety Policy outlines our approach to protecting minors and preventing harmful or inappropriate content or interactions within Leveleen.

2. Age Requirements

Leveleen is intended for users who are 13 years of age or older. Users between the ages of 13 and 18 must have the consent of a parent or guardian to use the service. We implement an age verification system during the sign-up process to help enforce these requirements.

The following age restrictions apply:

  • Users under 13 years of age are not permitted to use Leveleen.
  • Users aged 13 to 18 require parent or guardian consent.
  • Certain features may have additional age restrictions clearly indicated within the app.
  • Mission content is designed to be age-appropriate based on user settings.

3. Prohibited Content and Behavior

Leveleen strictly prohibits content and behavior that may harm, exploit, or endanger children. The following are explicitly prohibited on our platform:

  • Child Sexual Exploitation: Any content or behavior related to child sexual abuse, exploitation, or inappropriate interactions with minors.
  • Grooming: Attempts to establish inappropriate relationships with minors or manipulate them for any exploitative purpose.
  • Sextortion: Threatening or coercing minors for sexual content or favors.
  • Trafficking: Any attempt to traffic, trade, or exploit minors.
  • Harmful Challenges: Promoting, encouraging, or sharing challenges that may cause physical or psychological harm to minors.
  • Bullying and Harassment: Content or behavior intended to harass, intimidate, or bully others, particularly minors.
  • Hate Speech: Content promoting discrimination, hatred, or violence against any individual or group based on attributes such as race, ethnicity, gender, religion, disability, or sexual orientation.
  • Self-Harm Promotion: Content that promotes, encourages, or glorifies self-harm, suicide, or eating disorders.
  • Dangerous Missions: Mission suggestions or encouragement for activities that could be harmful to minors' physical or mental health.

4. Content Moderation and Safety Measures

To ensure a safe environment, particularly for younger users, we implement the following safety measures:

  • Content Filtering: AI-generated story and mission content is filtered to prevent inappropriate or harmful material.
  • Safety Mode: A default setting that keeps story content and interactions appropriate and constructive.
  • Age-Appropriate Missions: Mission recommendations are tailored to be appropriate for the user's age group.
  • Positive Reinforcement: Story content focuses on healthy, positive development and collaboration.
  • Moderation System: User-generated content is subject to both automated and human moderation.
  • Reporting Tools: Easy-to-use reporting mechanisms for users to flag inappropriate content or behavior.
  • Educational Content: Clear information about safe participation and responsible mission design.

5. AI Story Safety

Our AI Story Director and mission systems are specifically designed with safety in mind:

  • Age-Appropriate Guidance: Mission suggestions and story content are tailored to be appropriate for younger users.
  • Healthy Activity Suggestions: The system avoids recommendations that could be harmful or risky.
  • Mental Health Awareness: Prompts are designed to be supportive and avoid triggering content.
  • Professional Disclaimers: Story content is provided for entertainment purposes only and does not constitute professional medical, therapeutic, or counseling advice.
  • Crisis Prevention: Safeguards help detect and respond appropriately to concerning content or behavior.

6. Reporting Mechanisms

We encourage all users to report content or behavior that violates our Child Safety Policy. Reports can be made through:

  • In-app reporting tools accessible from any content or user interaction
  • Email to safety@Leveleen.com
  • Contact form on our website

All reports are taken seriously and investigated promptly. Depending on the severity of the violation, we may take appropriate action, including content removal, account suspension, or reporting to relevant authorities.

7. Compliance with Laws

Leveleen complies with all applicable laws regarding child protection, including:

  • Children's Online Privacy Protection Act (COPPA)
  • Applicable state and international regulations regarding minors' online safety
  • Mandatory reporting requirements for child abuse or exploitation
  • Data protection regulations concerning minors' personal information

We cooperate fully with law enforcement in cases involving child safety and may report serious violations to appropriate authorities.

8. Education and Resources

We provide resources to help parents, guardians, and younger users understand online safety and healthy participation:

  • In-app safety guides and educational content
  • Links to external resources on digital wellbeing and online safety
  • Clear explanations about AI-generated story content
  • Guidelines for safe mission design and collaboration
  • Resources for parents on supporting healthy use of the platform

9. Updates to This Policy

We may update this Child Safety Policy from time to time. We will notify users of any significant changes by posting the new policy on this page and updating the "Last Updated" date.

10. Contact Us

If you have questions or concerns about our Child Safety Policy, please contact us at:

safety@Leveleen.com