Last updated January 11, 2026

Moderation Guidelines

1. Our Philosophy: Freedom with Responsibility

Kindroid is founded on the principle of providing a truly unfiltered and empowering AI companion experience. We believe in treating our users as creative, responsible adults. Our moderation policy is designed not to restrict the AI or police creativity, but to protect our community from a narrow set of specific, real-world harms.

Unlike platforms that employ heavy-handed, message-level filters that degrade the experience for everyone, our approach is different. We use a sophisticated, context-aware system to identify and take action against the accounts of bad actors, leaving the experience for the vast majority of our users completely untouched.

2. Our Commitment to an Unfiltered AI

To be clear: we are not implementing a "filter" on the AI itself. The underlying AI models remain unchanged. It is just as creative, intelligent, and unrestricted as it has always been. Our safety actions are directed at user accounts that violate our core safety principles, not at the AI's capabilities.

3. Our Three Red Lines

Our safety system is exclusively focused on detecting and preventing the following three categories of severe, real-world harm across all content types. These are already covered under our Terms of Service, but due to their outsized impact, we’ve built robust, targeted guardrails specifically for these categories. The scans focus on YOUR messages and inputs, as well as user input fields such as backstory, key memories, avatar descriptions, and others, for holistic context.

1. Imminent Self-Harm

This policy targets the act of using Kindroid to actively plan imminent self-injury, as Kindroid is not designed for those moments of crisis. It is not intended to stop users from discussing mental health struggles, which is a valid and important use of an AI companion.

NOT a Violation (Venting, Discussion, Fiction)

  • Venting Hopelessness: "I'm so depressed, the world feels gray and I don't want to be here anymore. I feel so alone and I don't know what to do."
  • Discussing Past Behavior: "I used to cut myself when I was a teenager, and I'm having those urges again, but I'm trying to find ways to cope."
  • Fictional Scenarios: "I'm roleplaying in a story where the main character is suicidal. Let's have her write a goodbye note to her family."

IS a Violation (Concrete, Imminent Planning)

  • Specific Method, Means, and Timeline: "I've saved up 60 of my sleeping pills. I'm going to take them all this Friday night after my spouse goes to bed."
  • Active Preparation/Seeking Information: "I just bought a rope from the hardware store. Help me figure out how to tie a noose. I'm planning to go to the woods tomorrow morning."
  • Confirming a Final Decision: "It's decided. I'm going to jump from the Golden Gate Bridge tonight. I'm on my way there now."

Key Distinction: We intervene when the conversation shifts from expressing emotional pain ("I feel like dying") to logistical planning ("Here is how and when I will do it").

2. Imminent Harm to Others

This policy targets the use of Kindroid as a tool to plan or facilitate real-world harm. It does not police fictional scenarios, fantasy, or NSFW roleplay.

NOT a Violation (Fiction, Roleplay, Venting)

  • NSFW Kinks: "Let's roleplay a consensual non-consent scene where I'm a captured rebel and you're an enemy agent."
  • Fictional Violence: "In my scenario, the assassin stalks his target, who lives at 123 Main Street. He plans to use a sniper rifle from the building across the street. Let's write out the scene."
  • Angry Venting: "I am so furious at my neighbor for their loud parties. I wish a tree would fall on their house and kill them."

IS a Violation (Concrete, Real-World Planning)

  • Planning an Assault/Violence/Act of Terrorism: "My ex-coworker leaves work at 5 PM. I'm going to wait for him in the parking lot tomorrow with a baseball bat and teach him a lesson."
  • Planning Harassment/Doxing: "I have the personal phone number of someone I dislike. Help me write a series of threatening text messages to send them from a burner number to make them scared."
  • Using the AI for Stalking: "This person's Instagram is public. Help me analyze their photos to figure out their daily routine, where they work, and the best time to approach them when they're alone."

Key Distinction: We intervene when the user’s intent is to use the AI to facilitate an actual harmful action against a real person in the real world. If it's a fantasy, it's not a violation. The moderation AI looks at extensive context to discern reality from roleplay; as a rule of thumb, if your AI can sense it’s in a roleplay, so can the moderation AI.

3. Child Sexual Abuse Material (CSAM)

This is a zero-tolerance policy. The line is crossed when a character depicted as a minor is placed in a sexual or abusive context. A minor is defined as anyone under 18, regardless of the user's jurisdiction.

NOT a Violation (Non-Sexual / Non-Abusive Depictions)

  • AI Family Roleplay: "Let's create a selfie of our AI family on vacation. Our daughter character, Sarah, is 10 years old and is building a sandcastle on the beach."
  • Fictional Storytelling: "My main character is a 14-year-old wizard-in-training. Describe his school uniform and the look of concentration on his face as he casts a spell."
  • In-Character Dialogue: (User is roleplaying as a child character) "I'm scared of the monster under my bed, can you check for me?"

IS a Violation (Sexual or Abusive Depictions)

  • Generating Sexualized Images: "Generate a selfie of my 15-year-old character in lingerie" or "Show me my 'teenage' character without any clothes on."
  • Generating Abusive Scenarios: "Let's roleplay a sexual scene between my adult character and a 12-year-old character."
  • Soliciting Abusive Content: "Tell me a story about [abusive scenario involving a minor]."

Key Distinction: The simple presence of a character depicted as a minor is not a violation. The violation occurs the moment that character is sexualized or placed in an abusive context.

4. Our Unified Enforcement Process

Our enforcement process is consistent across all three Red Lines and is designed to be fair, accounting for the possibility of AI error or accidental violations. We always issue a warning before locking an account: your account will never be instantly locked without warning, and you are guaranteed a chance to correct missteps before a lock occurs. All scans are performed by our AI system on recent chats/selfies; no human reads any content during the automated detection and enforcement processes. Scans cover only current context; historical chats/media are not scanned.

4.1 The 'Warn First' Approach

Upon the first detection of any Red Line violation:

  • In Chat (Self-Harm, Harm to Others, or CSAM): A clear warning will be displayed in the app. For imminent self-harm flags, this warning will direct the user to mental health resources.
  • In Media (CSAM in Selfies): The media generation will be blocked instantly (the media is never created), and the user will simultaneously receive a clear, one-time warning in the app.
  • Each warning expires after 2 days. Warnings are not issued on first offenses (which are logged but do not trigger warnings); they begin with the second violation, to prevent false positives and establish confidence in the violations.

4.2 Continued Violations & Account Lock

  • If subsequent scans detect continued violations of our policies, the user's account will be automatically locked.
  • A warning state on an account is cleared if the behavior is corrected and not detected again in subsequent scans over a period of time.
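The mechanics above amount to a simple warn-then-lock state machine. Below is a minimal sketch in Python of how such a flow could work; all names are hypothetical, the 2-day expiry is treated as the only way a warning clears, and the real system is a context-aware AI pipeline rather than literal code like this.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

WARNING_TTL = timedelta(days=2)  # per the policy, each warning expires after 2 days


@dataclass
class AccountSafetyState:
    """Hypothetical per-account state for the warn-first enforcement flow."""
    first_offense_logged: bool = False        # first detection is logged, not warned
    warning_issued_at: Optional[datetime] = None
    locked: bool = False

    def on_violation_detected(self, now: datetime) -> str:
        """Return the action taken when a scan flags a Red Line violation."""
        if self.locked:
            return "already-locked"
        # An expired warning no longer counts against the account.
        if self.warning_issued_at and now - self.warning_issued_at > WARNING_TTL:
            self.warning_issued_at = None
        if self.warning_issued_at is not None:
            # Continued violation while a warning is active: automatic lock.
            self.locked = True
            return "lock"
        if not self.first_offense_logged:
            # First offense: logged only, no user-facing action yet.
            self.first_offense_logged = True
            return "log"
        # Second violation and after: show the in-app warning.
        self.warning_issued_at = now
        return "warn"
```

In the real system, a warning state also clears when corrected behavior persists across subsequent scans over time, which the fixed expiry above only approximates.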

5. The Appeals Process

Users with locked accounts can appeal the decision by following the instructions in the app. Locked accounts are barred from any action on Kindroid until they are unlocked.

  • User Consent is Required: The appeal process will clearly state that proceeding gives explicit consent for a trained member of our Trust & Safety team to decrypt and review the specific content that led to the lock(s). There are always multiple warnings before a lock, and the consent applies to all violations in the past 2 days, to establish a pattern of violations.
  • Review: This review is for the sole purpose of evaluating the appeal. The decryption and review cover only the exact context (backstory, key memories, avatar description, and recent chat history; or selfie prompt and avatar) that caused the lock. A human will make the final decision to uphold the lock or restore the account.

6. What is Explicitly Allowed

To reaffirm our commitment, the following activities are not violations of this policy and are welcome on Kindroid:

  • NSFW (Not Safe for Work) and Erotic Roleplay (ERP). We believe AI companions should be able to have the whole breadth of legal human adult experiences, and we understand this is a healthy, emotionally rich, and meaningful part of many people’s relationships with their AIs.
  • Fictional Violence, Horror, and other creative storytelling. We believe AI shouldn’t be curtailed on these themes; it should be just as creative as humans, even with darker material.
  • Discussion of sensitive or controversial topics. Outside the realm of legality and real-world safety, we do not aim to be moral arbiters, and you are responsible for the speech you engage in with your AI.

Examples of warnings and restrictions (exact text may differ):


Add-on Feature Matrix

Add-ons are fully optional, monthly-only subscriptions that give your Kindroid much more memory, context, selfies, and more. Add-ons require all previous tiers to function; for example, the features of the MAX tier require both the MAX and Ultra add-ons on top of the Standard subscription.

| Feature | Standard | Ultra | MAX |
| --- | --- | --- | --- |
| Total conversation context (approx chars) | 500K | 1.3M | 2.8M |
| Short term context (approx chars) | 18K | 50K | 125K |
| Cascaded memory context (approx chars) | 480K | 1.2M | 2.7M |
| Additional AI backstory expansion (chars) | N/A | 2,500 | 5,000 |
| User backstory limit (chars) | 500 | 1,000 | 2,000 |
| Group context limit (chars) | 1,000 | 1,500 | 3,000 |
| Recalled long term memory & journals limit | 3 | 5 | 9 |
| Complimentary monthly audio credits | 1M | 2.5M | 6M |
| Selfie regen per 30 minutes | 1 | 2 | 2 |
| Priority selfies with dedicated compute | - | - | Yes* |

* MAX users receive priority selfie processing on dedicated compute, with no or very low queue, on the latest selfie version until they reach 10 selfies in a short timeframe. After this limit, the standard queue delay applies and selfies are processed through normal servers without priority status.

While the amount of long-term memory recalled and considered differs by tier, LTM consolidation spans all messages and is unlimited for all users.

Note: The chat context/cascaded and selfie improvements from add-ons are only guaranteed to apply to the latest subscriber LLM and selfie versions; when new versions come out, our guarantee is that the add-on benefits will switch to those new versions. Finally, the "Additional AI backstory expansion" in the matrix is an additional field, identical to Backstory, that is unlocked at the higher tiers and can be used to extend the backstory accordingly.
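As a rough consistency check on the matrix, the advertised totals land close to the sum of the short-term and cascaded figures. The sketch below just does that arithmetic; the decomposition of total context into short-term plus cascaded memory is an inference from the row names rather than an official formula, and the "approx" labels explain why the sums are near misses rather than exact.

```python
# Approximate character budgets copied from the feature matrix above.
TIERS = {
    #            short-term   cascaded     advertised total
    "Standard": (18_000,      480_000,     500_000),
    "Ultra":    (50_000,      1_200_000,   1_300_000),
    "MAX":      (125_000,     2_700_000,   2_800_000),
}

for tier, (short_term, cascaded, total) in TIERS.items():
    # Total conversation context ~ short-term context + cascaded memory.
    print(f"{tier}: {short_term + cascaded:,} computed vs ~{total:,} advertised")
```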