Content Policy
How Mor governs content, sources, outputs, submissions, and safety-related interactions across the Service.
Purpose and Scope
Mor is a private service that organizes, summarizes, classifies, labels, ranks, and presents information across its products and related surfaces. This Content Policy explains the rules and standards Mor applies to content, sources, submissions, outputs, reports, and related conduct within the Service.
This Policy applies, as relevant, to:
- third-party publisher and source content accessed through feeds, links, APIs, integrations, or other ingestion methods;
- Mor-generated or Mor-assisted outputs, including summaries, classifications, labels, rankings, and contextual layers;
- user submissions, reports, feedback, appeals, and other materials sent to Mor;
- conduct affecting Mor's systems, safety processes, trust signals, moderation workflows, or other users; and
- related features, interfaces, and support or reporting channels made available by Mor.
This Policy operates alongside Mor's Terms of Service, Privacy Policy, Acceptable Use Policy, Community Guidelines, AI Disclosure, Child Safety materials, Copyright & DMCA policy, and related Trust and Safety resources. If multiple policies apply, Mor may apply them together.
Safety, Integrity, and Trust
Mor prioritizes user safety, child safety, platform integrity, legal compliance, and trustworthy information access.
Mor may take action where content, conduct, or source behavior creates or appears reasonably likely to create safety, legal, operational, deception, abuse, or integrity risk. Mor may also act to reduce misuse of its systems, prevent harm, protect users and third parties, preserve product quality, and maintain confidence in the Service.
Mor is not a public forum or common carrier. Decisions about what Mor displays, labels, restricts, recommends, downranks, delists, removes, or otherwise acts on are made as part of Mor's private service operations, editorial judgment, trust-and-safety processes, and product governance.
How Mor Reviews and Moderates Content
Mor may use a combination of automated systems and human review processes to detect, filter, review, label, restrict, deprioritize, remove, or otherwise act on content, submissions, sources, behaviors, and related signals that may be unlawful, abusive, deceptive, unsafe, manipulative, or inconsistent with this Policy.
These measures may be applied before publication, after publication, after user reports, during investigations, or in response to safety, legal, technical, or integrity concerns.
Mor may consider context, severity, credibility, recurrence, scale, user impact, legal obligations, source reliability, attempts to evade safeguards, and the risk of real-world harm. Mor may act on imperfect or incomplete information where prompt action is reasonably necessary to protect users, third parties, the Service, or the public.
Mor does not guarantee that all problematic content or conduct will be detected or prevented.
Prohibited Content and Conduct
Content or conduct in the following categories is prohibited and may result in restriction, removal, reduced distribution, account or feature limitations, source enforcement, escalation, or other action.
1. Child Sexual Abuse, Child Exploitation, and Grooming
Mor prohibits any content or conduct involving:
- child sexual abuse material;
- sexualization of minors;
- grooming or enticement of minors;
- child sexual exploitation, trafficking, or abuse;
- sexual extortion involving minors; or
- any attempt to obtain, share, normalize, promote, or facilitate such material or conduct.
Mor may remove such material immediately, preserve evidence, disable access, and report matters to appropriate authorities, hotlines, or partners where required or appropriate.
2. Human Exploitation, Trafficking, and Sexual Violence
Mor prohibits content or conduct that promotes, facilitates, depicts, threatens, or organizes:
- human trafficking;
- sexual exploitation;
- prostitution or commercial sexual activity where unlawful or exploitative;
- coercive sexual conduct;
- sexual assault or abuse;
- sextortion; or
- non-consensual sexual conduct or coercion.
3. Terrorism, Violent Extremism, and Organized Violent Harm
Mor prohibits content or conduct that promotes, supports, glorifies, coordinates, recruits for, or facilitates:
- terrorism;
- violent extremist ideologies or organizations;
- mass casualty attacks;
- manifestos intended to inspire violent acts;
- organized violent criminal activity; or
- instructions or operational support for serious violent harm.
4. Threats, Incitement, and Dangerous Violence
Mor prohibits:
- credible threats of violence;
- incitement to violent wrongdoing;
- celebration or encouragement of imminent serious harm;
- instructions for carrying out violent criminal acts;
- targeted intimidation designed to create fear of physical harm; and
- graphic or exploitative violent material where the safety, dignity, or public-interest context does not justify distribution.
5. Illegal Drugs, Criminal Facilitation, and Dangerous Illicit Activity
Mor prohibits content or conduct that meaningfully facilitates:
- illegal drug trafficking or distribution;
- unlawful manufacture of dangerous drugs or substances;
- criminal marketplaces;
- evasion of law enforcement in connection with serious crimes;
- instructions intended to enable serious criminal wrongdoing; or
- other illicit activity that creates substantial safety or legal risk.
6. Fraud, Scams, Deception, and Abuse of Trust
Mor prohibits:
- fraud, scams, and financial deception;
- phishing and credential theft;
- impersonation intended to mislead, exploit, or injure;
- forged or fabricated submissions, evidence, reports, or source representations;
- malware, spyware, malicious code, or unauthorized access activity;
- spam or manipulative mass-contact behavior; and
- other deceptive conduct that undermines user trust, safety, or platform integrity.
7. Hate, Harassment, Bullying, and Targeted Abuse
Mor prohibits content or conduct that attacks, degrades, dehumanizes, threatens, or targets individuals or groups on the basis of protected characteristics or similar status.
Mor also prohibits:
- hate speech or hateful harassment;
- glorification of exclusion, segregation, or subordination;
- bullying or sustained abuse;
- stalking or targeted intimidation;
- dogpiling or coordinated harassment; and
- content designed primarily to humiliate, terrorize, or silence others.
8. Sexual Exploitation, Non-Consensual Intimate Content, and Explicit Sexual Material
Mor prohibits:
- non-consensual intimate imagery;
- sexually exploitative content;
- intimate deepfakes or synthetic sexual content involving real people without consent;
- sexual extortion;
- exploitative pornography;
- content that sexualizes abuse, coercion, or minors; and
- other sexual material that is unlawful, exploitative, or inappropriate for the Service.
9. Privacy Violations, Doxxing, and Exposure of Sensitive Personal Information
Mor prohibits:
- doxxing;
- publication or distribution of sensitive personal information without authorization or other valid basis;
- unauthorized disclosure of private contact details, financial data, government identifiers, medical or highly sensitive records, private media, or precise location information;
- surveillance facilitation intended to endanger or invade privacy; and
- content that creates a meaningful risk of stalking, identity theft, blackmail, or physical harm.
10. Dangerous Misinformation, Manipulated Media, and Serious Public Harm
Mor may restrict, label, reduce distribution of, or remove content that presents a credible risk of significant real-world harm, including where the risk arises from materially deceptive, fabricated, manipulated, or dangerously misleading claims.
This may include, depending on context:
- harmful medical or public-health falsehoods;
- fabricated emergency information;
- manipulated media presented in a misleading way;
- false claims likely to cause panic, violence, or serious injury; and
- other materially misleading content where the consequences of amplification may be severe.
Mor is not required to definitively resolve every factual dispute before acting.
11. Defamation, Unverified Allegations, and Malicious Falsehoods
Mor may act on content that includes false factual allegations, unsupported accusations, fabricated claims, or other statements that create reputational, legal, or safety risk, especially where the content appears malicious, reckless, manipulative, or insufficiently supported.
12. Intellectual Property and Other Rights Violations
Mor prohibits unlawful or infringing submissions and may restrict or remove content that appears to violate copyright, trademark, privacy, publicity, contractual, or other rights. Mor may process qualifying notices, takedown requests, and counter-notices in accordance with applicable law and Mor's Copyright & DMCA processes.
13. Evasion, Circumvention, and Abuse of Safety Systems
Mor prohibits attempts to evade, defeat, manipulate, probe, overload, or abuse:
- moderation systems;
- trust and safety workflows;
- reporting channels;
- appeals processes;
- source evaluation systems;
- ranking, recommendation, or integrity systems; or
- account, identity, or access controls.
This includes repeat-offender evasion, sockpuppeting, ban evasion, coordinated false reporting, submission fraud, and attempts to manipulate trust signals or source treatment.
Source Integrity and Reliability
Mor may evaluate publishers, feeds, integrations, submissions, and other sources using internal and external signals, including accuracy patterns, correction practices, transparency, authenticity, repeated deception, manipulation, policy violations, abuse patterns, and reliability concerns.
Mor may:
- label or annotate source quality concerns;
- limit indexing, visibility, recommendations, or distribution;
- suspend or remove specific sources, feeds, domains, or integrations; or
- apply heightened review to sources that present elevated risk.
Mor does not guarantee inclusion, ranking, visibility, traffic, or continued treatment for any source.
AI-Assisted Systems and Mor Outputs
Mor may use automated and AI-assisted systems to ingest, classify, summarize, organize, rank, label, or present information.
Because such systems are probabilistic, outputs may be incomplete, imperfect, or mistaken. Mor may review, label, revise, restrict, or remove AI-assisted outputs or related material where accuracy, safety, legality, integrity, or policy concerns arise.
Users and third parties may not use Mor's AI-assisted features or outputs to:
- create or distribute prohibited content;
- harass, defraud, impersonate, or exploit others;
- generate manipulative or deceptive material;
- evade moderation or policy controls;
- launder misinformation through synthetic presentation; or
- abuse Mor's systems, pipelines, or trust signals.
Mor may label disputed, synthetic, manipulated, low-confidence, or context-sensitive material where appropriate.
Note: Mor outputs are provided for informational purposes and do not constitute legal, medical, financial, or other professional advice. See our AI Disclosure for additional information.
Age-Appropriate Access and Sensitive Content
Mor may label, warn on, blur, hide by default, limit visibility of, age-gate, or otherwise restrict access to content that may be graphic, sexual, exploitative, mature, disturbing, or otherwise inappropriate for younger audiences or inconsistent with the applicable age rating of a product surface.
Where required or appropriate, Mor may use declared age, account status, product settings, region-specific requirements, or other age-related safeguards to limit access to certain content or features.
Mor may apply additional precautions for minors and young users, including stricter review, restricted access, reduced discoverability of certain material, or removal of content that presents heightened safety risk.
Reporting Concerns
Users and other affected parties may report content, sources, conduct, safety concerns, legal concerns, privacy violations, intellectual property concerns, or suspected violations of this Policy through Mor's designated reporting and support channels.
Mor may request supporting information where useful to assess a report. Mor may prioritize urgent matters involving child safety, credible threats, exploitation, severe harassment, privacy risks, security issues, or legal process.
Mor reviews reports and may take action based on severity, credibility, available evidence, recurrence, public-interest considerations, legal obligations, and safety risk.
To report a concern, use Mor's designated reporting page or contact channels made available through the Service.
Appeals and Reconsideration
Where Mor offers an appeal, reconsideration, or follow-up review process, Mor may evaluate the request based on the information provided, any additional evidence, the nature of the action taken, and the safety, legal, and integrity concerns involved.
Appeal or reconsideration availability may vary by issue type, product surface, jurisdiction, or risk level. Submission of an appeal does not guarantee reversal, response, or reinstatement.
Enforcement Actions
Mor may take one or more of the following actions where it believes content, conduct, sources, accounts, submissions, or related activity violate this Policy or otherwise create meaningful safety, legal, abuse, deception, or integrity risk:
- refuse, reject, or decline to publish material;
- remove, suppress, blur, or hide content;
- label, annotate, warn on, or apply interstitials to content;
- reduce distribution, visibility, ranking, recommendation, or discoverability;
- disable or limit features, submissions, interactions, messaging, or account capabilities;
- restrict, suspend, terminate, or otherwise limit access to accounts, sources, feeds, integrations, or services;
- block or limit abusive actors from participating in features, interacting with users, or accessing parts of the Service;
- preserve records or evidence;
- escalate matters to trust and safety personnel, legal review, platform partners, or law enforcement; and
- take other reasonably necessary action to protect users, third parties, Mor, or the public.
Action may be taken with or without prior notice where appropriate, including in urgent or high-risk situations.
Contact and Policy Resources
For questions, reports, or concerns relating to content moderation, source restrictions, safety actions, labeling, removal decisions, or application of this Policy, contact Mor through its designated moderation or reporting channels.
Policy Changes
Mor may update this Policy from time to time to reflect product changes, legal requirements, safety learnings, operational needs, or evolving risk conditions. Unless otherwise required by law, the updated version will apply when posted.
Contact the Moderation Team
For questions about content moderation actions, including content removal, restrictions, labeling, enforcement decisions, or the application of Mor's content rules to specific material, contact the Moderation Team.
If you are unable to use the contact form, you may email moderation@themorapp.com.