Acceptable Use Policy

Last updated: March 1, 2026

This policy governs how partners, their systems, and their AI agents interact with MIR's participation history infrastructure. It supplements the API Terms of Use and applies to all entities — human and autonomous — that submit, query, or consume MIR data.

Core Principle: MIR provides facts, not judgments. Participation history informs decisions — it does not make them. You are responsible for every action taken with MIR data, whether that action is performed by a human operator or an autonomous agent acting on your behalf.

1. Interpretation of Continuity Data

1.1 Continuity Is Not Trust

MIR participation history records that an entity was independently observed across systems. It does not certify trustworthiness, predict future behavior, or endorse the entity in any way.

1.2 Your Policy Layer Is Required

Every system consuming MIR data must apply its own policy layer before acting on continuity information. MIR data is one input to your decision — never the decision itself.
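
By way of illustration, a consuming system might gate an action on MIR continuity data combined with its own thresholds. The shapes and names below (ContinuityRecord, LocalPolicy, minObservations) are hypothetical, not part of the MIR API; only the principle of a local policy layer comes from this policy.

```typescript
// Hypothetical sketch: type and field names are illustrative, not MIR API.
interface ContinuityRecord {
  entityId: string;
  firstObservedAt: Date;    // earliest independent observation
  observationCount: number; // how many systems observed this entity
}

interface LocalPolicy {
  minObservations: number;
  minAgeDays: number;
}

// MIR data is one input; the decision belongs to your policy layer.
function admit(record: ContinuityRecord, policy: LocalPolicy, now = new Date()): boolean {
  const ageDays = (now.getTime() - record.firstObservedAt.getTime()) / 86_400_000;
  return record.observationCount >= policy.minObservations && ageDays >= policy.minAgeDays;
}
```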

2. Event Submission Standards

2.1 Accuracy and Truthfulness

Every event submitted to MIR must represent a real, verified occurrence on your platform.

2.2 Actor Type Attestation

When submitting events, you may specify the actorType field to indicate whether the action was performed by a HUMAN, an AGENT, or is UNKNOWN.

Misrepresenting actor type is a violation of this policy. Deliberately labeling autonomous agent activity as human-initiated undermines the trust infrastructure MIR exists to support and is grounds for suspension.
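
For example, an event performed by an autonomous agent might be labeled as follows. Only actorType and its HUMAN, AGENT, and UNKNOWN values are defined by this policy; every other field name here is an assumption for illustration.

```typescript
// Illustrative payload; field names other than actorType are assumptions.
const event = {
  actorType: "AGENT",        // HUMAN | AGENT | UNKNOWN; must be truthful
  action: "listing.created", // hypothetical event name
  occurredAt: new Date().toISOString(),
};
```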

2.3 Timeliness

Events should be submitted within 30 days of occurrence. Events older than 30 days must use the occurredAt field and the BACKFILL batch type. MIR records when an event was submitted separately from when it occurred — both timestamps are visible to consuming systems.
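
A sketch of a backfill submission follows: the occurredAt field and the BACKFILL batch type come from this section, while the batch envelope and the remaining field names are assumptions.

```typescript
// Hypothetical batch envelope for events older than 30 days.
const batch = {
  batchType: "BACKFILL",
  events: [
    {
      actorType: "HUMAN",
      action: "purchase.completed",       // hypothetical event name
      occurredAt: "2025-11-02T14:30:00Z", // when it happened; required for backfill
      // The submission timestamp is recorded by MIR on receipt; both
      // timestamps are visible to consuming systems.
    },
  ],
};
```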

3. AI Agent and Autonomous System Requirements

If your integration uses AI agents, autonomous systems, or automated pipelines to interact with MIR's API, additional requirements apply.

3.1 Agent Identification

Agent traffic must be identifiable as such. At minimum, events submitted by autonomous agents must be attested with actorType: AGENT, as described in Section 2.2; the prohibition on misrepresenting actor type applies equally to automated pipelines.

3.2 Context Safety

AI agents operating over long sessions are subject to context window compaction, which can silently discard safety instructions, rate limit awareness, and governance constraints. You are responsible for engineering around this.

See Context Safety for detailed engineering guidance.
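
One mitigation pattern, sketched below with hypothetical helpers of our own (compactWithPins, PINNED_CONSTRAINTS): store governance constraints outside the model's context and re-inject them after every compaction, so that no compaction step can silently discard them.

```typescript
// Hypothetical sketch: constraints live outside the context window and are
// re-applied after every compaction, so they cannot be silently discarded.
const PINNED_CONSTRAINTS = [
  "Respect MIR rate limits; back off when throttled.",
  "Never label agent-submitted events as HUMAN.",
];

function compactWithPins(messages: string[], compact: (m: string[]) => string[]): string[] {
  const compacted = compact(messages);          // may drop anything, including rules
  return [...PINNED_CONSTRAINTS, ...compacted]; // rules are restored unconditionally
}
```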

3.3 Agent Accountability

You are responsible for all actions taken by agents operating under your API credentials, including every event they submit, every query they issue, and every use they make of the data they consume.

An agent's context compacting away a constraint does not relieve you of the obligation that constraint represents.

4. Data Handling

4.1 Retention

4.2 Sharing and Redistribution

4.3 Privacy

5. Rate Limits and System Integrity

If you need higher rate limits, contact us. We accommodate legitimate high-volume use cases through enterprise tier agreements.
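
In the meantime, clients should degrade gracefully when they hit a limit. A minimal sketch, assuming MIR signals throttling with standard HTTP 429 responses (an assumption on our part; the actual contract is defined by the API Terms of Use):

```typescript
// Retry with exponential backoff on throttling. The 429 status code and
// retry parameters here are assumptions, not documented MIR behavior.
async function submitWithBackoff(url: string, body: unknown, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    if (res.status !== 429) return res; // success, or a non-throttle error
    await new Promise((r) => setTimeout(r, 2 ** attempt * 1000)); // 1s, 2s, 4s, ...
  }
  throw new Error("Rate limited after retries; consider requesting a higher tier.");
}
```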

6. Network Integrity

MIR's value depends on the integrity of every participant. Each partner's data contributes to a shared infrastructure layer. Abuse by one partner degrades the system for all.

The trust is mutual. MIR trusts partners to submit accurate data. Partners trust MIR to remain neutral. Users trust the network to be honest. This policy exists to protect all three relationships.

7. Enforcement

7.1 Violation Response

MIR will respond to policy violations proportionally. Responses scale with the severity and intent of the violation; deliberate abuse, such as misrepresenting actor types (Section 2.2), is grounds for suspension.

7.2 Transparency

MIR will notify you of any enforcement action and provide a clear explanation of what triggered it. You may appeal any action by contacting partners@myinternetreputation.org.

7.3 Reporting Violations

If you observe another partner violating this policy — submitting false events, misrepresenting actor types, or misusing MIR data — report it to partners@myinternetreputation.org. Reports are confidential.

8. Related Documents

This policy supplements the API Terms of Use and is complemented by the Context Safety engineering guidance referenced in Section 3.2.

Questions about this policy? Contact partners@myinternetreputation.org