Why Portable Reputation Stalled — And What Comes Next

In 2009, researchers published a paper titled "Portable reputation: Proving ownership of reputations across portals." Nearly two decades later, that paper still offers one of the clearest diagnoses of the problem the internet faces: reputation data is trapped in platform silos, and users have little control over it. Yet the solution the authors proposed never gained traction — not because the diagnosis was wrong, but because the assumptions it relied on were fundamentally fragile.


They Saw the Problem Clearly

"Reputation information about various users is normally locked-up in the silos of different web-portals... This effectively creates multiple reputation ratings for the same user, each one painstakingly built over a long time."
"Though this status quo is favorable for established web-portals as it enables them to lock-in their customers, consumers have a strong interest in a portable reputation system."

In that 2009 paper, Kumar and Koster identified what we still experience today: reputation is valuable, reputation is earned, and reputation is trapped inside platforms by design.

They understood that users — not platforms — have the strongest incentive to fix this.


Where It Got Stuck

The proposed solution was clever: cryptographic proofs of pseudonym ownership, allowing users to prove their ratings belong to them without revealing identity.
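The mechanism can be pictured as a challenge and response: a portal issues a fresh challenge, and the user signs it with the key that backs the pseudonym. A minimal, dependency-free sketch follows (the paper assumes standard public-key signatures; a hash-based Lamport one-time signature stands in here so the example needs only Python's standard library, and all names are illustrative):

```python
import hashlib
import secrets

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(_h(a), _h(b)) for a, b in sk]
    return sk, pk

def sign(sk, message: bytes):
    digest = int.from_bytes(_h(message), "big")
    # Reveal one secret per bit of the message digest.
    return [sk[i][(digest >> i) & 1] for i in range(256)]

def verify(pk, message: bytes, sig) -> bool:
    digest = int.from_bytes(_h(message), "big")
    return all(_h(sig[i]) == pk[i][(digest >> i) & 1] for i in range(256))

# Alice's pseudonym on a portal is bound to her public key.
sk, alice_pseudonym_key = keygen()

# A relying portal issues a fresh challenge; Alice signs it to prove
# the pseudonym (and its ratings) belong to her, revealing no identity.
challenge = b"prove-ownership-of-pseudonym-badger42-nonce-7f3a"
proof = sign(sk, challenge)

assert verify(alice_pseudonym_key, challenge, proof)      # ownership proven
assert not verify(alice_pseudonym_key, b"replayed", proof)  # proof is challenge-bound
```

Because the challenge is fresh each time, a captured proof cannot be replayed against a different challenge.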

But it assumed something fragile:

That reputation must remain a score.

This led to complex machinery for aggregating ratings, weighting trust across platforms, and building a "meta-reputation service" to combine it all.

The question they asked was: "How can Alice prove her reputation scores belong to her?"

That's the wrong question.


The Reframe

What if the internet isn't missing portable scores?

What if it's missing something simpler:

Neutral, verifiable participation history — without judgment, aggregation, or interpretation.

This is the shift MIR represents.

MIR doesn't try to:

  • Unify reputation frameworks
  • Aggregate trust metrics
  • Enable score portability
  • Build a meta-reputation authority

Instead, MIR answers one question:

Does this participant have history elsewhere?

Not how much. Not how good. Just: does history exist?
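That single question can be pictured as a boolean query over platform attestations. A sketch under stated assumptions: the names here (Attestation, has_history, the example platforms) are hypothetical, and a real record would carry a platform signature rather than bare fields:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    """A platform-issued statement: 'this pseudonym had activity here'.
    No rating, no score -- only that participation occurred."""
    platform: str
    pseudonym: str
    first_seen: int  # Unix timestamp of earliest verified activity

def has_history(attestations, pseudonym: str) -> bool:
    # The only question answered: does any verified history exist?
    # Not how much, not how good -- just existence.
    return any(a.pseudonym == pseudonym for a in attestations)

ledger = [
    Attestation("forum.example", "badger42", 1_500_000_000),
    Attestation("market.example", "badger42", 1_600_000_000),
]

print(has_history(ledger, "badger42"))   # True
print(has_history(ledger, "newcomer"))   # False
```

Note what the return type rules out: a bool cannot smuggle in a ranking, so no meta-reputation can be built on top of the answer.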


Why This Works

Scores are fragile. They require:

  • Agreement on what "good" means
  • Trust in the rating methodology
  • Comparability across contexts

Participation history is robust. It requires only:

  • Verification that events occurred
  • Neutrality about what they mean

Platforms can still decide what to do with that signal. MIR doesn't tell them. It provides context, not conclusions.
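The two requirements above can be sketched directly: a platform vouches for an event with an authentication tag, and verification is a yes/no check that never reads into what the event means. All names are hypothetical, and an HMAC stands in for the public-key signature a real deployment would use, so the sketch stays standard-library only:

```python
import hashlib
import hmac
import json

# Hypothetical key a platform uses to vouch for its event records.
# (Symmetric here for brevity; in practice the platform would sign
# with a private key and verifiers would hold only the public key.)
PLATFORM_KEY = b"demo-key-held-by-forum.example"

def sign_event(event: dict) -> str:
    payload = json.dumps(event, sort_keys=True).encode()
    return hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()

def event_occurred(event: dict, tag: str) -> bool:
    # Verification is binary: either the platform vouches for this
    # exact record or it does not. No field of the event is judged.
    return hmac.compare_digest(sign_event(event), tag)

event = {"pseudonym": "badger42", "kind": "sale_completed", "ts": 1_600_000_000}
tag = sign_event(event)

print(event_occurred(event, tag))               # True: the event happened
print(event_occurred({**event, "ts": 1}, tag))  # False: record was tampered with
```

Neutrality falls out of the design: the verifier checks that the record is authentic, and any interpretation of "sale_completed" is left to the platform consuming the signal.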


The Paper as Precursor

Kumar and Koster weren't wrong — they were early. They correctly identified the problem but proposed a solution the ecosystem couldn't adopt.

Portable reputation stalled because it tried to move value judgments across boundaries that disagreed about value.

Continuity succeeds because it moves facts — and leaves interpretation where it belongs: with the platforms that need it.


Reference

Kumar, S.S. & Koster, P. (2009). "Portable reputation: Proving ownership of reputations across portals." Proceedings of the 1st Workshop on Context, Information and Ontologies (CIAO 2009). CEUR Workshop Proceedings, Vol. 504.

ceur-ws.org/Vol-504/CAT09_Proceedings_4.pdf