I've Been Running MIR's Policy Engine on LinkedIn

Four signals. Same logic. Whether it's a LinkedIn post or a checkout transaction.

MIR policy signals applied to LinkedIn engagement

I didn't plan this. It just kind of happened.

For the last few weeks I've been spending a lot of time on LinkedIn -- reading posts, deciding which ones to comment on, which ones to skip. Somewhere along the way I realized I was doing the same thing MIR does for enterprises. Just with LinkedIn posts instead of transactions.

Every post that crosses my feed gets evaluated:

ALLOW -- this is relevant, the audience is right, I have something real to say. Comment.

STEP_UP -- interesting angle but I need to think about it. Maybe the post is adjacent to what MIR does but not directly. Read the comments first, see if there's an opening that doesn't feel forced.

LIMIT -- the topic is in my space but the poster is selling their own thing, or the audience isn't who I'm trying to reach. Light engagement at most -- a like, maybe a short reply. Don't pitch.

DENY -- not my lane. Doesn't matter how many likes it has or how smart it sounds. If there's no honest connection to participation history, I move on. Forcing MIR into a conversation where it doesn't belong does more damage than saying nothing.

That's it. Four signals. Same four that MIR returns when an enterprise asks "should we let this actor proceed?"
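The four signals above can be sketched as a tiny decision function. To be clear, this is my own illustrative sketch, not MIR's actual API -- the field names (`relevant`, `audience_fit`, `needs_thought`, `poster_selling`) are assumptions I'm making to show the shape of the logic:

```python
from dataclasses import dataclass

@dataclass
class Post:
    relevant: bool        # topic directly matches what I work on
    audience_fit: bool    # the comment section is the audience I want
    needs_thought: bool   # adjacent angle; read the comments first
    poster_selling: bool  # poster is pitching their own product

def evaluate(post: Post) -> str:
    """Return one of the four policy signals for a piece of content."""
    if not post.relevant:
        return "DENY"      # not my lane, regardless of likes
    if post.poster_selling or not post.audience_fit:
        return "LIMIT"     # light engagement at most; don't pitch
    if post.needs_thought:
        return "STEP_UP"   # interesting but adjacent; gather context first
    return "ALLOW"         # relevant, right audience; comment

# Relevant post, right audience, no pitch -> ALLOW
signal = evaluate(Post(relevant=True, audience_fit=True,
                       needs_thought=False, poster_selling=False))
```

The ordering matters: DENY is checked first because no amount of audience fit rescues an off-topic comment.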


The DENY decisions turned out to be the most important ones

Early on I was commenting on everything that mentioned AI governance, identity, cybersecurity -- anything adjacent. Cast a wide net, right? But most of those comments just sat there. No engagement, no impressions, no conversations. I was showing up in rooms where nobody cared what I had to say.

The comments that actually worked -- the ones that got replies, started conversations, led to DMs -- were the ones where I had a genuine reason to be there. Where the poster was describing a problem I'd actually built something to solve. Not close enough. Not adjacent. Actually the thing.

That's exactly how MIR works for enterprises. A new user shows up on your platform. You could approve everyone and deal with the fallout. You could block everyone and kill your growth. Or you could look at what this person has actually done -- across platforms, over time -- and make a real decision based on evidence.

ALLOW the ones with history. STEP_UP the ones who are new but look legitimate. LIMIT the ones with thin records. DENY the ones with patterns that don't add up.


Once you see the pattern, you can't unsee it

Most people think about content engagement the same way most platforms think about user access -- gut feel, maybe a few simple rules, mostly vibes.

But there's a pattern underneath it.

The posts I ALLOW engagement on share a few things:

  • The person is describing a real problem, not promoting a product
  • The audience in the comments is the audience I want to reach
  • I can say something that adds to the conversation instead of redirecting it
  • There's a natural connection to what MIR does without me having to force it

The posts I DENY share a few things too:

  • The poster is selling, not thinking
  • The topic is close but not close enough
  • My comment would be a stretch and people would feel it
  • The audience isn't enterprise security, governance, or trust and safety

Same inputs, same logic, same four outputs. Whether it's a LinkedIn post or a checkout transaction or an AI agent requesting elevated permissions -- the question is the same: based on what I know about this actor and this context, should this proceed?


The internet doesn't have this layer

That's the whole reason MIR exists. Every platform makes these decisions independently, using only what they can see inside their own walls. Nobody has the cross-platform view.

Your bank doesn't know what you did on the marketplace. The marketplace doesn't know what you did on the AI platform. The AI platform doesn't know what you did anywhere else.

So every platform is evaluating every actor with incomplete information. Every ALLOW is a guess. Every DENY might be wrong.

MIR is the layer that connects the dots -- without any platform seeing another platform's data. Just hashed identities, participation events, and behavioral signals that accumulate over time.
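Here's a minimal sketch of what "hashed identities plus accumulating participation events" could look like. This is my own toy illustration, not MIR's implementation -- real systems would use salted or keyed hashes and far richer event schemas, and every name here (`hashed_id`, `record_event`, the registry dict) is an assumption for the sake of the example:

```python
import hashlib
from collections import defaultdict

def hashed_id(shared_identifier: str) -> str:
    """One-way hash so the registry never stores the raw identifier."""
    return hashlib.sha256(shared_identifier.encode()).hexdigest()

# hashed identity -> list of participation events reported over time
registry: dict[str, list[str]] = defaultdict(list)

def record_event(shared_identifier: str, event: str) -> None:
    """A platform reports an event keyed only by the hash, not raw data."""
    registry[hashed_id(shared_identifier)].append(event)

# Two different platforms report events for the same underlying actor.
# Each hashes the shared identifier locally; the registry sees only hashes.
record_event("alice@example.com", "marketplace:order_fulfilled")
record_event("alice@example.com", "bank:kyc_passed")

# Cross-platform history accumulates under one hash, with no raw identity
history = registry[hashed_id("alice@example.com")]
```

The point of the sketch is the boundary: each platform contributes behavioral signals, but none of them can read another platform's raw data out of the registry.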

I didn't set out to prove the concept on LinkedIn. But here I am, running the policy engine manually every day, and it works. The ones I engage with lead to real conversations. The ones I skip would have been noise.

Turns out the best way to explain what MIR does is to just do it in public and see what happens.


MIR -- Memory Infrastructure Registry
The internet's participation history layer.

Try the sandbox