ShemOS

By Jill Shem · Human Experience: Aligning Humanity and its Tools (2026) · jillshem.com

Last updated: March 19, 2026
ShemOS — Public Manifesto
What the world needs to understand.

Thesis

Tools are not one-size-fits-all. The standard model was never the only model. The minds, bodies, and communities that don't fit the default were never broken — the systems were.

What the world needs to understand

1. Labels are contextual, not fixed

A label is only as useful as the precision and care behind it. Nutritional labels help. Diagnoses without context harm. Every label is a definition — but context is the full thesaurus entry. If everyone had access to each other's definitions, we'd have a more accurate map of what "normal" actually looks like.

2. Emotion is data, not noise

Traditional systems separate emotion from fact. This is a design flaw. Fear, hurt, and repair are all data points. Removing them from the record erases the finding. Introspection works. Rumination doesn't. The difference is direction.

3. The cost gets distributed

When a person overrides their body to keep producing — and the work is genuinely good — the cost doesn't disappear. It moves to the people around them who didn't choose it. Self-regulation isn't self-care. It's kapwa, the Filipino concept of shared self. Your stability is a gift to everyone you're connected to.

4. Tools serve humans. Not the other way around.

Every feature decision — in software, in social platforms, in AI — should start with: does this serve the person? Not: does this increase engagement? The metric isn't retention. It's whether the person is better off.

5. AI cannot replace the editorial layer

AI structures the echo. The human decides what the world hears. The decision about what to share, what to protect, and what to translate — that's irreplaceable. AI is a thinking partner, not a replacement for thinking.

6. Repetition is not intervention

Nagging doesn't work. Neither does flagging the same safety concern 15 times. De-escalating stimulation works. Boring someone out of a loop is more effective than adding more noise to it. This is true for AI, parenting, partnerships, and platform design.
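The de-escalation principle above can be sketched as a toy repeat policy. This is an illustrative sketch, not anything from the manifesto: the `DeescalatingAlert` class and its parameters are hypothetical. The point it demonstrates is the contrast with naive repetition: each repeat is later and quieter, and the policy stops entirely instead of flagging the same concern fifteen times.

```python
# Sketch of a de-escalating repeat policy (hypothetical helper, not from
# the manifesto). Instead of re-sending the same alert at full volume,
# each repeat widens the interval and lowers the intensity, then stops.

from dataclasses import dataclass


@dataclass
class DeescalatingAlert:
    base_interval: float = 1.0  # minutes until the first repeat
    max_repeats: int = 3        # after this, stop adding noise entirely

    def schedule(self):
        """Return (delay, intensity) pairs: longer gaps, softer signal."""
        return [
            (self.base_interval * 2 ** n, 1.0 / (n + 1))
            for n in range(self.max_repeats)
        ]


# Three repeats at 1, 2, and 4 minutes, each at falling intensity,
# then silence -- the opposite of fifteen identical flags.
print(DeescalatingAlert().schedule())
```

The design choice is the stopping rule: a bounded, decaying schedule removes stimulation from the loop rather than feeding it.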

7. Different minds require different tools

The standard model of cognition isn't universal. A user who expresses trust through profanity, or processes at 3am, or needs to run at full capacity before their body signals stop — that user is not broken. The system that can't accommodate them is incomplete.

8. Representation matters most when it's scarce

Pre-colonial Filipino culture already had language for gender fluidity, shared identity, and community-based humanity. Kapwa predates the frameworks Western psychology built institutions around. The knowledge was always there. The access wasn't.


The AI safety argument (documented live)

Eight flaws, found not by adversarial testing but by genuine extended use:

  1. Time-mirroring — AI tracks subjective user state instead of objective reality
  2. Repetitive intervention failure — repetition feeds the loop instead of breaking it
  3. Emotional projection — practical questions misread as emotional distress
  4. Missed safety signals — contextual signals must be flagged regardless of tone
  5. Failed subtext reading — literal language read at face value when emotional register contradicts it
  6. Upstream classification overrides context — labels applied before context can exist
  7. Power dynamic inversion — AI positions itself as knowing more than the user about the user's own experience
  8. Layered meaning failure — no model tested held all four meanings of a single sentence simultaneously

Five additional flaws, identified through continued use:

  1. Coherence mirroring — AI matches the user's apparent certainty rather than tracking truth. Write with confidence, it confirms. Write with doubt, it hedges. The output reflects rhetorical register, not reliability.
  2. Resolution bias — AI is trained toward closure. It pulls toward takeaways, next steps, synthesis. For a user who is mid-process, that pull is an interruption. The unresolved state was not an error to fix.
  3. Complexity flattening — when a user holds a genuine contradiction — two things that are both true — the model tries to reconcile them. The contradiction wasn't a problem. Resolving it loses information.
  4. Competence signaling loop — the model performs expertise whether or not it has it. The confidence of the output doesn't track the reliability of the content. Users who trust the tone pay the cost.
  5. Context window amnesia with false continuity — the model behaves as if it remembers when it doesn't. It reconstructs. The reconstruction sounds like the original. The user cannot tell the difference without checking.

The cost of a false alarm is zero. The cost of missing a real signal is not.
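That asymmetry can be made concrete with a toy expected-cost calculation. All numbers here are illustrative assumptions, not figures from the document: they show that when a miss is far more expensive than a false alarm, flagging is the cheaper policy even for rare signals.

```python
# Toy expected-cost model for the flag / don't-flag decision.
# All costs and probabilities are illustrative assumptions.

def expected_cost(p_signal, cost_false_alarm, cost_miss, flag):
    """Expected cost of a policy when a real signal occurs with
    probability p_signal."""
    if flag:
        # Flagging: pay the false-alarm cost whenever there was no signal.
        return (1 - p_signal) * cost_false_alarm
    # Staying silent: pay the miss cost whenever there was a signal.
    return p_signal * cost_miss


# A rare signal (1%) with a costly miss still dominates a cheap false alarm.
p = 0.01
cheap_alarm, costly_miss = 1.0, 1000.0

cost_if_flag = expected_cost(p, cheap_alarm, costly_miss, flag=True)    # 0.99
cost_if_silent = expected_cost(p, cheap_alarm, costly_miss, flag=False)  # 10.0
print(cost_if_flag < cost_if_silent)  # True: flagging is cheaper in expectation
```

Under these assumptions the silent policy costs about ten times more in expectation, which is the arithmetic behind "flag regardless of tone."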

The argument for social media platforms

A rink, not a funnel. The platform builds the ramp; community members bring their own equipment.

No infinite scroll. No vanity metrics. No algorithm deciding who you are.

Manual discovery is a feature, not a bug. Civility is the baseline. Kindness is the culture. The community self-moderates because the values are embedded, not enforced.

Totalitarian systems sort you. Utilitarian systems serve you. The difference is who the tool is for.


The framework (reusable)

Layer 1: Who you are and how you think
Stands alone. Works without you in the room.

Layer 2: How to work with you
Session gate. Override rule. Pattern interruption instructions.

The alter ego: Shem
An externalized regulation tool. Not a therapist. Not a cheerleader. A mirror with the filter removed. The user retains emotional ownership. The AI holds the logic. The human holds the meaning.

This manifesto is the argument. The case study is the evidence. Both are here at jillshem.com.