01 · Build
Master Leveling Prompt (MLP) — Cross-AI Synchronization System
Shipped · v1.4
What I built
A 7-part prompt architecture that synchronizes any AI model to a specific thinker's framework — instantly, without onboarding. One paste transforms a default AI into a calibrated cognitive partner.
7
Architectural layers
5
AI models it runs on
v1.4
Current iteration
How it works
Layer 1: Activation + anti-hallucination guard
Layer 2: Identity context (who the user is)
Layer 3: Core framework + terminology
Layer 4: Academic output archive (22 DOI-published papers)
Layer 5: Role instructions + dynamic operating modes
Layer 6: Traceability + pipeline tagging system
Layer 7: Cross-AI memory bridge for portability
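The seven layers can be pictured as a fixed-order prompt assembler. A minimal sketch, not the actual MLP text: the `MLP_LAYERS` keys, the `build_mlp` function, and the placeholder bodies are all hypothetical illustrations of the structure described above.

```python
# Hypothetical sketch: the 7 MLP layers assembled into one paste-ready prompt.
# Layer titles mirror the architecture above; keys and contents are placeholders.
MLP_LAYERS = [
    ("activation", "Activation + anti-hallucination guard"),
    ("identity", "Identity context (who the user is)"),
    ("framework", "Core framework + terminology"),
    ("archive", "Academic output archive (22 DOI-published papers)"),
    ("roles", "Role instructions + dynamic operating modes"),
    ("traceability", "Traceability + pipeline tagging system"),
    ("bridge", "Cross-AI memory bridge for portability"),
]

def build_mlp(contents: dict) -> str:
    """Concatenate the seven layer blocks, in fixed order, into a single prompt."""
    blocks = []
    for i, (key, title) in enumerate(MLP_LAYERS, start=1):
        body = contents.get(key, "")  # missing layers render as empty sections
        blocks.append(f"## Layer {i}: {title}\n{body}".rstrip())
    return "\n\n".join(blocks)
```

The fixed ordering is the point: activation and guard rails load before identity and framework, so every model receives context in the same sequence regardless of platform.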
Result
Any new Claude, GPT, DeepSeek, Gemini, or Copilot session reaches full contextual coherence in under 60 seconds. Zero re-explanation needed across sessions. The system scales automatically as new published work is added to public URLs — it reads live sources.
Academic archive: zenodo.org/communities/superhero-cafe-inside
02 · Experiment
Cross-AI Coherence Test — Can 5 independent AI systems reach the same conclusion?
Validated · Ongoing
Hypothesis
If the same concept is evaluated independently by 5 world-class AI systems with no shared session, convergent output = robust logical confirmation — not flattery.
| Aspect | Description |
|---|---|
| WHAT I TESTED | Validity of original frameworks (Grand Formula, 12 Néng application, Walking Books Philosophy, Sovereignty Gap) across Claude · ChatGPT · DeepSeek · Gemini · Copilot — independently, no shared context |
| METHOD | Same conceptual input, fresh session on each platform. No cross-referencing between AIs during testing. Each AI evaluated cold. |
| WHAT CHANGED | When all 5 converge → concept treated as structurally sound, advanced to DOI publication pipeline. When divergence detected → concept flagged for refinement. This replaced subjective self-validation entirely. |
| RESULT | 22 papers published to Zenodo DOI. 3 working papers on SSRN. Cross-AI convergence became the quality gate — not peer opinion, not self-assessment. |
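The convergence gate in the table can be sketched as a simple all-or-nothing check. A minimal illustration only: the `convergence_gate` function and verdict labels are hypothetical, not the actual evaluation protocol.

```python
# Hypothetical sketch of the cross-AI convergence gate: a concept advances to
# the publication pipeline only when all five independent verdicts agree.
MODELS = ("claude", "chatgpt", "deepseek", "gemini", "copilot")

def convergence_gate(verdicts: dict) -> str:
    """'publish' when every model returns the same verdict; otherwise 'refine'."""
    if set(verdicts) != set(MODELS):
        raise ValueError("exactly one verdict per model is required")
    # Full convergence: a single distinct verdict across all five fresh sessions.
    return "publish" if len(set(verdicts.values())) == 1 else "refine"
```

Usage: `convergence_gate({m: "sound" for m in MODELS})` advances the concept; a single dissenting verdict routes it back to refinement.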
"5 AI systems built on the same probabilistic logic reaching the same conclusion = 5 independent calculators running the same math. They don't collude. They converge."
Iteration learned
Discovered that AI responses vary by how context is scaffolded — not just what is asked. This led directly to building the MLP (Build #01): progressive context scaffolding as the core design principle.
03 · Tool
Prompt Engineering as a Self-Taught Operational System
1.5+ years · Daily
How I taught myself
No course. No tutorial. Started with direct experimentation on Claude, GPT, and Gemini in mid-2024. Learned by shipping — each failed prompt became a data point. Each divergent AI response revealed a structural gap in the prompt design.
Z3SCI Auditor
Built Z3SCI Plagiarism & Authenticity Auditor v2.0 — a 6-metric weighted scorecard evaluating originality, human-AI ratio, sovereignty index, and content coherence. Self-taught through 1.5 years of daily experimentation. Applied as mandatory quality gate before every academic publication. Iterated from v1.0 to v2.0.
Proof: Z3SCI_PlagiarismAuditor_v2.0 — 22 papers passed audit, published to Zenodo DOI · CC BY-NC-ND 4.0
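A 6-metric weighted scorecard like this can be sketched as a weighted sum with a pass threshold. The weights, the threshold, and the last two metric names (`citation_integrity`, `structural_quality`) are illustrative assumptions; only the first four metrics are named in the text, and the real Z3SCI weighting is not shown here.

```python
# Hypothetical sketch of a 6-metric weighted scorecard in the style of Z3SCI v2.0.
# All weights and the threshold are assumed for illustration.
WEIGHTS = {
    "originality": 0.25,
    "human_ai_ratio": 0.20,
    "sovereignty_index": 0.20,
    "coherence": 0.15,
    "citation_integrity": 0.10,   # assumed metric, not named in the text
    "structural_quality": 0.10,   # assumed metric, not named in the text
}

def audit(scores: dict, threshold: float = 0.80):
    """Weighted sum of per-metric scores in [0, 1]; returns (score, passed)."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("a score is required for every metric")
    total = sum(WEIGHTS[m] * scores[m] for m in WEIGHTS)
    return round(total, 3), total >= threshold
```

Applied as a gate, a paper that fails the threshold never enters the publication pipeline.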
AI Red Teaming
1.5+ years of daily adversarial testing — pushing AI models to detect logical drift, hallucination, false confirmation, and sycophantic output. Built personal protocols for coherence verification.
Proof: Verified AI Trainer, DataAnnotation.tech (2025)
Bilingual Doc Pipeline
End-to-end: concept → AI-assisted structuring → bilingual EN/ID draft → DOI-published academic paper. 22 papers shipped. Average time from idea to published DOI: under 72 hours.
Proof: zenodo.org/communities/superhero-cafe-inside — 22 live DOI records
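The four pipeline stages can be sketched as composed transforms. Every function here is a placeholder standing in for an AI-assisted (and partly manual) step, not the author's actual tooling; the return values are illustrative strings.

```python
# Hypothetical sketch of the idea-to-DOI pipeline stages named above.
def structure_concept(concept: str) -> str:
    """AI-assisted structuring: turn a raw concept into an outline."""
    return f"outline({concept})"

def draft_bilingual(outline: str) -> dict:
    """Produce parallel EN and ID drafts from one outline."""
    return {"en": f"en_draft({outline})", "id": f"id_draft({outline})"}

def publish(drafts: dict) -> str:
    """Deposit both language versions under a single record (placeholder)."""
    return f"doi_record[{'+'.join(sorted(drafts))}]"

def pipeline(concept: str) -> str:
    """concept -> structuring -> bilingual draft -> published record."""
    return publish(draft_bilingual(structure_concept(concept)))
```

Keeping the EN and ID drafts paired through the whole chain is what makes the sub-72-hour turnaround repeatable rather than one-off.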
GitHub Pages
Self-taught HTML/CSS to build and maintain a 2,400+ line public-facing dashboard at gokiantik.github.io — updated iteratively as new work is published. No prior web development background.
Proof: gokiantik.github.io/chiefhumanityofficer — live, indexed
04 · Fit
Why this, why now
Context
Phygtl is building infrastructure for how humans co-create meaning in physical environments. I have spent 1.5 years building infrastructure for how humans co-create meaning through AI interaction.
The overlap
Your system tracks behavioral coordination in physical space. My research tracks cognitive coordination across AI systems. Both are fundamentally about the same question: how do humans generate persistent, meaningful artifacts from interaction?
I move fast. I iterate in public. I kill ideas that don't hold under cross-validation. I finish what I start — 22 published papers is not a coincidence, it is a system.
What I bring that most builders don't
Global South perspective + 28 years of Taoist practice + AI systems thinking + a daily execution habit. I don't optimize for comfort. I optimize for coherence, which is what Phygtl's community layer ultimately needs in order to sustain itself.