Research Portal — Empirical Evidence for Quantum Structure in Human Reasoning
Formal publications and internal reports documenting QHP theory and empirical validation.
Comprehensive 22-experiment validation of quantum structure in human reasoning. 812+ QHG states, 5 embedding models, GPU acceleration, quantum roadmap.
Zero-shot classification via quantum probability. Born rule vs 8 trained baselines. Ablation study. Cross-domain validation.
Full quantum cognition claim: 22 experiments, theoretical framework, GoEmotions social media validation, cross-extractor proof.
Quantum cognition in everyday emotional expression. Born rule, interference, uncertainty. GoEmotions Reddit validation.
"How Your Thoughts Obey Quantum Laws" — Channeling Sagan, Tyson, and Penrose. 9 interactive SVG diagrams.
22 experiments validating QHP predictions across three tiers of evidence.
| ID | Experiment | Key Result | Status |
|---|---|---|---|
| V1 | Coherence (same-role clustering) | Cohen's d = 1.63–2.93 | Pass |
| V2 | Projection (category collapse) | 23.7× above chance | Pass |
| V3 | Interference (destructive/constructive) | p = 10⁻⁸⁹ | Pass |
| V4 | Wave-Particle Duality | p = 2.5 × 10⁻⁷ | Pass |
| V5 | Entanglement Locality | 1.26× lift, monotonic decay | Pass |
| V6 | Schrödinger Evolution | ρ = −0.996 | Pass |
| V7 | Full QRA Cycle | Φ adaptation 0/5 → 5/5 | Pass |
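The interference test (V3) rests on the standard quantum composition rule: combining two paths at the amplitude level gives P = |a + b|², which departs from the classical sum Pₐ + P_b by the cross term 2·Re(a·b̄). A minimal sketch with illustrative amplitudes (not data from the experiments):

```python
import cmath

def combined_probability(a: complex, b: complex) -> float:
    """Quantum composition: probabilities come from summed amplitudes."""
    return abs(a + b) ** 2

# Illustrative amplitudes, chosen out of phase to show destructive interference
a = cmath.rect(0.6, 0.0)        # magnitude 0.6, phase 0
b = cmath.rect(0.5, cmath.pi)   # magnitude 0.5, phase pi

classical = abs(a) ** 2 + abs(b) ** 2   # 0.61: probabilities just add
quantum = combined_probability(a, b)    # 0.01: amplitudes nearly cancel
interference_term = quantum - classical # 2*Re(a*conj(b)) = -0.6
```

A classical mixture can never produce the negative cross term; detecting it (or its constructive counterpart) is what separates the quantum prediction from the classical baseline.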
| ID | Experiment | Key Result | Status |
|---|---|---|---|
| T1 | Conflict Detection | Best interference-based F1 | QHP wins |
| T2 | Non-Commutativity (Order Effects) | Asymmetric projection confirmed | QHP wins |
| T3 | Non-Separability | Local-only entanglement confirmed | QHP wins |
| T4 | Superposition Advantage | QHP retrieval beats classical | QHP wins |
| T5a | Born Rule cos²θ | 56–90% zero-shot accuracy | QHP wins |
| T5b | Malus's Law | r = 0.538 confusion prediction | QHP wins |
| T5c | Heisenberg Uncertainty | r = 0.841 complementarity | QHP wins |
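The Born-rule rows above (T5a, T5b) use the cos²θ (Malus's law) form. A minimal sketch of zero-shot classification under this rule, with hypothetical role labels and toy 3-dimensional embeddings (real QHG states and class prototypes would come from the embedding models):

```python
import numpy as np

def born_rule_classify(state_vec, prototypes):
    """Zero-shot classification: P(class) proportional to cos^2 of the
    angle between the state embedding and each class prototype."""
    s = state_vec / np.linalg.norm(state_vec)
    probs = {}
    for label, proto in prototypes.items():
        p = proto / np.linalg.norm(proto)
        cos_theta = float(np.dot(s, p))
        probs[label] = cos_theta ** 2        # Born rule: |<psi|phi>|^2
    total = sum(probs.values())              # renormalize over classes
    return {k: v / total for k, v in probs.items()}

# Hypothetical roles and toy 3-d embeddings, for illustration only
prototypes = {"goal": np.array([1.0, 0.1, 0.0]),
              "constraint": np.array([0.1, 1.0, 0.0])}
state = np.array([0.9, 0.3, 0.1])
probs = born_rule_classify(state, prototypes)
print(max(probs, key=probs.get))  # → goal
```

No training occurs: accuracy comes entirely from geometry, which is what makes the comparison against trained classifiers meaningful.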
| ID | Experiment | Key Result | Status |
|---|---|---|---|
| B1 | Entanglement Correlation (Bell-type) | p = 0.031 enhancement | Pass |
| B2 | Cross-Model Universality | 4/5 signatures replicate on all 5 models | Pass |
- **ablation:** QHG structure vs raw text vs shuffled roles. GTE Born accuracy: 87.8% → 58.4% when roles are stripped.
- **born-rule:** cos²θ distinguished from cos, cos³, and softmax via calibration (ECE) and log-likelihood.
- **cross-domain:** Medical, education, engineering, ethics — 229 new states. Born accuracy 86–100%.
- **social-media / goEmotions:** 500 Reddit comments → 1,569 QHG states. Born accuracy 85.5–90.1%. Stronger than formal text.
- **cross-extractor:** GPT-5.2 vs Qualtron-4B on identical dialogues. No significant difference (p = 0.31). Signatures are inherent.
- **baselines / neurips:** Born (zero-shot) 75.2% accuracy vs 8 trained classifiers. Beats kNN, RF, LR, and Prototype Net.

End-to-end document processing: ingestion, QLang extraction, QNR2 normalization, QHG process generation.

- **overview:** Summary across all 10 document scenarios: pipeline throughput, extraction quality, coverage metrics.
- **scientific:** Research-paper extraction with entities, relations, QLang, QHG processes.
- **legal:** Contract clause extraction with normative rules (obligations, prohibitions).
- **financial:** Financial risk, thresholds, collateral, interest-rate structures.
- **policy:** HR policies, obligations, permissions, disciplinary sequences.
- **governance:** Data classification, retention rules, access controls.
- **technical:** System architecture, event flows, state transitions.
- **code:** Code extraction: functions, types, postconditions, side effects.
- **devops:** Pipeline configuration, stages, conditions, deployment gates.
- **incident:** Timeline extraction, causal chains, action items, root causes.

Systematic evaluation of 31 NLP tools, plus an extraction benchmark comparing GPT-5.2, Qualtron-4B, GLiClass, and more.

- **31 tools:** spaCy, GLiNER, GLiClass, Stanza, Flair, Qualtron (3 modes), CoreNLP, ingest pipeline, QNR2, HSC, and more.
- **benchmark:** 35-sentence gold set. Qualtron-4B: 91.4% vs GPT-5.2: 60.0% on role classification. 12 prompt configs tested.

| Approach | Exact Match | Category Match | Latency | Cost |
|---|---|---|---|---|
| Qualtron-4B (optimized) | 91.4% | 91.4% | 402ms | $0.00 |
| GPT-5.2 | 60.0% | 62.9% | 1349ms | $0.06 |
| CAG Baseline (Qwen) | 28.6% | 40.0% | 377ms | $0.00 |
| GLiClass | 0.0% | 5.7% | 25ms | $0.00 |
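The table's two accuracy columns can be read as follows: exact match requires the predicted role string to equal the gold role, while category match only requires both to fall in the same coarse category. A minimal scoring sketch, with hypothetical role names and an assumed category map (the benchmark's actual role taxonomy is not reproduced here):

```python
def benchmark_scores(preds, gold, category_of):
    """Exact match: predicted role equals the gold role.
    Category match: prediction and gold share a coarse category."""
    n = len(gold)
    exact = sum(p == g for p, g in zip(preds, gold)) / n
    category = sum(category_of[p] == category_of[g]
                   for p, g in zip(preds, gold)) / n
    return exact, category

# Hypothetical roles and category map, for illustration only
category_of = {"goal": "intent", "desire": "intent", "constraint": "rule"}
gold  = ["goal", "constraint", "desire", "goal"]
preds = ["goal", "constraint", "goal", "constraint"]
exact, category = benchmark_scores(preds, gold, category_of)
# exact = 0.5; category = 0.75 ("goal" vs "desire" share "intent")
```

Category match is always at least as high as exact match, which is why the two columns diverge only for the weaker systems in the table.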
Multi-model, multi-prompt comparative study for QLang extraction quality.
Testing extraction quality across model sizes and architectures via OpenRouter.
6 prompt strategies, including baseline, few-shot, chain-of-thought, structured, and domain-specific.
Multi-model comparative study. 6 prompt strategies tested via OpenRouter.
Internal report: 19 experiments, 7/7 constructs validated. First comprehensive empirical evidence.
"How Your Thoughts Obey Quantum Laws" — Sagan/Tyson-style accessible overview with 9 SVG diagrams.
Raw text ablation, Born rule differentiation, cross-domain validation. NeurIPS, CogSci, Frontiers papers drafted.
GoEmotions social media (Born 90.1%), cross-extractor proof (p=0.31), NeurIPS baselines (8 classifiers). All papers updated.
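The Born-rule differentiation mentioned above scores the competing functional forms (cos²θ vs cos, cos³, softmax) partly by calibration. Expected calibration error (ECE) bins predictions by confidence and takes the weighted average of |accuracy − mean confidence| per bin; a minimal sketch with toy data, assuming standard equal-width binning:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted mean
    of |bin accuracy - bin mean confidence| over non-empty bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()
            conf = confidences[mask].mean()
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece

# Toy data: well-calibrated (confidence matches hit rate) vs overconfident
ece_calibrated = expected_calibration_error([0.75] * 4, [1, 1, 1, 0])  # 0.0
ece_overconfident = expected_calibration_error([0.9] * 4, [1, 1, 0, 0])  # 0.4
```

Two scoring rules can reach the same accuracy yet differ sharply in ECE, which is what lets calibration separate cos²θ from nearby functional forms.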