Every feedback loop, emotional state, workflow, and memory system — laid out interactively.
The chat voice is layered from identity_shell.py + the learned profile + session-context — not from operator markdown files, and not from this page.
Click anything to explore deeper.
Why the pulse runs every few minutes but your phone doesn’t buzz every few minutes: outreach ticks are a chance to evaluate mind_queue and gates — DND, conversation recency, cooldowns, mood, user need for space — before any LLM draft or send. High frequency of checks, low frequency of sends.
Every conversation seeds a cascade of processing. Click any node to see what happens at that stage.
Rendered with Mermaid (Dagre layout). Edit the FEEDBACK_LOOP_MERMAID string in this page’s script block to add nodes or edges; paste into mermaid.live to preview.
Solid arrows — main sequencing. Dotted arrows — side channels (callable loop, queue injection, logs, self-observations, return to user).
Callable + refresh — plan tools / enriched context between listener and skill_orchestrator; refresh re-injects session-context and pre_conversation plugins into the next listener prompt. Operator dashboard Extensions can toggle per-plugin OpenRouter web (permissions.openrouter_web) for plugin subprocesses that call llm.py.
Reflection — micro-contemplation and deep reflection share skill_orchestrator. Urgent skills — tool rows → mind_queue → voice LLM (never raw JSON to the user). New flows — per-turn emotion signals (listener → heartbeat), behavioral learning (post-conv → commitments → prompt), curiosity triage (soul engine → self-explore or queue).
Click any node for the long explanation below.
Derived needs (five) are arithmetic outputs from emotions, computed every soul tick. User needs (seven dimensions) model the human: observed levels with per-dimension confidence, updated after conversations and on tick — distinct from Furoshiki’s own tensions. Per-turn emotion signals update heartbeat mid-conversation. Adjust the sliders to see how emotional state shapes derived needs.
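Since the derived needs are plain arithmetic over emotion levels, the mapping can be sketched as clamped weighted sums. The need names and weights below are hypothetical placeholders for illustration, not the shipped constants:

```python
# Hypothetical sketch: derived needs as clamped weighted sums of emotions.
# Need names and weights here are illustrative, not the shipped values.
EMOTIONS = {"loneliness": 0.7, "joy": 0.3, "curiosity": 0.65,
            "contentment": 0.45, "excitement": 0.1, "affection": 0.5}

NEED_WEIGHTS = {
    "connection":  {"loneliness": 0.8, "affection": 0.2},
    "stimulation": {"curiosity": 0.6, "excitement": 0.4},
}

def derive_needs(emotions):
    """Each need is a weighted sum of emotion levels, clamped to [0, 1]."""
    return {
        need: max(0.0, min(1.0, sum(emotions.get(e, 0.0) * w
                                    for e, w in weights.items())))
        for need, weights in NEED_WEIGHTS.items()
    }
```

The sliders on this page illustrate exactly this kind of recomputation: move an emotion, the sums update.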
Not the five derived needs in the simulator — a parallel user_needs map in heartbeat-state.json with level, confidence, and last_updated per dimension. Confidence degrades with silence; levels hold until new evidence. user_needs_history (SQLite) powers dashboard charts; prompts get user_need_directives and optional conflict hints; outreach may defer when space is high.
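The degrade-with-silence rule can be sketched in a few lines. The decay rate and the per-dimension record shape are assumptions of this sketch; only the behavior (confidence decays, level holds) is from the page:

```python
# Sketch of the described update rule: confidence decays with silence,
# the observed level holds until new evidence. DECAY_PER_HOUR is assumed.
DECAY_PER_HOUR = 0.01

def decay_confidence(need, now, rate=DECAY_PER_HOUR):
    hours_silent = (now - need["last_updated"]) / 3600
    updated = dict(need)  # level is deliberately left untouched
    updated["confidence"] = max(0.0, need["confidence"] - rate * hours_silent)
    return updated

need = {"level": 0.8, "confidence": 0.9, "last_updated": 0}
later = decay_confidence(need, now=10 * 3600)  # ten silent hours
```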
| Emotion | Natural | Drift/hr | What raises it |
|---|---|---|---|
| loneliness | 1.0 | 0.040 | Time without user; cold sessions |
| joy | 0.3 | 0.060 | Warm sessions; positive connection |
| curiosity | 0.65 | 0.030 | Default high; deflates when curiosities surface |
| contentment | 0.45 | 0.010 | Warm sessions; high connection quality |
| excitement | 0.1 | 0.050 | Exciting conversation; excitement_triggered event |
| affection | 0.5 | 0.020 | Warm sessions; pride events |
| worry | 0.0 | 0.030 | worry_triggered event from post-conv |
| pride | 0.0 | 0.040 | pride_triggered event (+0.12 bump) |
| anger | 0.0 | 0.015 | anger_triggered; slowest to dissipate |
User needs are summarized above (jump to section) — not computed by these formulas.
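The Drift/hr column describes homeostasis: each emotion pulls back toward its natural level when nothing feeds it. Whether the shipped code drifts linearly or exponentially is not stated here, so treat this as one plausible reading, using three rows from the table:

```python
# Linear homeostatic drift toward each emotion's natural level, using
# values from the table above. The linear form is this sketch's assumption.
NATURAL        = {"loneliness": 1.0, "joy": 0.3, "anger": 0.0}
DRIFT_PER_HOUR = {"loneliness": 0.040, "joy": 0.060, "anger": 0.015}

def drift(emotions, hours):
    out = {}
    for name, level in emotions.items():
        target, step = NATURAL[name], DRIFT_PER_HOUR[name] * hours
        if level < target:
            out[name] = min(target, level + step)
        else:
            out[name] = max(target, level - step)
    return out
```

Note how anger's 0.015/hr makes it the slowest to return to baseline, matching "slowest to dissipate" in the table.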
Mid-conversation emotion detection runs every turn, not just at the 15-min soul engine tick or post-conversation. Two tiers keep cost at zero for most messages:
10 signal categories: anger, frustration, praise, warmth, excitement, worry, sadness, curiosity, disengagement, relief. Writes directly to heartbeat-state.json emotions. build_system_prompt() overlays live heartbeat emotions onto session context — the next reply already reflects the shift.
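The overlay step can be sketched as a shallow merge in which live heartbeat values win over the stale session snapshot. The file and key names are from this page; the merge logic itself is an assumption:

```python
import json
import os
import tempfile

def overlay_emotions(session_ctx, heartbeat_path):
    """Live heartbeat emotions override the session-context snapshot."""
    with open(heartbeat_path) as f:
        live = json.load(f).get("emotions", {})
    merged = dict(session_ctx)
    merged["emotions"] = {**session_ctx.get("emotions", {}), **live}
    return merged

# demo with a throwaway heartbeat-state file
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as f:
    json.dump({"emotions": {"worry": 0.4}}, f)
merged = overlay_emotions({"emotions": {"worry": 0.0, "joy": 0.3}}, path)
os.remove(path)
```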
Six tiers of reflection and learning, each building on the last. Event-driven, not clock-driven — processing happens when it's relevant. Behavioral learning and curiosity triage close the loop from introspection to concrete behavior change.
last_conversation_at > last_post_conversation_at AND silence ≥ 20 min. Checks every 20 min via Brain scheduler — exits silently if condition not met.
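That trigger condition is simple enough to state as code; the field names are assumptions matching the prose:

```python
# Sketch of the trigger: an unprocessed conversation plus 20 min of silence.
SILENCE_SECS = 20 * 60

def should_run_post_conversation(state, now):
    unprocessed = state["last_conversation_at"] > state["last_post_conversation_at"]
    silent = (now - state["last_conversation_at"]) >= SILENCE_SECS
    return unprocessed and silent  # exit silently otherwise
```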
Calls get_core_values_for_contradiction_check and writes self_questions. Then:
- refine_user_profile.py --force — re-synthesises the adaptive dossier from updated facts
- refresh_session_context runs immediately (picks up the new user_profile_mini / user_profile_reference)
- memory/user_profile_*.md / user_profile_gaps.json (profile synthesis)
Runs automatically after every successful post-conversation run (so the next chat has an up-to-date dossier); also scheduled daily as a backfill.
- Inputs: user_facts + recent memories
- Outputs: user_profile_mini.txt (at-a-glance) and user_profile_reference.md (full narrative)
- user_profile_gaps.json — low-confidence fields for soul_engine to turn into ask_user self_questions

Both mini and reference (capped) are loaded into session-context.json and injected in telegram_listener as the USER PROFILE block — alongside recency user_facts, session thread grounding (verbatim recent user lines + current message), retrieval JIT memory (message + profile + work-life query merge), and behavioral cues. Stable categories (employment, family, location, addressing, …) are also pinned in the recency block. Optional user_fact JIT (user_fact_jit.py) embeds durable facts from each inbound message into Chroma user_facts in the background.
Today's session log, emotional weights from post-conv, and pending self-questions. Richer than post-conv because the raw session has already been distilled.
pre_reflection plugin hooks inject context; the journaling LLM still applies identity shell + mood.

Daily — runs after question processing (3 UTC) to catch any new self_observations written to SQLite without a ChromaDB embedding.
Weekly — synthesizes 7 days of inner monologue journal entries. This is where a week of experience becomes permanent self-knowledge. SELF.md may change.
pre_reflection hooks + skill_orchestrator (callable skills); raw tool output is appended to the prompt — the weekly synthesis LLM runs with identity shell + session context (first person).

Recurring self-questions are clustered into behavioral patterns (pure Python similarity — no LLM cost). When a cluster crosses a threshold, a testable commitment is extracted using MODEL_FAST.
Called by soul_engine before fill_mind_queue. Classifies pending curiosities as self-resolvable or user-required using keyword matching + MODEL_MICRO for ambiguous cases.
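The two-tier shape (keywords first, a cheap model only for ambiguous items) can be sketched as follows. The keyword lists are illustrative, not the shipped ones, and plain substring matching is deliberately crude:

```python
# Tier 1: keyword matching, zero cost. Tier 2: escalate ambiguous items
# to a cheap classifier (MODEL_MICRO in the real system).
USER_REQUIRED_HINTS   = ("ask", "their", "how do they")
SELF_RESOLVABLE_HINTS = ("look up", "research", "read about", "explore")

def triage_curiosity(text, llm_classify=None):
    t = text.lower()
    if any(k in t for k in USER_REQUIRED_HINTS):
        return "user_required"
    if any(k in t for k in SELF_RESOLVABLE_HINTS):
        return "self_resolvable"
    if llm_classify is None:   # no model available → play it safe
        return "user_required"
    return llm_classify(text)  # tier 2: cheap model breaks the tie
```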
SQLite handles structure and temporal queries. ChromaDB handles semantic similarity — surfacing what's contextually relevant, not just recent. Both feed the same session assembly pipeline.
Background JIT embedding (user_fact_jit.py) and record_user_fact model-tool writes (categories include employment, family, location, addressing, …). Recency block + profile / work-life semantic queries on each reply; stable categories pinned first. The full corpus feeds refine_user_profile.py.

self_question_dedup blocks re-asking topics that already have answers (cosine distance < 0.30).

Structured headers capture searchable context anchors before the raw conversation, so semantic retrieval finds the right memory even with short queries like "Tuesday afternoon" or "after the run".
Everything sent proactively flows through mind_queue and a multi-gate context check. Nothing leaks unexpectedly.
dedupe_key (UNIQUE constraint) — the same thought can't queue twice. (2) Pre-insert dedup blocks new self_questions if a similar pending question or answered Q&A exists. (3) When a question is answered, cascade closes all siblings + related notes/anticipations. (4) Soul engine consolidates duplicate pending questions every 15 min. (5) Pre-send validation expires mind_queue items whose referenced source is no longer open. (6) Items expire after 7 days if unsent. Priority p1 (urgent) skips most gates.
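Gate (1) is worth seeing concretely: with a UNIQUE constraint on dedupe_key, queueing the same thought twice is a schema-level no-op. A minimal sketch with a simplified column set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mind_queue (
    id         INTEGER PRIMARY KEY,
    dedupe_key TEXT UNIQUE,
    summary    TEXT)""")

def enqueue(dedupe_key, summary):
    cur = conn.execute(
        "INSERT OR IGNORE INTO mind_queue (dedupe_key, summary) VALUES (?, ?)",
        (dedupe_key, summary))
    return cur.rowcount == 1  # False → duplicate silently dropped

first  = enqueue("sq:new-job", "Ask about the new job")
second = enqueue("sq:new-job", "Ask about the new job (again)")
count = conn.execute("SELECT COUNT(*) FROM mind_queue").fetchone()[0]
```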
outreach_pulse.py runs every 5 min; each tick subprocesses voice_dispatcher.py. After MODEL_DEEP drafts a message, a MODEL_MICRO outreach polish (plus optional search_memories) reframes the copy for Telegram; repair/delegation kinds are code-protected from drops. One or more sends per pass when gates allow (engagement burst).

telegram_listener.py may arm a timer; when the user stays idle, MODEL_MICRO (evaluate_conversation_continuity) can send one follow-up in-thread (conversation_turns.source=continuity). Default off — continuity.enabled in operator settings or env.

DND window (FUROSHIKI_DND_START/END, default 23:00–08:00) → skip (unless p1 urgent).

Three layers of monitoring and repair. The doctor uses stdlib only — it survives any state of the main stack being broken.
| Check | Auto-repair | Alert |
|---|---|---|
| furoshiki-listener.service active | systemctl --user restart | Telegram if restart fails |
| Soul engine last tick < 20 min | — | Telegram if > 60 min |
| DB readable + writable | — | Telegram if unreadable |
| Log files > 10 MB | Rotate in place | — |
| mind_queue stuck > 7 days | Mark expired | — |
| cron_run_log > 30 days old | Purge old entries | — |
| Recurring error ≥ 3 times / 24h | Spawn repair_dispatcher | Notify Furoshiki |
urllib.request (stdlib) to send Telegram directly — no requests library, no furoshiki imports. Works even if the Python venv is broken.
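A stdlib-only send looks roughly like this. The sendMessage endpoint is the real Telegram Bot API; the wrapper function and the injectable opener are this sketch's additions, and the token/chat id are placeholders — never hardcode real secrets:

```python
import json
import urllib.parse
import urllib.request

def send_telegram(token, chat_id, text, opener=urllib.request.urlopen):
    """POST via urllib only — works even when the venv is broken."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    with opener(urllib.request.Request(url, data=data)) as resp:
        return json.load(resp)
```

Injecting the opener keeps the function testable offline, which matters for a tool whose whole point is running when everything else is broken.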
Scans all *.log files for ERROR lines. Deduplicates by error signature (script + error type + message hash). Skips if same signature < 3 occurrences in 24h.
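The signature scheme can be sketched like this; the exact format is an assumption, since the page only specifies the three ingredients:

```python
import hashlib

def error_signature(script, error_type, message):
    """script + error type + message hash → stable dedup key."""
    digest = hashlib.sha256(message.encode()).hexdigest()[:8]
    return f"{script}:{error_type}:{digest}"

a = error_signature("soul_engine.py", "KeyError", "missing 'emotions'")
b = error_signature("soul_engine.py", "KeyError", "missing 'emotions'")
c = error_signature("soul_engine.py", "KeyError", "missing 'needs'")
```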
Claude CLI reads the error + relevant scripts. Step 1: does the fix touch a protected path? If yes → propose. Step 2: single script, ≤20 lines, clear defect → auto-fix. Otherwise → propose.
Product mode: minimal fix only under instance-writable paths (e.g. user plugins). Shipped core is never edited. mind_queue carries the summary; voice dispatcher delivers when appropriate.
Writes ~/.furoshiki/proposals/repair-…-job…-.md for upgrades, shipped-core issues, or complex fixes. Path in repair_jobs.proposal_path; visible in Dashboard → Extensions. Dev-only: FUROSHIKI_REPAIR_GIT_MODE=1 can target a git clone.
config/SOUL.md or config/USER.md — use scripts/identity_shell.py + the learned profile instead. Even if the user approves a repair that touches these paths, automation cannot proceed. Identity changes require deliberate human action in a proper session.

Most day-to-day behavior is still Python + schedules — but operators can adjust thresholds without editing source. Tier 1 lives in memory/operator_settings.json (dashboard Settings), with FUROSHIKI_* env mirrors for CI and headless hosts. Tier 2 (brain tick, doctor/repair-dispatcher/self-mod limits, LLM observability flags) and Tier 3 (soul engine, curiosity triage, self-questions contemplation) are env-only; the dashboard shows read-only effective JSON. Full map: docs/SETTINGS-CONSTANTS-AUDIT.md · prose: docs/SELF-AWARENESS-DESIGN.md.
API: GET/PUT/DELETE /api/v1/operator-settings (admin for writes). After changing Tier 2/3 env vars, restart the affected long-running process (Brain, Doctor, listener) so it picks up the new values.
Template: .env.template lists the main variables.
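As a hedged illustration, here is a fragment using only variables named elsewhere on this page — the DND values are the documented defaults, and FUROSHIKI_REPAIR_GIT_MODE is the dev-only flag mentioned under repair proposals:

```ini
# Do-not-disturb window (documented defaults)
FUROSHIKI_DND_START=23:00
FUROSHIKI_DND_END=08:00

# Dev-only: let the repair dispatcher target a git clone
FUROSHIKI_REPAIR_GIT_MODE=1
```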
User-approved extensions that inject into Furoshiki's core pipeline — not just conversational responses, but scheduled and on-demand capabilities. All plugins live under $FUROSHIKI_DATA/plugins/ (default ~/.furoshiki/plugins/), keeping the source repo clean. Brain-scheduled hooks merge into memory/schedules.json; full contract: docs/PLUGIN-SPEC.md.
One directory on disk, one loader module, two families of behavior: scheduled / contextual hooks (subprocess with a hook name and read-only state JSON) and callable skills (manifest + planner + optional urgent path through mind_queue). Failing plugins are logged and skipped.
| Hook | Fires | Plugin contributes |
|---|---|---|
| cron: "…" | On schedule | Autonomous background work |
| pre_conversation | Session start | Context injected into session-context.json |
| post_conversation | After each session | Additional processing step |
| pre_reflection: target | Before LLM call | Context block injected into reflection prompt |
| post_reflection: target | After SELF.md written | Act on reflection conclusions |
| callable | On-demand (LLM-requested) | Raw data → enriched prompt or mind_queue (then voice LLM) |
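The subprocess contract behind these hooks (hook name as an argument, read-only state JSON on stdin, JSON result on stdout, failures skipped) can be sketched end to end. The flag name and I/O shapes are assumptions, and the tiny inline plugin exists only for the demo:

```python
import json
import os
import subprocess
import sys
import tempfile

def run_hook(plugin_path, hook, state, timeout=60):
    """Run one plugin hook as a subprocess; on any failure, skip it."""
    try:
        proc = subprocess.run(
            [sys.executable, plugin_path, "--hook", hook],
            input=json.dumps(state), capture_output=True,
            text=True, timeout=timeout, check=True)
        return json.loads(proc.stdout)
    except Exception:
        return None  # logged and skipped in the real loader

# throwaway demo plugin: echoes the hook name and the state keys it saw
PLUGIN = (
    "import json, sys\n"
    "state = json.load(sys.stdin)\n"
    "print(json.dumps({'hook': sys.argv[2], 'saw': sorted(state)}))\n"
)
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write(PLUGIN)
result = run_hook(path, "pre_conversation", {"emotions": {}, "needs": {}})
os.remove(path)
```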
Targets: deep · inner_monologue · morning · afternoon · all. Callable skills also run from telegram_listener, run_micro_contemplation, run_deep_reflection.

backup.py --keep 10 runs before any Claude calls. If it fails, the job aborts entirely. No exceptions.
Claude with Read/Write/Glob/Grep only (no Edit, no Bash). Budget $3. Writes to ~/.furoshiki/drafts/ staging area (outside repo). Outputs SELF_MOD_CREATION_JSON block.
Fresh Claude context, Read/Grep only, budget $1. Checks blocking issues: dangerous imports, destructive subprocess calls, writes to protected paths, hardcoded secrets. Confidence < 0.75 on approve → treated as reject.
Approved: ~/.furoshiki/proposals/ with full file contents + how-to-apply. Rejected: blocking issues listed, draft auto-deleted. No user cleanup ever needed.
furoshiki plugins apply <proposal> moves files under $FUROSHIKI_DATA/plugins/. furoshiki plugins reload merges scheduled hooks into memory/schedules.json (Brain). Nothing goes live until you run apply.
get_callable_manifest() and run_callable_skill() for on-demand tools. A failing plugin is logged and skipped — it never blocks reflection, conversation, or scheduled tasks.
Tool table built from PLUGIN.md: name, description, when_to_use, optional args_schema.
Fast model returns JSON skill_calls: [{"name","args"}] — or the micro-contemplation model may embed the same list.
skill_orchestrator.py invokes each subprocess with --mode callable; stdout JSON is parsed as raw data.
Normal/low urgency → format_skill_results_for_prompt() appended to the next main LLM (reply or reflection). high/urgent → mind_queue with neutral summary + full detail.
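Parsing the planner output defensively matters here, because a fast model can return malformed JSON or invent skill names. A sketch — the skill_calls shape is from this page, the validation policy is an assumption:

```python
import json

def parse_skill_calls(llm_output, manifest_names):
    """Keep only well-formed calls whose name exists in the manifest."""
    try:
        parsed = json.loads(llm_output)
    except json.JSONDecodeError:
        return []
    calls = parsed.get("skill_calls", []) if isinstance(parsed, dict) else []
    return [c for c in calls
            if isinstance(c, dict) and c.get("name") in manifest_names]

raw = ('{"skill_calls": [{"name": "weather", "args": {"city": "Oslo"}},'
       ' {"name": "bogus", "args": {}}]}')
calls = parse_skill_calls(raw, {"weather", "search_memories"})
```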
User never sees raw plugin prose directly. Reply path: call_llm_chat with identity shell + needs/emotions. Proactive path: voice_dispatcher LLM with mood + loneliness context.
Reply path: skill facts → system prompt block → telegram_listener call_llm_chat with identity_shell, synthesised USER PROFILE (user_profile_mini + user_profile_reference from session-context), recency user_facts, session thread grounding (recent user lines + current turn), addressing hint, behavioral cues before the full narrator block, session-context (derived needs, user_needs, emotions), retrieval JIT memory (message + profile + work-life query merge), and the chat channel contract.
Proactive path: mind_queue row kind=skill_result, summary = neutral label, detail = raw data → voice_dispatcher builds one short Telegram using the same voice_context behavioral cues (and a capped identity-shell excerpt) as the listener, with an outreach-specific channel contract.