Intelligence Gateway · Workspace Agents · Partner preview

Lytebulb
Intelligence Gateway and
Workspace Agents

Two parts: an intelligence gateway that routes requests to configured LLM providers, with encrypted credentials and retained threads, and workspace agents that run in Git repositories with streamed turns, diffs, and explicit promotion.

provider routing · workspace agents · promote

Focus

Two parts: gateway and workspace agents

Use the intelligence gateway, the workspace agent runtime, or both. They ship in one binary and share the same storage and admin UI.

  • Intelligence gateway: configured routing across LLM providers, encrypted credentials, usage records, retained threads.
  • Workspace agents: run in Git repositories with branches, worktrees, and streamed turns.
  • Every workspace turn captures decisions, diffs, changed files, and events for inspection.
  • Workspace changes promote toward target branches through an explicit flow.

Workspace Agents

Workspace Agent Features

Workspace-backed threads run agents against real Git repositories. Operators see turns, decisions, diffs, and artifacts before anything is promoted.

Work in Git Repositories

Agent threads run against real Git repositories, working in branches and worktrees instead of detached scratch state.

Multi-Turn Threads

Threads preserve multi-turn context so follow-up work can continue across sessions and retain the history of what happened.

Inspectable Turns

Agents can pause for operator input. Every turn captures streamed events, diffs, changed files, and other review artifacts.

Promote Changes Explicitly

Move agent work back toward a target branch through an explicit promotion flow. No silent pushes or unreviewed commits.

Intelligence Gateway

Provider Gateway and Retained Threads

Lytebulb's intelligence gateway routes requests to configured LLM providers. Use it standalone or alongside workspace agents.

Multi-Provider Routing

Configure LLM providers and routes behind one API. Switch models without changing application code.

Encrypted Credentials

Provider keys are encrypted at rest with AES-256-GCM. No plaintext credentials in config files or environment variables.

Retained Threads

Conversations persist across sessions with full multi-turn history. Resume, branch, or hand off threads without losing context.

Usage Records

Every request, model, and token count is recorded. Audit usage by tenant, route, or provider directly in the runtime.

Deployment

Lightweight by Design

  • Single Go binary
  • SQLite for storage — no external database
  • REST API for integration
  • Admin UI for threads and providers
  • Embeddable chat widget
  • Runs standalone or alongside Lytebase

Frequently Asked Questions

What can the workspace agents do today?

The first workspace workflow is built around coding agents. The runtime is designed so additional agent tools can be added over time. Independently, the intelligence gateway routes LLM requests for any application — no workspace required.

Does it require Lytebase?

No. Lytebulb runs as a standalone Go binary with SQLite — no external database or infrastructure required. It also integrates naturally with Lytebase as part of the Lyte Stack.

How does it handle model credentials?

Provider API keys are stored encrypted with AES-256-GCM. The runtime manages credential lifecycle and injects them into agent sessions and inference requests without exposing plaintext keys.

Can teams use it today?

Lytebulb is still in development. We are looking for early teams that want a self-hosted intelligence gateway, workspace agent runtime, or both.

Lytebulb is in development

Looking for early teams that want a self-hosted intelligence gateway, workspace agent runtime, or both. Get in touch.