# Migration Guide: Moving to Voxta Gateway
This guide provides a structured approach for migrating applications from direct `VoxtaClient` (SignalR) usage to the high-level, semantic `GatewayClient` provided by the Voxta Gateway.
## Overview
The Voxta Gateway acts as a state-mirroring proxy. Instead of your application managing raw SignalR events, session pinning, and manual state tracking, the GatewayClient provides:
- **Simplified State:** properties like `client.chat_active` and `client.ai_state`.
- **High-Level Actions:** `send_dialogue`, `external_speaker_start`, etc.
- **Automatic Reconnection:** built-in retry logic for both HTTP and WebSocket.
- **Semantic Events:** clean events like `chat_started`, `sentence_ready`, and `ai_state_changed`.
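To make the event model concrete, here is a toy dispatcher sketching the `client.on(event, handler)` subscription pattern used throughout this guide. `ToyDispatcher` and the handler are illustrative stand-ins, not the real `GatewayClient`:

```python
import asyncio
from collections import defaultdict

# Toy stand-in showing the shape of the on()/emit pattern; the real
# GatewayClient registers and dispatches handlers for you.
class ToyDispatcher:
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event, handler):
        self._handlers[event].append(handler)

    async def emit(self, event, *args):
        for handler in self._handlers[event]:
            await handler(*args)

seen = []

async def on_sentence(text):
    seen.append(text)  # e.g. forward the sentence to a TTS pipeline

async def main():
    client = ToyDispatcher()
    client.on("sentence_ready", on_sentence)
    await client.emit("sentence_ready", "Hello there.")

asyncio.run(main())
```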
## Phase 1: Preparation

### 1. Isolated Migration (Worktrees)
Before modifying the application, create a dedicated git worktree. This prevents interference with the stable codebase while you iterate.
### 2. Handling Unreleased Gateway Source
If the voxta-gateway is not yet published to PyPI, you must manually point your application to its source directory.
```python
import sys
from pathlib import Path

# Add local voxta-gateway to path
gateway_path = Path("/path/to/dion-labs-oss/voxta-gateway/main")
if gateway_path.exists():
    sys.path.append(str(gateway_path))
```
## Phase 2: Client Transition

### 1. Update Imports
Replace the old VoxtaClient imports with the GatewayClient.
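The exact module paths depend on the package layout, which this guide does not pin down; assuming the `voxta_client` and `voxta_gateway` package names referenced elsewhere in this document, the change looks roughly like:

```python
# Before: low-level SignalR client (module path assumed)
from voxta_client import VoxtaClient

# After: high-level gateway client (module path assumed)
from voxta_gateway import GatewayClient
```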
### 2. Lifecycle Management
The GatewayClient runs its own event loop and handles reconnection. Start it as a background task.
```python
import asyncio

async def main():
    client = GatewayClient(gateway_url="http://localhost:8081", client_id="my-app")

    # Start in background; the client manages its own reconnects
    gateway_task = asyncio.create_task(client.start())

    # ... your app logic ...

    await client.stop()
    gateway_task.cancel()
```
## Phase 3: Implementing Core Logic

### 1. Chat Lifecycle & Queuing
Sending messages directly to Voxta when no chat is active usually causes them to be dropped. Use the `chat_started` event to flush a local queue.
```python
class MyRelay:
    def __init__(self, client):
        self.client = client
        self.queue = []

    async def on_message(self, text):
        if self.client.chat_active:
            await self.client.send_dialogue(text=text, source="my-source")
        else:
            self.queue.append(text)

    async def flush_queue(self):
        while self.queue:
            text = self.queue.pop(0)
            await self.client.send_dialogue(text=text, source="my-source")

# Subscribe to the lifecycle event so queued messages flush once a chat starts
client.on("chat_started", my_relay.flush_queue)
```
### 2. Health Monitoring
Implement a periodic health check to ensure the gateway is reachable and Voxta is connected to the gateway.
```python
import asyncio
import logging

async def health_loop(client):
    while True:
        try:
            health = await client.health_check()
            if not health["voxta_connected"]:
                logging.warning("Gateway up, but Voxta disconnected from it")
        except Exception:
            logging.error("Gateway unreachable")
        await asyncio.sleep(30)
```
## Phase 4: Observability (Best Practice)
For relays and background workers, it is highly recommended to add a small FastAPI debug app. It lets you inspect internal state (queue size, history, gateway status) without digging through logs.
```python
import asyncio

from fastapi import FastAPI
import uvicorn

app = FastAPI()

@app.get("/")
async def status():
    return {
        "gateway": client.is_connected,
        "chat": client.chat_active,
        "queue_size": len(my_relay.queue),
    }

# Start uvicorn as an additional background task (the port is illustrative).
# uvicorn.run() would block, so drive the server from the existing event loop.
config = uvicorn.Config(app, host="0.0.0.0", port=8082, log_level="warning")
debug_task = asyncio.create_task(uvicorn.Server(config).serve())
```
## Phase 5: Containerization
Update your Dockerfile to include the new gateway requirements:
- **Dependencies:** add `httpx`, `fastapi`, and `uvicorn`.
- **Voxta Client:** ensure `voxta-client >= 0.2.0` is used for compatibility.
- **Source Mounts:** if using the unreleased gateway, ensure its directory is mounted or copied into the container.
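If you pin these in a `requirements.txt`, the additions might look like the following. Only the `voxta-client` bound comes from this guide; the other version pins are illustrative:

```
httpx>=0.27
fastapi>=0.110
uvicorn>=0.29
voxta-client>=0.2.0
```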
## Common Pitfalls
- **Path Conflicts:** ensure the directory *containing* `voxta_gateway/` is in `sys.path`, not the package folder itself.
- **Blocking Handlers:** never put blocking `sleep()` calls or heavy CPU work inside an `@client.on` handler; use `asyncio.create_task()` if you need to run something long-lived.
- **Immediate Reply:** by default, `send_dialogue` may not trigger a reply for certain sources. Explicitly set `immediate_reply=True` if you want the AI to respond immediately after every message.
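The non-blocking-handler rule can be sketched as follows. The event name and handler are hypothetical; the point is only the `asyncio.create_task()` technique:

```python
import asyncio

done = []

# Stand-in for slow work (TTS synthesis, HTTP calls, etc.) that must
# not be awaited directly inside an event handler.
async def long_task(text):
    await asyncio.sleep(0.01)
    done.append(text)

# A handler registered via @client.on(...) should return quickly:
# schedule the slow part as a task instead of awaiting it inline.
async def on_sentence_ready(text):
    asyncio.create_task(long_task(text))

async def main():
    await on_sentence_ready("hello")  # returns without waiting
    await asyncio.sleep(0.05)         # let the background task finish

asyncio.run(main())
```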