API Documentation
Stripe-style developer experience for behavioral intelligence
Getting Started
B_Act Labs provides a simple, RESTful API for integrating behavioral intelligence into your AI systems. The API parses voice, visual, and semantic signals and returns real-time behavioral analysis your models can act on.
Authentication
All API requests require an API key. Include it in the Authorization header:
curl -H "Authorization: Bearer YOUR_API_KEY" \
  https://api.b-act.com/v1/analyze

1. Initialize an Agent
Create a behavioral intelligence agent instance for your AI system:
import b_act

agent = b_act.Agent(
    name="my-vision-agent",
    model="claude-3.5-sonnet",
    provider="anthropic",
    signals=["voice", "visual", "semantic"],
)

# Agent is now ready to receive signals
print(f"Agent {agent.id} initialized")

2. Send Behavioral Signals
Stream multimodal signals to the behavioral intelligence layer:
const response = await fetch('https://api.b-act.com/v1/analyze', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    agent_id: 'my-vision-agent',
    signals: {
      voice: {
        transcript: 'I need help with this',
        emotion: 0.78,
        clarity: 0.96
      },
      visual: {
        objects: ['person', 'screen'],
        confidence: 0.92
      },
      semantic: {
        intent: 'request_help',
        context_depth: 0.85
      }
    }
  })
});

const result = await response.json();
console.log('Behavioral analysis:', result.emotional_state);

3. Receive Behavioral Analysis
The API returns real-time behavioral intelligence to inform your model's response:
{
  "agent_id": "my-vision-agent",
  "timestamp": "2024-03-25T14:30:00Z",
  "emotional_state": {
    "valence": 0.62,
    "arousal": 0.71,
    "confidence": 0.92
  },
  "signal_quality": {
    "voice": 0.96,
    "visual": 0.89,
    "semantic": 0.94
  },
  "behavioral_recommendations": {
    "confidence_adjustment": 1.08,
    "response_formality": "professional",
    "tone_suggestion": "supportive"
  },
  "latency_ms": 14
}

4. Complete Integration Pattern
Here's how to integrate behavioral intelligence into your AI pipeline:
import anthropic
import b_act

# Initialize both clients
b_act_agent = b_act.Agent("my-agent", model="claude-3.5-sonnet")
client = anthropic.Anthropic()

def process_with_behavioral_intelligence(user_input, voice_signal=None):
    # Step 1: Get behavioral analysis
    behavior = b_act_agent.analyze({
        "text": user_input,
        "voice": voice_signal,
    })

    # Step 2: Adjust the system prompt based on behavior
    system_prompt = f"""You are a helpful AI assistant.
The user's emotional state is {behavior.emotional_state}.
Respond with an appropriate {behavior.tone_suggestion} tone.
Confidence level: {behavior.confidence_adjustment}x"""

    # Step 3: Generate a response with behavioral context
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        system=system_prompt,
        messages=[
            {"role": "user", "content": user_input}
        ],
    )
    return response.content[0].text

# Usage
result = process_with_behavioral_intelligence(
    "Can you help me with this task?"
)

API Reference
/v1/analyze
Analyze multimodal signals and return behavioral intelligence.
POST /v1/analyze
Content-Type: application/json
Authorization: Bearer YOUR_API_KEY
{
  "agent_id": "string",
  "signals": {
    "voice": { ... },
    "visual": { ... },
    "semantic": { ... }
  }
}

/v1/agents
List all behavioral intelligence agents.
GET /v1/agents
Authorization: Bearer YOUR_API_KEY
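As a sketch of what this returns (the field names are assumptions, mirroring the /v1/analyze payload above):

```json
[
  {
    "agent_id": "my-vision-agent",
    "status": "active",
    "signal_quality": {
      "voice": 0.96,
      "visual": 0.89,
      "semantic": 0.94
    }
  }
]
```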
Returns an array of agent objects with status and signal quality metrics.

Supported AI Providers
B_Act Labs integrates seamlessly with leading AI platforms:
- Anthropic: Claude 3.5 Sonnet
- OpenAI: GPT-4o
- Robotics / Humanoids: Custom Models
- Open Source: Llama, Mistral
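The provider and model parameters shown in step 1 can be kept in one place. This sketch maps each provider above to a default model; the `DEFAULT_MODELS` table and `default_model` helper are illustrative, not part of the b_act SDK:

```python
# Default model per provider, following the table above (illustrative only).
DEFAULT_MODELS = {
    "anthropic": "claude-3.5-sonnet",
    "openai": "gpt-4o",
    "open_source": "llama",   # or "mistral"
    "robotics": "custom",     # custom models for humanoids
}

def default_model(provider: str) -> str:
    """Return the default model id for a provider, or raise for unknown ones."""
    try:
        return DEFAULT_MODELS[provider]
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider}")

# Hypothetical usage with the Agent constructor from step 1:
# agent = b_act.Agent(name="my-agent", provider="openai",
#                     model=default_model("openai"))
```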
Best Practices
- Use signal batching to reduce API calls during high-frequency interactions
- Configure confidence thresholds to filter low-quality behavioral signals
- Cache emotional state analysis for repeated user patterns
- Monitor signal quality metrics to maintain behavioral intelligence accuracy
- Use webhook callbacks for real-time behavioral state updates
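The first practice above, signal batching, can be sketched client-side. The `SignalBatcher` class below is illustrative rather than part of the b_act SDK; the injected `flush` callback stands in for a single POST to /v1/analyze carrying the whole batch:

```python
from typing import Any, Callable

class SignalBatcher:
    """Collects signals and sends them in one call once the batch is full.

    The flush callback would wrap one POST to /v1/analyze; it is injected
    here so the batching logic stays independent of the HTTP client.
    """

    def __init__(self, flush: Callable[[list], Any], batch_size: int = 10):
        self.flush = flush
        self.batch_size = batch_size
        self.pending: list = []

    def add(self, signal: dict) -> None:
        """Queue one signal; flush automatically when the batch fills up."""
        self.pending.append(signal)
        if len(self.pending) >= self.batch_size:
            self.drain()

    def drain(self) -> None:
        """Flush any queued signals, e.g. at the end of an interaction."""
        if self.pending:
            self.flush(self.pending)
            self.pending = []

# Usage: 7 signals become 3 API calls instead of 7.
calls = []
batcher = SignalBatcher(flush=calls.append, batch_size=3)
for i in range(7):
    batcher.add({"semantic": {"intent": f"event_{i}"}})
batcher.drain()  # flush the remainder
# calls now holds 3 batches of sizes 3, 3, and 1
```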