iCendant Empathy
Overall API Design
1. Overview
The API is currently in preview; some aspects may change.
This document outlines the API for iCendant Empathy™, an LLM-powered empathy system that analyzes text and biometric input and generates empathetic output as text, images, and audio.
The output can be an echo of the input in a different modality, an enhancement of the input across multiple modalities, or an empathetic response to the input across multiple modalities; these correspond to the "echo", "enhance", and "complete" operations described in Section 5.1.2.
You can use the preview Playground to experiment with the API.
Email [email protected] if you would like to participate in our early access program.
2. API Endpoints
Primary Endpoint
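The production base URL has not been published; the request line below is a hypothetical sketch of the single primary endpoint, assuming JSON over HTTP POST (the host and path are placeholders, not confirmed values):

```http
POST /v1/empathy HTTP/1.1
Host: api.example.com
Content-Type: application/json
Authorization: Bearer <API_KEY>
```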
3. Request Structure
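A representative request body, assembled from the fields documented in Section 5. The model identifier and all values are illustrative, not confirmed defaults:

```json
{
  "model": "empathy-1",
  "system": "You are a supportive companion for a wellness app.",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "I didn't get the promotion I was hoping for." }
      ]
    }
  ],
  "empathy_options": {
    "persona": "warm, encouraging coach",
    "modes": ["text", "audio"],
    "operation": "complete",
    "format": "vdom",
    "async": "await"
  },
  "parameters": {
    "temperature": 0.7,
    "max_tokens": 1024
  }
}
```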
4. Response Structure
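A corresponding response sketch based on the fields in Section 5.2. Only the field names and content modes come from the tables below; the exact shape of the content entries, including the `uri` field name, is an assumption:

```json
{
  "id": "resp_abc123",
  "created_at": "2025-01-15T12:34:56Z",
  "response": {
    "completion": {
      "role": "assistant",
      "content": [
        { "type": "text", "text": "That's a real disappointment, and it's okay to feel let down." },
        { "type": "audio", "uri": "https://example.com/audio/abc123.mp3" }
      ]
    }
  }
}
```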
5. Field Descriptions
5.1 Request Fields
Field | Type | Description |
---|---|---|
model | string | Identifier for the empathy model version to use |
system | string | System instructions that set the context and behavior for the model when `operation` is set to “complete” |
messages | array | Conversation history between user and assistant |
empathy_options | object | Configuration for the desired response format |
parameters | object | Model parameter settings |
5.1.1 Messages Structure
Each message object contains:
Field | Type | Description |
---|---|---|
role | string | Either “user”, “assistant”, or “system” |
content | array | Array of content objects (only text is supported) |
Each content object contains:
Field | Type | Description |
---|---|---|
type | string | Content type: “text” only |
text | string | The text content |
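For example, a single-turn messages array might look like this (values illustrative):

```json
"messages": [
  {
    "role": "user",
    "content": [
      { "type": "text", "text": "My dog has been sick all week." }
    ]
  }
]
```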
5.1.2 Empathy Options
Field | Type | Description |
---|---|---|
persona | string | A persona used to adjust the type of content generated |
modes | array | Array of content modes to include in the response (text, image, audio) |
operation | string | Processing mode: “complete”, “enhance”, or “echo” (default: “echo”) |
format | string | Output format for structured content: “html” or “vdom” (default: “vdom”) |
async | string | Asynchronous processing mode: “await” or “poll” (default: “await”) |
Example empathy_options Object
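An illustrative configuration; the values are chosen for demonstration and are not defaults:

```json
{
  "persona": "gentle, reassuring counselor",
  "modes": ["text", "image"],
  "operation": "enhance",
  "format": "html",
  "async": "poll"
}
```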
5.1.3 Parameters
Field | Type | Description |
---|---|---|
temperature | number | Controls randomness (0.0 to 1.0, lower is more deterministic) |
max_tokens | number | Maximum length of the generated response |
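For example, a low-randomness configuration (values illustrative):

```json
"parameters": {
  "temperature": 0.3,
  "max_tokens": 512
}
```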
5.2 Response Fields
Field | Type | Description |
---|---|---|
id | string | Unique identifier for the response |
created_at | string | ISO timestamp when the response was generated |
response | object | Container for all response components |
5.2.1 Response Object
Field | Type | Description |
---|---|---|
completion | object | New message generated by the model |
5.2.2 Response Content Types
The response can include the following content modes:
Type | Description | Output Format |
---|---|---|
text | Textual content | Plain text |
image | Visual content | PNG (URI) |
audio | Generated speech | MP3 (URI) |
6. Implementation Guidelines
6.1 Authentication
The API uses bearer token authentication:
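Pass the token in the standard HTTP `Authorization` header on every request:

```http
Authorization: Bearer <YOUR_API_KEY>
```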
6.2 Error Handling
Standard HTTP status codes with detailed error messages:
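The error body schema is not yet finalized; a typical shape might look like this (hypothetical):

```json
{
  "error": {
    "code": "invalid_request",
    "message": "empathy_options.operation must be one of \"complete\", \"enhance\", or \"echo\"."
  }
}
```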
6.3 Rate Limiting
- Default: 60 requests per minute
- Headers: `X-RateLimit-Limit`, `X-RateLimit-Remaining`, `X-RateLimit-Reset`
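For example, a response nearing the limit might carry headers like these (values illustrative; the reset timestamp is assumed to be epoch seconds):

```http
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 3
X-RateLimit-Reset: 1736945700
```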
7. Code Examples
7.1 JavaScript Example
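A minimal sketch using `fetch` (Node 18+ or a browser), assuming the placeholder endpoint from Section 2 and the request shape from Section 3; the URL and model identifier are not confirmed values:

```js
// Minimal sketch: send one user message and log the generated completion.
// The endpoint URL and model identifier are placeholders, not confirmed values.
const API_KEY = process.env.ICENDANT_API_KEY;

async function getEmpatheticResponse(userText) {
  const res = await fetch("https://api.example.com/v1/empathy", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "empathy-1",
      messages: [
        { role: "user", content: [{ type: "text", text: userText }] },
      ],
      empathy_options: {
        persona: "warm, encouraging coach",
        modes: ["text"],
        operation: "complete",
      },
      parameters: { temperature: 0.7, max_tokens: 1024 },
    }),
  });

  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.response.completion;
}

getEmpatheticResponse("I didn't get the promotion I was hoping for.")
  .then((completion) => console.log(completion))
  .catch(console.error);
```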
8. Best Practices
8.1 Performance Optimization
- Request only the content modes you need (`empathy_options.modes`)
- Set appropriate `max_tokens` values to avoid unnecessary computation
8.2 Empathy Guidelines
- Provide sufficient context in the `persona` field to guide the model’s persona, but do not use it to prescribe empathetic behavior
- Include relevant conversation history to improve emotional understanding
9. Advanced Features
9.1 Webhook Callbacks
Not supported in the current preview. Webhook callbacks are planned for Phase 3 (see Section 10).
9.2 Streaming Responses
Not supported in the current preview. Streaming is planned for Phase 3 (see Section 10).
10. Implementation Roadmap
- Phase 1: Text input and multi-modal output (text, image, audio) for echo, enhance, and complete operations. Long-running response polling.
- Phase 2: Multi-modal input (image, audio)
- Phase 3: Streaming and webhooks