
The Playground is the interactive chat interface where you can test, experiment, and have real-time conversations with your Assistant. It provides a full-featured chat environment with streaming responses, debug capabilities, and integration with your Assistant's Knowledge Base. The Playground serves as both a testing environment for development and a functional chat interface for end-users.

Playground Interface

[Screenshot placeholder: Complete Playground interface]

The Playground consists of several key components:

Chat Area

  • Message history displaying the complete conversation
  • Streaming responses with real-time text generation
  • Message avatars distinguishing user and Assistant messages
  • Timestamp information for conversation tracking

Input Section

  • Message input field with multi-line support
  • Send button with keyboard shortcuts
  • Input hints for user guidance
  • Character/token indicators (when applicable)

Status Indicators

  • Processing states showing when the Assistant is thinking
  • Streaming indicators during response generation
  • Error states with clear error messages
  • Debug information for development purposes

Chat Features

Conversational Interface

Message Flow

[Screenshot placeholder: Conversation with multiple messages]

The Playground provides a natural chat experience:

User Messages:

  • Right-aligned message bubbles
  • Blue background for clear identification
  • User avatar with "You" label
  • Input timestamp for reference

Assistant Messages:

  • Left-aligned message bubbles
  • Gray background with black text
  • Assistant avatar with first letter of Assistant name
  • Response timestamp and processing time

Message Formatting

Rich Text Support:

  • Markdown rendering in Assistant responses
  • Code blocks with syntax highlighting
  • Lists and formatting preserved from AI output
  • Links and references properly formatted
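
The exact rendering pipeline is not exposed, but the idea is straightforward: convert the Assistant's markdown output to HTML and sanitize it before adding it to the chat area. The sketch below assumes the `marked` and `dompurify` packages purely for illustration; the Playground's actual implementation may differ.

```typescript
import { marked } from "marked";
import DOMPurify from "dompurify";

// Convert an Assistant markdown response to sanitized HTML before it is
// inserted into the chat area. The library choice here is an assumption,
// not the Playground's documented implementation.
export function renderAssistantMessage(markdown: string): string {
  const rawHtml = marked.parse(markdown) as string;
  return DOMPurify.sanitize(rawHtml);
}
```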

Multi-line Messages:

  • Shift+Enter for new lines in input
  • Enter to send message
  • Automatic text wrapping for long messages
  • Preserved formatting in conversation history

Real-Time Streaming

Streaming Response Generation

[Screenshot placeholder: Streaming response in progress]

The Playground streams responses in real time as they are generated; a minimal client-side sketch follows the list below:

Server-Sent Events (SSE):

  • Character-by-character streaming
  • Real-time updates as AI generates response
  • Progress indicators during processing
  • Error recovery for connection issues
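
A minimal sketch of consuming such a stream from the browser is shown below. The endpoint path and the frame format are assumptions for illustration, since the actual API is not documented here.

```typescript
// Stream a chat response and deliver text chunks to a callback as they arrive.
// "/api/assistant/chat" and the "data: ..." frame format are assumptions.
async function streamChatResponse(
  message: string,
  onToken: (text: string) => void,
): Promise<void> {
  const response = await fetch("/api/assistant/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!response.ok || !response.body) {
    throw new Error(`Chat request failed: ${response.status}`);
  }

  const reader = response.body
    .pipeThrough(new TextDecoderStream())
    .getReader();

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // Simplification: assumes SSE "data: ..." lines are not split across chunks.
    for (const line of value.split("\n")) {
      if (line.startsWith("data: ")) {
        onToken(line.slice("data: ".length));
      }
    }
  }
}
```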

Streaming Benefits:

  • Immediate feedback: responses start appearing as soon as generation begins
  • Better perceived performance: No waiting for complete responses
  • Progress transparency: Clear indication of processing stages
  • User engagement: the conversation feels interactive while the response is being generated

Processing Stages

1. Initial Processing:

  • "Thinking..." indicator while AI processes the query
  • Knowledge Base search (when applicable)
  • Context preparation with relevant information

2. Response Generation:

  • Streaming text appears in real-time
  • Character count updates during streaming
  • Live formatting as markdown is rendered

3. Completion:

  • Final response fully displayed
  • Debug information becomes available
  • Ready for next user input
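
These stages can be modeled on the client as a small set of typed events. The event names below are assumptions for illustration; the actual wire format may differ.

```typescript
// Hypothetical stage events the client might receive during processing.
type StreamEvent =
  | { type: "thinking" }                    // initial processing / Knowledge Base search
  | { type: "token"; text: string }         // streamed response text
  | { type: "done"; debugLogId?: string };  // completion; debug info becomes available

interface PlaygroundUi {
  setStatus(status: string): void;
  appendText(text: string): void;
}

function handleStreamEvent(event: StreamEvent, ui: PlaygroundUi): void {
  switch (event.type) {
    case "thinking":
      ui.setStatus("Thinking...");
      break;
    case "token":
      ui.appendText(event.text);
      break;
    case "done":
      ui.setStatus("Ready");
      break;
  }
}
```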

Knowledge Base Integration

Automatic RAG (Retrieval-Augmented Generation)

[Screenshot placeholder: Chat showing knowledge base integration]

When your Assistant has uploaded documents in the Knowledge Base:

Automatic Context Inclusion:

  • Semantic search through uploaded documents
  • Relevant chunks automatically included in AI context
  • Transparent integration without user intervention
  • Source awareness for more accurate responses

Knowledge Base Indicators:

  • Badge display showing "Knowledge Base Active" with file count
  • Contextual responses that reference uploaded content
  • Improved accuracy for domain-specific questions

RAG Process Flow

1. Query Analysis:

  • User question is analyzed for intent and keywords
  • System determines if Knowledge Base search is needed
  • Relevant search terms are extracted

2. Document Search:

  • Semantic similarity search across all uploaded documents
  • Ranking algorithm identifies most relevant content chunks
  • Context optimization to fit within AI token limits

3. Response Generation:

  • Enhanced context includes relevant document chunks
  • AI generates response using both training data and uploaded knowledge
  • Source-aware answers that can reference specific information
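
The Playground performs this retrieval automatically, but the underlying idea can be sketched in a few lines: embed the query, rank the stored chunks by cosine similarity, and keep the top matches that fit the token budget. This is a conceptual sketch, not the platform's actual implementation.

```typescript
interface Chunk {
  text: string;
  embedding: number[];
  tokenCount: number;
}

// Cosine similarity between the query embedding and a chunk embedding.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank chunks by similarity and keep the best ones that fit the token budget.
function selectContext(
  queryEmbedding: number[],
  chunks: Chunk[],
  tokenBudget: number,
): Chunk[] {
  const ranked = [...chunks].sort(
    (a, b) =>
      cosine(queryEmbedding, b.embedding) - cosine(queryEmbedding, a.embedding),
  );
  const selected: Chunk[] = [];
  let used = 0;
  for (const chunk of ranked) {
    if (used + chunk.tokenCount > tokenBudget) continue;
    selected.push(chunk);
    used += chunk.tokenCount;
  }
  return selected;
}
```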

Debug and Development Features

Debug Information Access

[Screenshot placeholder: Debug logs modal]

For development and troubleshooting, the Playground provides detailed debug information:

Debug Logs Button:

  • "See debug logs" button appears under Assistant responses
  • Complete execution trace showing all processing steps
  • Performance metrics including response times
  • Error details for troubleshooting

Debug Information Includes:

  • Step-by-step breakdown of the System workflow execution
  • Knowledge Base search results with relevance scores
  • AI model interactions and prompt engineering details
  • Performance timing for optimization insights
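
The precise log format is not documented here; a plausible shape for one trace entry, covering the items listed above, might look like this (all field names are assumptions):

```typescript
// Illustrative shape for one entry in the debug trace.
interface DebugTraceEntry {
  step: string;                  // e.g. "knowledge_base_search", "ai_generation"
  startedAt: string;             // ISO timestamp
  durationMs: number;
  tokenUsage?: { prompt: number; completion: number };
  retrievedChunks?: { source: string; relevanceScore: number }[];
  error?: string;
}

type DebugTrace = DebugTraceEntry[];
```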

Response Analysis

Execution Breakdown:

  • Step-by-step processing through the underlying System workflow
  • Knowledge retrieval details and relevance scoring
  • AI model responses at each stage
  • Token usage and performance metrics

Development Insights:

  • Prompt engineering effectiveness
  • Knowledge Base search quality
  • Response generation timing and efficiency
  • Error identification and troubleshooting information

User Experience Features

Smart Scrolling

Automatic Scroll Behavior

The Playground includes intelligent scrolling features:

Auto-scroll Conditions:

  • New messages: Automatically scroll to new content
  • Streaming responses: Follow streaming text in real-time
  • User at bottom: Only auto-scroll when user is at conversation bottom
  • Manual scroll detection: Pause auto-scroll when user scrolls up

User Control:

  • Manual scrolling: User can scroll up to review conversation history
  • Scroll position memory: System remembers if user scrolled away from bottom
  • Return to bottom: Auto-scroll resumes when user returns to bottom
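
The rule can be summarized in a few lines of client code; the 40px "near bottom" threshold below is an arbitrary illustration value.

```typescript
// Follow new content only while the user is already at (or near) the bottom
// of the conversation; otherwise leave their scroll position alone.
const NEAR_BOTTOM_PX = 40;

function isNearBottom(el: HTMLElement): boolean {
  return el.scrollHeight - el.scrollTop - el.clientHeight < NEAR_BOTTOM_PX;
}

function onNewContent(chatArea: HTMLElement): void {
  if (isNearBottom(chatArea)) {
    chatArea.scrollTo({ top: chatArea.scrollHeight, behavior: "smooth" });
  }
  // If the user has scrolled up, do nothing; auto-scroll resumes naturally
  // once they return near the bottom.
}
```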

Input Experience

Enhanced Input Field

[Screenshot placeholder: Input field with message being typed]

Multi-line Support:

  • Textarea input with automatic resizing
  • Shift+Enter for new lines
  • Enter to send message
  • Visual feedback for input state

Input Validation:

  • Empty message prevention: Send button disabled for empty input
  • Character limits: Visual feedback for message length
  • Real-time validation: Immediate feedback on input state

Keyboard Shortcuts

Send Message:

  • Enter: Send current message
  • Shift+Enter: Add new line without sending

Navigation:

  • Up/Down arrows: Navigate message history (future feature)
  • Ctrl/Cmd+A: Select all text in input field
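
A compact sketch of how the Enter / Shift+Enter behavior and the empty-message check might be wired up:

```typescript
// Enter sends the message, Shift+Enter inserts a newline, and empty messages
// are never sent. This mirrors the behavior described above.
function handleKeyDown(
  event: KeyboardEvent,
  input: HTMLTextAreaElement,
  sendMessage: (text: string) => void,
): void {
  if (event.key === "Enter" && !event.shiftKey) {
    event.preventDefault();            // Enter sends instead of inserting a newline
    const text = input.value.trim();
    if (text.length === 0) return;     // empty-message prevention
    sendMessage(text);
    input.value = "";
  }
  // Shift+Enter falls through to the default behavior and inserts a newline.
}
```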

Error Handling

Error States and Recovery

Connection Errors:

  • Clear error messages explaining connection issues
  • Retry mechanisms for failed requests
  • Graceful degradation when streaming fails
  • User guidance for resolving issues

Processing Errors:

  • AI model errors handled gracefully
  • Knowledge Base errors don't break conversation
  • System workflow errors provide useful feedback
  • Recovery suggestions for user action
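
One common pattern for this kind of recovery is to retry the streaming request a few times and then fall back to a non-streaming request. The sketch below reuses the `streamChatResponse` helper and endpoint assumed in the earlier streaming example.

```typescript
// Retry the streaming request with backoff; if it keeps failing, degrade
// gracefully to a single non-streaming request. Endpoints are assumptions.
async function sendWithRetry(
  message: string,
  onToken: (text: string) => void,
): Promise<void> {
  const maxAttempts = 3;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await streamChatResponse(message, onToken);
      return;
    } catch (err) {
      if (attempt === maxAttempts) {
        // Last resort: request the full response without streaming.
        const res = await fetch("/api/assistant/chat?stream=false", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ message }),
        });
        if (!res.ok) throw err;
        onToken(await res.text());
        return;
      }
      // Brief backoff before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, 500 * attempt));
    }
  }
}
```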

Empty State Experience

Initial Conversation View

[Screenshot placeholder: Empty Playground with welcome message]

Before any messages are sent, the Playground displays:

Welcome Interface:

  • Assistant avatar with name prominently displayed
  • Welcome message explaining the Assistant's purpose
  • Usage hints for getting started
  • Knowledge Base status indicator (if files are uploaded)

Getting Started Guidance:

  • Example questions or conversation starters
  • Feature highlights explaining capabilities
  • Knowledge Base indication showing available resources
  • Clear call-to-action to begin conversation

Testing and Development Workflow

Assistant Testing Process

Functional Testing

Basic Functionality:

  1. Send simple message to test basic response generation
  2. Test streaming to ensure real-time updates work correctly
  3. Verify formatting in both input and output
  4. Check error handling with invalid or problematic inputs

Knowledge Base Testing:

  1. Ask knowledge-specific questions to test RAG integration
  2. Verify document references appear in responses
  3. Test search quality with various query types
  4. Check context relevance in Knowledge Base responses

Performance Testing

Response Time Evaluation:

  • Measure response latency for different query types
  • Test streaming performance across various response lengths
  • Evaluate Knowledge Base search speed and accuracy
  • Monitor system performance under load
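
A simple way to capture response-time numbers during testing is to record time-to-first-token and total duration around the streaming helper sketched earlier:

```typescript
// Measure time-to-first-token and total response time for a single query.
async function measureLatency(
  message: string,
): Promise<{ firstTokenMs: number; totalMs: number }> {
  const start = performance.now();
  let firstTokenMs = -1;

  await streamChatResponse(message, () => {
    if (firstTokenMs < 0) {
      firstTokenMs = performance.now() - start; // time until the first streamed token
    }
  });

  return { firstTokenMs, totalMs: performance.now() - start };
}
```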

Quality Assessment:

  • Response accuracy for domain-specific questions
  • Knowledge integration effectiveness
  • Consistency across multiple similar queries
  • Error rate and handling quality

Iterative Improvement

Feedback Collection

Response Quality Analysis:

  1. Test edge cases and unusual queries
  2. Document response quality for common questions
  3. Identify improvement areas in responses
  4. Note Knowledge Base gaps that need additional content

User Experience Testing:

  1. Interface usability for target users
  2. Response clarity and helpfulness
  3. Conversation flow naturalness
  4. Feature discoverability and ease of use

Optimization Strategies

Knowledge Base Optimization:

  • Add missing information identified through testing
  • Improve document structure for better retrieval
  • Update outdated content that affects response quality
  • Remove irrelevant files that may confuse responses

System Configuration:

  • Review underlying System settings for optimal performance
  • Adjust AI model parameters if response quality is suboptimal
  • Optimize workflow for better response generation

Advanced Features

Conversation Context

Session Memory

The Playground maintains conversation context:

Within-Session Memory:

  • Complete conversation history available to AI
  • Context accumulation for coherent responses
  • Reference capability to earlier conversation points
  • Contextual follow-up questions and responses

Context Limits:

  • Token window management to stay within AI limits
  • Automatic context trimming for long conversations
  • Context prioritization to maintain relevant information
  • Memory optimization for best performance
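
Conceptually, trimming works by keeping the most recent turns that fit within a token budget. The sketch below approximates token counts by character length; a real implementation would use the model's tokenizer.

```typescript
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Rough token estimate (~4 characters per token); illustration only.
const approxTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep the newest messages that fit the budget, dropping the oldest first.
function trimHistory(history: ChatMessage[], tokenBudget: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  // Walk from the newest message backwards so recent context is prioritized.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = approxTokens(history[i].content);
    if (used + cost > tokenBudget) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```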

Integration Points

System Workflow Integration

Workflow Execution:

  • Multi-step processing through underlying System workflow
  • Sequential AI calls as defined in System design
  • Complex reasoning through multiple AI nodes
  • Sophisticated output from multi-stage processing

Configuration Inheritance:

  • AI model settings from System configuration
  • Temperature and creativity settings from System
  • Prompt engineering from System design
  • Output formatting based on System specifications

Best Practices

Testing Strategy

Comprehensive Testing

  1. Functional testing: Verify all features work correctly
  2. Content testing: Ensure Knowledge Base integration works properly
  3. Edge case testing: Test unusual inputs and scenarios
  4. Performance testing: Verify response times and streaming quality

User-Centric Testing

  1. Real user scenarios: Test with actual use cases
  2. Conversation flows: Test natural conversation patterns
  3. Error scenarios: Test error handling and recovery
  4. Accessibility: Ensure interface works for all users

Optimization Guidelines

Performance Optimization

  1. Monitor response times: Track and optimize slow responses
  2. Knowledge Base efficiency: Ensure relevant, well-structured content
  3. System configuration: Optimize underlying System for chat use
  4. Error minimization: Reduce and handle errors gracefully

User Experience Optimization

  1. Clear communication: Ensure Assistant responses are helpful and clear
  2. Consistent behavior: Maintain consistent Assistant personality and responses
  3. Error guidance: Provide helpful guidance when things go wrong
  4. Feature discovery: Make capabilities apparent to users

The Playground serves as the primary interface for interacting with your Assistant, providing a rich, interactive environment for both testing during development and actual usage by end-users. Its combination of real-time streaming, Knowledge Base integration, and debug capabilities makes it a powerful tool for creating and refining AI assistants.