
FAQ


2025-09-08

General Questions

What is RWKV Agent Kit?

RWKV Agent Kit is a comprehensive framework for building intelligent agents powered by RWKV (Receptance Weighted Key Value) language models. It provides tools for creating conversational AI, multi-agent systems, and intelligent automation workflows.

What makes RWKV different from other language models?

RWKV combines the benefits of Transformers and RNNs:

  • Linear scaling: O(n) complexity instead of O(n²)
  • Effectively unbounded context: the recurrent state imposes no fixed context window, though practical recall still depends on training
  • Efficient inference: Lower memory usage and faster generation
  • Parallelizable training: Can be trained efficiently on modern hardware

What programming languages are supported?

Currently, RWKV Agent Kit primarily supports:

  • Rust (primary implementation)
  • Python (bindings available)
  • JavaScript/TypeScript (planned)

Installation and Setup

What are the system requirements?

Minimum requirements:

  • 8GB RAM
  • 4GB available disk space
  • Rust 1.70+ or Python 3.8+

Recommended:

  • 16GB+ RAM
  • GPU with 8GB+ VRAM (for larger models)
  • SSD storage

How do I install RWKV models?

You can download RWKV models from:

  1. Hugging Face Model Hub
  2. Official RWKV releases
# Example: Download a model
wget https://huggingface.co/BlinkDL/rwkv-4-pile-430m/resolve/main/RWKV-4-Pile-430M-20220808-8066.pth

Why am I getting "model not found" errors?

Check the following:

  1. Verify the model path in your configuration
  2. Ensure the model file exists and is readable
  3. Check file permissions
  4. Verify the model format is compatible
# config.toml
[rwkv]
model_path = "/absolute/path/to/your/model.pth"  # Use absolute paths
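If the configuration looks correct, a quick std-only check can confirm the file is actually visible to the process before the kit tries to load it. This is an illustrative helper, not part of the kit's API:

```rust
use std::path::Path;

// Verify the configured model path before handing it to the loader.
// (Illustrative helper, not part of the kit's API.)
fn check_model_path(path: &str) -> Result<(), String> {
    let p = Path::new(path);
    if !p.exists() {
        return Err(format!("model file not found: {}", path));
    }
    if !p.is_file() {
        return Err(format!("path is not a regular file: {}", path));
    }
    Ok(())
}

fn main() {
    if let Err(e) = check_model_path("/absolute/path/to/your/model.pth") {
        eprintln!("configuration error: {}", e);
    }
}
```

Running this before startup separates "wrong path" errors from genuine format or permission problems.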

Usage and Development

How do I create my first agent?

use rwkv_agent_kit::{RwkvAgentKit, AgentConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let kit = RwkvAgentKit::new("config.toml").await?;
    let agent = kit.create_agent(AgentConfig::default()).await?;
    
    let response = agent.chat("Hello!").await?;
    println!("Agent: {}", response);
    
    Ok(())
}

How do I improve response quality?

  1. Use better system prompts:
let config = AgentConfig::new()
    .with_system_prompt("You are a helpful, accurate, and concise assistant.");
  2. Adjust temperature:
let config = AgentConfig::new()
    .with_temperature(0.7);  // Lower for more focused responses
  3. Provide context:
let response = agent.chat("Given that we're discussing programming, what is Rust?").await?;

How do I handle long conversations?

The memory system automatically manages conversation history:

// Memory is handled automatically
let agent = kit.create_agent(
    AgentConfig::new()
        .with_memory_limit(100)  // Keep last 100 exchanges
).await?;

// For very long conversations, consider summarization
let summary = agent.summarize_conversation().await?;

Can I use multiple models simultaneously?

Yes! You can create agents with different models:

let small_agent = kit.create_agent(
    AgentConfig::new()
        .with_model_path("small-model.pth")
).await?;

let large_agent = kit.create_agent(
    AgentConfig::new()
        .with_model_path("large-model.pth")
).await?;
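A common pattern with two models is routing cheap queries to the small agent and harder ones to the large agent. The routing rule below is a self-contained sketch; the 80-character threshold and agent names are illustrative only:

```rust
// Route short prompts to the small model and long ones to the large model.
// The 80-character threshold and agent names are illustrative only.
fn pick_agent(prompt: &str) -> &'static str {
    if prompt.chars().count() < 80 { "small" } else { "large" }
}

fn main() {
    println!("route to {} agent", pick_agent("What is Rust?"));
}
```

In practice you might route on task type or required reasoning depth rather than raw length.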

Performance and Optimization

Why is my agent slow?

Common causes and solutions:

  1. Large model on CPU:

    • Use a smaller model
    • Enable GPU acceleration
    • Use quantization
  2. Memory issues:

    • Reduce max_tokens
    • Clear conversation history periodically
    • Use memory limits
  3. Inefficient prompts:

    • Keep prompts concise
    • Avoid repetitive context

How do I enable GPU acceleration?

# config.toml
[rwkv]
device = "cuda"  # or "mps" for Apple Silicon
model_path = "model.pth"

What's the difference between model sizes?

| Model Size | Parameters | RAM Usage | Use Case |
|---|---|---|---|
| 430M | 430 million | ~2 GB | Testing, simple tasks |
| 1.5B | 1.5 billion | ~6 GB | General purpose |
| 3B | 3 billion | ~12 GB | Complex reasoning |
| 7B | 7 billion | ~28 GB | Professional use |
| 14B | 14 billion | ~56 GB | Research, specialized tasks |
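The RAM column tracks a simple rule of thumb of roughly 4 bytes per parameter (fp32 weights plus overhead); quantized models need considerably less. A quick estimator, assuming that ratio:

```rust
// Rough fp32 rule of thumb from the table above: ~4 bytes per parameter.
fn estimate_ram_gb(params_billion: f64) -> f64 {
    params_billion * 4.0
}

fn main() {
    for &(name, params) in &[("430M", 0.43), ("1.5B", 1.5), ("7B", 7.0)] {
        println!("{}: ~{:.0} GB", name, estimate_ram_gb(params));
    }
}
```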

Troubleshooting

Common Error Messages

"Failed to load model"

  • Check model path and file permissions
  • Verify model format compatibility
  • Ensure sufficient RAM

"Out of memory"

  • Use a smaller model
  • Reduce batch size or max tokens
  • Enable model quantization

"Agent not responding"

  • Check if the model is still loading
  • Verify network connectivity (for remote models)
  • Look for deadlocks in multi-agent setups
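For hangs, it helps to put a deadline around the call so a stuck agent fails fast instead of blocking forever. In the kit's async API you would typically use `tokio::time::timeout`; the std-only sketch below shows the same idea with a channel deadline, where the sleeping thread stands in for a blocking model call:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run a (possibly hanging) call on a worker thread and give up after `timeout`.
// The sleeping thread stands in for a blocking model call.
fn chat_with_timeout(timeout: Duration) -> Result<String, &'static str> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(10)); // simulated inference latency
        let _ = tx.send("Hello!".to_string());
    });
    rx.recv_timeout(timeout).map_err(|_| "agent timed out")
}

fn main() {
    match chat_with_timeout(Duration::from_secs(5)) {
        Ok(reply) => println!("Agent: {}", reply),
        Err(e) => eprintln!("{}", e),
    }
}
```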

How do I enable debug logging?

use tracing_subscriber;

// Enable debug logging
tracing_subscriber::fmt()
    .with_max_level(tracing::Level::DEBUG)
    .init();

let kit = RwkvAgentKit::new("config.toml").await?;

Memory usage keeps growing

This usually means conversation history or cached state is growing without bound, rather than a true leak. Try:

  1. Set memory limits:
let config = AgentConfig::new()
    .with_memory_limit(50);  // Limit conversation history
  2. Periodic cleanup:
// Clear memories older than 24 hours
// (std Duration has no from_hours; build the interval from seconds)
agent.clear_old_memories(Duration::from_secs(24 * 60 * 60)).await?;
  3. Monitor resource usage:
let stats = agent.get_memory_stats().await?;
println!("Memory usage: {} MB", stats.memory_mb);

Advanced Topics

How do I create custom tools?

See the Advanced Features guide for detailed examples.

Can I fine-tune RWKV models?

Yes, but it requires the RWKV training framework. The Agent Kit focuses on inference and agent orchestration.

How do I deploy agents in production?

Consider:

  1. Containerization with Docker
  2. Load balancing for multiple agents
  3. Monitoring and logging
  4. Rate limiting and security
  5. Model caching and optimization

See our deployment guide for details.

Is there a REST API?

You can create one using web frameworks:

// Example with Axum; `chat_handler` is an async handler you define
use axum::{routing::post, Router};

let app = Router::new()
    .route("/chat", post(chat_handler))
    .with_state(kit);

Community and Support

Where can I get help?

How can I contribute?

We welcome contributions! See our Contributing Guide for details.

Is there a roadmap?

Yes! Check our project roadmap for upcoming features and improvements.

What license is RWKV Agent Kit under?

RWKV Agent Kit is released under the MIT License. See the LICENSE file for details.

Can I use this commercially?

Yes, the MIT license allows commercial use. However, check the licenses of any RWKV models you use, as they may have different terms.

Are there any usage restrictions?

The framework itself has no restrictions beyond the MIT license. Model-specific restrictions may apply depending on the RWKV model you choose.