Quick Integration
1. Install Dependencies
2. Configuration
Supported Models
LangChain supports the following models through the Laozhang API:
Text Generation Models
| Model Series | Model ID | Context Length | Features |
|---|---|---|---|
| GPT-4 Turbo | gpt-4-turbo | 128K | Strong reasoning ability |
| GPT-3.5 Turbo | gpt-3.5-turbo | 16K | Fast and economical |
| Claude Sonnet | claude-sonnet-4 | 200K | Long context support |
| Gemini Pro | gemini-2.5-pro | 1M | Multimodal support |
Embedding Models
| Model | Dimension | Features |
|---|---|---|
| text-embedding-ada-002 | 1536 | High-quality semantic understanding |
| text-embedding-3-small | 512 | Lightweight, fast |
| text-embedding-3-large | 3072 | Highest precision |
Core Concepts
1. Chains
Chains are the core concept of LangChain, connecting multiple components into a single pipeline.
2. Prompts
Prompt templates let you manage and reuse prompt text across chains.
3. Memory
Memory components manage conversation history so the model keeps context between turns.
4. Agents
Agents use a model to decide autonomously which actions or tools to take.
Application Scenarios
1. Document Q&A System
Build an intelligent document Q&A system over your own content.
2. Multi-step Workflow
Compose several chains into a complex multi-step workflow.
3. Multi-model Collaboration
Use different models for different tasks to balance cost and quality.
4. Streaming Output
Stream tokens as they are generated for a more responsive user experience.
Advanced Features
Custom Tools
Create custom tools that chains and agents can call.
Error Handling
Implement robust error handling around every model call.
Caching
Enable result caching to cut latency and cost on repeated requests.
Best Practices
1. Prompt Optimization
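A before/after illustration (both prompts invented): the second version pins down audience, length, and output format instead of leaving the model to guess.

```python
# Vague prompt: the model must guess scope, format, and audience.
vague = "Write about caching."

# Specific prompt: task, constraints, and output format are explicit.
specific = (
    "Write a 3-bullet summary of LLM response caching for backend "
    "engineers. Keep each bullet under 20 words. Output a Markdown list."
)
print(specific)
```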
Write effective prompts: be specific, provide examples, and state the expected output format.
2. Token Management
Control token usage to stay within context limits and budget.
3. Error Retry
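A retry sketch with exponential backoff and jitter (the `with_retries` helper is hypothetical, not a LangChain API; in practice you would narrow the caught exception types):

```python
import random
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn, retrying with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)

# Usage: with_retries(lambda: llm.invoke("hi"))
```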
Implement a retry mechanism for transient failures.
4. Security Practices
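A key-hygiene sketch: read the key from the environment rather than hard-coding it, and mask it before it can reach any log (the environment variable name and `redact` helper are illustrative):

```python
import os

# Never hard-code keys; read them from the environment or a secrets manager.
api_key = os.environ.get("LAOZHANG_API_KEY", "")

def redact(key: str) -> str:
    """Mask a key for logs, keeping only the ends."""
    if len(key) <= 8:
        return "****"
    return key[:4] + "..." + key[-4:]

print(redact("sk-1234567890abcdef"))  # sk-1...cdef
```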
Protect sensitive information such as API keys and user data.
Performance Optimization
Batch Processing
Process requests in batches to improve efficiency.
Asynchronous Calls
Use asynchronous calls to improve concurrency.
Troubleshooting
Connection Issues
Problem: Unable to connect to the API
Solutions:
- Check that the API Base URL is correct: https://api.yelinai.com/v1
- Verify API Key validity
- Check the network connection
- Confirm firewall settings
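A quick reachability probe using only the standard library (the `/models` path follows the OpenAI-compatible convention; an HTTP 401 means the endpoint is reachable and only the key is wrong):

```python
import urllib.error
import urllib.request

def check_endpoint(url: str, timeout: float = 10.0):
    """Return an HTTP status if the host answers, or None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # e.g. 401: reachable, but the key is missing/invalid
    except (urllib.error.URLError, TimeoutError):
        return None    # DNS failure, refused connection, or timeout

if __name__ == "__main__":
    print(check_endpoint("https://api.yelinai.com/v1/models"))
```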
Rate Limiting
Problem: Request frequency is too high
Solutions:
- Implement request rate limiting
- Use batch processing
- Add retry mechanism
- Consider upgrading plan
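A minimal client-side limiter sketch (the `RateLimiter` class is hypothetical, not part of LangChain; it blocks until a call is allowed under a simple sliding window):

```python
import time

class RateLimiter:
    """Allow at most `rate` calls per `per` seconds (sliding window)."""

    def __init__(self, rate: int, per: float):
        self.rate, self.per, self.calls = rate, per, []

    def wait(self):
        now = time.monotonic()
        # Drop timestamps that have left the window.
        self.calls = [t for t in self.calls if now - t < self.per]
        if len(self.calls) >= self.rate:
            # Sleep until the oldest call ages out of the window.
            time.sleep(self.per - (now - self.calls[0]))
        self.calls.append(time.monotonic())

# Usage: limiter = RateLimiter(rate=10, per=60); limiter.wait() before each call
```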
Memory Issues
Problem: Conversation history is too long
Solutions:
- Use ConversationBufferWindowMemory to limit history
- Use ConversationSummaryMemory to compress history
- Regularly clean up old conversations
- Use external storage