divyanshu-iitian opened a new pull request, #662:
URL: https://github.com/apache/burr/pull/662
# Add Multi-LLM Support to Burr Framework
## Short Description
This PR adds multi-LLM provider support to Burr's core system module,
letting developers switch between different language models (OpenAI,
Anthropic, local models, etc.) without changing application logic.
## Changes
### Modified Files:
- **`burr/system.py`**: Added multi-LLM provider abstraction layer with
unified interface
- **`burr/telemetry.py`**: Extended telemetry to track LLM provider usage
and performance metrics
- **`setup.cfg`**: Updated dependencies to support additional LLM provider
libraries
### Key Enhancements:
1. **Provider Abstraction**: Created a unified interface for different LLM
providers
2. **Configuration Support**: Added provider-specific configuration handling
3. **Telemetry Integration**: Tracks which LLM provider is used for each
request
4. **Backward Compatibility**: Existing single-LLM setups continue to work
without changes
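
To make the "Provider Abstraction" idea concrete, here is a minimal sketch of what a unified provider interface with a name-keyed registry could look like. All names here (`LLMProvider`, `get_provider`, the stub provider classes) are hypothetical illustrations, not the actual code in `burr/system.py`:

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Hypothetical unified interface each provider implements."""

    @abstractmethod
    def complete(self, prompt: str, **config) -> str: ...


class OpenAIProvider(LLMProvider):
    def complete(self, prompt: str, **config) -> str:
        # Real code would call the OpenAI SDK; stubbed for illustration.
        return f"openai:{config.get('model', 'gpt-4')}:{prompt}"


class AnthropicProvider(LLMProvider):
    def complete(self, prompt: str, **config) -> str:
        # Real code would call the Anthropic SDK; stubbed for illustration.
        return f"anthropic:{config.get('model', 'claude')}:{prompt}"


# Registry keyed by provider name, mirroring the provider="openai" option.
_PROVIDERS = {"openai": OpenAIProvider, "anthropic": AnthropicProvider}


def get_provider(name: str) -> LLMProvider:
    """Look up a provider by name; unknown names fail loudly."""
    try:
        return _PROVIDERS[name]()
    except KeyError:
        raise ValueError(f"Unknown LLM provider: {name}") from None
```

A registry like this is one common way to get both runtime switching and backward compatibility: a single-provider setup simply never calls `get_provider` with a second name.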
## How I Tested This
### Manual Testing:
- ✅ Tested with OpenAI GPT-4
- ✅ Tested with Anthropic Claude
- ✅ Verified provider switching at runtime
- ✅ Confirmed telemetry correctly logs provider information
### Compatibility Testing:
- ✅ Existing Burr applications run without modification
- ✅ New multi-LLM configuration works as expected
- ✅ No breaking changes to existing API
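
The telemetry behavior verified above can be pictured as a thin wrapper that records which provider served each request and how long it took. Again, this is a hedged sketch with invented names (`ProviderTelemetry`, `track`), not the actual code in `burr/telemetry.py`:

```python
import time
from typing import Any, Callable


class ProviderTelemetry:
    """Hypothetical recorder: logs provider name and latency per request."""

    def __init__(self) -> None:
        self.records: list[dict[str, Any]] = []

    def track(self, provider: str, call: Callable[[], Any]) -> Any:
        # Time the underlying LLM call and tag the record with the provider.
        start = time.perf_counter()
        result = call()
        self.records.append(
            {"provider": provider, "latency_s": time.perf_counter() - start}
        )
        return result
```

Usage would look like `telemetry.track("openai", lambda: provider.complete(prompt))`, so every request carries its provider label into the metrics stream.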
### Example Usage:
```python
from burr import ApplicationBuilder

# Now supports multiple LLM providers
app = (
    ApplicationBuilder()
    .with_llm_provider(
        provider="openai",  # or "anthropic", "local", etc.
        config={"model": "gpt-4", "api_key": "..."},
    )
    .build()
)
```