shivvor2

What I did

  • Added support for custom OpenAI-compatible endpoints via the OPENAI_BASE_URL environment variable (see the sketch after this list)
  • Completely restructured the codebase for better maintainability:
    • Split LLM logic into provider-specific modules
    • Created a config-based system to select which provider/model to use for each operation
    • Moved prompts to a separate file
    • Added proper JSON extraction for models without native JSON support
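
A minimal sketch of the endpoint override, assuming the standard openai Python SDK; the helper name make_client is illustrative, not the module's actual API:

```python
import os

from openai import OpenAI


def make_client() -> OpenAI:
    # Honour a custom OpenAI-compatible endpoint if one is configured;
    # when OPENAI_BASE_URL is unset, the official endpoint is used.
    return OpenAI(
        api_key=os.environ["OPENAI_API_KEY"],
        base_url=os.environ.get("OPENAI_BASE_URL"),
    )
```

Any OpenAI-compatible server (for example a self-hosted vLLM instance or a proxy) can then be selected purely through the environment, with no code changes.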

Other improvements

  • Added configurable report file naming with patterns like {date}_{time}_{n}.md (see the sketch after this list)
  • Made report storage location configurable
  • Added type hints for better code readability
  • Used partial application to create pre-configured functions
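
A sketch of how the pattern-based naming and the partial-application idea could fit together; resolve_report_path and the literal "reports" directory are illustrative names, not the PR's actual ones:

```python
from datetime import datetime
from functools import partial
from pathlib import Path


def resolve_report_path(report_dir: str, pattern: str = "{date}_{time}_{n}.md") -> Path:
    """Expand a filename pattern inside the configured report directory,
    bumping {n} until the resulting name is unused."""
    directory = Path(report_dir)
    directory.mkdir(parents=True, exist_ok=True)
    now = datetime.now()
    n = 1
    while True:
        name = pattern.format(
            date=now.strftime("%Y-%m-%d"),
            time=now.strftime("%H-%M-%S"),
            n=n,
        )
        path = directory / name
        if not path.exists():
            return path
        n += 1


# Partial application pre-binds the configured directory and pattern, so the
# report-generation step only has to call next_report_path() when it saves.
next_report_path = partial(resolve_report_path, "reports", "{date}_{time}_{n}.md")
```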

Note: I couldn't test with official OpenAI endpoints since they're banned in my region, but the code should work with any OpenAI-compatible API.

1. Added support for OpenAI-compatible inference providers

2. Abstracted the agent steps from their implementations; users can now choose which client/SDK to use for each step, e.g. OpenAI or Gemini for any of the 4 steps: followup, research_plan, query_generation, report_generation (see the sketch after this list)

3. Reworked the mechanism for saving the final report
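
A rough illustration of the per-step selection, assuming a Python mapping for the config and hypothetical per-provider completion functions; the PR's real config format and function names may differ:

```python
from functools import partial
from typing import Callable


# Hypothetical provider backends; real ones would call the OpenAI / Gemini SDKs.
def openai_complete(model: str, prompt: str) -> str:
    return f"[openai:{model}] {prompt}"


def gemini_complete(model: str, prompt: str) -> str:
    return f"[gemini:{model}] {prompt}"


PROVIDERS: dict[str, Callable[[str, str], str]] = {
    "openai": openai_complete,
    "gemini": gemini_complete,
}

# Per-step provider/model choices, as they might appear in a config file.
STEP_CONFIG = {
    "followup":          {"provider": "openai", "model": "gpt-4o-mini"},
    "research_plan":     {"provider": "gemini", "model": "gemini-1.5-pro"},
    "query_generation":  {"provider": "openai", "model": "gpt-4o-mini"},
    "report_generation": {"provider": "gemini", "model": "gemini-1.5-pro"},
}

# Partial application binds the provider and model once per step, leaving
# pre-configured functions that only need a prompt.
STEP_FNS = {
    step: partial(PROVIDERS[cfg["provider"]], cfg["model"])
    for step, cfg in STEP_CONFIG.items()
}

# e.g. STEP_FNS["research_plan"]("Plan research on topic X")
```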
Feat: separated implementations of steps by LLM SDK

Feat: Changed mechanism for saving reports (now configured by the config file and supports pattern-based formatting)
Feat: Added function for JSON response parsing (for inference providers that do not support the response_format argument in OpenAI chat completions)
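
A minimal sketch of this kind of fallback parsing, assuming the model was prompted to answer in JSON (possibly wrapped in a Markdown code fence); extract_json is an illustrative name, not necessarily the function added in the PR:

```python
import json
import re
from typing import Any


def extract_json(text: str) -> Any:
    """Pull a JSON value out of a raw completion, with or without code fences."""
    # Prefer the contents of a Markdown code fence if one is present.
    fence = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    candidate = fence.group(1) if fence else text
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        # Fall back to the outermost brace pair if prose surrounds the JSON.
        start, end = candidate.find("{"), candidate.rfind("}")
        if start == -1 or end <= start:
            raise
        return json.loads(candidate[start : end + 1])
```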