♻️ refactor: refactor send message to speed up #9046
Conversation
Codecov Report

❌ Patch coverage report. Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #9046      +/-   ##
==========================================
- Coverage   77.06%   76.60%   -0.47%
==========================================
  Files         814      816       +2
  Lines       48902    49237     +335
  Branches     6343     5219    -1124
==========================================
+ Hits        37688    37719      +31
- Misses      11208    11512     +304
  Partials        6        6
```

Flags with carried forward coverage won't be shown.
Reviewer's Guide

This PR refactors the chat message flow to leverage a dedicated server-side API for faster processing: it routes sendMessage calls to a new server-mode pathway, updates topics and messages in bulk via new reducer actions, and provides the corresponding backend router and service implementations. It also includes miscellaneous type and configuration adjustments for imports, optional parameters, and build settings.

Sequence diagram for the new server-side sendMessage flow:

```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant FEService as AiChatService (frontend)
    participant LambdaClient
    participant Router as LambdaRouter (backend)
    participant BEService as AiChatService (backend)
    participant MessageModel
    participant TopicModel
    User->>Frontend: Triggers sendMessage
    Frontend->>FEService: sendMessageInServer(params)
    FEService->>LambdaClient: aiChat.sendMessageInServer.mutate(params)
    LambdaClient->>Router: sendMessageInServer
    Router->>BEService: getMessagesAndTopics
    BEService->>MessageModel: query messages
    BEService->>TopicModel: query topics
    MessageModel-->>BEService: messages
    TopicModel-->>BEService: topics
    BEService-->>Router: {messages, topics}
    Router-->>LambdaClient: response
    LambdaClient-->>FEService: response
    FEService-->>Frontend: response
    Frontend->>Frontend: internal_dispatchTopic/updateTopics
    Frontend->>Frontend: internal_dispatchMessage/updateMessages
```
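The getMessagesAndTopics step in the flow above can be sketched as a single backend service method that queries both models concurrently. This is a minimal illustration, not the PR's exact code: the real implementation goes through tRPC and the database layer, and the model interfaces and field shapes below are assumptions.

```typescript
// Hypothetical sketch of the backend AiChatService.getMessagesAndTopics.
// MessageModel / TopicModel are simplified stand-ins for the real models.
interface ChatMessage {
  id: string;
  content: string;
  topicId: string;
}

interface ChatTopic {
  id: string;
  title: string;
}

interface MessageModel {
  query(topicId: string): Promise<ChatMessage[]>;
}

interface TopicModel {
  query(sessionId: string): Promise<ChatTopic[]>;
}

class AiChatService {
  constructor(
    private messageModel: MessageModel,
    private topicModel: TopicModel,
  ) {}

  // Run both queries concurrently and return them in one server response,
  // replacing separate client round-trips for messages and topics.
  async getMessagesAndTopics(params: { sessionId: string; topicId: string }) {
    const [messages, topics] = await Promise.all([
      this.messageModel.query(params.topicId),
      this.topicModel.query(params.sessionId),
    ]);
    return { messages, topics };
  }
}
```

Bundling both reads into one round-trip is where the speed-up comes from: the client receives everything it needs to repaint the topic list and message pane in a single response.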
Class diagram for updated reducers with bulk update actions:

```mermaid
classDiagram
    class topicReducer {
        +updateTopics(value: ChatTopic[]) ChatTopic[]
    }
    class messagesReducer {
        +updateMessages(value: ChatMessage[]) ChatMessage[]
    }
    class ChatTopicDispatch {
        +AddChatTopicAction
        +UpdateChatTopicAction
        +DeleteChatTopicAction
        +UpdateTopicsAction
    }
    class MessageDispatch {
        +CreateMessage
        +UpdateMessage
        +UpdateMessages
        +UpdatePluginState
        +UpdateMessageExtra
        +DeleteMessage
    }
    topicReducer --> ChatTopicDispatch
    messagesReducer --> MessageDispatch
```
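A bulk action like UpdateTopicsAction can look roughly as follows. This is a sketch under simplified types (the project's real ChatTopic and dispatch unions carry more variants and fields):

```typescript
// Sketch of a topic reducer with a bulk `updateTopics` action.
// ChatTopic here is a simplified stand-in for the project's real type.
interface ChatTopic {
  id: string;
  title: string;
}

type ChatTopicDispatch =
  | { type: 'addChatTopic'; value: ChatTopic }
  | { type: 'deleteChatTopic'; id: string }
  | { type: 'updateTopics'; value: ChatTopic[] };

const topicReducer = (state: ChatTopic[], payload: ChatTopicDispatch): ChatTopic[] => {
  switch (payload.type) {
    case 'addChatTopic':
      return [payload.value, ...state];
    case 'deleteChatTopic':
      return state.filter((t) => t.id !== payload.id);
    case 'updateTopics':
      // Bulk replacement: one dispatch swaps in the whole server-returned
      // list instead of issuing one dispatch (and re-render) per topic.
      return payload.value;
    default:
      return state;
  }
};
```

The point of the bulk variant is batching: after the server responds with the full topic and message lists, a single dispatch updates the store, rather than N incremental actions.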
File-Level Changes
👍 @arvinxx Thank you for raising your pull request and contributing to our Community
Force-pushed from 8858218 to d6ef723.
Compared it against https://lobe-chat-database-git-fork-sxjeru-93-lobehub-community.vercel.app/chat?topic=tpc_qGGsnIGwf6B2, and this version feels noticeably faster, by about 2 to 3 seconds.
Force-pushed from fea6423 to 5903b58.
💻 Change Type
🔀 Description of Change
📝 Additional Information
Summary by Sourcery
Offload AI chat messaging to the server via new TRPC endpoints and client service, integrate a V2 chat action in the store, and improve performance with batched state updates
New Features:
Enhancements:
Build:
Chores:
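The client-side half of this summary, a service method that pushes the whole operation through one tRPC mutation, can be sketched as below. The parameter and client shapes are assumptions for illustration, matching only the call site aiChat.sendMessageInServer.mutate seen in the flow above, not the PR's exact types.

```typescript
// Hypothetical sketch of the frontend AiChatService.sendMessageInServer.
// LambdaClient stands in for the project's tRPC client; its shape here is assumed.
interface SendMessageParams {
  content: string;
  sessionId: string;
  topicId?: string; // optional: the server may create a topic when absent
}

interface SendMessageResult {
  messages: { id: string; content: string }[];
  topics: { id: string; title: string }[];
}

interface LambdaClient {
  aiChat: {
    sendMessageInServer: {
      mutate(params: SendMessageParams): Promise<SendMessageResult>;
    };
  };
}

class AiChatService {
  constructor(private client: LambdaClient) {}

  // A single mutation: the server persists the message, runs the chat
  // pipeline, and returns updated messages and topics for bulk dispatch.
  sendMessageInServer(params: SendMessageParams): Promise<SendMessageResult> {
    return this.client.aiChat.sendMessageInServer.mutate(params);
  }
}
```

On return, the store can apply the response with the bulk updateTopics/updateMessages actions, keeping the client-side work to one dispatch per slice.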