issue/341 - support internlm3 model #342
Merged
wooway777 merged 1 commit into InfiniTensor:main from May 6, 2026
Conversation
pengcheng888
reviewed
May 6, 2026
Collaborator
pengcheng888
left a comment
(1) Please revise the code and attach updated test screenshots; (2) please also add screenshots of the service test.
Collaborator
pengcheng888
left a comment
This PR will be merged into the main branch. Please squash the two commits into one and push again.
pengcheng888
approved these changes
May 6, 2026
Collaborator
Many thanks, teacher!


Add internlm3 model support.
Screenshot of a test_infer.py run:
The service was launched with the following command:

```shell
python python/infinilm/server/inference_server.py --device nvidia --model=/data/rubik/models/internlm3-8b-instruct/ --max-new-tokens=100 --max-batch-size=32 --tp=1 --temperature=1.0 --top-p=0.8 --top-k=1 --enable-paged-attn --cache-type=paged --enable-graph --attn=flash-attn
```
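The sampling flags in the launch command can be mirrored in a client request body. A minimal sketch, assuming a generic completion-style JSON schema (the field names, the `build_request` helper, and any endpoint path are illustrative assumptions, not taken from this PR):

```python
import json

# Hedged sketch: the actual HTTP schema of infinilm's inference_server is not
# shown in this PR. The field names below are ASSUMPTIONS modeled on a generic
# completion-style API, chosen to mirror the server's launch flags.
def build_request(prompt: str) -> dict:
    """Build a request payload that mirrors the server's sampling parameters."""
    return {
        "prompt": prompt,
        "max_new_tokens": 100,  # matches --max-new-tokens=100
        "temperature": 1.0,     # matches --temperature=1.0
        "top_p": 0.8,           # matches --top-p=0.8
        "top_k": 1,             # matches --top-k=1 (greedy decoding)
    }

payload = json.dumps(build_request("Hello, InternLM3!"))
print(payload)
```

With `--top-k=1` the server decodes greedily, so repeated benchmark runs should produce identical outputs regardless of `temperature`.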
Launch screenshot:
Benchmark client output screenshot: