Add fp32 resnet50 model for vitisai ep demo #116
Conversation
Signed-off-by: Song <jamesong@amd.com>
url: "webnn/efficientnet-lite4/resolve/main/onnx/model_fp16.onnx",
path: "./demos/image-classification/models/webnn/efficientnet-lite4/onnx",
},
{
I was testing out the changes in this branch, and it looks like I can't fetch the full suite of models (after the seventh one, the npm command exits and doesn't download the remaining ones). Are you seeing anything similar? Example output below.
PS C:\src\webnn-developer-preview-amd> npm run fetch-models
> webnn-developer-preview@1.0.0 fetch-models
> node fetch_models.js
[1/40] Downloading https://huggingface.co/xenova/resnet-50/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\xenova\resnet-50\onnx\model_fp16.onnx
-> downloading [========================================] 100% 0.0s
[1/40] Downloaded https://huggingface.co/xenova/resnet-50/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\xenova\resnet-50\onnx\model_fp16.onnx
[2/40] Downloading https://huggingface.co/webnn/mobilenet-v2/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\mobilenet-v2\onnx\model_fp16.onnx
-> downloading [========================================] 100% 0.0s
[2/40] Downloaded https://huggingface.co/webnn/mobilenet-v2/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\mobilenet-v2\onnx\model_fp16.onnx
[3/40] Downloading https://huggingface.co/webnn/efficientnet-lite4/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\efficientnet-lite4\onnx\model_fp16.onnx
-> downloading [========================================] 100% 0.0s
[3/40] Downloaded https://huggingface.co/webnn/efficientnet-lite4/resolve/main/onnx/model_fp16.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\efficientnet-lite4\onnx\model_fp16.onnx
[4/40] Downloading https://huggingface.co/amd/resnet50/resolve/main/webnn/onnx/model.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\resnet50\onnx\model.onnx
-> downloading [========================================] 100% 0.0s
[4/40] Downloaded https://huggingface.co/amd/resnet50/resolve/main/webnn/onnx/model.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\resnet50\onnx\model.onnx
[5/40] Downloading https://huggingface.co/amd/MobileNetV2/resolve/main/webnn/onnx/model.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\MobileNetV2\onnx\model.onnx
-> downloading [========================================] 100% 0.0s
[5/40] Downloaded https://huggingface.co/amd/MobileNetV2/resolve/main/webnn/onnx/model.onnx to C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\MobileNetV2\onnx\model.onnx
[6/40] Downloading https://huggingface.co/microsoft/sd-turbo-webnn/resolve/main/text_encoder/model_layernorm.onnx to C:\src\webnn-developer-preview-amd\demos\sd-turbo\models\text_encoder\model_layernorm.onnx
-> downloading [========================================] 100% 0.0s
[6/40] Downloaded https://huggingface.co/microsoft/sd-turbo-webnn/resolve/main/text_encoder/model_layernorm.onnx to C:\src\webnn-developer-preview-amd\demos\sd-turbo\models\text_encoder\model_layernorm.onnx
[7/40] Downloading https://huggingface.co/microsoft/sd-turbo-webnn/resolve/main/unet/model_layernorm.onnx to C:\src\webnn-developer-preview-amd\demos\sd-turbo\models\unet\model_layernorm.onnx
PS C:\src\webnn-developer-preview-amd>
No, I did not see a similar issue with fetch_models.js. Based on the provided log, the newly added AMD models were downloaded without error. Could you check whether the AMD models exist in the corresponding folders, and also try running the fetch again to see if the problem persists?
The new models exist, but I think the problem is that the rest of the models aren't getting downloaded for whatever reason. I'd expect to see ~40 models downloaded. So, I'm not sure if something regressed here, as I don't see this behavior on the main branch of the parent repo.
C:\src\webnn-developer-preview-amd>dir /s /b *.onnx
C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\MobileNetV2\onnx\model.onnx
C:\src\webnn-developer-preview-amd\demos\image-classification\models\amd\resnet50\onnx\model.onnx
C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\efficientnet-lite4\onnx\model_fp16.onnx
C:\src\webnn-developer-preview-amd\demos\image-classification\models\webnn\mobilenet-v2\onnx\model_fp16.onnx
C:\src\webnn-developer-preview-amd\demos\image-classification\models\xenova\resnet-50\onnx\model_fp16.onnx
C:\src\webnn-developer-preview-amd\demos\sd-turbo\models\text_encoder\model_layernorm.onnx
C:\src\webnn-developer-preview-amd\demos\sd-turbo\models\unet\model_layernorm.onnx
Well, not sure what changed in the previous few hours (I restarted and changed networks in the interim, so maybe something there got things unstuck), but at the moment, running the script seems to make it past the seventh model (it's downloading number 13 as I write this). We can consider this resolved.
I jinxed it. It bailed out after the 14th model finished downloading. Trying again. 😢
Hm, there's some degree of flakiness that I'm observing. At any rate, will treat this as a separate issue. Thanks for confirming behavior on your side.
[26/40] Downloading https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.encoder-fp16.onnx to C:\src\webnn-developer-preview-amd\demos\segment-anything\models\sam_vit_b_01ec64.encoder-fp16.onnx
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.encoder-fp16.onnx (2 retries left)
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.encoder-fp16.onnx (1 retries left)
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.encoder-fp16.onnx after multiple attempts Error: HTTP error! Status: 404 Not Found
at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.encoder-fp16.onnx Error: HTTP error! Status: 404 Not Found
at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
[27/40] Downloading https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx to C:\src\webnn-developer-preview-amd\demos\segment-anything\models\sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx (2 retries left)
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx (1 retries left)
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx after multiple attempts Error: HTTP error! Status: 404 Not Found
at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b_01ec64.decoder-orig-img-size-dynamic-fp16.onnx Error: HTTP error! Status: 404 Not Found
at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
[28/40] Downloading https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-encoder-int8.onnx to C:\src\webnn-developer-preview-amd\demos\segment-anything\models\sam_vit_b-encoder-int8.onnx
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-encoder-int8.onnx (2 retries left)
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-encoder-int8.onnx (1 retries left)
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-encoder-int8.onnx after multiple attempts Error: HTTP error! Status: 404 Not Found
at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-encoder-int8.onnx Error: HTTP error! Status: 404 Not Found
at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
[29/40] Downloading https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx to C:\src\webnn-developer-preview-amd\demos\segment-anything\models\sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx (2 retries left)
Retrying download for https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx (1 retries left)
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx after multiple attempts Error: HTTP error! Status: 404 Not Found
at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
Failed to download https://huggingface.co/webnn/segment-anything-model-webnn/resolve/main/sam_vit_b-decoder-orig-img-size-dynamic-int8.onnx Error: HTTP error! Status: 404 Not Found
at downloadFile (file:///C:/src/webnn-developer-preview-amd/fetch_models.js:206:19)
at process.processTicksAndRejections (node:internal/process/task_queues:103:5)
at async file:///C:/src/webnn-developer-preview-amd/fetch_models.js:226:13
[30/40] Downloading https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_decoder_static_kvcache_128_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_decoder_static_kvcache_128_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx
-> downloading [========================================] 100% 0.0s
[30/40] Downloaded https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_decoder_static_kvcache_128_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_decoder_static_kvcache_128_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx
[31/40] Downloading https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_decoder_static_non_kvcache_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_decoder_static_non_kvcache_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx
-> downloading [========================================] 100% 0.0s
[31/40] Downloaded https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_decoder_static_non_kvcache_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_decoder_static_non_kvcache_lm_fp16_layernorm_gelu_4dmask_iobinding.onnx
[32/40] Downloading https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_encoder_lm_fp16_layernorm_gelu.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_encoder_lm_fp16_layernorm_gelu.onnx
-> downloading [========================================] 100% 0.0s
[32/40] Downloaded https://huggingface.co/webnn/whisper-base-webnn/resolve/main/whisper_base_encoder_lm_fp16_layernorm_gelu.onnx to C:\src\webnn-developer-preview-amd\demos\whisper-base\models\whisper_base_encoder_lm_fp16_layernorm_gelu.onnx
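The "Retrying download ... (N retries left)" messages above suggest fetch_models.js wraps each download in a bounded-retry helper. A minimal sketch of that pattern (the helper name, signature, and retry count are assumptions for illustration, not the actual implementation):

```javascript
// Hypothetical sketch of a bounded-retry wrapper, matching the shape of the
// "Retrying download ... (N retries left)" log messages above.
async function withRetries(task, retries = 2) {
  try {
    return await task();
  } catch (err) {
    if (retries <= 0) throw err; // out of attempts: surface the last error
    console.log(`Retrying download (${retries} retries left)`);
    return withRetries(task, retries - 1);
  }
}

// Usage (hypothetical): await withRetries(() => downloadFile(url, destPath));
```

Note that a wrapper like this retries transient network failures and hard 404s identically, which matches the log above: a genuinely missing file still burns through every retry before failing.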
@@ -0,0 +1,2026 @@
{
  "architectures": [
    "ResNetForImageClassification"
I was trying the sample locally. I did get back an inference, but I also saw some log output indicating attempts at local file system access. Is this expected?
[60792:33108:0407/135409.409:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [FATAL: onnxruntime, vaiml_mlopslib_partition_rule.cpp:1317 vaiml_mlopslib_partition_rule.cpp] Failed to open "C:\\temp\\adityar\\vaip\\.cache\\ef572deb32890331e9d986c1826163b6\\aie_unsupported_original_ops.json"
Including the logs from about://gpu here as well. It seems like the computation in my local testing may be happening on the CPU.
[53892:58412:0407/140321.915:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, stat.cpp:193 stat.cpp] [Vitis AI EP] No. of Operators :
[53892:58412:0407/140321.915:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, stat.cpp:204 stat.cpp] CPU 124
[53892:58412:0407/140321.915:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, stat.cpp:213 stat.cpp]
The failed file access caused the model to fall back to CPU. But that on-disk file access should not happen. Could you share the steps to reproduce this?
Sure. As prerequisites, you'll need the 1.8.59 AMD NPU EP installed, plus the 2.0 Preview2 Windows App SDK build (you can run the installer from here). I used Google Chrome Canary as my test browser.
I cloned your fork, pulled down the models locally, and ran the site via npm run dev.
Once the server was up, I launched Google Chrome Canary with the following incantation:
PS C:\Users\adityar\AppData\Local\Google\Chrome SxS\Application> .\chrome.exe --no-first-run --no-default-browser-check --enable-features="WebMachineLearningNeuralNetwork,WebNNOnnxRuntime" --webnn-ort-ignore-ep-blocklist --webnn-ort-logging-level=VERBOSE --webnn-ort-library-path-for-testing="C:\Program Files\WindowsApps\Microsoft.WindowsAppRuntime.2-preview2_2.0.0.2_x64__8wekyb3d8bbwe" --webnn-ort-ep-device="VitisAIExecutionProvider,0x1022,0x17F0" --allow-third-party-modules about:blank --webnn-ort-ep-library-path-for-testing=VitisAIExecutionProvider?"C:\Program Files\WindowsApps\MicrosoftCorporationII.WinML.AMD.NPU.EP.1.8_1.8.59.0_x64__8wekyb3d8bbwe\ExecutionProvider\onnxruntime_vitisai_ep.dll"
(More details on some of these arguments are available here, and if you have questions about any of them, let me know.)
I then navigated to the local web site (http://127.0.0.1:8080/), loaded the Image Classification sample, and tried running the FP32 model. I was inspecting the verbose ORT logs via the chrome://gpu page.
If I should be using a newer AMD NPU EP, please let me know (I know there's been ongoing work to eliminate the file accesses during model compilation; maybe I'm on a build that doesn't have the latest fixes).
My test EP seems to be an older version than the one shown in your log, but I also tried a newer version and saw no issue with it either. Is it possible for you to test with a newer EP? I will also try this specific version in the meantime.
Those EGL driver messages are irrelevant to the compilation process and shouldn't cause any trouble. Similarly, the file access attempts shouldn't cause errors, but we can try to remove them if needed. Did the compilation go through successfully for you, with outputs shown in the page? The process might take 5-10 min.
> The process might take 5-10 min.
Ah, I might have given up too early. Let me try again.
Have you tried enabling any of the other samples, by chance? If these models are taking 5-10 minutes to compile, then the other heavier-weight ones like the GenAI ones might not be usable.
No, I haven't. Good point though.
I left it running and you are right; it does eventually complete. Wall clock time was ~11 minutes on my device.
[10:27:34] [Transformer.js] env.allowRemoteModels: false
[10:27:34] [Transformer.js] env.allowLocalModels: true
[10:27:34] [Config] Demo config updated · resnet-50 · webnn · gpu · fp16
[10:27:35] [Config] Demo config updated · resnet-50 · webnn · npu · fp16
[10:27:36] [Config] Demo config updated · resnet-50 · webnn · npu · fp32
[10:27:50] [ONNX Runtime] Options: {"dtype":"fp32","device":"webnn-npu","session_options":{"freeDimensionOverrides":{"batch_size":1,"num_channels":3,"height":224,"width":224},"context":{}},"subfolder":"onnx"}
[10:27:50] [Transformer.js] Loading amd/resnet50 and running image-classification pipeline
[10:38:50] {"warmup":50.30000001192093,"inference":[12.099999994039536,10.700000017881393,10.5,10.599999994039536,9.900000005960464],"throughput":57.64}
[10:38:50] [{"label":"tiger, Panthera tigris","score":0.6751770973205566},{"label":"tiger cat","score":0.31893107295036316},{"label":"tabby, tabby cat","score":0.0012244294630363584},{"label":"jaguar, panther, Panthera onca, Felis onca","score":0.000790551130194217},{"label":"zebra","score":0.000526620598975569}]
[10:38:50] [Transformer.js] Classifier completed
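For reference, the "[ONNX Runtime] Options" line in the log above can be reproduced with an options builder along these lines (a sketch: the builder function is hypothetical, but the field layout is copied from the logged JSON):

```javascript
// Hypothetical builder for the Transformers.js load options logged above.
// The field shapes mirror the "[ONNX Runtime] Options" log line exactly.
function buildNpuOptions(dtype) {
  return {
    dtype,                 // e.g. "fp32"
    device: "webnn-npu",   // route inference to the WebNN NPU backend
    session_options: {
      // The demo pins the model's free dimensions to a fixed
      // NCHW 1x3x224x224 input before session creation.
      freeDimensionOverrides: { batch_size: 1, num_channels: 3, height: 224, width: 224 },
      context: {},
    },
    subfolder: "onnx",     // where the .onnx weights live in the repo
  };
}
```

Pinning freeDimensionOverrides up front matters for NPU backends, which generally compile for static shapes; leaving dimensions free would force a fallback or a recompile.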
@Json288 please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.
log("[Transformer.js] env.allowRemoteModels: " + transformers.env.allowRemoteModels);
log("[Transformer.js] env.allowLocalModels: " + transformers.env.allowLocalModels);

const FP16_MODEL_PATHS = {
One thing I am still wondering about: I tried using just the production EPs via the following browser command:
PS C:\Users\adityar\AppData\Local\Google\Chrome SxS\Application> .\chrome.exe --no-first-run --no-default-browser-check --enable-features="WebMachineLearningNeuralNetwork,WebNNOnnxRuntime" --webnn-ort-ignore-ep-blocklist --webnn-ort-logging-level=VERBOSE --webnn-ort-library-path-for-testing="C:\Program Files\WindowsApps\Microsoft.WindowsAppRuntime.2-preview2_2.0.0.2_x64__8wekyb3d8bbwe" --webnn-ort-ep-device="VitisAIExecutionProvider,0x1022,0x17F0"
Both the GPU and NPU EP failed to register in this configuration because of a missing DLL dependency, and then the GPU process crashed shortly thereafter.
[12432:33028:0420/092011.118:ERROR:services\webnn\ort\environment.cc:591] : [WebNN] Failed to call ort_api->RegisterExecutionProviderLibrary( env.get(), ep_name.c_str(), package_info->library_path.value().c_str()): [WebNN] ORT status error code: 1 error message: Error loading "C:\Program Files\WindowsApps\MicrosoftCorporationII.WinML.AMD.GPU.EP.1.8_1.8.55.0_x64__8wekyb3d8bbwe\ExecutionProvider\onnxruntime_providers_migraphx.dll" which depends on "migraphx_c.dll" which is missing. (Error 1114: "A dynamic link library (DLL) initialization routine failed.")
[12432:33028:0420/092011.124:ERROR:services\webnn\ort\environment.cc:89] : [ORT] [INFO: onnxruntime, utils.cc:552 onnxruntime::LoadPluginOrProviderBridge] Loading EP library: 0000133C02659FC0 as a plugin
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\environment.cc:591] : [WebNN] Failed to call ort_api->RegisterExecutionProviderLibrary( env.get(), ep_name.c_str(), package_info->library_path.value().c_str()): [WebNN] ORT status error code: 1 error message: Error loading "C:\Program Files\WindowsApps\MicrosoftCorporationII.WinML.AMD.NPU.EP.1.8_1.8.59.0_x64__8wekyb3d8bbwe\ExecutionProvider\onnxruntime_providers_vitisai.dll" which depends on "onnxruntime_providers_shared.dll" which is missing. (Error 126: "The specified module could not be found.")
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\logging.cc:146] : [WebNN] [INFO] Registered OrtEpDevice #0: {ep_name: CPUExecutionProvider, ep_vendor: Microsoft, ep_metadata: {version: 1.24.4}, ep_options: {}}, OrtHardwareDevice: {type: CPU, vendor: AMD, vendor_id: 0x1022, device_id: 0x7, device_metadata: {Description: AMD Ryzen AI 9 365 w/ Radeon 880M }}
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\logging.cc:146] : [WebNN] [INFO] Registered OrtEpDevice #1: {ep_name: DmlExecutionProvider, ep_vendor: Microsoft, ep_metadata: {version: 1.24.4}, ep_options: {device_id: 0}}, OrtHardwareDevice: {type: GPU, vendor: Advanced Micro Devices, Inc., vendor_id: 0x1002, device_id: 0x150e, device_metadata: {Description: AMD Radeon(TM) 880M Graphics, LUID: 202817, DxgiAdapterNumber: 0, DxgiHighPerformanceIndex: 0, DxgiVideoMemory: 4096 MB, Discrete: 0}}
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\environment.cc:501] : [WebNN] No EP device can be selected due to no matching device for user-specified webnn-ort-ep-device: VitisAIExecutionProvider,0x1022,0x17F0. Please check the registered EP devices in the logs by setting webnn-ort-logging-level to VERBOSE or INFO.
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\environment.cc:501] : [WebNN] No EP device can be selected due to no matching device for user-specified webnn-ort-ep-device: VitisAIExecutionProvider,0x1022,0x17F0. Please check the registered EP devices in the logs by setting webnn-ort-logging-level to VERBOSE or INFO.
[12432:33028:0420/092011.132:ERROR:services\webnn\ort\environment.cc:501] : [WebNN] No EP device can be selected due to no matching device for user-specified webnn-ort-ep-device: VitisAIExecutionProvider,0x1022,0x17F0. Please check the registered EP devices in the logs by setting webnn-ort-logging-level to VERBOSE or INFO.
GpuProcessHost: The GPU process crashed! Exit code: STATUS_BREAKPOINT.
The reason I am interested in this configuration is that it's closer to the production environment (i.e., using the released EPs). I'll dig around more on my end but wanted to check if this looks familiar to you or if you spot something wrong with this configuration.
I noticed that you removed this arg: --webnn-ort-ep-library-path-for-testing=VitisAIExecutionProvider?"C:\Program Files\AMD-EP\onnxruntime_vitisai_ep.dll". Why is that?
Yeah, good question; it goes back to this remark:
> The reason I am interested in this configuration is that it's closer to the production environment
Basically, trying to understand and diagnose why the official versions of the EP don't work from the MSIX since that is how AMD customers would use the feature.
For the error caused by loading the EP DLL from the MSIX installation path, we have a PR to fix it that's waiting to be merged.
As for this one, I think it should be trying to load onnxruntime_vitisai_ep.dll instead of onnxruntime_providers_vitisai.dll. Is this expected?
> I think it should try to load onnxruntime_vitisai_ep.dll instead of onnxruntime_providers_vitisai.dll. Is this expected?
Yeah, I noticed this as well. Something is causing the EP routing to get confused and pick the wrong EP version. Not sure what's going on here; trying to debug ORT to get more details.
Ok, I think I've root-caused why the wrong EP version is getting selected. Currently, the browser process uses the EP catalog from Windows App SDK (WASDK) 1.8, which only enumerates 1.8-compatible EPs.
AMD's EP packages contain both the 1.8- and 2.0-compatible versions of the EP binaries, so if we don't explicitly point to the EP binary on the command line, the browser process will locate the 1.8 binary and pass that along to the GPU process.
Meanwhile, we've passed in command-line options to use the ORT from WASDK 2.0, where the ORT provider bridge (required by the 1.8-compatible EPs) has been intentionally removed. That is the cause of the crash we're seeing here. This problem will go away shortly once we update Chromium to use WASDK 2.0.

Why is this change being made?
To verify and demonstrate the VitisAI EP's support for WebNN with FP32 image-classification models: ResNet50 & MobileNetV2.

What changed?

Added:
- demos/image-classification/models/amd/resnet50/ — config.json, preprocessor_config.json (local assets for AMD FP32 ResNet-50).
- demos/image-classification/models/amd/MobileNetV2/ — config.json, preprocessor_config.json (local assets for AMD FP32 MobileNetV2).

Modified:
- demos/image-classification/index.js — FP32 AMD paths (amd/resnet50, amd/MobileNetV2), WebNN Hub layout handling (webnn/ JSON + webnn/onnx weights on remote vs. flat onnx/ locally), env.fetch rewrites for Hub config.json / preprocessor_config.json, options.subfolder for remote/local, and the AMD pixel_values → input patch for the classifier pipeline.
- demos/image-classification/index.html — UI wiring for FP32 / model selection as needed for the new paths.
- demos/image-classification/static/main.css — Styles for any new/updated controls.
- fetch_models.js — Extra download entries for AMD ResNet-50 / MobileNetV2 from the Hub webnn/onnx paths into the local onnx/ mirror layout.

How was the change tested?
- ./models flows; WebNN NPU + FP32 + MobileNet V2 / ResNet50.
- node fetch_models against the Hub when validating AMD webnn/ asset URLs.
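The env.fetch rewrite described for index.js could look roughly like the following sketch. The function name and regexes here are assumptions; only the layout (Hub JSON assets under webnn/ and weights under webnn/onnx, local mirrors flat under onnx/) comes from the PR description and the download log:

```javascript
// Hypothetical sketch of the env.fetch URL rewrite described above.
// Remote AMD repos keep JSON under webnn/ and weights under webnn/onnx/,
// while the local mirror uses a flat onnx/ layout, so only remote
// requests are remapped.
function rewriteModelUrl(url, { remote }) {
  if (!remote) return url; // local flat layout: nothing to rewrite
  return url
    .replace(/\/(config|preprocessor_config)\.json$/, "/webnn/$1.json")
    .replace(/\/onnx\/(model[^/]*\.onnx)$/, "/webnn/onnx/$1");
}
```

A pure remapping function like this keeps the remote/local branching out of the pipeline code: Transformers.js can keep requesting the conventional flat paths, and only the fetch hook knows about the Hub's webnn/ layout.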