Qwen2.5‑Coder‑3B‑Instruct — WebGPU (ONNX)

Pure front‑end · Transformers.js v3 + ONNX Runtime Web · WebGPU preferred
The first run downloads roughly 2 GB or more of q4* ONNX weights. Desktop Chrome or Edge is recommended. WebGPU requires HTTPS or localhost.
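For reference, a minimal loading sketch with Transformers.js v3, assuming the @huggingface/transformers npm package and the onnx-community/Qwen2.5-Coder-3B-Instruct model id taken from the mirror path below; the q4 dtype follows the q4* weights mentioned above.

```ts
import { pipeline } from "@huggingface/transformers";

// Create the text-generation pipeline on WebGPU. The first call downloads
// the ONNX weights and caches them in the browser for later visits.
const generator = await pipeline(
  "text-generation",
  "onnx-community/Qwen2.5-Coder-3B-Instruct",
  {
    device: "webgpu",                           // run on the WebGPU backend
    dtype: "q4",                                // 4-bit weights, matching the q4* files above
    progress_callback: (p: any) => console.log(p), // surfaces download/initialisation events
  },
);
```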
If you mirror the model locally, make sure the mirror contains the onnx/ subdirectory (see the sketch after the example path below).
Example mirror path: /models/onnx-community/Qwen2.5-Coder-3B-Instruct/onnx/*
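For a same-origin mirror, a sketch of pointing Transformers.js at local files instead of the Hugging Face Hub; env.allowRemoteModels and env.localModelPath are standard Transformers.js settings, and the /models/ root matches the example path above.

```ts
import { env, pipeline } from "@huggingface/transformers";

// Resolve model files from this origin instead of the Hugging Face Hub.
env.allowRemoteModels = false;   // never fall back to the Hub
env.localModelPath = "/models/"; // mirror root; the model id is appended to it

// With the settings above, files resolve under
// /models/onnx-community/Qwen2.5-Coder-3B-Instruct/ (including onnx/).
const generator = await pipeline(
  "text-generation",
  "onnx-community/Qwen2.5-Coder-3B-Instruct",
  { device: "webgpu", dtype: "q4" },
);
```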

This will download model files (potentially several GB) and run inference via WebGPU. Proceed?
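A hedged sketch of the gate behind that prompt: feature-detect WebGPU via navigator.gpu, then ask for confirmation before starting the download. loadModel is a hypothetical callback that wraps the pipeline(...) call shown earlier.

```ts
// loadModel is a hypothetical callback wrapping the pipeline(...) call above.
async function confirmAndLoad(loadModel: () => Promise<void>): Promise<void> {
  // WebGPU is only exposed in secure contexts (HTTPS or localhost).
  if (!("gpu" in navigator)) {
    alert("WebGPU is not available. Use desktop Chrome/Edge over HTTPS or localhost.");
    return;
  }
  const ok = window.confirm(
    "This will download model files (potentially several GB) and run inference via WebGPU. Proceed?",
  );
  if (ok) {
    await loadModel();
  }
}
```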

Examples
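A possible usage example, streamed token by token with TextStreamer from Transformers.js; the pipeline is reloaded here only to keep the snippet self-contained, and the palindrome prompt is just an illustration.

```ts
import { pipeline, TextStreamer } from "@huggingface/transformers";

// In the app you would reuse the generator created at startup.
const generator = await pipeline(
  "text-generation",
  "onnx-community/Qwen2.5-Coder-3B-Instruct",
  { device: "webgpu", dtype: "q4" },
);

// Chat-style input; the pipeline applies the model's chat template.
const messages = [
  {
    role: "user",
    content: "Write a TypeScript function that checks whether a string is a palindrome.",
  },
];

// Stream decoded tokens as they are produced (here: to the console).
const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  callback_function: (text: string) => console.log(text),
});

await generator(messages, {
  max_new_tokens: 256,
  do_sample: false,
  streamer,
});
```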

Debug log
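A sketch of feeding loader events into the debug log; the "debug-log" element id is an assumption, and progressLogger would be passed as the progress_callback option of pipeline(...).

```ts
// Hypothetical on-page log panel; the "debug-log" element id is an assumption.
const logEl = document.getElementById("debug-log") as HTMLPreElement;

// Append one line to the debug log.
function log(line: string): void {
  logEl.textContent += line + "\n";
}

// Pass this as `progress_callback` to pipeline(...) so download and
// initialisation events show up in the panel.
function progressLogger(p: { status: string; file?: string; progress?: number }): void {
  if (p.status === "progress" && p.file && p.progress !== undefined) {
    log(`${p.file}: ${Math.round(p.progress)}%`);
  } else {
    log(p.status);
  }
}
```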