onnx/
/models/onnx-community/Qwen2.5-Coder-3B-Instruct/onnx/*
This will download the model files (potentially several GB) and run inference locally via WebGPU. Proceed?
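As a rough illustration of what "run inference via WebGPU" could look like for this model, here is a minimal sketch using Transformers.js (the `@huggingface/transformers` package, v3+), which can load ONNX models from the `onnx-community` Hugging Face namespace and execute them on the WebGPU backend. This is an assumption about the runtime, not confirmed by the fragment above; it must run in a WebGPU-capable browser, and the first call triggers the multi-gigabyte download mentioned in the prompt.

```javascript
// Sketch only: assumes a WebGPU-capable browser and the
// @huggingface/transformers package (Transformers.js v3+).
import { pipeline } from '@huggingface/transformers';

// Loading the model downloads its ONNX weights (several GB) on first use
// and caches them; `device: 'webgpu'` selects the WebGPU execution backend.
const generator = await pipeline(
  'text-generation',
  'onnx-community/Qwen2.5-Coder-3B-Instruct',
  { device: 'webgpu' },
);

// Chat-style prompt; Transformers.js applies the model's chat template.
const messages = [
  { role: 'user', content: 'Write a function that reverses a string.' },
];

const output = await generator(messages, { max_new_tokens: 128 });
console.log(output[0].generated_text);
```

The `device: 'webgpu'` option is what distinguishes this from the default WASM/CPU path; if the browser lacks WebGPU support, model loading will fail rather than silently fall back.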