PreferredAcceleration
PreferredAcceleration forces the SDK to download a specific acceleration variant of the native binaries. Set it to override auto-detection.
Quick reference
| Aspect | Detail |
|---|---|
| Type | AccelerationType? enum |
| Default | null (auto-detect best available) |
| Values | None, CUDA, HIP, Metal, Vulkan, Kompute, OpenCL, SYCL, AVX512, AVX2, AVX, OpenBLAS, NoAVX |
| Category | Native binary source |
| Field on | BinaryManagerParameters.PreferredAcceleration |
What it does
At binary download time, the manager picks an asset matching your host and PreferredAcceleration. When null, the auto-detection priority is: CUDA > HIP > Metal > Vulkan > best CPU AVX level.
- null (default) — auto-detect.
- CUDA — NVIDIA GPUs (Windows / Linux).
- HIP — AMD GPUs on Linux.
- Metal — Apple Silicon.
- Vulkan — cross-vendor GPU; Windows AMD users; Intel iGPUs.
- AVX2 / AVX512 — force CPU at a specific instruction level.
- NoAVX — legacy x64 without AVX.
When to change it
| Scenario | Value |
|---|---|
| Default | null |
| CPU-only despite having a GPU | AVX2 or AVX512 |
| Force CUDA on a CUDA-capable host | CUDA |
| Windows + AMD GPU | Vulkan (HIP is Linux-only) |
| Cross-vendor single-codepath deployment | Vulkan |
| Force a specific AVX level to test performance | AVX2 vs AVX512 |
Example
using Aspose.LLM.Abstractions.Acceleration;

var preset = new Qwen25Preset();
// Force the CUDA build of the native binaries instead of relying on auto-detection.
preset.BinaryManagerParameters.PreferredAcceleration = AccelerationType.CUDA;
// Offload all layers to the GPU so the CUDA binaries are actually exercised.
preset.BaseModelInferenceParameters.GpuLayers = 999;
using var api = AsposeLLMApi.Create(preset);
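The same pattern applies to the other backends. As a minimal sketch reusing the preset API above, here is the Vulkan variant for a Windows machine with an AMD GPU (HIP is Linux-only, so Vulkan is the GPU path there):

using Aspose.LLM.Abstractions.Acceleration;

var preset = new Qwen25Preset();
// On Windows + AMD, force the Vulkan build; HIP is not available on Windows.
preset.BinaryManagerParameters.PreferredAcceleration = AccelerationType.Vulkan;
// Offload layers so inference runs on the GPU rather than falling back to CPU.
preset.BaseModelInferenceParameters.GpuLayers = 999;
using var api = AsposeLLMApi.Create(preset);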
Interactions
- GpuLayers — pair a GPU acceleration with a non-zero GpuLayers to actually use the GPU.
- SystemSpecification — lower-level override; PreferredAcceleration is the recommended path.
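Conversely, when forcing a CPU variant, keep GpuLayers at zero so no layers are offloaded. A minimal sketch, with AVX2 standing in for whichever CPU instruction level you target:

using Aspose.LLM.Abstractions.Acceleration;

var preset = new Qwen25Preset();
// Pin a CPU-only AVX2 build even if a GPU is present.
preset.BinaryManagerParameters.PreferredAcceleration = AccelerationType.AVX2;
// No layers offloaded: inference stays entirely on the CPU.
preset.BaseModelInferenceParameters.GpuLayers = 0;
using var api = AsposeLLMApi.Create(preset);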
What’s next
- Acceleration overview — per-backend setup.
- Supported acceleration — platform × backend matrix.
- GPU not detected troubleshooting — when auto-detection falls back to CPU.