Aspose.LLM for .NET

Welcome to Aspose.LLM for .NET

Aspose.LLM for .NET lets you integrate large language models into your .NET applications and run them locally — on CPU or GPU, without calling a hosted inference service. Create an API instance from a preset (Qwen 2.5, Qwen 3, Gemma 3, Llama 3.2, Phi 4, DeepSeek, and others), start chat sessions, send messages with optional image input, and save or load conversation state.
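The workflow above (create an API instance from a preset, start a chat, send a message, save state) might look roughly like the following. This is a hypothetical sketch only: the type and member names here (`LlmApi`, `FromPreset`, `CreateChat`, `SendMessage`, `Save`, and the preset value) are assumptions made for illustration, not the documented API — see the Developer's reference for the real signatures.

```csharp
using System;

class HelloLlm
{
    static void Main()
    {
        // Hypothetical: create an API instance from a model preset.
        // Native runtimes are downloaded automatically on first use.
        using var api = LlmApi.FromPreset(ModelPreset.Qwen25);

        // Start a chat session and send a message.
        var chat = api.CreateChat();
        string reply = chat.SendMessage("Hello, world!");
        Console.WriteLine(reply);

        // Save conversation state so it can be reloaded later.
        chat.Save("conversation.json");
    }
}
```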

The library targets .NET Standard 2.0 and ships native llama.cpp runtimes for CPU, CUDA, HIP, Metal, and Vulkan — downloaded automatically on first use. A single NuGet package (Aspose.LLM) adds everything you need to one project.
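Since everything ships in the single NuGet package named above, installation is one command from the .NET CLI (or the equivalent step in the Visual Studio NuGet Package Manager):

```shell
# Add the Aspose.LLM package to the current project
dotnet add package Aspose.LLM
```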

Start with system requirements, installation, and the Hello, world! example — or jump straight into the quick-win recipes.

Product overview

Learn about Aspose.LLM for .NET, its architecture, capabilities, and supported models.

Getting started

Install, license, and run your first example.

Developer’s reference

Conceptual reference for every public type and pattern.

See the full Developer’s reference hub.

Use cases

Build common scenarios with Aspose.LLM for .NET — full runnable code for each.

See the full Use cases hub.

How-to recipes

Short, focused answers to common questions.

See the full How-to recipes hub.

Troubleshooting

Diagnose and fix common problems.

See the full Troubleshooting hub.

Resources