Deploying .NET AI Applications with Docker


A step-by-step guide to containerizing a .NET application with an embedded ONNX model for cross-platform, isolated, and scalable deployments.


Chris Malpass


You’ve built a brilliant AI-powered application in .NET, perhaps using an ONNX model for local inference. It works perfectly on your machine. Now, how do you ship it?

The answer is Docker. Containerization solves the “it works on my machine” problem by packaging your application, its dependencies, the .NET runtime, and even your AI models into a single, isolated unit called an image.

This guide will walk you through creating an optimized, multi-stage Dockerfile for a .NET application that uses the ONNX Runtime.

Why Docker for AI Apps?

  • Environment Consistency: The exact same environment is used for development, testing, and production. No more “missing dependency” errors.
  • Dependency Isolation: Your app’s specific version of CUDA, ONNX Runtime, or any other library won’t conflict with other applications on the host machine.
  • Scalability: Docker containers can be easily scaled up or down using orchestrators like Kubernetes.
  • Portability: A Linux-container image built on your Windows or macOS development machine runs unchanged on a Linux server in the cloud, as long as the CPU architecture matches.

The Project Structure

Let’s assume we have a simple console application with the following structure:

/MyNetAiApp
|-- MyNetAiApp.csproj
|-- Program.cs
|-- model/
|   |-- sentiment-model.onnx
|-- Dockerfile

The sentiment-model.onnx file should be marked as Content and set to Copy if newer in the .csproj file:

<ItemGroup>
  <Content Include="model\sentiment-model.onnx">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>
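At run time the application loads the model from its output directory. A minimal sketch of what this might look like with the Microsoft.ML.OnnxRuntime package (the tensor shape and input handling are illustrative assumptions, not tied to a real model):

```csharp
using System;
using System.IO;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Resolve the model relative to the app's base directory so the same
// path works locally and inside the container's /app folder.
var modelPath = Path.Combine(AppContext.BaseDirectory, "model", "sentiment-model.onnx");

using var session = new InferenceSession(modelPath);

// Input names and shapes depend on your model; inspect session.InputMetadata.
var inputName = session.InputMetadata.Keys.First();
var input = new DenseTensor<float>(new[] { 1, 128 });
using var results = session.Run(new[] { NamedOnnxValue.CreateFromTensor(inputName, input) });
Console.WriteLine($"Outputs: {string.Join(", ", results.Select(r => r.Name))}");
```

Because the path is resolved from AppContext.BaseDirectory rather than the working directory, the same binary runs correctly both from `dotnet run` and from the container’s ENTRYPOINT.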

The Multi-Stage Dockerfile

A multi-stage build is the best practice for creating lean, secure Docker images. We use one stage (the build stage) to compile the application with the full SDK, and a final stage that only contains the minimal runtime and our application artifacts.

# Stage 1: The Build Stage
# We use the .NET SDK image which contains all the tools needed to build the app
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /source

# Copy the project file and restore dependencies first
# This leverages Docker's layer caching. As long as the .csproj doesn't change,
# this layer won't be re-run, speeding up subsequent builds.
COPY *.csproj .
RUN dotnet restore

# Copy the rest of the source code and build the application
COPY . .
RUN dotnet publish -c Release -o /app --no-restore

# Stage 2: The Final Stage
# We use the much smaller .NET runtime image. For web apps, use
# mcr.microsoft.com/dotnet/aspnet:8.0 instead, which adds the ASP.NET Core libraries.
FROM mcr.microsoft.com/dotnet/runtime:8.0
WORKDIR /app

# CRITICAL: Install libgomp1. 
# The ONNX Runtime on Linux depends on OpenMP, which isn't included in the 
# default .NET runtime image. Without this, your app will crash with a 
# "DllNotFoundException" for onnxruntime.
RUN apt-get update && apt-get install -y --no-install-recommends libgomp1 && rm -rf /var/lib/apt/lists/*

# Copy the published application from the build stage
COPY --from=build /app .

# Set the entry point of the container
ENTRYPOINT ["dotnet", "MyNetAiApp.dll"]
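
With the Dockerfile in place, building and running the image is two commands (the image tag here is arbitrary):

```shell
# Build the image from the project root, where the Dockerfile lives
docker build -t my-net-ai-app .

# Run the container; --rm cleans it up when the process exits
docker run --rm my-net-ai-app
```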

Key Optimizations in This Dockerfile:

  1. Multi-Stage Build: The final image doesn’t contain the .NET SDK (which is huge). It only contains the minimal runtime needed to execute the application.
  2. Layer Caching: By copying the .csproj file and running dotnet restore before copying the rest of the code, we ensure that Docker doesn’t need to re-download all the NuGet packages every time we change a single line of C# code.
  3. Missing Dependencies: We explicitly install libgomp1. This is a classic “gotcha” when moving .NET AI apps from Windows to Linux containers.

Handling Large Models with .dockerignore

If your AI models are massive (e.g., 5 GB+), you don’t want to copy them into the build context at all if they aren’t needed for compilation.

Create a .dockerignore file to exclude heavy assets from the initial build context copy, and then copy them explicitly only where needed.

# .dockerignore
bin/
obj/
model/

Note that .dockerignore excludes files from the entire build context, so a plain COPY model/ in the Dockerfile would now fail, and the <Content> item in the .csproj should be made conditional (e.g. Condition="Exists('model\sentiment-model.onnx')") so that dotnet publish still succeeds in the build stage. With BuildKit, you can instead pass the model as an additional named build context and copy it straight into the final stage:

# In Stage 2
# Build with: docker buildx build --build-context model=./model -t my-net-ai-app .
COPY --from=model . /app/model/
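
Alternatively, you can keep a huge model out of the image entirely and bind-mount it at run time, which keeps the image small and lets you swap models without rebuilding (paths shown are illustrative):

```shell
# Mount the host's model directory into the container at /app/model
docker run --rm -v "$(pwd)/model:/app/model" my-net-ai-app
```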

Handling GPU-Enabled Models

Running AI on a CPU is fine for simple tasks, but for heavy lifting, you need a GPU. This complicates things because the standard .NET images don’t include NVIDIA drivers or CUDA libraries.

To support GPUs, you typically need to:

  1. Use an NVIDIA CUDA base image (e.g., nvidia/cuda:12.1.1-base-ubuntu22.04).
  2. Install the .NET Runtime on top of it.
  3. Use the Microsoft.ML.OnnxRuntime.Gpu NuGet package instead of the CPU version.
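
On the code side, the switch usually amounts to creating the session with the CUDA execution provider. A minimal sketch using the Microsoft.ML.OnnxRuntime.Gpu package (GPU device 0 assumed):

```csharp
using Microsoft.ML.OnnxRuntime;

// Request the CUDA execution provider on GPU 0. This throws at session
// creation if the CUDA libraries are not available in the container.
using var options = SessionOptions.MakeSessionOptionWithCudaProvider(0);
using var session = new InferenceSession("model/sentiment-model.onnx", options);
```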
# Example: Using a CUDA base image and installing .NET
# Note: the "base" image contains only the core CUDA libraries; ONNX Runtime's
# CUDA provider also needs cuDNN, so a cudnn runtime variant such as
# nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04 is usually the safer choice.
FROM nvidia/cuda:12.1.1-base-ubuntu22.04

# Install the .NET Runtime (simplified for brevity; on Ubuntu 22.04 you may
# need to add the Microsoft package repository first)
RUN apt-get update && apt-get install -y --no-install-recommends dotnet-runtime-8.0 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
# Assumes a build stage named "build", as in the multi-stage Dockerfile above
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyNetAiApp.dll"]

When running the container, you must pass the --gpus all flag, which requires the NVIDIA Container Toolkit to be installed on the host:

docker run --rm --gpus all my-net-ai-app-gpu

Conclusion

Docker is an essential tool for modern application deployment, and it’s a perfect match for .NET AI applications. By using a multi-stage Dockerfile, you can create lean, portable, and scalable images that encapsulate your application, your models, and all their dependencies. This consistent and reproducible approach simplifies deployment from your local machine to any cloud provider.

Further Reading