Mastering Semantic Kernel Plugins


Give your AI agents real-world powers by building custom Semantic Kernel Plugins for system administration.



880 words · 5 mins

Chris Malpass

Author

Large Language Models (LLMs) are brilliant reasoning engines, but they live in a box. They can’t restart a server, check disk space, or query your internal APIs—unless you give them Plugins.

In this post, we’ll move beyond the basic “Hello World” examples and build a practical System Administration Plugin using Semantic Kernel.

The “Brain in a Jar” Problem

Imagine hiring a brilliant sysadmin who is locked in a room with no computer. They can tell you how to fix a server, but they can’t actually do it.

Plugins bridge this gap. They act as the “hands” of the AI, allowing it to:

  1. Read State: Fetch real-time data (e.g., “Is the database healthy?”).
  2. Take Action: Execute commands (e.g., “Restart the IIS service”).
  3. Offload Logic: Perform deterministic calculations that LLMs often get wrong.

Building a “SysAdmin” Plugin

Let’s build a plugin that allows an AI agent to monitor and manage servers. We’ll define two functions: one to check disk space and another to restart services.

1. Define the Plugin Class

The core of a plugin is a standard C# class. We use the [KernelFunction] attribute to expose methods to the AI, and [Description] to tell the AI when and how to use them.

using Microsoft.SemanticKernel;
using System.ComponentModel;

public class ServerManagementPlugin
{
    [KernelFunction]
    [Description("Checks the available disk space on a specific server.")]
    public string GetDiskSpace(
        [Description("The name of the server (e.g., WEB-01, DB-01)")] string serverName)
    {
        // In a real app, this would query WMI or an infrastructure API.
        // We'll simulate a response for this demo.
        return serverName.ToUpper() switch
        {
            "WEB-01" => "45% free (200GB)",
            "DB-01" => "5% free (CRITICAL)",
            _ => "Server not found"
        };
    }

    [KernelFunction]
    [Description("Restarts a specific service on a target server.")]
    public string RestartService(
        [Description("The name of the service to restart")] string serviceName,
        [Description("The server to perform the action on")] string serverName)
    {
        Console.WriteLine($"[AUDIT] Restarting {serviceName} on {serverName}...");
        
        // Simulate a successful restart for this demo.
        return $"Service '{serviceName}' on '{serverName}' has been successfully restarted.";
    }
}

2. Register and Run the Kernel

Now we need to register this plugin with the Kernel and let the AI use it. Notice how we don’t explicitly call the functions; the AI decides to call them based on our prompt.

Prerequisites:

dotnet add package Microsoft.SemanticKernel

The Code:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// 1. Initialize the Builder
var builder = Kernel.CreateBuilder();

// 2. Add your LLM Service (Azure OpenAI or OpenAI)
// Always use environment variables or secrets management in production!
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!,
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);

// 3. Register the Plugin
builder.Plugins.AddFromType<ServerManagementPlugin>("SysAdmin");

var kernel = builder.Build();

// 4. Enable Automatic Function Calling
// This setting tells the AI it's allowed to "use its tools" automatically.
OpenAIPromptExecutionSettings settings = new() 
{ 
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions 
};

// 5. The Scenario
// We give the AI a high-level goal. It should figure out it needs to:
// 1. Check disk space on DB-01.
// 2. See that it's low.
// 3. (Hypothetically) decide to restart a cleanup service or just report back.
// Let's try a direct command first.

var prompt = "I'm getting alerts for DB-01. Check its disk status, and if it's critical, restart the 'LogArchiver' service.";

Console.WriteLine($"User: {prompt}");
var result = await kernel.InvokePromptAsync(prompt, new(settings));

Console.WriteLine($"Assistant: {result}");

Expected Output

When you run this, the interaction happens in a loop (handled by AutoInvokeKernelFunctions):

  1. AI Thought: “I need to check disk space for DB-01.” -> Calls GetDiskSpace("DB-01").
  2. Plugin Output: “5% free (CRITICAL)”.
  3. AI Thought: “The user said if it’s critical, restart ‘LogArchiver’.” -> Calls RestartService("LogArchiver", "DB-01").
  4. Plugin Output: “Service ‘LogArchiver’ on ‘DB-01’ has been successfully restarted.”
  5. AI Final Response: “I checked DB-01 and found disk space was critical (5% free). As requested, I have restarted the ‘LogArchiver’ service.”

Best Practices for Plugins

  1. Descriptive Names: The [Description] attribute is your API documentation for the AI. Be verbose. If a parameter format matters (e.g., “YYYY-MM-DD”), say so in the description.
  2. Keep it Deterministic: Plugins should ideally be reliable tools. If a plugin fails, ensure it returns a clear error message string so the AI can understand what went wrong and tell the user.
  3. Security First: Never expose dangerous functions (like DeleteDatabase) without a “human in the loop” confirmation step. You can implement this by having the plugin return a “Confirmation required” message instead of executing immediately.
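As a sketch of that confirmation pattern, the hypothetical plugin below gates a destructive action behind a `confirm` flag (the `DatabaseAdminPlugin` class, `DropDatabase` function, and `confirm` parameter are illustrative names, not part of Semantic Kernel):

```csharp
using Microsoft.SemanticKernel;
using System.ComponentModel;

public class DatabaseAdminPlugin
{
    [KernelFunction]
    [Description("Drops a database. Destructive: requires confirm=true, which must only follow explicit human approval.")]
    public string DropDatabase(
        [Description("The database to drop")] string databaseName,
        [Description("Set to true only after a human has approved the action")] bool confirm = false)
    {
        if (!confirm)
        {
            // Return a message instead of acting; the AI relays this back to the user.
            return $"Confirmation required: dropping '{databaseName}' is destructive. " +
                   "Ask the user to approve, then call again with confirm=true.";
        }

        Console.WriteLine($"[AUDIT] Dropping database {databaseName}...");
        return $"Database '{databaseName}' has been dropped.";
    }
}
```

The AI's first call lands in the unconfirmed branch, so the model must surface the warning to the user and wait for approval before it can retry with `confirm: true`.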

Pro Tip: Testing Your Plugins

Because plugins are just C# classes, you can (and should) write standard unit tests for them! You don’t need the AI to test the logic inside GetDiskSpace. Test the deterministic code deterministically, and trust the Kernel to handle the routing.
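For example, a plain unit test (assuming the ServerManagementPlugin class from earlier and the xUnit package; the test class and method names are illustrative) might look like:

```csharp
using Xunit;

public class ServerManagementPluginTests
{
    [Fact]
    public void GetDiskSpace_IsCaseInsensitive_AndFlagsCriticalDisk()
    {
        var plugin = new ServerManagementPlugin();

        // The plugin upper-cases the server name, so "db-01" should match "DB-01".
        Assert.Equal("5% free (CRITICAL)", plugin.GetDiskSpace("db-01"));
    }

    [Fact]
    public void GetDiskSpace_ReportsUnknownServers()
    {
        var plugin = new ServerManagementPlugin();
        Assert.Equal("Server not found", plugin.GetDiskSpace("APP-99"));
    }
}
```

No Kernel, no LLM, no API keys: the function is exercised exactly as the AI would call it, just without the non-determinism.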

By structuring your code as plugins, you transform your application from a text generator into an intelligent automation platform.

Further Reading