02. Apr 2025
AI boost for your app with Semantic Kernel

The first part of this two-part series provides an overview of the possibilities offered by Semantic Kernel and shows how AI can be integrated into an application with little effort. The second part then looks at prompting and prompt templates as well as the function calling loop. It also explains how an AI application can be secured with filters and protected against misuse.
AI - why and how?
AI is being used more and more in modern applications: in the form of chatbots, for the automation and support of business processes, in image and text processing, or as copilot support for complex and diverse tasks. In all of these applications, the aim is either to offer users the best possible support or to delegate complex tasks to AI models and have them carried out.
Large Language Models (LLMs) are generally used for this type of AI support. Many providers of such models offer APIs for interacting with them. However, these APIs are often complex; integrating them requires a lot of know-how and is time-consuming. If models from different providers are also to be used for individual subtasks, the effort increases all the more.
This is where tools such as Semantic Kernel come into play. They are natively integrated into the language and offer a simple abstraction of the usually complex APIs. This makes it possible to add AI functions to your application with little effort. Semantic Kernel is an SDK developed by Microsoft and is available for applications developed in C#, Java and Python.
The kernel
The kernel acts as a link between the application or the user, the LLMs and any third-party systems. Semantic Kernel not only offers a simple interface for interacting with LLMs, but also supports the function calling mechanism. This makes it possible to provide a model with additional functions via plugins, which it can call independently. The kernel is the central component which, similar to a dependency injection container, contains all the services required to solve a task with the help of AI. Because there is only one central component, the kernel is very easy to configure and monitor.
Whenever the application interacts with an LLM, the kernel is involved. It provides the AI services, renders the prompts if necessary and makes the calls to the LLM. As soon as a result is available, the kernel processes it, calls a plugin if necessary and finally generates a response.
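A minimal sketch of this setup could look as follows; modelId, endpoint and apiKey are placeholders for your own Azure OpenAI deployment.
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(modelId, endpoint, apiKey);
Kernel kernel = builder.Build();

// The kernel renders the prompt, sends it to the model and returns the result.
var result = await kernel.InvokePromptAsync("Summarize Semantic Kernel in one sentence.");
Console.WriteLine(result);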
AI services
Semantic Kernel comes with a range of pre-built AI services:
- Chat Completion
- Embedding Generation
- Text-to-Audio / Audio-to-Text
- Text-to-Image / Image-to-Text
Depending on which service is to be used, the right model must be selected. It is important to consider whether the model supports the required functionality, how quickly a response is generated, how accurate it is and how much it will cost to use.
Chat completion
The chat completion service can be used to hold a conversation with an AI agent. However, it can also be used to develop autonomous services that execute business processes or generate code, for example. It is one of the most frequently used services when working with LLMs.
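As a rough sketch, and assuming a kernel configured with a chat model as shown above, a single conversational turn could look like this:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var chatService = kernel.GetRequiredService<IChatCompletionService>();

ChatHistory history = [];
history.AddSystemMessage("You are a concise assistant.");
history.AddUserMessage("What is function calling?");

// The whole history is sent to the model, which returns the next assistant message.
ChatMessageContent reply = await chatService.GetChatMessageContentAsync(history);
Console.WriteLine(reply.Content);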
Embedding generation
Embeddings are used when an LLM is to be provided with additional information. This can be business-specific data or information that did not exist at the time of training.
Embeddings are used when, for example, semantic search is required or recommendations are to be generated, as in a streaming service. Semantic Kernel also offers connectors for different types of vector databases.
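To illustrate, the following sketch generates embedding vectors for two texts. It assumes an Azure OpenAI embedding deployment (embeddingModelId, endpoint and apiKey are placeholders); note that the embedding connector is still marked experimental in current SDK versions, hence the pragma.
#pragma warning disable SKEXP0001, SKEXP0010
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Embeddings;

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAITextEmbeddingGeneration(embeddingModelId, endpoint, apiKey);
Kernel kernel = builder.Build();

var embeddingService = kernel.GetRequiredService<ITextEmbeddingGenerationService>();

// Each text becomes a vector that can be stored in a vector database and
// compared via similarity measures for semantic search or recommendations.
IList<ReadOnlyMemory<float>> vectors = await embeddingService.GenerateEmbeddingsAsync(
    ["How do I reset my password?", "Which movie should I watch tonight?"]);
Console.WriteLine($"Vector dimensions: {vectors[0].Length}");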
Image and text processing
A variety of use cases can be implemented with the image and text processing services. For example, read-aloud or translation functions can be added to increase the accessibility of an application.
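As one example, image-to-text can be realized with a vision-capable chat model. The following sketch reuses the chat service from the earlier example and a placeholder image URL to generate a description that could serve as alt text:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

ChatHistory history = [];
history.AddUserMessage(new ChatMessageContentItemCollection
{
    new TextContent("Describe this image in one sentence for a screen reader."),
    new ImageContent(new Uri("https://example.com/product-photo.jpg"))
});

ChatMessageContent description = await chatService.GetChatMessageContentAsync(history);
Console.WriteLine(description.Content);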
Plugins
Semantic Kernel uses a plugin mechanism to make functions available to an LLM. Plugins can be defined in different ways. The following plugin types are currently supported:
- Native Code
- OpenAPI Endpoints
- Azure Logic Apps
For a native plugin, the methods of a class are annotated with the "KernelFunction" attribute. This class can then be registered with the kernel as a plugin. The example code shows how the methods of a class are marked as kernel functions and how the plugin is added to the kernel.
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Minimal model for the example; the article does not show its definition,
// so this is an assumed shape.
public class LightModel
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public bool IsOn { get; set; }
}

public class LightsPlugin
{
    // Mock data for the lights
    private readonly List<LightModel> _lights =
    [
        new LightModel { Id = 1, Name = "Table Lamp", IsOn = false },
        new LightModel { Id = 2, Name = "Porch light", IsOn = false },
        new LightModel { Id = 3, Name = "Chandelier", IsOn = true }
    ];

    [KernelFunction("get_lights")]
    [Description("Gets a list of lights and their current state")]
    public Task<List<LightModel>> GetLightsAsync()
    {
        return Task.FromResult(_lights);
    }

    [KernelFunction("change_state")]
    [Description("Changes the state of the light.")]
    public Task<LightModel?> ChangeStateAsync(int id, LightModel lightModel)
    {
        var light = _lights.FirstOrDefault(light => light.Id == id);
        if (light == null)
        {
            return Task.FromResult<LightModel?>(null);
        }

        // Update the light with the new state
        light.IsOn = lightModel.IsOn;
        return Task.FromResult<LightModel?>(light);
    }
}
kernel.Plugins.AddFromType<LightsPlugin>("Lights");
If a plugin is to be created from an OpenAPI definition, the kernel must be provided with a link to it. From this, the kernel generates an HTTP client, which is made known to the LLM. If the LLM decides that one or more routes of the API need to be called, the kernel then executes these calls. The following example shows how such a plugin is added to the kernel.
await kernel.ImportPluginFromOpenApiAsync(
    pluginName: "lights",
    uri: new Uri("https://example.com/v1/swagger.json"),
    executionParameters: new OpenApiFunctionExecutionParameters()
    {
        // Determines whether payload parameter names are augmented with namespaces.
        // Namespaces prevent naming conflicts by adding the parent parameter name
        // as a prefix, separated by dots
        EnablePayloadNamespacing = true
    }
);
Azure Logic Apps are integrated in the same way as plugins based on OpenAPI definitions. The kernel requires a link that refers to the API definition. In addition, the metadata for the endpoints must be enabled in the Logic App so that a suitable HTTP client can be generated.
With plugins, it is generally important to ensure that function and argument names do not contain abbreviations or other ambiguities, as these make it very difficult for LLMs to interpret the scope of a function. Although additional descriptions can be added, they consume tokens with every request, which leads to additional costs.
As soon as a plugin returns data that the LLM then processes, you should think about IT security and data protection. Sensitive data should be anonymized where necessary, and only the data that is actually needed should be provided. This is also worth considering for cost reasons, as all data processed by the model incurs additional costs.
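A simple way to follow this principle in a native plugin is to project the internal entity onto a reduced type before returning it. The following sketch is purely illustrative; Customer, CustomerDto and the data source are hypothetical.
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Hypothetical internal entity with sensitive fields.
public sealed record Customer(int Id, string Name, string Tier, string Email, string Iban);

// Reduced type containing only what the model actually needs.
public sealed record CustomerDto(string Name, string Tier);

public class CustomerPlugin
{
    private readonly List<Customer> _customers = [];

    [KernelFunction("get_customer")]
    [Description("Gets the display name and subscription tier of a customer.")]
    public CustomerDto? GetCustomer(int id)
    {
        var customer = _customers.FirstOrDefault(c => c.Id == id);
        // E-mail and bank details are deliberately never exposed to the model.
        return customer is null ? null : new CustomerDto(customer.Name, customer.Tier);
    }
}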
Frameworks
In addition to the AI services already mentioned, Semantic Kernel offers two frameworks.
Agent Framework
The agent framework makes it possible to create AI agents for dedicated tasks that are processed autonomously or semi-autonomously. Agents can send and receive messages and thus collaboratively solve more complex tasks in a multi-agent conversation. Semantic Kernel comes with some predefined agents.
Chat Completion Agent
The chat completion service provides the basis for chat-based interaction with an AI model. Wrapping it in an agent simplifies the management of the chat history and makes it possible to integrate a chat service into a multi-agent conversation. The example shows how a chat completion agent can be created that uses the previously created plugin to implement light control.
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;

var builder = Kernel
    .CreateBuilder()
    .AddAzureOpenAIChatCompletion(modelId, endpoint, apiKey);
builder.Plugins.AddFromType<LightsPlugin>("lights");
Kernel kernel = builder.Build();

ChatCompletionAgent agent = new()
{
    Name = "LightAgent",
    Instructions =
        "You are an AI that assists the user with controlling the lights and shows the state of the lights. " +
        "You can turn the lights on and off and show the state of all available lights. " +
        "You are not supposed to do anything different. " +
        "If a user asks for something different, tell them so in a respectful way and explain what you can do for them. " +
        "Do not rely on your memory; always check the current state of the lights.",
    Kernel = kernel,
    Arguments = new KernelArguments(
        new AzureOpenAIPromptExecutionSettings()
        {
            FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
        })
};
ChatHistory history = [];
while (true)
{
    Console.WriteLine();
    Console.Write("> ");
    string? input = Console.ReadLine();
    Console.WriteLine();

    history.Add(new ChatMessageContent(AuthorRole.User, input));

    await foreach (ChatMessageContent response in agent.InvokeAsync(history))
    {
        Console.WriteLine($"{response.Content}");
    }
}
Assistant Agents
Semantic Kernel also offers implementations for assistant agents. These make it possible to interact easily with additional resources such as files. They also provide the option of generating and executing code. For examples of how to use the assistant agents, I recommend the official GitHub repository [https://github.com/microsoft/semantic-kernel/tree/main/dotnet/samples/GettingStartedWithAgents].
Process Framework
The process framework can be used to model business processes. It follows an event-driven architecture and thus makes it possible to map complex processes. Integration into the Semantic Kernel SDK means that the full capabilities of the kernel and AI services are available in each individual process step. This makes it very easy to link business processes and LLMs with each other.
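As a rough sketch of the idea, and noting that the process framework is still experimental (names and APIs may change), a minimal process with a single step could look like this; the process name, event name and IntakeStep are illustrative.
#pragma warning disable SKEXP0080
using Microsoft.SemanticKernel;

ProcessBuilder processBuilder = new("SupportProcess");
var intake = processBuilder.AddStepFromType<IntakeStep>();

// Route the external "RequestReceived" event to the intake step.
processBuilder
    .OnInputEvent("RequestReceived")
    .SendEventTo(new ProcessFunctionTargetBuilder(intake));

KernelProcess process = processBuilder.Build();
await process.StartAsync(kernel, new KernelProcessEvent { Id = "RequestReceived", Data = "My lights won't turn on." });

// A step is a plain class; each KernelFunction can be the target of an event.
public class IntakeStep : KernelProcessStep
{
    [KernelFunction]
    public void CaptureRequest(string request) => Console.WriteLine($"Received: {request}");
}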
Conclusion
Semantic Kernel offers a wide range of options for supporting and expanding an application with the help of AI. The simple configuration and usability make it easy to get started, allowing initial results to be achieved quickly. With support for C# and Java, there is also a wide range of possible applications for business software systems. The next part of this series will look at prompt templates and function calling. It will also show how filters help to make an AI application "business-ready".