Application Security — Why One Does Not Simply Protect a Data Store Connection String and Other Login Credentials

2 April 2026

A developer asks: “How do I protect my connection string in a desktop application?”

This is one of the most common security questions in .NET development, and it sounds like it should have a straightforward answer. But there is a fundamental problem we need to analyze.

When an application connects to a service (SQL Server, a REST API, any service at all), it needs credentials. That’s unavoidable.

Sometimes those are long-lived “root” credentials: usernames and passwords, API keys, client secrets. Sometimes they’re derived tokens with limited scope and lifetime. Sometimes they come from the environment, like Windows Authentication. The exact form doesn’t matter.

What matters is this:

At the moment your application uses those credentials, it has everything it needs to act on them.

And that leads directly to a point that is easily underestimated.

The reality of using credentials

If your application can use credentials, then anything that can control your application can use them too.

On a typical desktop system, the user fully controls the machine. That means they can inspect memory, attach a debugger, intercept calls, or simply drive the application in ways you didn’t intend.

To illustrate: a connection string used by an application through Entity Framework or ADO.NET can typically be found in plain text in a process memory dump, using standard tools like WinDbg or dotnet-dump. No reverse engineering required.

Managed enterprise environments can raise the cost of such attacks significantly, through tools like AppLocker, WDAC, or restricted user accounts. But the fundamental dynamic does not change: exploitation becomes more complex, yet it remains possible. This applies to WPF and Windows Forms applications, to other kinds of native applications, and also to server applications hosted by a web server.

If your application uses a secret to access a remote resource, you can try to hide the secret. You can encrypt it at rest. You can obfuscate it.

But none of that changes the core fact: the application itself is already authenticated, at the point where it works with the remote resource.

An attacker doesn’t need to extract the password if they can just make your application run the query, call the API, or perform the operation on their behalf.

This is a key point that many discussions miss. The problem is not just that secrets can be stolen. It’s that the application already embodies the permission those secrets grant.

So can we protect connection strings?

Not in the way people often expect. If someone can run code on the same machine, under the same user account as your application, they can make your application do anything it is allowed to do. That’s not a framework limitation or a missing feature. It’s simply how software works.

Microsoft’s own guidance reflects this reality indirectly. For example, Entity Framework explicitly recommends not storing connection strings with sensitive information directly in application configuration, and instead using more secure patterns where possible (see: Entity Framework Core connection string guidance). Guidelines for ASP.NET Core go in the same direction; you can read them here.

For development, Microsoft recommends the use of the Secret Manager tool. Note that it is meant to be used for development purposes only! Local storage mechanisms are for convenience and isolation - not for real security boundaries. The fundamental issue isn’t changed by the use of these tools: locally stored secrets are not safe from the user of the machine.
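As a sketch of that development-only workflow (assuming the Microsoft.Extensions.Configuration and Microsoft.Extensions.Configuration.UserSecrets packages, and a project already initialized with `dotnet user-secrets init`; the secret name is a placeholder):

```csharp
// Development only: read a secret stored by the Secret Manager tool.
// The secret lives unencrypted under the user profile (on Windows, under
// %APPDATA%\Microsoft\UserSecrets) - this is isolation, not protection.
//
// One-time setup from the project directory:
//   dotnet user-secrets init
//   dotnet user-secrets set "ConnectionStrings:Default" "<your connection string>"
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddUserSecrets<Program>() // reads secrets.json for this project's UserSecretsId
    .Build();

string? connectionString = config.GetConnectionString("Default");
```

The point of the tool is to keep secrets out of source control during development, nothing more.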

The .NET platform itself does not provide a single, clear, production-ready solution for secure storage on client machines. Its built-in configuration system can read values from many sources, but it does not make those sources secure.

In Microsoft’s more current guidance, the picture does not fundamentally change. For example, documentation for MSAL.NET, specifically for desktop apps, recommends persisting token caches locally and, on Windows, protecting them using DPAPI or similar mechanisms. In other words, the responsibility for secure storage remains with the application and the underlying operating system rather than the .NET framework itself.

Integrating facilities such as DPAPI or the Windows Credential Manager with application configuration often requires additional code or libraries, as there is no standard built-in bridge between secure OS storage and the .NET configuration system. The documentation for MSAL.NET, linked just above, mentions the Microsoft.Identity.Client.Extensions.Msal NuGet package, which provides persistence support for token caches. For general-purpose secret persistence, however, the best recommendation is the ProtectedData class. There are third-party NuGet packages in this space as well, but trustworthiness is a concern for such an important feature.
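As a minimal sketch of that recommendation (assuming the System.Security.Cryptography.ProtectedData NuGet package; Windows only, and the entropy value below is an illustrative placeholder, not a vetted choice):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class SecretStore
{
    // Optional entropy ties the ciphertext to this application.
    // Placeholder value, for illustration only.
    private static readonly byte[] Entropy = Encoding.UTF8.GetBytes("my-app-entropy");

    // Encrypt with DPAPI so that only the current Windows user can decrypt.
    public static byte[] Protect(string secret) =>
        ProtectedData.Protect(
            Encoding.UTF8.GetBytes(secret), Entropy, DataProtectionScope.CurrentUser);

    public static string Unprotect(byte[] encrypted) =>
        Encoding.UTF8.GetString(
            ProtectedData.Unprotect(encrypted, Entropy, DataProtectionScope.CurrentUser));
}
```

Note the limitation discussed throughout this post: DPAPI keeps the secret away from other users of the machine, but any code running as the same user, including an attacker’s, can call Unprotect just as easily.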

What actually helps

Once you accept that secrets inside a client application cannot be fully protected, the question changes. It’s no longer “how do I hide this credential?” but “how do I limit the damage when it’s used by the wrong person?”

There are two strategies, and they work best together.

Make credentials less worth stealing

If a credential is compromised, how much damage can it do? The answer should be: as little as possible.

  • Restrict permissions to the minimum required
  • Scope credentials to specific operations
  • Prefer short-lived tokens over long-lived root secrets

This is where modern identity systems earn their keep, whether cloud-based or self-hosted. OpenID Connect providers, managed identities, token services: they don’t hide secrets better. They issue weaker, narrower ones, secrets of much lower value to a potential hacker.
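For illustration, here is a hedged sketch of what “issuing a weaker secret” looks like with the OAuth 2.0 client credentials flow. The endpoint, client id, and scope are hypothetical placeholders for whatever your identity provider defines:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

static class TokenClient
{
    // Builds the standard client-credentials request body.
    public static FormUrlEncodedContent BuildTokenRequest(
        string clientId, string clientSecret, string scope) =>
        new(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = clientId,
            ["client_secret"] = clientSecret, // still a secret, but one of limited value
            ["scope"] = scope,                // narrow scope = limited damage
        });

    public static async Task<string> RequestTokenAsync(HttpClient http, string tokenEndpoint)
    {
        // Hypothetical endpoint, e.g. https://idp.example.com/oauth/token
        var response = await http.PostAsync(tokenEndpoint,
            BuildTokenRequest("desktop-app", "<client-secret>", "orders.read"));
        response.EnsureSuccessStatusCode();
        // The JSON body contains a short-lived access_token; parsing omitted here.
        return await response.Content.ReadAsStringAsync();
    }
}
```

The token that comes back expires quickly and only grants the requested scope, so stealing it is far less valuable than stealing a database password.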

Reduce the blast radius

If a credential or an application is compromised, how far can the damage spread? There are two ways to constrain it: change the architecture, or change the environment.

Changing the architecture is the most powerful option: instead of giving a client direct access to a database, you introduce a service layer. The client talks to an API, the API talks to the database.

Now the high-value credentials live in a controlled environment: on a server you manage, not on every user’s machine. The API of that server (the services it publishes) defines what is possible. A connection string allows arbitrary SQL. An API should not.

A service layer also solves another problem: it decouples client authentication from backend authentication. If your database or backend service only supports long-lived credentials, the API can still offer token-based, short-lived access to clients. The powerful credential stays on the server, the client never sees it.

The narrower and more specific your API is, the less damage can be done. Of course the client will still need credentials to access the API, but with a careful structure potential damage is very limited even if a client is fully compromised.
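To make the idea concrete, here is a hedged sketch of such a narrow endpoint as an ASP.NET Core minimal API. Route, table, and column names are hypothetical, and the sketch assumes Microsoft.Data.SqlClient plus the web SDK’s implicit usings:

```csharp
using Microsoft.Data.SqlClient;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// The high-value connection string lives only in server-side configuration.
app.MapGet("/api/my-orders", async (HttpContext ctx) =>
{
    await using var connection =
        new SqlConnection(app.Configuration.GetConnectionString("Orders"));
    await connection.OpenAsync();

    // Clients can trigger exactly this parameterized query - never arbitrary SQL.
    await using var command = new SqlCommand(
        "SELECT Id, Total FROM Orders WHERE UserName = @user", connection);
    command.Parameters.AddWithValue("@user", ctx.User.Identity?.Name ?? "");

    var results = new List<object>();
    await using var reader = await command.ExecuteReaderAsync();
    while (await reader.ReadAsync())
        results.Add(new { Id = reader.GetInt32(0), Total = reader.GetDecimal(1) });
    return Results.Ok(results);
});

app.Run();
```

Even a fully compromised client can now only ask for “my orders”; the connection string itself never leaves the server.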

This idea can also be extended to the client environment itself. In managed Windows environments, administrators can reduce the blast radius further by restricting what users are allowed to do on the machine: limiting which applications can be executed (e.g. via AppLocker or Windows Defender Application Control), enforcing restricted user accounts, or even locking systems into kiosk-style operation. These measures do not eliminate the underlying issue, but they can make it significantly harder to exploit in practice by removing the ability to run arbitrary code alongside the application.

A service layer also opens up an entire category of server-side protections that are simply impossible to enforce on a client machine: rate limiting, anomaly detection, device or IP binding, audit logging. These don’t prevent credential misuse on their own, but they make abuse observable and containable - and they only become options once privileged access has moved off the client.
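As one example from that category, here is a sketch of the built-in server-side rate limiting available in ASP.NET Core since .NET 7; the policy name and limits are illustrative:

```csharp
using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);

// Fixed-window limiter: at most 100 requests per minute under this policy.
builder.Services.AddRateLimiter(options =>
    options.AddFixedWindowLimiter("per-client", o =>
    {
        o.PermitLimit = 100;
        o.Window = TimeSpan.FromMinutes(1);
    }));

var app = builder.Build();
app.UseRateLimiter();

// Impossible to enforce on a client machine; trivial once access goes through the API.
app.MapGet("/api/my-orders", () => Results.Ok())
   .RequireRateLimiting("per-client");

app.Run();
```

A stolen credential that can make at most a hundred calls a minute, all of them logged, is a very different threat from one with unmetered database access.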

The path most people take

In practice, developers rarely jump straight to architectural changes. They tend to go through a series of steps.

First, they look for ways to avoid handling credentials at all, using Windows Authentication or similar mechanisms. If that works, it’s ideal.

If this is not an option, the second-best approach is to avoid embedding powerful credentials directly, introducing identity providers or token-based access instead.

As long as any credentials need to be stored on the client machine, secure storage becomes a concern - options have been discussed above.

At this point, many developers feel they’ve “secured” their application. But if you follow the logic from earlier, you arrive at an uncomfortable conclusion:

None of these steps prevent a determined attacker with control over the machine from using the application’s access.

And that’s where the final step becomes unavoidable.

When architecture is the only real fix

If the risk still matters - and in many systems it does! - the only meaningful improvement comes from changing the shape of the system.

Move privileged access away from the client. Introduce services. Narrow what those services do.

This doesn’t make your system invulnerable. But it changes the game from “anyone with access can do anything” to “even with access, only specific actions are possible.”

That’s a huge difference.

I strongly recommend reading our related blog series (starting with Modern Desktop Apps And Their Complex Architectures) and documentation pages about the Backend Web API Service, Data Access Security, and XAF Security Tiers. They go into more detail about how to design systems with these principles in mind. Of course this approach is complex and specific to your application, its architecture and requirements. Please don’t hesitate to reach out if we can help!

A common misconception: .NET "Secure" String

This comes up often enough to be worth addressing directly, in a few words.

SecureString was designed to minimize the time secrets exist in memory in plain text. In theory, that sounds helpful.

In practice, especially in the managed .NET environment, it does little to solve the problem. There are two reasons:

  • Most APIs that connect you to services require “normal” strings, so the secure in-memory representation SecureString offers must eventually be converted back into ordinary .NET strings. These conversions leave copies of the secret in managed memory, just as if you’d stored it there all along.
  • Even if this could be prevented, the application still performs authenticated operations. This leaves it open to control by a local user just like before.
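The first point can be demonstrated in a few lines. This sketch shows the conversion that virtually every consuming API forces on you:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

static class SecureStringDemo
{
    public static string ToPlainString(SecureString secure)
    {
        IntPtr ptr = IntPtr.Zero;
        try
        {
            // Copies the secret into unmanaged memory...
            ptr = Marshal.SecureStringToGlobalAllocUnicode(secure);
            // ...and this creates a brand-new managed string from it,
            // defeating the point of SecureString.
            return Marshal.PtrToStringUni(ptr);
        }
        finally
        {
            // Clears the unmanaged copy - but not the managed string above.
            Marshal.ZeroFreeGlobalAllocUnicode(ptr);
        }
    }
}
```

The returned string is garbage-collected like any other, so the secret now sits in managed memory for an unpredictable amount of time.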

So while it may reduce exposure in very narrow scenarios, it does nothing to address the fundamental issue: the application already has the capability to act. Finally, it's difficult to ignore the official Microsoft recommendations (one, two, three).
