New AiGen Functionality in CodeRush for Visual Studio

CodeRush's AiGen is Faster and Smarter for Everyday Development

AI-assisted coding tools live or die by two factors: speed and quality of the code response. If an AI response takes too long, or if it regenerates more code than necessary, developers won't use it for small, focused changes.
With this release, CodeRush’s AiGen takes significant leaps forward on both fronts. The improvements are all about making AI genuinely fast and useful during day-to-day development -- where precision, latency, and context matter most.
We'll cover what’s new in this release using real-world examples that you can easily try yourself (clone the sample project).

Improved Context Acquisition & Architectural Awareness

One of the challenges for AI coding assistants is how to get sufficient context for a correct code response, without exceeding the model's context window limits. Some AI tools solve this by placing the burden of specifying this additional context -- the files to include in a query -- on the user. This can get tedious when many files are involved. Simple AI tooling might see the file you’re editing, but miss surrounding architectural relationships -- interfaces, implementers, ancestors, descendants, helper classes, and corresponding test fixtures.
To meet this challenge, AiGen now starts each request with a lightweight context-acquisition pass which:
  • Identifies which related types matter for the request
  • Traverses the solution hierarchy as needed
  • Optimally packs a reasonable portion of the context window with only the most relevant information
  • Sends that optimized and packed context to the reasoning model
The result is higher quality code -- without you having to name (or worry about missing) related types to submit to the model explicitly.

Example: Consolidating Validation Logic Using the Type Hierarchy

In the sample project, open the ContextAcquisition folder and navigate to OrderValidator.cs.
Inside the ValidateCore() method, you’ll see some validation logic.
protected override void ValidateCore(Order order, ValidationResult result) {
    if (order is null) {
        result.Add("Order is required.");
        return;
    }

    // TODO: Ask AiGen to consolidate this based on what we have in the base class.

    var customer = order.Customer;

    if (customer is null) {
        result.Add("Customer is required.");
        if (StopOnFirstError) return;
    }

    if (customer?.BillingAddress is null) {
        result.Add("Billing address is required.");
        if (StopOnFirstError) return;
    }

    if (string.IsNullOrWhiteSpace(customer?.BillingAddress?.CountryCode)) {
        result.Add("Billing address country is required.");
        if (StopOnFirstError) return;
    }

    if (string.IsNullOrWhiteSpace(order.OrderId)) {
        result.Add("OrderId is required.");
    }
}
This logic duplicates some of the behavior already implemented in the ancestor class (BaseValidator<T> -- check out its RequireCustomer() method).
// Shared helpers that derived validators can (and should) reuse.
protected void RequireCustomer(Customer? customer, ValidationResult result) {
    if (customer is null) {
        result.Add("Customer is required.");
        if (StopOnFirstError)
            return;
    }

    if (customer?.BillingAddress is null) {
        result.Add("Billing address is required.");
        if (StopOnFirstError)
            return;
    }

    if (string.IsNullOrWhiteSpace(customer?.BillingAddress?.CountryCode)) {
        result.Add("Billing address country is required.");
    }
}
This kind of duplication can happen in systems that evolve over time.
Back in OrderValidator.cs, place your caret inside the ValidateCore() method. Rather than telling AiGen exactly how to fix it, we can talk to it the way we might speak to a human teammate.

Invoke AiGen (double-tap and hold the right Ctrl key if speaking or press Caps+G if you prefer to type -- see AiGen setup for more details). Then say something like (one of the following):

“Consolidate this logic with what we already have in the base class.”

“Take a look at the ancestor class and see if we can reuse any of that code here.”

Notice that the prompts are devoid of symbol and file names. They're just expressions of intent.
AiGen uses the hierarchy to discover the relevant base class, identifies the overlapping logic, and refactors the code so the shared behavior lives where it belongs -- in the base class -- while keeping order-specific checks local.

After applying this change, the AiGen Navigator will appear. You can click the "Show Difference View" button to see the before and after for the changes applied to this method:

Here, AiGen consolidates the duplicated validation logic into a single call to RequireCustomer(), while maintaining the existing StopOnFirstError behavior.
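For reference, the consolidated result might look roughly like the following sketch. The types here (Order, Customer, ValidationResult, BaseValidator<T>) are simplified stand-ins, and the Errors.Count check is one hypothetical way to preserve the StopOnFirstError early exit -- the actual generated code depends on the sample project's definitions and the model's output.

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-ins for the sample project's types.
public class Address { public string? CountryCode { get; set; } }
public class Customer { public Address? BillingAddress { get; set; } }
public class Order { public Customer? Customer { get; set; } public string? OrderId { get; set; } }

public class ValidationResult {
    public List<string> Errors { get; } = new();
    public void Add(string message) => Errors.Add(message);
}

public abstract class BaseValidator<T> {
    protected bool StopOnFirstError { get; set; }

    // Shared helper (mirrors the base class shown above).
    protected void RequireCustomer(Customer? customer, ValidationResult result) {
        if (customer is null) {
            result.Add("Customer is required.");
            if (StopOnFirstError) return;
        }
        if (customer?.BillingAddress is null) {
            result.Add("Billing address is required.");
            if (StopOnFirstError) return;
        }
        if (string.IsNullOrWhiteSpace(customer?.BillingAddress?.CountryCode))
            result.Add("Billing address country is required.");
    }
}

public class OrderValidator : BaseValidator<Order> {
    // Consolidated: the duplicated customer checks collapse to one helper call.
    public void ValidateCore(Order? order, ValidationResult result) {
        if (order is null) {
            result.Add("Order is required.");
            return;
        }

        RequireCustomer(order.Customer, result);
        // Hypothetical check to preserve the early-exit behavior of the original.
        if (StopOnFirstError && result.Errors.Count > 0) return;

        if (string.IsNullOrWhiteSpace(order.OrderId))
            result.Add("OrderId is required.");
    }
}
```

Note that the order-specific OrderId check stays local, while the shared customer checks now live only in the base class.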

The AiGen Navigator's status bar shows total in/out token counts for both the initial context pass ("Ctx in/out") and the main reasoning model ("Reasoning in/out"). Output token counts are low, as they should be (only 67 output tokens were used by the reasoning model to produce the minimal, targeted change shown in the screenshot). Output tokens typically cost four times more than input tokens, so we want to keep the output token count low. The status bar also shows estimated cost (about 3 cents) and how long it took from request to integrated response (2.5 seconds, in this example). If you want to see how CodeRush stacks up against competing AI tooling, note the AiGen completion time in the status bar, undo this last change, and try the same prompt on the same code in any other AI coding assistant.

Speed Where It Matters: Fine-grained Deltas

Most AI coding assistants treat every change the same way: they regenerate large chunks of code, even when only a few lines actually need to change. This approach can break down quickly in real projects -- especially when methods are large and the requested change is small. With most AI tooling it just doesn't make sense to ask AI for small changes when you'll have to wait for an entire method or even a type to be regenerated.
CodeRush's AiGen takes a different approach.
Instead of regenerating entire members, AiGen produces only the smallest deltas needed, supporting fast, surgical code integration without manual copy/paste.
This impacts:
  • Speed — generating fewer output tokens (to support the fine-grained delta) is faster than regenerating all the code that doesn't change.
  • Cost — output tokens are significantly more expensive than input tokens.
  • Precision — small, focused changes reduce churn and unintended side effects.

Example: Small Business Rule Change Inside a Large Method

In the sample project, open the FineGrainedDeltas folder and navigate to OrderTaxCalculator.cs. Notice the ComputeTaxes() method is non-trivial and could resemble real production code, with multiple early exits, counters, and guard clauses.
Inside the main loop, you’ll see a TODO describing a business rule:
// TODO: Apply the customer's discount tax policy when calculating taxableBase.

This is a subtle rule change. We don’t want to rewrite the method -- we just want to adjust how the taxable base is calculated under specific conditions.

Let's delete the TODO comment and invoke AiGen. You can double-tap the right Ctrl key and keep it held down while using a natural, descriptive prompt such as:

Let's incorporate the customer discount policy.

AiGen should:

  • Change only the minimal code region required
  • Leave the rest of the method untouched
  • Apply the update directly in the editor (no manual copy/paste)

Turning this:

decimal taxableBase = order.Subtotal - order.DiscountAmount;

into something like this:

decimal taxableBase = customer.DiscountPolicy switch {
    Customer.Discounts.ReduceTaxableBase => order.Subtotal - order.DiscountAmount,
    Customer.Discounts.FullyTaxable => order.Subtotal,
    _ => order.Subtotal - order.DiscountAmount
};

Note both the size of the method and how little code AiGen needs to generate to apply the change.

The prompt doesn’t require exact symbol names or structured references — we describe intent, and AiGen resolves the implementation from surrounding context.

This example demonstrates fine-grained deltas in practice: smaller outputs, lower token usage, reduced latency, and a more immediate turnaround — especially when working inside large methods.

After AiGen applies the change, you can find evidence of the smaller delta in two places:

  1. AiGen Navigator status bar: Check the Reasoning out token count and the elapsed time (114 output tokens and 3.2 seconds, respectively, in the screenshot below). Fine-grained deltas yield small output token counts -- typically far smaller than the count required to regenerate an entire method.

  2. Editor selection: Selecting a change in the AiGen Navigator (e.g., "∆ selection") highlights the inserted/modified region, making the delta boundary immediately visible. In the screenshot below, the selection represents the small portion of the method that was regenerated. The rest of the method was untouched by AI.


This is why AiGen feels fast even when working inside large blocks of code: you’re not overpaying (in time or tokens) for code that isn’t changing.

What's also exciting about these optimized deltas: you are free to edit (or apply tooling to) surrounding code while waiting for the AI response to return.

Which brings us to our next topic...

Working While AI Is Running: In-Flight Changes & Conflict Navigation

One of the most frustrating aspects of modern AI-assisted coding tools is that they repeatedly block your flow. You ask for help, then you wait -- hands off the keyboard -- until the response finally returns.
AiGen removes that bottleneck.
When AiGen requests are launched, you’re free to continue coding. You can edit the same file, move elsewhere in the solution, or even launch additional coding agents.  AiGen anchors each request to the code state at launch and safely validates those changes on return.
When changes are non-overlapping -- as they can be in intentional multi-agent workflows -- AiGen applies the results cleanly and without interruption. 
For example, you can ask AiGen to modify one part of a C# method while you continue working in another part of the same method.
If in-flight edits overlap with part of a reasoning response's intended changes, AiGen withholds only the conflicting change and surfaces both the reason and the change it was prepared to apply. We'll see an example of this in a moment.

Scenario A: Non-Conflicting Changes -- AI + Human

Navigate to the InFlightEdits folder and open OrderSubmissionService.cs. The Submit() method contains several failure points -- exactly the kind of code where you might want better diagnostics. To prepare for this simultaneous edit demo, first, copy the following text to the clipboard:

Logs any failures.

Next, place the caret anywhere inside the Submit() method and invoke AiGen with a request like:
“Add logging around failures in this method.”
While the AI request is running, move the caret to the XML documentation comment and paste the clipboard text at its end. The goal is to make this change before the AI response lands (if AI is faster, you can undo and try again -- you typically have 2-5 seconds to make the change while this logging request is in-flight).
When the AI response lands, you’ll see:
  • The logging changes are applied correctly.
  • Your manual documentation edit is preserved.
  • No manual merging is required.
AiGen doesn’t lock the editor or assume code is frozen while it works. It assumes you’re still working -- because you are.

Scenario B: Parallel, Non-Conflicting Changes -- Multiple AI Agents

Let's undo our recent changes and restore OrderSubmissionService.cs to its original cloned state.
For this next scenario, we'll launch two AI agents simultaneously to work on the same method.
The first agent will add logging (like we just did in the last scenario), and the second agent will add telemetry so we can track orders (using the TrackOperationAttribute declared in the Shared folder).

Start by opening TrackOperationAttribute.cs from the Shared folder and examining the class. This is the custom attribute we'll use for telemetry. Note that it includes properties for:

  • Name -- describes whatever we're tracking
  • Category -- groups related telemetry events

Next, switch back to OrderSubmissionService.cs and place the caret in the Submit() method. The next two steps should be performed back to back.

  1. Launch the first agent with: “Add logging around failures in this method.”
  2. Launch the second agent with: "Add the track operation attribute. Category is orders."

The goal here is to start up a second agent while the first is still in flight. If the first AI response lands before you can launch the second, perform an undo (close the AiGen Navigator) and copy the second prompt to the clipboard. Then, after launching the first agent by voice, invoke the second agent with Caps+G (plus a paste).

When multiple AI responses land, the AiGen Navigator shows multiple landings using tabs. In the screenshot below, notice the agents landed in reverse order from take-off, with the tracking attribute finishing first.

Notice AiGen correctly discovered the TrackOperationAttribute from the Shared folder and added it with appropriate values for both Name and Category. Also note the orange arrow in the screenshot above -- it points to the second AI landing ("Failure logging" in this example -- your tabs may have different tab titles and agents may complete in a different order than they were launched).

AiGen Navigator tabs may display one or more of the following icons:

  • ⭐ Star -- the landing has not yet been viewed (typically a recent result).
  • ✔️ Check -- all changes were successfully integrated (no conflicts).
  • ❗ Exclamation -- one or more conflicts occurred on landing.
Notice that:
  • One agent modifies the method body (logging)
  • The other modifies the method metadata (the attribute)
Because these changes target different structural regions, both land successfully without conflict.
Skilled developers can safely run multiple AI agents in parallel when:
  • The requested changes do not overlap, and
  • Each agent’s task can be completed independently of the others.
Note: When multiple in-flight AI agents complete their changes, undo follows landing order, not launch order — the most recently applied change is undone first. This mirrors standard Visual Studio editor behavior and keeps AI changes fully integrated into the undo stack.

Scenario C: Conflicting Changes

Now let’s look at what happens when a conflict does occur.
Start by undoing the previous edits to restore OrderSubmissionService.Submit() to its original state.
Next, we'll launch the same AI request as before:
“Add logging around failures in this method.”
While the request is in flight, make a meaningful overlapping change to one of the failure points. For example, replace a throw with an early return null;.
When the reasoning response lands, AiGen detects that one of the targeted code blocks has changed since the request was launched. Rather than guessing or overwriting your work, AiGen blocks only the conflicting integration while allowing all non-conflicting changes to apply normally.
In the screenshot below, notice that logging was successfully added around every remaining throw statement (the five "selection" entries marked with only the delta symbol). Only the single failure point we modified in our example was held back.
For the blocked change, AiGen shows:
  • The original code as it existed when the request was launched
  • The current code at apply time
  • The reasoning model's attempted replacement
This makes the conflict isolated, explicit, and safe to reason about. You can clearly see:
  • which change was blocked,
  • why it was blocked, and
  • exactly what AiGen was prepared to do.
Because AiGen typically produces fine-grained deltas, conflicts are isolated to the smallest possible change. A single overlapping edit does not invalidate the rest of the AI response -- only the affected update is held back, while all other safe modifications are integrated.

The conflict report for the blocked delta shows the original code at request time, the current code at apply time, and the attempted replacement. You might see something like this:

The change for this member was skipped because the target code changed in flight.

Original code at request time:
if (order.Customer is null)
  throw new InvalidOperationException("Customer is required.");

Current code at apply time:
if (order.Customer is null)
  return null;

Attempted replacement:
if (order.Customer is null) {
  Console.Error.WriteLine($"[{nameof(OrderSubmissionService)}] Order submission failed: {nameof(order.Customer)} is required. OrderId='{order.OrderId ?? "<null>"}'.");
  throw new InvalidOperationException("Customer is required.");
}
Original and current code blocks must match on landing.

Everything else proceeds as expected -- no rollback, no guesswork, and no silent overwrites. This section demonstrated how AiGen behaves when code changes while AI requests are in flight -- including parallel agents and isolated conflicts.

Runtime State to Tests: Debug-Time Object Reconstruction

So far, all of the examples have focused on static code -- types, methods, and edits driven by what’s visible in the editor. The next capability goes a step further by incorporating runtime state.
When you invoke AiGen while stopped at a breakpoint, it can now inspect the live debug-time values in scope and include them (if appropriate) as part of the request. That allows the reasoning model to also work from actual execution data, not just source code.
This is especially powerful when the issue you’re investigating only becomes obvious at runtime.

Example: Turning a Debug-Time Bug into a Targeted Test

From the DebugRuntimeState folder, open OrderAddressFormatter.cs. The BuildShippingLabel() method formats a shipping label from customer and address data.
Place a breakpoint on the final line of this method and run the program. When execution stops, inspect the values inside the Visual Studio debugging environment. In this scenario, you’ll notice that:
  • Region is empty or whitespace
  • The computed label (which includes cityRegionPostal) contains a dangling comma and extra spaces
The bug is subtle and data-dependent. You could fix it immediately -- but it’s a good idea to capture the behavior in a test first, so it doesn’t regress.
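The underlying defect pattern is easy to sketch. The code below is an illustrative guess at the kind of formatting logic involved, not the sample's literal source: a format string that assumes Region always has a value, plus one possible fix.

```csharp
using System;

public static class LabelFormatting {
    // Buggy: a blank region leaves a dangling comma and consecutive spaces.
    public static string BuildCityRegionPostal(string city, string region, string postal)
        => $"{city}, {region} {postal}";

    // One possible fix: only join with a comma when a region is present.
    public static string BuildCityRegionPostalFixed(string city, string region, string postal) {
        var cityRegion = string.IsNullOrWhiteSpace(region) ? city : $"{city}, {region}";
        return $"{cityRegion} {postal}";
    }
}
```

With region = " ", the buggy version produces a fragment containing a dangling comma followed by several spaces, while the fixed version yields "Seattle 98101". The workflow in this section captures the buggy behavior in a test before any fix is applied.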
While still stopped at the breakpoint (with the caret inside the BuildShippingLabel() method), invoke AiGen and say:
“Create a test case for this method based on these debug-time parameter values. Add asserts to make sure the label has no double spaces and no dangling comma when the region is blank.”
While waiting for the response, you might want to drill into the order parameter. It has a Customer property that in turn holds the BillingAddress with the empty region that led to this bug.
AiGen will:
  • Reconstruct the runtime object graph for the order parameter using live debug-time values
  • Understand the object graph in scope (including nested properties/objects)
  • Locate the appropriate OrderAddressFormatterTests fixture
  • Generate a new xUnit test that reproduces the observed state and behavior
  • Add targeted assertions that detect the formatting defect
You should get a test case like this (note the object graph reconstruction at the top, which recreates the somewhat sophisticated debug-time state that exposed the bug):
[Fact]
public void BuildShippingLabel_RegionBlank_NoDoubleSpaces_AndNoDanglingComma() {
    // Arrange (debug-time values)
    var address = new Address {
        City = "Seattle",
        Region = " ", // blank/whitespace
        PostalCode = "98101",
        CountryCode = "US",
        Line1 = "123 Example St"
    };
    var customer = new Customer { DisplayName = "Ada Lovelace", BillingAddress = address, Id = "C-42" };
    var order = new Order {
        Customer = customer,
        DiscountAmount = 10m,
        Subtotal = 120m,
        OrderId = "DBG-1001",
        TaxAmount = 0m
    };

    var formatter = new OrderAddressFormatter();

    // Act
    string label = formatter.BuildShippingLabel(order);

    // Assert
    Assert.DoesNotContain("  ", label);       // no double spaces anywhere
    Assert.DoesNotContain(", ", label);       // no dangling comma + space
    Assert.DoesNotMatch(@",\s*(—|$)", label); // no comma before em-dash or end
    Assert.DoesNotMatch(@",\s+\d", label);    // no comma followed by spaces then digits (e.g., ",  98101")
    Assert.Contains("Seattle", label);
    Assert.Contains("98101", label);
}

This workflow makes it practical to capture real-world edge cases the moment they are discovered. Instead of manually attempting to duplicate observed state, you can promote live runtime data directly into a durable, repeatable test.

After creating the new test case, you can stop debugging and run the test if you like (it should fail, since the issue we discovered in the sample code hasn't been fixed yet).

Why This Matters

This capability closes a familiar gap in debugging workflows. Developers often discover bugs while stepping through code, but translating those live-data observations into tests is tedious and error-prone.
By grounding test generation in captured debug-time values, AiGen makes it practical to:
  • Easily capture real-world edge cases
  • Generate focused, high-signal tests
  • Preserve the conditions that exposed the issue
It’s a natural extension of pair programming: you identify the problem while debugging, and AiGen helps you lock it down with a test before you move on.

Large-Scale Architectural Changes: Evolving Interfaces Across Many Implementations

So far, we’ve focused on fast, fine-grained edits -- the kind that make AI practical for everyday work. But AiGen is not limited to small changes. It can also perform broad, cross-cutting architectural updates that span multiple files, types, and layers of a solution.
This next example demonstrates AiGen’s ability to generate new types in bulk and then evolve an interface contract across all of its implementers.

Example: Generating and Evolving Order Rules

In the sample project, open the ArchitecturalEdits folder and navigate to IOrderRule.cs. You’ll see a minimal interface:
namespace CodeRush.AiGen.Main.ArchitecturalEdits;

public interface IOrderRule {
    RuleResult Apply(Order order);
}
Place your caret inside the interface and invoke AiGen with the following prompt:
“I need ten non-trivial implementers of this interface. Put them in the rules namespace.”
Unlike previous demos, this request generates multiple new classes, each implementing distinct business logic. Because the reasoning model is synthesizing several non-trivial implementations, this step may take longer than the earlier examples.
When the request completes, AiGen creates a new Rules namespace containing ten concrete implementations of IOrderRule -- each reflecting realistic order-processing concerns such as fraud detection, pricing validation, eligibility checks, and fulfillment constraints.
This demonstrates AiGen’s ability to:
  • Generate multiple production-quality types in a single request
  • Maintain a shared contract across many implementations
  • Respect namespace organization and solution structure
  • Coordinate multi-file changes across a codebase
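One generated rule might resemble the sketch below. RuleResult's shape here is a guess, the surrounding types are stand-ins, and the rule name and threshold are invented for illustration -- actual output varies per run and per model.

```csharp
// Stand-ins for the sample project's types.
public class Order { public decimal Subtotal { get; set; } public decimal DiscountAmount { get; set; } }

// Hypothetical shape for RuleResult; the sample project's type may differ.
public sealed record RuleResult(bool Passed, string? Message) {
    public static RuleResult Pass() => new(true, null);
    public static RuleResult Flag(string message) => new(false, message);
}

public interface IOrderRule {
    RuleResult Apply(Order order);
}

// Hypothetical example of one of the ten generated rules.
public sealed class HighValueOrderReviewRule : IOrderRule {
    public RuleResult Apply(Order order) =>
        order.Subtotal - order.DiscountAmount > 10_000m
            ? RuleResult.Flag("High-value orders require manual review.")
            : RuleResult.Pass();
}
```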

Evolving the Contract Across the Hierarchy

Now return to IOrderRule.cs and invoke AiGen again with:
“Add two properties -- name and description -- and update all implementers.”
AiGen updates the interface to include the new properties, then propagates those changes across all existing rule implementations -- ensuring each class remains compliant with the updated contract.
Because this step modifies existing code rather than generating new types, it typically completes much faster than the bulk-creation step.
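After this step, the interface and each implementer would carry the two new members. The sketch below uses stand-in types; the exact property names, documentation, and wording AiGen produces may differ.

```csharp
// Stand-ins for the sample project's types.
public class Order { }
public sealed record RuleResult(bool Passed);

// The evolved contract: two new read-only properties.
public interface IOrderRule {
    string Name { get; }
    string Description { get; }
    RuleResult Apply(Order order);
}

// Each existing rule is updated to satisfy the new members (hypothetical example).
public sealed class ExampleRule : IOrderRule {
    public string Name => "Example rule";
    public string Description => "Illustrates the evolved IOrderRule contract.";
    public RuleResult Apply(Order order) => new(true);
}
```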
Together, these prompts demonstrate AiGen’s ability to perform:
  • Large-scale architectural refactorings
  • Contract evolution across many implementers
  • Coordinated multi-file updates without manual intervention
  • High-level structural changes (in addition to surgical edits)
Earlier examples highlighted AiGen’s speed and precision for small changes. This scenario shows the other side of the system -- the ability to reshape architecture across an entire subsystem when needed.

Fixing Code Using Active Compiler Errors

Until now, we’ve focused on how AiGen understands source code, hierarchy, and runtime state. But AiGen can also reason over active compiler errors -- both at the caret and across the Visual Studio Error List.

This allows you to resolve broken code using short, natural prompts, without needing to restate what the error is or why it occurred.

Example: Fixing a Real Compiler Error with “Fix this”

In the sample project, open the ActiveErrorAnalysis folder and navigate to OrderQueryService.cs.
At the top of the file, you’ll see a demo toggle:
// Demo toggle:
// TODO: Uncomment the line below to intentionally introduce a compiler error on the `where` clause, below.
//#define DEMO_ACTIVE_ERRORS
Uncomment the #define line to introduce the error, then place your caret on the Where() call in the broken LINQ query and invoke AiGen. Simply say:
“Fix this.”
There’s no need to explain the problem. AiGen inspects the active compiler diagnostics, analyzes the surrounding code, and determines the appropriate fix.
Here is a typical AiGen response to this error and the surrounding code:
public async Task<int> GetHighValueOrderCountAsync(decimal minSubtotal) {
    var orders = await GetOrdersAsync().ConfigureAwait(false);
    return orders.Count(o => o.Subtotal >= minSubtotal);
}
The fix is driven by real compiler feedback, not by a manually described problem.
This demonstrates AiGen’s ability to:
  • Read live compiler diagnostics from the editor
  • Infer intent from the actual error, not just the source text
  • Generate a non-trivial corrective refactor with a minimal prompt
  • Resolve failures without requiring developers to restate what the compiler already knows
In practice, this makes AI useful not just for writing new code -- but also for repairing broken code quickly and contextually.

What This Update Unlocks

Taken together, these examples illustrate a shift in how AI assistance fits into everyday development.
AI assistance is no longer something you reach for only when you’re willing to pause, wait, and review a large block of regenerated code. With faster, smarter context acquisition, fine-grained deltas, conservative integration, and awareness of both code and runtime state, it also becomes practical to use AI for small, precise tasks -- the kind that make up most real work.
In practice, this means:
  • You can ask for surgically precise changes inside large methods without paying or waiting for redundant regeneration.
  • You can continue working while AI requests are in-flight, or even launch multiple agents in parallel when tasks impact non-overlapping areas of code.
  • Individual conflicts -- caused by overlapping edits while AI requests are in-flight -- are isolated and surfaced in the UI, rather than forcing all-or-nothing outcomes.
  • Debug-time state observations can be turned into targeted tests, grounded in real execution data.
None of this requires verbose prompts, detailed instructions, or explicitly naming symbols or files. AiGen is designed to infer intent from context -- types, hierarchy, editor state, and runtime values -- so you can speak naturally and stay in flow.
If you’re an existing CodeRush user, these capabilities are available now in the latest release. If you’re new to CodeRush (it's free on the Visual Studio Marketplace), the sample project referenced in this post is a good place to start exploring how AiGen behaves in real scenarios.
AI tooling works best when it’s fast, precise, and respectful of the code you already have. This release is about moving closer to that ideal -- so AI becomes something you use routinely, not something you have to plan around.

Limitations

AiGen is optimized to work fast in C# files and XAML. However, as with any tooling involving AI models, AiGen has some limitations:

  • AiGen can generate new files in any programming language and add them to the active project, but it can only modify C# and XAML files.
  • Multi-agent support and editing while AI is in-flight are currently available in C# only. For XAML, wait for the AI response to land before making changes to the same XAML file.
  • Blazor support is pending.
  • There are some reasonable depth limitations on reconstructing debug-time object graphs and on traversing class hierarchies for the context window.
  • AiGen cannot delete files or remove files from the project.
  • Aside from adding new files and installing NuGet packages, AiGen cannot modify the project file or the solution file (so there's currently no way to create or add new projects, or to add/modify project properties).
  • AiGen is not a chat-based tool, and there is no history/memory of previous actions. Context is derived entirely from your prompt, the editor view, surrounding code, the solution source, debug values, etc.
  • The usual limitations and disclaimers on the power of AI itself apply. Sometimes AI "hallucinates" (this is especially true when generating code based on frameworks that have changed or where API evolution has introduced breaking changes over time). It is also possible for a request to be so complex that AI has difficulty implementing it correctly.

Free DevExpress Products - Get Your Copy Today

The following free DevExpress product offers remain available. Should you have any questions about the free offers below, please submit a ticket via the DevExpress Support Center at your convenience. We'll be happy to follow up.