Advanced AI Setup for AiGen and AiFind in CodeRush for Visual Studio
This post explores advanced setup options for CodeRush's AI-powered features. CodeRush currently supports two AI providers — OpenAI’s direct API and Microsoft’s Azure OpenAI Service. An API key is needed for either service.
Let's first enable AI features and select a service, then dive into the appropriate section below for the individual options of the service you select.
About AiGen & AiFind
CodeRush AI features are a powerful alternative to chat-based AI coding assistance. CodeRush AI assistance is generally:
- Faster (smaller context window - one question + one answer).
- Cheaper (fewer tokens, lower environmental impact).
- More integrated (multiple AI-driven responses are immediately integrated directly into the source).
- Safer (single undo/redo, language-specific agents check/condition response before insertion, AI usage is logged and auditable, supports Azure enterprise tenant so your source code is secure).
General Setup
To enable CodeRush AI features, follow these steps:
- Bring up the CodeRush Options dialog.
- Navigate to the "IDE->Cognitive->General" options page.
- Check the "Enable Cognitive Features" checkbox. You can also optionally enable voice features. For more about voice setup and troubleshooting, click here.
- Navigate to the "IDE->Cognitive->API Keys" options page.
- If you have an OpenAI direct API key, click "OpenAI (official API)". If you prefer to use Azure OpenAI, click "Azure Open AI (Microsoft-hosted)".
- Fill out the service-specific options. I'll dive into each of these options in the separate sections below. Jump to OpenAI direct API setup or Azure OpenAI setup.
- Enter your chosen service API key in the corresponding field below. API keys are stored on the local machine in the specified environment variables.
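Because the key is read from an environment variable, you can sanity-check that it is actually set before troubleshooting inside CodeRush. A minimal sketch, assuming a variable name like `OPENAI_API_KEY` (a common convention; use the variable name shown on your API Keys options page):

```python
import os

def check_api_key(var_name: str) -> str:
    """Return a masked preview of the key stored in var_name, or raise if unset."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; enter your key on the API Keys options page")
    # Show only the first and last few characters so the key never leaks into logs.
    return f"{key[:4]}...{key[-4:]} ({len(key)} chars)"
```

A masked preview like this is safe to paste into a bug report, unlike the raw key.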
OpenAI direct API setup is next. Or jump ahead to Azure OpenAI setup.
OpenAI direct API setup
- Bring up the CodeRush Options dialog.
- Navigate to the "IDE->Cognitive->API Keys" options page.
- Select the "OpenAI (official API)" service. You will need an API key from OpenAI.
- Select the desired model and fill out the service-specific options. The user interface will adapt to allow you to configure settings for the model you select.

Individual options are covered next.
Max Response Tokens
This sets the maximum length of the reply: the most tokens the model is allowed to generate. Larger values allow longer answers (but likely cost more), while smaller values produce shorter, faster replies. Since the response is often generated code, it's generally a good idea to use the maximum response token limit the model supports. I recommend 16k-32k for a solid experience, or an even higher limit if the model and your budget can support it. This field accepts whole numbers (e.g., "4096") as well as multiplier abbreviations ("k" for x1,000 and "M" for x1,000,000, as in 128k or 1M). Setting this field higher than your model supports may result in an error response from OpenAI; if you see one, simply dial back the number in this field until it is in range.
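The multiplier shorthand is easy to picture in code. This hypothetical helper (not CodeRush's actual parser) shows how entries like "4096", "128k", or "1M" map to whole-number token limits:

```python
def parse_token_limit(text: str) -> int:
    """Parse a token count like '4096', '128k', or '1M' into an integer."""
    text = text.strip()
    multipliers = {"k": 1_000, "M": 1_000_000}
    suffix = text[-1]
    if suffix in multipliers:
        # "128k" -> 128 * 1,000; "1M" -> 1 * 1,000,000
        return int(float(text[:-1]) * multipliers[suffix])
    return int(text)  # plain whole number, e.g. "4096"
```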
Token Pricing
This is the cost per 1M tokens (input & output). CodeRush uses these values to estimate the cost of each interaction, showing it in the AiGen Navigator's status bar and recording it in the logs. Input tokens (prompts + context sent to AI) and output tokens (the AI’s response) typically have different prices, so you’ll see two fields here. The values specified here don’t change how the API calls are made — it’s just for cost tracking and visibility.
Note that values entered here can be in any currency. Whatever currency you enter is the currency CodeRush uses in the logs and in the status bar display text.
Note also that when you switch models, CodeRush auto-fills these fields with the latest pricing as of our last build. You can check these prices for accuracy by clicking the link below the Token Pricing fields. You can also use this link to compare relative pricing across models (careful: some models can be significantly more expensive than others). You can find OpenAI's latest pricing here (the same link also appears on the API Keys options page).
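The estimate itself is simple arithmetic. A sketch of the calculation behind a per-interaction cost display like this one, with prices expressed per 1M tokens in whatever currency you entered (the token counts and prices below are made-up examples):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_1m: float, output_price_per_1m: float) -> float:
    """Estimate the cost of one interaction from token counts and per-1M prices."""
    return (input_tokens * input_price_per_1m
            + output_tokens * output_price_per_1m) / 1_000_000

# Example: 2,000 prompt tokens at 2.50/1M plus 8,000 response tokens at 10.00/1M
cost = estimate_cost(2_000, 8_000, 2.50, 10.00)  # 0.085 in your currency
```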
Temperature
This setting adjusts how predictable or creative the AI’s replies are. Lower values (near 0) produce more consistent, reliable results, while higher values (near 1) encourage more variation and creativity. This option is supported and visible when gpt-4.1 and earlier models are selected, and does not appear for gpt-5 models and up.
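For context, temperature is simply a field in the underlying request. A sketch of a Chat Completions request body for an older model, just to illustrate where the setting lands (the prompt, model, and limits CodeRush actually sends are internal to the product):

```python
import json

def build_chat_request(prompt: str, temperature: float = 0.2,
                       max_tokens: int = 16_000) -> str:
    """Build a Chat Completions request body with an explicit temperature."""
    body = {
        "model": "gpt-4.1",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # near 0: consistent; near 1: creative
        "max_tokens": max_tokens,
    }
    return json.dumps(body)
```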

Verbosity & Reasoning Effort
These controls let you tune how much detail the model includes in its answers (Verbosity) and how much internal effort it invests in solving the problem (Reasoning Effort). Lower settings are generally faster and cheaper, while higher settings provide more thorough, detailed, and accurate responses at the cost of speed and token usage. These options appear only when a gpt-5 or later model is selected in the options dialog; they are hidden for gpt-4.1 and earlier models.

Performance Note
Both Verbosity and Reasoning Effort affect performance and token cost:
- Higher verbosity means more tokens returned (increasing costs).
- Higher reasoning effort means more compute cycles occur inside the model, so responses may take noticeably longer.
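On newer models these two knobs also travel as request fields. A sketch of a Responses-API-style body carrying both settings; the field shapes follow OpenAI's published gpt-5 parameters at the time of writing, so verify them against the current API reference (and note that CodeRush's actual requests are internal):

```python
import json

def build_reasoning_request(prompt: str, verbosity: str = "medium",
                            effort: str = "medium") -> str:
    """Build a request body carrying verbosity and reasoning-effort settings."""
    body = {
        "model": "gpt-5",
        "input": prompt,
        "text": {"verbosity": verbosity},  # low / medium / high
        "reasoning": {"effort": effort},   # e.g. minimal / low / medium / high
    }
    return json.dumps(body)
```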
Once you have the options set as desired, jump down to the Quick Test section below for instructions on how you can try this out.
Deploying a new Azure Resource
Before you can use Azure OpenAI with CodeRush AiGen, you’ll need to create a new Azure resource (full, official Microsoft guide is here). This resource hosts your deployments (specific models such as gpt-4 or gpt-5) and provides the endpoint and API keys that CodeRush uses to connect.
If you already have an Azure OpenAI resource created, you can jump ahead to Azure OpenAI setup.
1. Sign in to the Azure Portal at portal.azure.com.
2. Click Create a resource, search for OpenAI.
3. Find the Azure OpenAI service, click Create, and select "Azure OpenAI".

4. On the Basics page:
- Select your Subscription.
- Choose or create a Resource group.
- Pick a Region that supports Azure OpenAI.
- Under the Instance Details section, enter a unique resource Name. This resource name will be used in the CodeRush options dialog.
- Leave the pricing tier at Standard S0.

5. Click Next until you land on the Review + submit page, then click Create at the bottom of the page. Deployment usually takes less than a minute.

6. Once deployment finishes, click Go to resource.

7. Select Explore Azure AI Foundry portal. This should open the AI Foundry in a new browser tab.
8. Inside AI Foundry, open the Chat Playground, click Create a deployment, choose a model (e.g., gpt-4.1 or gpt-5), and click Confirm.
9. Enter a Deployment name (or accept the default). This deployment name will be used in the CodeRush options dialog.

10. Click Deploy.
11. Back on the previous browser tab for your Azure OpenAI service, click the "Click here to manage keys" link. This brings up the Keys and Endpoint page, where you can copy your API keys (also needed for setup, below).
If you encounter any challenges setting up a new Azure OpenAI resource, be sure to check out the full, official Microsoft guide here.
Azure OpenAI Setup
Base Model
Resource Name
Deployment Name
API Version
Max Response Tokens
Token Pricing (Input/Output)
Verbosity, Reasoning Effort, and Temperature
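The Resource Name, Deployment Name, and API Version fields combine into the request URL for every call, which is why each must match your Azure resource exactly. A sketch of the standard Azure OpenAI endpoint pattern (the names and version below are placeholders; use the values from your own resource):

```python
def azure_openai_url(resource_name: str, deployment_name: str,
                     api_version: str) -> str:
    """Build the Azure OpenAI chat-completions URL from the three option values."""
    return (f"https://{resource_name}.openai.azure.com"
            f"/openai/deployments/{deployment_name}"
            f"/chat/completions?api-version={api_version}")
```

If CodeRush reports connection errors, comparing this URL shape against the Endpoint shown on your Keys and Endpoint page is a quick way to spot a mistyped resource or deployment name.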
Quick Test
Now with AI features enabled and an AI Service selected, it's time to test out the new features.
- First, create a new C# project (e.g., console or WPF).
- Place the caret inside some C# code.
- If using voice, tap and hold the right Ctrl key (like a double-tap, holding the second press) and say "I need a new customer class with all the usual properties" (releasing the Ctrl key when done speaking). Alternatively, press Caps+G to invoke the CodeRush AiGen Prompt window, enter the same text, and click Send.

After a few moments, you should have a new class named "Customer" added to the project with properties for Name, Email, Address, etc.
