
VS Code Setup for AI Development and Cybersecurity

Posted: April 14, 2026 to Technology.

Visual Studio Code is the most widely used code editor in the world, and for good reason. It is free, cross-platform, extensible, and backed by an ecosystem of over 50,000 extensions. But the default installation is a blank canvas. If you work in AI development, machine learning, or cybersecurity, you need a deliberate setup that turns VS Code from a generic text editor into a purpose-built workstation for your domain. This guide covers the exact extensions, configurations, and workflows our engineering team uses daily for AI research, model development, penetration testing, and compliance work.

At Petronella Technology Group, our engineers use VS Code across Linux, macOS, and Windows for everything from training machine learning models on remote GPU servers to analyzing malware binaries and writing compliance documentation. We have tested hundreds of extensions and configuration approaches. This guide reflects what actually works in production workflows, not theoretical recommendations. Every extension ID, settings.json snippet, and keybinding listed here is something our team runs daily.

Why VS Code for AI Development and Cybersecurity

The core advantage of VS Code for AI and security work is not any single feature. It is the combination of four capabilities that no other editor matches simultaneously: extensibility, remote development, Jupyter integration, and terminal integration.

Extensibility means you can assemble a completely different tool depending on your workflow. An AI researcher installs Python, Jupyter, and Copilot extensions. A penetration tester installs Hex Editor, REST Client, and YAML support for Ansible playbooks. A DevSecOps engineer installs Docker, Kubernetes, and Snyk. The same editor serves all three roles because the extension model is modular rather than monolithic.

Remote development is the feature that changed how our team works. VS Code can open a full editing session on a remote server over SSH, inside a Docker container, or in a WSL instance. The editor runs locally on your laptop, but the language servers, linters, debuggers, and terminal all execute on the remote machine. This means you can write and debug Python on a GPU server with 8 NVIDIA A100s from a MacBook Air over a coffee shop Wi-Fi connection. The experience is identical to local development. If you work with dedicated AI development systems, Remote SSH is what ties your local editor to those machines.

Jupyter integration brings notebook-style interactive computing directly into the editor. You do not need to run a separate Jupyter server in a browser tab. VS Code renders notebook cells, executes them against local or remote kernels, displays inline visualizations, and provides a variable explorer. For exploratory data analysis and model prototyping, this eliminates the context-switching penalty of jumping between an editor and a browser.

Terminal integration might seem trivial, but the implementation matters. VS Code supports multiple terminal instances, split panes, and custom shell profiles. You can run a training job in one terminal, monitor GPU utilization with nvidia-smi in another, and tail logs in a third, all without leaving the editor. Combined with tmux, this creates a self-contained workspace that persists across disconnections.

Essential Extensions for AI and Machine Learning

These are the extensions our AI development team installs on every workstation. Each entry includes the extension ID so you can install from the command line with code --install-extension <id>.
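Installing the full list by hand is tedious, so we script it. A minimal sketch that builds one code --install-extension invocation per ID and runs them only if the VS Code CLI is on PATH (trim the list to your role):

```python
import shutil
import subprocess

# A subset of the extension IDs from this guide; adjust to your workflow.
EXTENSIONS = [
    "ms-python.python",
    "ms-python.vscode-pylance",
    "ms-toolsai.jupyter",
    "ms-vscode-remote.remote-ssh",
    "ms-azuretools.vscode-docker",
]

def install_commands(ids):
    """Build one `code --install-extension` invocation per extension ID."""
    return [["code", "--install-extension", ext] for ext in ids]

if __name__ == "__main__" and shutil.which("code"):
    # Only attempt installation when the `code` CLI is actually available.
    for cmd in install_commands(EXTENSIONS):
        subprocess.run(cmd, check=False)
```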

GitHub Copilot and Copilot Chat

Extension IDs: GitHub.copilot and GitHub.copilot-chat

Copilot is the most impactful single extension for AI development productivity. It provides inline code completions trained on billions of lines of code, and Copilot Chat adds a conversational interface for explaining code, generating boilerplate, writing tests, and debugging errors. For machine learning work specifically, Copilot excels at generating data preprocessing pipelines, writing PyTorch training loops, creating evaluation metrics, and translating between frameworks (e.g., converting a TensorFlow model definition to PyTorch). We cover Copilot configuration in depth in the dedicated section below.

Python and Pylance

Extension IDs: ms-python.python and ms-python.vscode-pylance

The Python extension provides debugging, linting, formatting, and virtual environment management. Pylance is the language server that powers IntelliSense for Python. It provides type checking, auto-imports, and go-to-definition that actually works across complex ML codebases. For AI work, enable type checking in your settings to catch shape mismatches and incorrect tensor operations before runtime:

{
    "python.analysis.typeCheckingMode": "basic",
    "python.analysis.autoImportCompletions": true,
    "python.analysis.inlayHints.functionReturnTypes": true,
    "python.analysis.inlayHints.variableTypes": true
}

Set typeCheckingMode to "basic" rather than "strict" for ML projects. Many popular libraries like NumPy, pandas, and older versions of PyTorch have incomplete type stubs, and strict mode generates excessive false positives that drown out real issues.
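Type hints are what give Pylance something to check, even in basic mode. A small sketch of the annotated style that pays off: with the signature below, Pylance flags callers that pass a plain list or a mistyped value before the code ever runs.

```python
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """Min-max scale values to [0, 1].

    The annotations let Pylance catch callers that pass a list or a
    mistyped object before runtime. Assumes x is non-constant (hi > lo).
    """
    lo, hi = float(x.min()), float(x.max())
    return (x - lo) / (hi - lo)
```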

Jupyter

Extension IDs: ms-toolsai.jupyter, ms-toolsai.jupyter-keymap, ms-toolsai.jupyter-renderers, ms-toolsai.vscode-jupyter-cell-tags

The Jupyter extension suite transforms VS Code into a full notebook environment. You can open .ipynb files natively, run cells interactively, view matplotlib and plotly visualizations inline, and use the variable explorer to inspect DataFrames and tensors. It also supports .py files with # %% cell markers, which gives you the interactivity of notebooks with the version control friendliness of plain Python files. We detail the full Jupyter workflow in the section below.

Remote - SSH

Extension ID: ms-vscode-remote.remote-ssh

This extension connects VS Code to any machine accessible over SSH. For AI development, this typically means connecting to a GPU server, a cloud instance, or a homelab machine running CUDA workloads. All file operations, terminal sessions, and language server processes run on the remote host. Your local machine handles only the UI rendering. This is detailed in the Remote Development section.

Docker

Extension ID: ms-azuretools.vscode-docker

ML workflows increasingly rely on containerized environments for reproducibility. The Docker extension lets you build, run, and manage containers directly from VS Code. It provides syntax highlighting for Dockerfiles, docker-compose.yml IntelliSense, and a container explorer that shows running containers, images, and volumes. You can right-click a running container and attach a VS Code session to it, which is invaluable for debugging training jobs running inside Docker containers.

YAML and TOML Support

Extension IDs: redhat.vscode-yaml and tamasfe.even-better-toml

Model configuration files, Kubernetes manifests, CI/CD pipelines, and Hydra configs are all written in YAML or TOML. The Red Hat YAML extension provides schema validation, auto-completion, and error highlighting. Even Better TOML does the same for TOML files, which are used by tools like pyproject.toml, Ruff, and many ML experiment trackers. Enable schema associations to get autocomplete for specific file types:

{
    "yaml.schemas": {
        "https://json.schemastore.org/github-workflow.json": "/.github/workflows/*.yml",
        "https://json.schemastore.org/docker-compose.json": "docker-compose*.yml"
    }
}

Thunder Client

Extension ID: rangav.vscode-thunder-client

Thunder Client is a lightweight REST API client built into VS Code. For AI development, you use it to test model serving endpoints (FastAPI, Flask, TorchServe, Triton Inference Server), debug webhook integrations, and validate API responses from ML pipelines. It replaces the need for Postman or Insomnia and keeps your API testing inside the same editor where you write the code. Collections and environments can be saved as JSON files and committed to version control.
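Thunder Client covers the interactive side; for response checks you want to keep and rerun, a plain-Python validator works alongside it. A sketch, assuming a hypothetical /predict endpoint that returns a predictions list and a model_version string:

```python
def valid_prediction(payload: dict) -> bool:
    """Check that a (hypothetical) /predict response has the expected shape."""
    preds = payload.get("predictions")
    return (
        isinstance(preds, list)
        and len(preds) > 0
        and all(isinstance(p, (int, float)) for p in preds)
        and isinstance(payload.get("model_version"), str)
    )
```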

GitLens

Extension ID: eamodio.gitlens

GitLens adds inline git blame annotations, commit history visualization, and file comparison tools. For ML projects with multiple collaborators, it helps you understand who changed a training script, when a hyperparameter was modified, and what the model configuration looked like at any point in history. The inline blame is especially useful for debugging regressions in model performance by correlating code changes with metric drops.

Essential Extensions for Cybersecurity

Security work has different extension requirements than AI development. The focus shifts toward binary analysis, infrastructure-as-code validation, API testing, and static analysis for vulnerabilities.

Remote - SSH (Again)

Extension ID: ms-vscode-remote.remote-ssh

Remote SSH appears in both lists because it is equally critical for security work. Penetration testers connect to attack boxes, security researchers work on isolated analysis VMs, and compliance auditors access customer infrastructure from jump hosts. The ability to edit files, run tools, and debug scripts on a remote machine without transferring files back and forth is a workflow accelerator across every security discipline.

Hex Editor

Extension ID: ms-vscode.hexeditor

The Hex Editor extension lets you inspect and edit binary files directly in VS Code. For malware analysis, this means examining PE headers, ELF binaries, and firmware images without switching to a separate hex editor. It supports large files, search by hex pattern or ASCII string, and data inspection that shows values in multiple formats (integer, float, UTF-8) at the cursor position. Right-click any file in the explorer and select "Open With" to choose the hex editor view.
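The data inspector is built for interactive work, but the same first-bytes check is easy to script when triaging a directory of samples before opening them in the hex view. A sketch matching leading magic numbers (b"MZ" for PE, b"\x7fELF" for ELF):

```python
from pathlib import Path

# Leading magic bytes for a few common formats seen in analysis work.
MAGIC = {
    b"MZ": "PE/DOS executable",
    b"\x7fELF": "ELF binary",
    b"%PDF": "PDF document",
}

def identify(path: str) -> str:
    """Match a file's first bytes against known magic numbers."""
    head = Path(path).read_bytes()[:8]
    for magic, name in MAGIC.items():
        if head.startswith(magic):
            return name
    return "unknown"
```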

REST Client

Extension ID: humao.rest-client

REST Client lets you send HTTP requests from .http or .rest files directly in the editor. Unlike Thunder Client (which provides a GUI), REST Client uses a plain-text format that is version-controllable and scriptable. For security testing, this is ideal for crafting and replaying HTTP requests during web application assessments:

### Test SQL injection on login endpoint
POST https://target.example.com/api/login
Content-Type: application/json

{
    "username": "admin' OR '1'='1",
    "password": "test"
}

### Check for IDOR vulnerability
GET https://target.example.com/api/users/{{userId}}
Authorization: Bearer {{token}}

Variables like {{userId}} can be defined in environment files, making it easy to switch between targets during an engagement.
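Those environments live in settings. A sketch using the extension's rest-client.environmentVariables setting, with hypothetical target values; entries under $shared apply to every environment:

```json
{
    "rest-client.environmentVariables": {
        "$shared": {
            "userId": "1001"
        },
        "staging": {
            "token": "REPLACE_WITH_STAGING_TOKEN"
        },
        "production": {
            "token": "REPLACE_WITH_PROD_TOKEN"
        }
    }
}
```

Switch environments from the status bar (or Ctrl+Alt+E) to repoint every request in the file at a different target.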

YAML for Ansible and Terraform

Extension IDs: redhat.vscode-yaml and hashicorp.terraform

Infrastructure-as-code is central to security operations. Ansible playbooks, Terraform configurations, and Kubernetes manifests all need proper syntax support. The Terraform extension (hashicorp.terraform) provides HCL syntax highlighting, validation, auto-completion from provider schemas, and runs terraform fmt on save. For compliance work, being able to review and modify infrastructure configurations with full language support reduces errors in security-critical deployments.

Snyk Security

Extension ID: snyk-security.snyk-vulnerability-scanner

Snyk scans your code, dependencies, and container images for known vulnerabilities. The VS Code extension integrates Snyk directly into the editor, highlighting vulnerable dependencies in your requirements.txt, package.json, or Dockerfile with severity ratings and remediation advice. For organizations subject to compliance frameworks like CMMC, HIPAA, or SOC 2, automated dependency scanning is not optional. Snyk catches CVEs in transitive dependencies that manual review would miss.

SonarLint

Extension ID: SonarSource.sonarlint-vscode

SonarLint performs real-time static analysis as you type, detecting bugs, code smells, and security vulnerabilities. It covers Python, JavaScript, TypeScript, Java, C, C++, and more. For security-focused development, SonarLint catches injection vulnerabilities, hardcoded credentials, weak cryptographic algorithms, and insecure deserialization patterns before the code leaves your editor. Connect it to a SonarQube or SonarCloud instance for team-wide rule enforcement and quality gate tracking.
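The injection rules are easiest to see as a before/after. A minimal sketch with sqlite3: the parameterized form is the pattern SonarLint accepts, and the string-formatted alternative (shown in the comment) is what it flags:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'admin')")

def find_user(username: str):
    # Flagged: f"SELECT id FROM users WHERE name = '{username}'" splices
    # attacker-controlled input into the query string.
    # Safe: the ? placeholder makes the driver escape the input, so
    # "admin' OR '1'='1" matches nothing instead of everything.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```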

Error Lens

Extension ID: usernamehw.errorlens

Error Lens displays diagnostic messages (errors, warnings, info) inline at the end of the line where they occur. Instead of hovering over red squiggles or checking the Problems panel, you see the full error message right in the code. This is particularly useful when SonarLint or Snyk flags a security issue because the warning text appears immediately next to the vulnerable line, making it impossible to overlook.

Remote Development Deep Dive: SSH into GPU Servers

Remote SSH is the single most transformative VS Code feature for AI developers. It eliminates the traditional friction of developing on remote machines: no more scp file transfers, no more editing in vim over SSH (unless you prefer it), and no more maintaining separate tool installations on your laptop and your server.

How It Works

When you connect to a remote host via the Remote SSH extension, VS Code installs a lightweight server process (the VS Code Server) on the remote machine. This server handles all file I/O, language server execution, extension processing, and terminal management. Your local VS Code instance communicates with this server over the SSH tunnel. The result is that extensions like Python, Pylance, and Jupyter run on the remote machine where your code, data, and GPUs live, while the UI renders locally on your laptop.

SSH Config Setup

Configure your SSH connections in ~/.ssh/config so VS Code can discover them automatically:

# GPU Development Server
Host gpu-server
    HostName 10.10.10.50
    User researcher
    IdentityFile ~/.ssh/id_ed25519
    ForwardAgent yes
    ServerAliveInterval 60
    ServerAliveCountMax 3

# Cloud GPU Instance (AWS, GCP, Lambda Labs)
Host cloud-gpu
    HostName 34.xx.xx.xx
    User ubuntu
    IdentityFile ~/.ssh/cloud-gpu-key.pem
    LocalForward 6006 localhost:6006
    LocalForward 8888 localhost:8888

The LocalForward directives are critical for AI work. Port 6006 forwards TensorBoard, and port 8888 forwards Jupyter. This means you can access http://localhost:6006 on your laptop and see TensorBoard running on the remote GPU server. VS Code also provides automatic port forwarding: if a process on the remote machine starts listening on a port, VS Code detects it and offers to forward it.

Working with GPU Servers

Once connected, your VS Code session is fully remote. Open a terminal and run nvidia-smi to confirm GPU access. Create and activate virtual environments on the remote machine. Install extensions on the remote side (VS Code prompts you to install workspace-relevant extensions on the remote host). The Python extension automatically detects remote interpreters and virtual environments.

For teams with multiple GPU servers, configure SSH jump hosts to access machines behind firewalls:

Host gpu-behind-firewall
    HostName 192.168.1.100
    User researcher
    ProxyJump bastion-host
    IdentityFile ~/.ssh/id_ed25519

This approach is how our team at Petronella accesses GPU workstations on internal networks from anywhere. The bastion host handles authentication and network access control, while VS Code treats the connection as a transparent SSH session.

Developing on Remote Linux from Windows or macOS

Remote SSH removes the operating system from the equation. Your laptop can run Windows, macOS, or Linux. The remote server runs whatever OS your workload requires (typically Ubuntu for CUDA-based ML work). All tools, libraries, Python versions, and CUDA drivers are on the server. Your laptop just needs VS Code and an SSH key. This is particularly valuable for organizations where developers use company-issued macOS laptops but run training workloads on Linux GPU servers or in cloud instances.

Jupyter Notebook Workflow Inside VS Code

The Jupyter extension transforms VS Code into a notebook environment that offers advantages over the traditional browser-based Jupyter interface.

Opening and Running Notebooks

Open any .ipynb file and VS Code renders it as an interactive notebook. Each cell is editable with full IntelliSense, and you execute cells with Shift+Enter (run and advance) or Ctrl+Enter (run in place). Outputs render inline, including matplotlib charts, plotly interactive visualizations, pandas DataFrames, and even images from PIL or OpenCV.

Interactive Python Files with Cell Markers

For version control, .ipynb files are problematic because they store outputs and metadata as JSON, which creates noisy diffs. The better approach for production ML code is to use .py files with # %% cell markers:

# %% [markdown]
# ## Data Loading and Preprocessing

# %%
import pandas as pd
import torch
from pathlib import Path

data_dir = Path("data/processed")
df = pd.read_parquet(data_dir / "training_set.parquet")
print(f"Loaded {len(df):,} rows with columns: {list(df.columns)}")

# %%
# Feature engineering
features = df.select_dtypes(include=["float64", "int64"]).columns
X = torch.tensor(df[features].values, dtype=torch.float32)
y = torch.tensor(df["target"].values, dtype=torch.float32)
print(f"Feature tensor shape: {X.shape}")
print(f"Target tensor shape: {y.shape}")

# %% [markdown]
# ## Model Definition

VS Code recognizes these cell markers and lets you run them interactively just like notebook cells, while the file itself is a standard Python file that diffs cleanly in git.

Variable Explorer and Data Viewer

Click the "Variables" button in the Jupyter toolbar to open the variable explorer. It shows all variables in the current kernel session with their types, sizes, and preview values. Click a DataFrame variable to open it in a full data viewer with sorting, filtering, and column resizing. For tensors, it shows the shape and dtype. This eliminates the need to pepper your code with print(df.shape) and print(type(x)) statements during exploratory work.

Kernel Management

VS Code supports multiple Jupyter kernels: local Python environments, conda environments, remote kernels via SSH, and kernels running inside Docker containers. Select your kernel from the dropdown in the notebook toolbar. For ML work, maintain separate environments for different projects (one for PyTorch 2.x, another for TensorFlow, another for JAX) and switch kernels without restarting VS Code. Remote kernels are especially powerful: connect to a GPU server via Remote SSH and your notebook cells execute on the remote machine while visualizations render locally.

Terminal Integration: tmux, GPU Monitoring, and Shell Workflows

The integrated terminal in VS Code is not a toy. It is a full terminal emulator that supports multiple instances, split panes, custom shell profiles, and proper escape sequence rendering.

Multiple Terminal Profiles

Configure different shell profiles for different tasks:

{
    "terminal.integrated.profiles.linux": {
        "bash": {
            "path": "/usr/bin/bash"
        },
        "fish": {
            "path": "/usr/bin/fish"
        },
        "python-env": {
            "path": "/usr/bin/bash",
            "args": ["-c", "source .venv/bin/activate && exec bash"]
        }
    },
    "terminal.integrated.defaultProfile.linux": "fish"
}

Using tmux Inside VS Code

For long-running processes like model training, tmux sessions inside the VS Code terminal provide persistence. If your SSH connection drops or VS Code restarts, the tmux session continues running on the remote machine. Reconnect and reattach:

# Start a named training session
tmux new-session -s training

# Inside tmux: start your training job
python train.py --epochs 100 --lr 1e-4 --output ./checkpoints/

# Detach: Ctrl+B, then D

# Reattach later (after reconnecting SSH)
tmux attach -t training

This is essential for GPU training runs that take hours or days. Never run a long training job directly in a VS Code terminal without tmux. A momentary network interruption will kill the process. And to make those terminal sessions both functional and readable, see our Nerd Fonts guide.

GPU Monitoring in a Split Terminal

Open a split terminal (Ctrl+Shift+5) and run watch -n 1 nvidia-smi to monitor GPU utilization in real time while you work in the main terminal or editor. For more detailed monitoring, use nvitop or gpustat:

# Install nvitop for rich GPU monitoring
pip install nvitop

# Run in a split terminal
nvitop --monitor

This gives you a live dashboard showing GPU utilization, memory usage, temperature, and per-process resource consumption, all in a VS Code terminal pane alongside your code.

Settings and Configuration for Performance

VS Code stores settings at three levels: User settings (global), Workspace settings (per project), and Folder settings (per folder in a multi-root workspace). For AI and security work, leveraging all three levels keeps your editor fast and context-appropriate.

Optimized settings.json for AI Development

{
    // Editor performance
    "editor.minimap.enabled": false,
    "editor.renderWhitespace": "selection",
    "editor.bracketPairColorization.enabled": true,
    "editor.guides.bracketPairs": "active",
    "editor.smoothScrolling": false,
    "editor.cursorBlinking": "solid",
    "editor.cursorSmoothCaretAnimation": "off",

    // File management
    "files.autoSave": "afterDelay",
    "files.autoSaveDelay": 1000,
    "files.watcherExclude": {
        "**/.git/objects/**": true,
        "**/.git/subtree-cache/**": true,
        "**/node_modules/**": true,
        "**/__pycache__/**": true,
        "**/.venv/**": true,
        "**/data/**": true,
        "**/checkpoints/**": true,
        "**/wandb/**": true
    },

    // Search exclusions (critical for ML projects with large data dirs)
    "search.exclude": {
        "**/data/**": true,
        "**/checkpoints/**": true,
        "**/wandb/**": true,
        "**/mlruns/**": true,
        "**/*.parquet": true,
        "**/*.pkl": true,
        "**/*.h5": true
    },

    // Python specific
    "python.analysis.typeCheckingMode": "basic",
    "python.terminal.activateEnvironment": true,
    "[python]": {
        "editor.defaultFormatter": "charliermarsh.ruff",
        "editor.formatOnSave": true,
        "editor.codeActionsOnSave": {
            "source.fixAll": "explicit",
            "source.organizeImports": "explicit"
        }
    },

    // Terminal
    "terminal.integrated.scrollback": 10000,
    "terminal.integrated.gpuAcceleration": "on",

    // Telemetry (disable for security-conscious environments)
    "telemetry.telemetryLevel": "off",
    "redhat.telemetry.enabled": false
}

The files.watcherExclude and search.exclude settings are not optional for ML projects. Without them, VS Code tries to index multi-gigabyte data directories, checkpoint folders, and experiment tracking logs. This consumes RAM, slows search, and can freeze the editor on machines with limited memory.

Workspace-Specific Settings

Create a .vscode/settings.json in each project to override user settings. An ML training project might pin the interpreter, enable pytest, and hide generated files:

// .vscode/settings.json for an ML training project
{
    "python.analysis.typeCheckingMode": "basic",
    "python.defaultInterpreterPath": "./.venv/bin/python",
    "editor.rulers": [88],
    "files.exclude": {
        "**/__pycache__": true,
        "**/.pytest_cache": true,
        "**/checkpoints": true,
        "**/*.egg-info": true
    },
    "python.testing.pytestEnabled": true,
    "python.testing.pytestArgs": ["tests/"]
}

A security assessment project would have different settings:

// .vscode/settings.json for a security engagement
{
    "files.associations": {
        "*.rules": "yara",
        "*.nse": "lua",
        "*.conf": "ini"
    },
    "editor.wordWrap": "on",
    "terminal.integrated.scrollback": 50000,
    "search.useIgnoreFiles": false
}

Keybinding Customization

Effective keybindings reduce the friction between thinking and executing. These are the custom bindings our team finds most valuable for AI work. Add them to your keybindings.json (Ctrl+Shift+P then search "Open Keyboard Shortcuts (JSON)"):

[
    {
        "key": "ctrl+shift+j",
        "command": "jupyter.runcell",
        "when": "editorTextFocus && jupyter.hascodecells"
    },
    {
        "key": "ctrl+shift+t",
        "command": "workbench.action.terminal.toggleTerminal"
    },
    {
        "key": "ctrl+k ctrl+g",
        "command": "workbench.action.terminal.split"
    },
    {
        "key": "alt+shift+f",
        "command": "editor.action.formatDocument"
    }
]

Multi-Root Workspaces

AI projects often span multiple repositories: a model training repo, a data pipeline repo, an inference server repo, and an infrastructure repo. Multi-root workspaces let you open all of them in a single VS Code window. Create a .code-workspace file:

{
    "folders": [
        { "path": "./model-training", "name": "Training" },
        { "path": "./data-pipeline", "name": "Pipeline" },
        { "path": "./inference-server", "name": "Inference" },
        { "path": "./infrastructure", "name": "Infra" }
    ],
    "settings": {
        "python.analysis.extraPaths": [
            "./model-training/src",
            "./data-pipeline/src"
        ]
    }
}

Each folder can have its own .vscode/settings.json that overrides the workspace settings. The training folder uses PyTorch-specific linting; the infrastructure folder uses Terraform formatting.

GitHub Copilot Configuration and Custom Instructions

Default Copilot works reasonably well out of the box, but configuring it properly for your domain significantly improves suggestion quality.

Custom Instructions

Copilot supports custom instructions that guide its suggestions toward your coding standards. Create a .github/copilot-instructions.md file in your repository root:

# Copilot Instructions

## Python Style
- Use type hints on all function signatures
- Use pathlib.Path instead of os.path
- Prefer f-strings over .format() or % formatting
- Use dataclasses or Pydantic models for structured data
- Follow Google-style docstrings

## ML Conventions
- Use PyTorch for all neural network code
- Log metrics with wandb.log(), not print statements
- Use torch.no_grad() context for inference
- Prefer torch.utils.data.DataLoader over manual batching
- Set random seeds with a utility function, not inline

## Security
- Never hardcode credentials, API keys, or tokens
- Use environment variables or secrets managers
- Validate all external input before processing
- Use parameterized queries for database operations

Copilot reads this file and adjusts its completions to match your conventions. For a team that works in both AI and cybersecurity, this file bridges the gap between ML best practices and secure coding standards.
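The conventions above are concrete enough to demonstrate. A sketch of the seed utility the ML section asks for, written in the listed style (type hints, one central function rather than inline seeding); torch.manual_seed(seed) and torch.cuda.manual_seed_all(seed) would be added in projects where PyTorch is installed:

```python
import random

import numpy as np

def set_seed(seed: int = 42) -> None:
    """Seed every RNG the project touches from one place.

    Extend with torch.manual_seed(seed) and
    torch.cuda.manual_seed_all(seed) when PyTorch is present.
    """
    random.seed(seed)
    np.random.seed(seed)
```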

Workspace Context and File References

In Copilot Chat, use @workspace to give Copilot context about your entire project. This is particularly useful for questions like "How is the data loader configured in this project?" or "Where is the model checkpoint saved?" Copilot Chat indexes your workspace and provides answers grounded in your actual code rather than generic training data.

Reference specific files with #file to focus the conversation. For example, type #file:train.py in Copilot Chat to discuss your training script specifically. Combine with slash commands:

  • /explain to understand unfamiliar code (useful when reviewing open-source model implementations)
  • /tests to generate unit tests for a function
  • /fix to propose fixes for highlighted errors
  • /doc to generate docstrings for functions and classes

Copilot Settings

{
    "github.copilot.enable": {
        "*": true,
        "plaintext": false,
        "markdown": true,
        "yaml": true,
        "jsonc": true
    },
    "github.copilot.advanced": {},
    "github.copilot.chat.localeOverride": "en"
}

Disable Copilot for plaintext files to prevent it from suggesting completions in documentation, notes, and configuration files where its suggestions are more distracting than helpful.

VS Code with Copilot vs Cursor: When to Use Which

Cursor is a VS Code fork that integrates AI more deeply into the editing experience. It shares VS Code's extension ecosystem and settings format, but adds features like multi-file editing (Composer), an AI-powered command palette, and a chat interface that can modify code directly. We have a complete Cursor AI IDE setup guide that covers its strengths in detail.

The decision between VS Code with Copilot and Cursor depends on your workflow:

Use VS Code with Copilot When

  • Remote development is critical. VS Code's Remote SSH extension is mature and reliable. Cursor supports remote development but it has historically lagged behind VS Code in stability and feature parity for remote scenarios.
  • You need maximum extension compatibility. Some extensions, particularly those with custom webviews or complex language servers, occasionally break in Cursor due to its fork divergence from VS Code upstream.
  • Organizational policy requires official Microsoft software. VS Code is published by Microsoft and covered under enterprise licensing agreements. Cursor is a third-party product from Anysphere.
  • Your AI assistance needs are primarily inline completions and chat. Copilot handles these tasks well without the additional complexity of Cursor's multi-file editing features.

Use Cursor When

  • You frequently refactor across multiple files. Cursor's Composer feature can modify multiple files in a single operation, which is useful for large-scale refactoring tasks like renaming a model class across training, evaluation, and inference code.
  • You want AI-driven codebase navigation. Cursor's Ctrl+K inline editing and codebase-aware chat provide a tighter feedback loop than Copilot Chat for exploratory coding sessions.
  • You work primarily on local projects. Cursor's AI features work best with local code where it can index the full project. Remote development reduces some of its advantages.

Many of our engineers run both. VS Code is the primary editor for remote GPU development and security work. Cursor is used for local refactoring sessions and exploratory AI-assisted coding. They share the same settings and extensions, so switching between them is frictionless.

Performance Tuning for Large Projects

AI and security projects can push VS Code's limits. ML repositories with large data directories, thousands of generated files, and heavy language servers need deliberate tuning.

Disable Extensions Per Workspace

Every extension consumes memory and CPU. An AI development workspace does not need the Terraform extension. A security assessment workspace does not need Jupyter. Use VS Code's "Disable (Workspace)" feature to turn off irrelevant extensions on a per-project basis. Open the Extensions panel, right-click an extension, and select "Disable (Workspace)". This alone can reduce memory usage by 200-400 MB on extension-heavy installations.

Increase Memory Limits

VS Code is an Electron application, and Electron inherits Node.js memory limits. For large ML projects, increase the heap size by adding to your launch arguments. On Linux, edit the VS Code desktop entry or launch from the terminal:

# Launch VS Code with increased memory
code --max-memory=8192 .

For persistent configuration, add to ~/.config/Code/User/argv.json:

{
    "enable-crash-reporter": false,
    "js-flags": "--max-old-space-size=8192"
}

Reduce File Watching Overhead

ML projects generate massive numbers of temporary files: checkpoints, logs, cached datasets, compiled bytecode. Exclude these from file watching and search (as shown in the settings section above). Additionally, if your project has more than 10,000 files, add matching files.exclude patterns in your workspace settings to limit the files VS Code indexes and displays.
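Before writing exclude patterns, it helps to know where the file count actually lives. A quick sketch that counts files per top-level directory of a project so you can target the worst offenders:

```python
from collections import Counter
from pathlib import Path

def heaviest_dirs(root: str, top: int = 5) -> list[tuple[str, int]]:
    """Count files under each top-level entry of root, largest first.

    Directories like checkpoints/ or wandb/ showing up here are the
    first candidates for files.watcherExclude and search.exclude.
    """
    counts: Counter = Counter()
    base = Path(root)
    for p in base.rglob("*"):
        if p.is_file():
            counts[p.relative_to(base).parts[0]] += 1
    return counts.most_common(top)
```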

GPU Acceleration for the Terminal

Enable GPU-accelerated terminal rendering for smoother scrolling through large log outputs:

{
    "terminal.integrated.gpuAcceleration": "on"
}

This offloads terminal text rendering to the GPU, which makes a noticeable difference when tailing training logs that scroll at thousands of lines per second.

Disable Unnecessary Visual Features

Every visual effect costs CPU cycles. For maximum performance on resource-constrained machines or when running VS Code alongside GPU training jobs that compete for system resources:

{
    "editor.minimap.enabled": false,
    "editor.smoothScrolling": false,
    "editor.cursorBlinking": "solid",
    "editor.cursorSmoothCaretAnimation": "off",
    "workbench.list.smoothScrolling": false,
    "breadcrumbs.enabled": false,
    "editor.occurrencesHighlight": "off",
    "editor.renderLineHighlight": "line",
    "editor.matchBrackets": "near"
}

These settings collectively reduce CPU usage by 15-25% during editing sessions, which frees resources for language servers and training processes.

Font Recommendations

For long coding sessions, use a font designed for code readability. Our team uses JetBrains Mono with ligatures enabled. For terminal sessions, a Nerd Font variant adds useful icons for git status, file types, and system monitoring. See our complete Nerd Fonts guide for installation and configuration, and our Tokyo Night theme guide for a cohesive color scheme across VS Code and your terminal:

{
    "editor.fontFamily": "'JetBrains Mono', 'Fira Code', 'Cascadia Code', monospace",
    "editor.fontLigatures": true,
    "editor.fontSize": 14,
    "editor.lineHeight": 1.6,
    "terminal.integrated.fontFamily": "'JetBrainsMono Nerd Font', monospace"
}

Frequently Asked Questions

Can I use VS Code for AI development without GitHub Copilot?

Yes. Copilot is a productivity accelerator, not a requirement. VS Code provides a complete AI development environment through the Python, Jupyter, Remote SSH, and Docker extensions alone. You can also use alternative AI coding assistants like Codeium (Codeium.codeium) or Continue (Continue.continue) if your organization does not license Copilot. The core workflow of remote development, Jupyter notebooks, and terminal integration works identically without any AI assistant installed.

Is VS Code secure enough for cybersecurity work?

VS Code's source code (the Code - OSS project) is open source and widely scrutinized, though the Microsoft-distributed build adds some proprietary components. More importantly, extensions can access your file system, network, and terminal. For security-sensitive work, limit extensions to trusted publishers (Microsoft, Red Hat, HashiCorp), disable telemetry ("telemetry.telemetryLevel": "off"), review extension permissions before installation, and use workspace trust to restrict extensions in untrusted folders. VS Code's workspace trust feature, introduced in version 1.57, prevents extensions from executing code in folders you have not explicitly trusted, which is critical when opening downloaded malware samples or untrusted repositories.
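A minimal hardening sketch for your User settings.json might look like the following. These are real VS Code settings; the values shown reflect one conservative baseline, not a mandated configuration:

```json
{
    "telemetry.telemetryLevel": "off",
    "security.workspace.trust.enabled": true,
    "security.workspace.trust.untrustedFiles": "prompt",
    "extensions.autoUpdate": false
}
```

Disabling automatic extension updates trades convenience for control: you review changelogs before updating, which matters if an extension publisher account is ever compromised.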

How do I connect VS Code to a remote GPU server behind a firewall?

Use SSH jump hosts (also called bastion hosts or proxy jumps). Configure a ProxyJump directive in your ~/.ssh/config file pointing to an intermediate server that has access to both the internet and your internal network. VS Code's Remote SSH extension respects SSH config directives, so the connection is transparent. You can also use SSH tunneling (LocalForward) to access web services like TensorBoard, Jupyter, and Grafana running on the remote machine without exposing them to the internet.
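As an illustrative sketch, an ~/.ssh/config entry combining ProxyJump and LocalForward might look like this. The host names, addresses, and usernames are hypothetical placeholders:

```
Host bastion
    HostName bastion.example.com
    User ops

Host gpu-server
    HostName 10.0.2.15
    User mluser
    ProxyJump bastion
    # Forward TensorBoard (6006) and Jupyter (8888) to localhost
    LocalForward 6006 localhost:6006
    LocalForward 8888 localhost:8888
```

With this in place, the Remote SSH extension connects with "gpu-server" as the host name, and http://localhost:6006 on your laptop reaches TensorBoard on the remote machine.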

Does VS Code work well on Linux for AI development?

VS Code on Linux is a first-class experience. The official .deb and .rpm packages work on Ubuntu, Fedora, and derivatives. On NixOS, it is available as pkgs.vscode in nixpkgs. The Remote SSH extension was originally designed for Linux-to-Linux connections, and all GPU-related extensions (CUDA debugging, container attachment, Jupyter kernel management) work natively on Linux without the abstraction layers required on macOS or Windows. If you run Wayland compositors, launch VS Code with the --ozone-platform=wayland flag for native rendering without XWayland blurriness.

How much RAM does VS Code need for AI development?

Base VS Code uses approximately 300-500 MB of RAM. With Python, Pylance, Jupyter, Copilot, and Docker extensions loaded, expect 800 MB to 1.5 GB. Opening large notebooks with rendered visualizations can push memory to 2-3 GB. On machines running training jobs alongside VS Code, allocate at least 4 GB of system RAM for the editor. The file watcher and search exclusions described in this guide prevent memory from growing without bound when large data directories are present in the workspace.

Can I use VS Code for both local and remote Jupyter notebooks?

Yes. The Jupyter extension supports local kernels (Python environments on your machine), remote kernels (via Remote SSH), and kernels running inside Docker containers. You can switch kernels per notebook using the kernel picker in the toolbar. A common workflow is to prototype locally with a small dataset, then switch to a remote GPU kernel for full training runs, all within the same notebook file and the same VS Code window.

What is the difference between the Python and Pylance extensions?

The Python extension (ms-python.python) provides core functionality: debugging, virtual environment management, test discovery, and linting integration. Pylance (ms-python.vscode-pylance) is the language server that powers IntelliSense: type checking, auto-completion, go-to-definition, find references, and code navigation. They are separate extensions but designed to work together. Install both. Pylance is technically optional, but without it, you lose the intelligent code navigation that makes VS Code competitive with full IDEs like PyCharm for Python work.

Getting Started

Install VS Code from code.visualstudio.com. Install the extensions listed in this guide using the command line (code --install-extension <id>) or the Extensions panel. Copy the settings.json examples into your User settings. Configure your SSH connections for remote development. Open a project and verify that Pylance, Jupyter, and your terminal profile are working correctly.

For organizations that need help setting up AI development infrastructure, including GPU servers, secure remote access, and compliance-ready development environments, contact Petronella Technology Group at (919) 348-4912. Our team builds and manages AI-ready IT infrastructure for businesses that need both performance and security.

About the Author: Craig Petronella is the CEO of Petronella Technology Group, a cybersecurity and IT infrastructure firm in Raleigh, NC. With CMMC-RP, CCNA, CWNE, and DFE certifications and over 30 years in IT, Craig’s team uses VS Code daily across AI research, security assessments, and compliance engineering workflows.

