How to Get OpenAI Codex Inside Your Claude Code Session
Install the official Codex plugin for Claude Code. Get a second-pass code review or an adversarial challenge, or hand entire tasks off to Codex, all without leaving your workflow.
OpenAI just shipped an official plugin that puts Codex inside Claude Code. No separate window, no context switching. You get a real second pass from a completely different AI agent on every piece of code you write. The plugin wraps your local Codex CLI, so it uses your existing auth, config, and environment. Three commands cover 90% of what you need: /codex:review for standard reviews, /codex:adversarial-review for pressure-testing high-stakes code, and /codex:rescue for handing a task directly to Codex.
Install the Codex CLI
Open your terminal and run:
npm install -g @openai/codex
This requires Node.js 18.18 or later. If you already have Codex installed, skip this step.
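Not sure whether your Node.js meets the 18.18 minimum? A small helper like this one (our own sketch, not part of any tool mentioned here) does a version-aware comparison:

```shell
# node_ok VERSION -> prints "yes" if VERSION meets the 18.18.0 minimum,
# "no" otherwise. sort -V gives version-aware ordering, so whichever
# string sorts first is the lower version.
node_ok() {
  required="18.18.0"
  lowest="$(printf '%s\n' "$required" "$1" | sort -V | head -n 1)"
  if [ "$lowest" = "$required" ]; then echo yes; else echo no; fi
}

node_ok "20.11.1"   # yes
node_ok "16.20.0"   # no
```

Check your actual install with `node_ok "$(node --version | sed 's/^v//')"`.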
Log into Codex
Run codex login in your terminal. Sign in with your ChatGPT account (Free tier works) or an OpenAI API key.
If you're already signed in from using Codex directly, the plugin picks up that auth automatically.
Add the plugin marketplace
In your Claude Code session, run:
/plugin marketplace add openai/codex-plugin-cc
This registers OpenAI's plugin repository as a source.
Install and activate
Run these two commands:

/plugin install codex@openai-codex
/reload-plugins
You should now see the Codex slash commands and the codex:codex-rescue subagent in /agents.
Run setup
Run /codex:setup to verify everything is working. It checks whether Codex is installed, authenticated, and ready. If something is missing, it tells you exactly what to fix.
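The install half of that check is easy to reproduce by hand if you want to see what setup is reacting to. A minimal sketch (the messages here are ours, not the plugin's output):

```shell
# Hand-rolled version of the "is the Codex CLI installed?" check that
# /codex:setup performs. Message wording is illustrative only.
check_codex() {
  if command -v codex >/dev/null 2>&1; then
    echo "codex CLI found at $(command -v codex)"
  else
    echo "codex CLI missing - run: npm install -g @openai/codex"
  fi
}
check_codex
```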
Run your first code review
The simplest first run:
/codex:review --background

Then use /codex:status to check progress and /codex:result to read the output.
This gives you the same quality review as running /review inside Codex directly. It's read-only and won't change your code.
Use --base main to review your whole branch instead of just uncommitted changes.
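If it helps to see the difference in plain git terms, this throwaway-repo demo sketches roughly what each scope covers (illustrative only; the plugin computes its own diffs):

```shell
# Build a throwaway repo with one committed branch change (v2) and one
# uncommitted edit (v3), then show the two diff scopes side by side.
repo="$(mktemp -d)"
cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo "v1" > app.txt
git add app.txt && git commit -qm "initial"
git checkout -qb feature
echo "v2" >> app.txt && git commit -aqm "branch work"   # committed on branch
echo "v3" >> app.txt                                    # left uncommitted

git diff HEAD          # uncommitted edit only: the default review scope
git diff main...HEAD   # all branch work since main: what --base main adds
```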
Adversarial review for high-stakes code
When the risk is hidden assumptions, not obvious syntax mistakes:
/codex:adversarial-review
Unlike regular review, this one is steerable. You can tell it what to focus on:
/codex:adversarial-review --base main challenge whether this was the right caching design
/codex:adversarial-review --background look for race conditions
Best for migrations, auth changes, infra scripts, refactors, and anything where you need someone to question your approach, not just inspect the code.
Hand off a task with rescue
When a thread stalls or you want Codex to take over completely:
/codex:rescue investigate why the tests started failing
/codex:rescue fix the failing test with the smallest safe patch
/codex:rescue --resume apply the top fix from the last run
Rescue supports --model and --effort flags. You can also just say "Ask Codex to redesign the database connection" and the subagent handles it.
Command reference
The review commands (/codex:review and /codex:adversarial-review) take --base, --background, and --wait. /codex:rescue takes --resume, --fresh, --model, and --effort.

Watch out for
Only enable the review gate (/codex:setup --enable-review-gate) when you're actively watching. Use --background for anything non-trivial, and check back with /codex:status.

Install the plugin, run /codex:setup, then /codex:review --background on whatever you're working on right now. That's the fastest way to see it in action.
FAQ
How does the plugin authenticate?
Through your local Codex CLI. If you're not signed in yet, run codex login to authenticate.

Which configuration does the plugin use?
Your global ~/.codex/config.toml and project-level overrides from .codex/config.toml. Same model, effort, and base URL settings you already use.

How do I turn the review gate on and off?
Enable it with /codex:setup --enable-review-gate. Disable with --disable-review-gate. Use it carefully: it can create long loops and burn through usage limits.

When should I use adversarial review instead of regular review?
Use /codex:review as your default on everything. Use /codex:adversarial-review on migrations, auth changes, infra scripts, refactors, and anything where the danger is hidden assumptions rather than obvious mistakes. Adversarial review is steerable, so you can tell it exactly what to question.
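For reference, the config file in question might look something like this. This is a sketch: the key names follow the Codex CLI's documented config.toml format, but verify them against your own file before relying on them.

```toml
# ~/.codex/config.toml (global); a project-level .codex/config.toml
# can override these. Key names per the Codex CLI docs; verify locally.
model = "gpt-5-codex"
model_reasoning_effort = "high"
```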