Roman Peschke @roman.peschke

How to Get OpenAI Codex Inside Your Claude Code Session

Install the official Codex plugin for Claude Code. Get a second-pass code review, adversarial challenge, or hand off entire tasks to Codex without leaving your workflow.


OpenAI just shipped an official plugin that puts Codex inside Claude Code. No separate window, no context switching. You get a real second pass from a completely different AI agent on every piece of code you write. The plugin wraps your local Codex CLI, so it uses your existing auth, config, and environment. Three commands cover 90% of what you need: /codex:review for standard reviews, /codex:adversarial-review for pressure-testing high-stakes code, and /codex:rescue for handing a task directly to Codex.

Prerequisites

1. Install the Codex CLI

Open your terminal and run:

npm install -g @openai/codex

This requires Node.js 18.18 or later. If you already have Codex installed, skip this step.
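If you're not sure your Node.js is new enough, a quick shell check can confirm it before installing. The `version_ge` helper below is illustrative, not part of the Codex CLI, and relies on `sort -V` (available in GNU coreutils and recent BSD sort):

```shell
# version_ge A B: succeed when version string A >= version B
# (uses `sort -V -C`, which checks whether its input is in version order)
version_ge() {
  printf '%s\n%s\n' "$2" "$1" | sort -V -C
}

# Compare the installed Node.js version against the 18.18 minimum.
node_version="$(node --version 2>/dev/null | sed 's/^v//')"
if [ -n "$node_version" ] && version_ge "$node_version" "18.18"; then
  echo "Node $node_version meets the 18.18 minimum"
else
  echo "Node is missing or older than 18.18; install or upgrade first" >&2
fi
```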

2. Log into Codex

Run codex login in your terminal. Sign in with your ChatGPT account (Free tier works) or an OpenAI API key.

If you're already signed in from using Codex directly, the plugin picks up that auth automatically.

Install the plugin

3. Add the plugin marketplace

In your Claude Code session, run:

/plugin marketplace add openai/codex-plugin-cc

This registers OpenAI's plugin repository as a source.

4. Install and activate

Run these two commands:

  • /plugin install codex@openai-codex
  • /reload-plugins

You should now see the Codex slash commands and the codex:codex-rescue subagent in /agents.

5. Run setup

Run /codex:setup to verify everything is working. It checks whether Codex is installed, authenticated, and ready. If something is missing, it tells you exactly what to fix.

Use it

6. Run your first code review

The simplest first run:

  • /codex:review --background
  • /codex:status to check progress
  • /codex:result to read the output

This gives you the same quality review as running /review inside Codex directly. It's read-only and won't change your code.

Use --base main to review your whole branch instead of just uncommitted changes.

7. Adversarial review for high-stakes code

When the risk is hidden assumptions, not obvious syntax mistakes:

/codex:adversarial-review

Unlike regular review, this one is steerable. You can tell it what to focus on:

  • /codex:adversarial-review --base main challenge whether this was the right caching design
  • /codex:adversarial-review --background look for race conditions

Best for migrations, auth changes, infra scripts, refactors, and anything where you need someone to question your approach, not just inspect the code.

8. Hand off a task with rescue

When a thread stalls or you want Codex to take over completely:

  • /codex:rescue investigate why the tests started failing
  • /codex:rescue fix the failing test with the smallest safe patch
  • /codex:rescue --resume apply the top fix from the last run

Rescue supports --model and --effort flags. You can also just say "Ask Codex to redesign the database connection" and the subagent handles it.
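As a sketch of combining those flags with a free-text task (the model name and effort value here are placeholders, not defaults shipped with the plugin):

```
/codex:rescue --model gpt-5-codex --effort high redesign the database connection pooling
```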

Command reference

/codex:review
  Standard read-only code review. Supports --base, --background, --wait.

/codex:adversarial-review
  Steerable challenge review. Questions your design, tradeoffs, and hidden assumptions. Accepts free-text focus after the flags.

/codex:rescue
  Delegate a task to Codex. Supports --resume, --fresh, --model, --effort.

/codex:status
  Check running and recent Codex jobs for the current repo.

/codex:result
  Show the final output from a finished job. Includes the session ID for resuming in Codex.

/codex:cancel
  Cancel an active background job.

/codex:setup
  Verify install and auth. Manage the optional review gate.

Watch out for

  • The review gate can burn through usage fast. It creates a Claude/Codex feedback loop where Codex blocks Claude from stopping until issues are resolved. Only enable it (/codex:setup --enable-review-gate) when you're actively watching.
  • Multi-file reviews can take a while. Always run reviews with --background for anything non-trivial. Check back with /codex:status.
  • Codex usage counts against your ChatGPT plan limits. Check OpenAI's pricing page to understand what your plan includes.

Start here

Install the plugin, run /codex:setup, then /codex:review --background on whatever you're working on right now. That's the fastest way to see it in action.

Tools used in this guide

Claude Code
Anthropic's agentic coding tool. Terminal-based AI assistant that reads, writes, and runs code in your local environment. Supports plugins for extending functionality.
docs.anthropic.com/en/docs/claude-code
Codex Plugin for Claude Code
Official OpenAI plugin that brings Codex into Claude Code. Code reviews, adversarial challenges, and task delegation. Wraps your local Codex CLI. Apache-2.0 licensed.
github.com/openai/codex-plugin-cc
Node.js
JavaScript runtime that the Codex CLI runs on. Version 18.18 or later is required.
nodejs.org

FAQ

Do I need a separate Codex account?
If you're already signed into Codex on your machine, it works immediately. If not, you need a ChatGPT subscription (the Free tier works) or an OpenAI API key. Run codex login to authenticate.

Does the plugin run a separate Codex runtime?
No. It delegates through your local Codex CLI and app server. Same install, same auth, same config, same environment. It's Codex, just invoked from inside Claude Code.

Will it pick up my existing Codex config?
Yes. It reads your user-level config from ~/.codex/config.toml and project-level overrides from .codex/config.toml. Same model, effort, and base URL settings you already use.
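As a sketch of what that user-level file can look like (the values are placeholders and the key names follow the Codex CLI's config.toml conventions; double-check them against your own file):

```toml
# ~/.codex/config.toml: user-level Codex settings (placeholder values)
model = "gpt-5-codex"
model_reasoning_effort = "medium"
```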
What's the review gate?
An optional Stop hook that runs a Codex review before Claude Code exits. If Codex finds issues, Claude addresses them before stopping. Enable with /codex:setup --enable-review-gate; disable with --disable-review-gate. Use it carefully: it can create long loops and burn through usage limits.

When should I use adversarial review vs. regular review?
Use /codex:review as your default on everything. Use /codex:adversarial-review on migrations, auth changes, infra scripts, refactors, and anything where the danger is hidden assumptions rather than obvious mistakes. Adversarial review is steerable, so you can tell it exactly what to question.

Where did this come from?
OpenAI shipped it on March 30, 2026. Announced on X and published to GitHub under Apache-2.0. Already at 3.4k stars.