Nudge: A skill to offer in-line feedback to coding agents
I have been using coding agents a bunch, particularly since Opus 4.5 and GPT 5.2 came out.
I have a long post about my general LLM coding experiences coming out soon. This is a quick tip I've been using recently.
One of the UX affordances I miss most in the agent iteration cycle is being able to quickly offer in-line suggestions on the code changes proposed by the LLM, while staying in the context of the current conversation. What I mean is: the LLM will have applied some large-ish diff across multiple files, and I have nits, architectural suggestions, or refactors that are best expressed by leaving prompts near specific lines (similar to the in-line assistance mode of the various LLM IDEs). When the LLM processes those prompts, I want that to happen within the current thread's context, and to affect the context going forward. Of the limited set of agents/IDEs I've used, none really does this, except Aider, which I've long stopped using.
- Codex and Claude - since these are CLI-based tools when used without IDE integration, they have no way to do this except laboriously explaining with @-based file references.
- Zed - Zed lets you mention LLM threads in the Inline Assistant, but it is tedious. I am not a “run 10 agents in parallel” person. I know which context I’m currently in, and don’t want to specify it every time.
- JetBrains Junie - No mention of something like this in the docs.
So, I’ve created a tiny skill instead. Depending on your agent, drop this SKILL.md in the appropriate place. I primarily use Codex CLI. For this, I drop it in ~/.codex/skills/nudge/SKILL.md.
```markdown
---
name: nudge
description: Apply when the user explicitly invokes the nudge skill; scan files modified in the current session for language-specific comments starting with AI! or AI? to make inline code changes or answer questions, then optionally remove those comments upon user confirmation.
---
# Nudge
## Core workflow
- Determine which files were modified in the current session.
- Scan those files for language-specific comments whose content (after the comment marker and optional whitespace) starts with AI! or AI?.
- For each AI! comment, treat the remainder of the line as an inline suggestion and implement the change in the nearby code.
- For each AI? comment, answer the question in the conversation, using the current context.
- After updates and answers, ask the user whether to delete all AI!/AI? comments you processed.
- If the user confirms, remove only those special comments and leave other comments untouched.
## Comment parsing guidance
- Treat both line comments and block comments as valid, based on the file’s language.
- Match only when the comment text itself begins with AI! or AI? after optional whitespace.
- Ignore occurrences in strings or code; only act on actual comments.
- When a block comment spans multiple lines, treat each comment line independently.
## Execution notes
- Make minimal, local changes aligned with each AI! hint.
- If a hint is ambiguous, ask a clarifying question before changing code.
- Do not delete any special comments unless the user explicitly confirms.
```
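To make the comment-matching rules concrete, here is what an annotated file might look like before running the skill. Everything below is invented for illustration; the first two comments should trigger the skill, while the AI! inside a string should be left alone.

```python
import re

TIMEOUT_SECONDS = 30  # AI! make this configurable via an environment variable


def parse_log_line(line):
    # AI? should this handle timezone-aware timestamps, or is UTC assumed?
    match = re.match(r"(\S+) (\S+) (.*)", line)
    return match.groups() if match else None


def banner():
    # The "AI!" below sits inside a string, not a comment, so the skill ignores it.
    return "AI! is just text here"
```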
Note that the skill itself was generated entirely by Codex, using its $skill-creator skill, with a very short prompt.
After that, the workflow is:
- Usual LLM iteration cycle, building up context and edits.
- After a turn is done, go through the changes in my IDE, adding AI! and AI? comments as appropriate.
- Come back to Codex and type $nudge.
- Wait for the LLM to iterate, then repeat this loop until happy.
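Continuing the hypothetical file from above, the AI! hint would typically come back as a small, local edit along these lines (a sketch of what I'd expect, not output from an actual run; the environment variable name is made up):

```python
import os

# Timeout now comes from an environment variable, falling back to the
# previous default of 30 seconds.
TIMEOUT_SECONDS = int(os.environ.get("PARSER_TIMEOUT_SECONDS", "30"))
```

The AI? question, by contrast, gets answered in the conversation rather than in the file.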
I’d really like JetBrains to build on their decent in-IDE review functionality and bring something like this to the LLM iteration cycle: I’d never immediately accept or reject diffs. Instead, after the agent has done a turn, I’d go through the extant diff, make inline suggestions, and then “submit the review”, at which point the AI does another turn.
The Claude Code/Codex IDE integrations could also build on this skill, if they let you launch a custom skill from within the IDE.