Claude Code launches /ultrareview: multiple agents perform parallel cloud-based code review to find bugs


According to Beating Monitoring, Anthropic has launched /ultrareview (Research Preview) in Claude Code, a cloud-based multi-agent code review feature. Typing /ultrareview in the CLI starts a set of review agents in a remote sandbox that concurrently check the diff between the current branch and the default branch (including uncommitted changes); passing a PR number instead reviews that GitHub PR directly. The process consumes no local resources, takes about 5 to 10 minutes, and results are returned to the session as notifications.
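The workflow described above can be sketched as a CLI session. This is an illustrative transcript based on the announcement, not official documentation; the PR number is hypothetical.

```
$ claude                  # start a Claude Code session in the repo

> /ultrareview            # review the diff between the current branch and the
                          # default branch, including uncommitted changes

> /ultrareview 1234       # or review GitHub PR #1234 directly

# Review agents run in a remote sandbox; results arrive in the
# session as notifications after roughly 5-10 minutes.
```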

The core difference from the local /review command lies in the verification mechanism: each finding is independently reproduced and confirmed by separate agents, so the feature surfaces real bugs rather than code style suggestions. The official documentation positions the two as tools for different stages: /review provides quick feedback during coding, while /ultrareview is for in-depth review of critical changes (such as authentication or data migrations) before merging.

Regarding billing, /ultrareview uses extra usage billing and does not consume plan quota. Pro and Max users each get 3 free uses before May 5 (one-time, non-renewable); after that, each use costs approximately $5 to $20, depending on the scope of the changes. Team and Enterprise users have no free quota. The feature requires Claude.ai account authentication and is unavailable on Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry, as well as for organizations with zero data retention enabled.
