
Coderabbit exploration

2026-03-19, I'm exploring around coderabbit.

Auth

Our auth is tied to GitHub.

The funky user

I checked the user list and there's this guy going by ayushsaksena30 who appears in it. He's not part of our GitHub org, yet he somehow got a seat assigned. From what I understand after looking at the Subscription and Billing section, we get charged $30 per assigned user.

I'm concerned security-wise (who's this guy? how did he get in here? what power does he have as a member-level user?), plus we shouldn't go around paying for seats stupidly.

I checked and he has one open PR in lana-bank: https://github.com/GaloyMoney/lana-bank/pull/4276

Does this mean anybody who opens a PR in lana-bank will get a seat automatically? Are we fine with that?

Okay, it seems this is the setting I should fiddle with: https://docs.coderabbit.ai/management/seat-assignment#manual-approval

Local plugin

Coderabbit has plugins for Claude Code, Codex, etc.

You can install this locally, authenticate, and then call it with a simple command. It performs the same type of review we see on GitHub. If we truly value the CodeRabbit suggestions, it would be useful to run this locally before throwing up an unreviewed PR, turning the PR review into an automated check/guardrail.

More details here: https://docs.coderabbit.ai/cli/claude-code-integration
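As a sketch, the local flow would look something like this (install command and flags taken from my reading of the CLI docs; verify them before running, since I haven't double-checked every flag):

```sh
# Install the CodeRabbit CLI (install URL as given in their docs)
curl -fsSL https://cli.coderabbit.ai/install.sh | sh

# One-time authentication against the CodeRabbit account
coderabbit auth login

# Review local changes before opening a PR; plain-text output
# is what you'd pipe into Claude Code or another agent
coderabbit review --plain
```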

PR Summaries

Documented here: https://docs.coderabbit.ai/pr-reviews/summaries Can be configured here: https://app.coderabbit.ai/organization/settings/review/summary

  • They can be turned on or off
  • Prompt can be modified

PR Review

Most interesting details can be modified here: https://app.coderabbit.ai/organization/settings/review/behavior

Mostly, you can adapt prompts for certain code paths and set up filters around paths and PR tags.

Pre-merge checks

Most feel like arbitrary stuff (docstring coverage?); I think they're going to be more noise than help.

The only one that could be interesting is "Assess how well the PR addresses linked issues", but we would need to see how well it reasons, plus think about what happens with issue-less PRs or PRs with anemic issues.

The custom checks are more interesting. They feel like a good fit for conventions that are philosophical and easy to catch through observation, but hard to nail down in specific CI check scripts.

One check, for example, could be that the PR updates relevant parts of the documentation alongside it.
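Loudly hedged sketch of what that check might look like in the config file: I haven't verified these exact keys against the schema (the custom checks may well be UI-only), so treat every key and value here as an assumption to validate against the yaml-configuration docs.

```yaml
# ASSUMED schema, not verified: custom pre-merge checks appear to take a
# name, a mode, and natural-language instructions.
reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Docs updated"
        mode: "warning"   # assumed values: warning | error
        instructions: >
          If the PR changes user-facing behavior, verify that the
          relevant documentation is updated in the same PR.
```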

Coderabbit config

We can configure via file; we should definitely go for that if we stick with the tool: https://docs.coderabbit.ai/getting-started/yaml-configuration
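A minimal `.coderabbit.yaml` sketch, to get a feel for it; the top-level keys are from my reading of the yaml-configuration docs, but the paths and instructions are made-up examples, so validate against the schema before committing:

```yaml
# .coderabbit.yaml at the repo root (sketch; validate against the schema)
language: "en-US"
reviews:
  profile: "chill"              # or "assertive"
  high_level_summary: true
  path_filters:
    - "!**/generated/**"        # hypothetical path, example only
  path_instructions:
    - path: "core/**"           # hypothetical path, example only
      instructions: "Pay extra attention to money-handling invariants."
```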

Learnings

Kind of a long-term memory tool. How useful it is will depend on what it picks up. It'll be hard to maintain if it's only manageable through the web UI, and it will probably need curation. Exportable via CSV, but it still feels quite walled-garden-ish.

It's also unclear what the difference is between learnings and code guidelines.

https://app.coderabbit.ai/learnings

Other stuff

Parts of their own docs feel like slop, which I don't like.

For example, the sections around this link read like slop: https://docs.coderabbit.ai/knowledge-base/learnings#kpi-cards. Purely descriptive, boring, and systematic; no value in them.

I would basically look into this instead: https://code.claude.com/docs/en/github-actions