Audits catch some classes of bugs (known attack patterns, access-control mistakes, integer overflow) and miss others (economic design flaws, game-theoretic manipulation, integration bugs with other programs). Each of Raydium’s programs has undergone multiple rounds of audits; this page lists them and discusses what each audit actually verified.

Per-program audit table

| Program | Auditor | Date | Report |
| --- | --- | --- | --- |
| Order-book AMM | Kudelski Security | Q2 2021 | View |
| Concentrated liquidity (CLMM) | OtterSec | Q3 2022 | View |
| Updated order-book AMM | OtterSec | Q3 2022 | View |
| Staking | OtterSec | Q3 2022 | View |
| Order-book AMM & OpenBook migration | MadShield | Q2 2023 | View |
| Constant-product AMM (CPMM) | MadShield | Q1 2024 | View |
| Burn & Earn (liquidity locker) | Halborn | Q4 2024 | View |
| LaunchLab | Halborn | Q2 2025 | View |
| CPMM (update) | Sec3 | Q3 2025 | View |
| CLMM update — Limit Order, Dynamic Fee, Single Asset Fee | Sec3 | Q2 2026 | View |
Members of the Neodyme team have also performed extensive reviews via bug-bounty agreements. All audit reports for Raydium programs are mirrored under github.com/raydium-io/raydium-docs/audit/. Each auditor also publishes on their own site.

What audits cover

A typical Raydium audit (~3–6 weeks, 2 auditors) covers:
  • Access control — is every privileged operation correctly gated?
  • Arithmetic — overflows, underflows, rounding direction, fixed-point precision.
  • Account validation — does every account have the correct owner, mint, authority?
  • Reentrancy-like patterns — does state update before or after a CPI?
  • PDA derivation — are seeds consistent across all sites?
  • Error codes and messages — do error conditions revert cleanly?
  • Code quality — idiomatic Rust, dead code, unreachable branches.
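The arithmetic bullet in particular has a concrete shape. A minimal Rust sketch (illustrative only, not Raydium source; the function name and parameters are ours) of the two properties an auditor checks: overflow handling and rounding direction.

```rust
/// Illustrative fee computation with an explicit rounding direction.
/// `checked_*` ops turn overflow into a handled error instead of a silent
/// wrap, and rounding *up* favors the protocol so dust-sized amounts
/// cannot underpay fees.
fn fee_ceil(amount: u128, fee_num: u128, fee_den: u128) -> Option<u128> {
    let num = amount.checked_mul(fee_num)?;
    // Ceiling division: (num + den - 1) / den.
    num.checked_add(fee_den.checked_sub(1)?)?.checked_div(fee_den)
}

fn main() {
    // A 0.25% fee on 1_000_003 units: floor would charge 2500, ceil charges 2501.
    assert_eq!(fee_ceil(1_000_003, 25, 10_000), Some(2501));
    // Overflow is reported as None, never wrapped.
    assert_eq!(fee_ceil(u128::MAX, 2, 1), None);
}
```

An auditor verifies not just that the math is overflow-safe, but that every rounding decision errs in the pool's favor rather than the caller's.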

What audits don’t cover

  • Economic game theory — e.g. “if I can create 1000 pools for free, can I grief the router?”
  • MEV / ordering — sandwich attacks, front-running via validator collusion.
  • Off-chain infrastructure — RPC reliability, indexer correctness, frontend.
  • Integrations with other programs — bugs that only manifest when composed with specific lending, options, or aggregator contracts.
  • Emergent behaviors over time — what happens after 10 million positions? Audits look at small-scale test cases.

This is why an audit is not a safety guarantee. Raydium supplements audits with bug bounties, monitoring, and defensive engineering.

Finding-resolution status

Every audit produces a findings list (critical / high / medium / low / informational), with severity counts and per-finding status (Fixed / Acknowledged / Won’t fix). Per-finding breakdowns are not duplicated here — read each report directly via the table above.

Re-audit after significant changes

When a program ships a significant upgrade (new instruction, new account field, new extension support), Raydium commissions a re-audit. The Sec3 Q3 2025 review of CPMM and the Sec3 Q2 2026 review of CLMM (Limit Order, Dynamic Fee, Single Asset Fee) listed in the table above are both re-audits of this kind. The re-audit scope is narrower (just the diff), but it’s genuinely a re-audit — not just a code review. Reports for re-audits are appended to the primary audit report.

On-chain verification

The deployed program hash should match the audited code hash. Anyone can verify:
# Pull the deployed program bytecode.
solana program dump <PROGRAM_ID> program.so

# Build the repo at the audited commit.
git clone https://github.com/raydium-io/raydium-cp-swap
cd raydium-cp-swap
git checkout <audited_commit_hash>
anchor build --verifiable

# The dumped account may be zero-padded past the end of the binary;
# truncate the dump to the built artifact's size before hashing.
truncate -s "$(stat -c%s target/verifiable/raydium_cp_swap.so)" ../program.so

# Compare.
sha256sum ../program.so target/verifiable/raydium_cp_swap.so
Anchor verifiable builds produce deterministic bytecode, so once any trailing zero-padding from the account dump is stripped, the hashes should match exactly. If they don’t, the deployed program is not the audited code; escalate. Raydium publishes the expected hash for each deploy in the repo’s releases section.

How to read an audit report

A short guide for non-auditors:
  1. Skip to the findings summary — a table of severity counts. If the “Critical” count is >0 and you see “Open” status, dig in.
  2. Read each finding’s description and status. “Fixed in commit XYZ” means resolved; “Acknowledged” means the team accepted the risk; “Partially fixed” is worth a closer look.
  3. Scan the scope section. If the audit didn’t cover the instruction or account you care about, the absence of findings there isn’t evidence of safety.
  4. Skim the auditor’s recommendations section. It is often more useful than the findings themselves, surfacing “we couldn’t formally prove this but we’re uneasy” notes.
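Steps 1 and 2 reduce to a simple triage rule. A hypothetical model in Rust (the types and names are ours; real reports are documents, not structured data):

```rust
// Hypothetical model of a findings list. The variant names mirror the
// severity and status labels audit reports use, but no auditor actually
// ships data in this form.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq)]
enum Severity { Critical, High, Medium, Low, Informational }

#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq)]
enum Status { Fixed, Acknowledged, WontFix, PartiallyFixed }

struct Finding { severity: Severity, status: Status }

/// The first number to pull out of any report: criticals not fully fixed.
fn open_criticals(findings: &[Finding]) -> usize {
    findings.iter()
        .filter(|f| f.severity == Severity::Critical && f.status != Status::Fixed)
        .count()
}

fn main() {
    let report = [
        Finding { severity: Severity::Critical, status: Status::Fixed },
        Finding { severity: Severity::High, status: Status::Acknowledged },
        Finding { severity: Severity::Critical, status: Status::PartiallyFixed },
    ];
    // One critical is only partially fixed: dig in.
    assert_eq!(open_criticals(&report), 1);
}
```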

Bug-bounty integration

Audits run pre-deploy; bug bounties run continuously post-deploy. Raydium’s bounty program (security/disclosure) covers everything audits do plus:
  • Economic attacks that audits don’t cover.
  • Bugs found in new integrations.
  • Implementation bugs in SDKs and off-chain components.
A whitehat finding in the bounty program typically gets paid out faster than waiting for the next audit cycle; incentives are aligned to disclose rapidly. The active program is hosted on Immunefi: immunefi.com/bug-bounty/raydium/information.

Historical incidents

Raydium’s programs have had two notable real-world incidents:

Pool authority exploit (December 2022)

  • What: The AMM v4 pool authority’s private key was compromised, allowing an attacker to drain several pools.
  • Scope: An operational key-management failure, not a program bug. Audits had not flagged the code because the code was correct; the key-management process was the failure.
  • Fix: Migration of all authority roles to a Squads multisig, plus additional operational controls.
  • Lesson: Audits don’t cover key management. See security/admin-and-multisig.

OpenBook integration freeze (January 2023)

  • What: An OpenBook program update changed account semantics; AMM v4’s MonitorStep crank couldn’t settle PnL until an AMM v4 patch shipped.
  • Scope: Integration bug; neither program was wrong in isolation.
  • Fix: An AMM v4 patch and a coordinated deploy.
  • Lesson: Audits of program A don’t catch bugs in program A’s integration with program B. The right tools are integration testing and staged rollouts.
