
Greeting


Welcome, stranger!

This is the Dango Book, all you need to know about the one app for everything DeFi.

What is Dango?

Dango is a DeFi-native Layer-1 blockchain built from the ground up for trading. Where most blockchains are general-purpose platforms that happen to host DeFi apps, Dango inverts this: the chain is purpose-built around a DEX, with every infrastructure decision made to serve traders.

Dango describes itself as “the one app for everything DeFi” — combining spot trading, perpetual futures, vaults, and lending within a single interface and a single unified margin account.

Problems Dango Solves

  1. Capital Inefficiency

    On today’s platforms, collateral is siloed. A trader on Aave must deposit separately from their dYdX position, their Uniswap LP, and so on. Dango’s Unified Margin Account lets a single pool of collateral back spot trades, perpetual positions, and lending simultaneously.

  2. Execution Quality & MEV

    AMMs suffer from slippage and impermanent loss by design. Orders are also vulnerable to MEV — bots that front-run transactions for profit at the user’s expense. Dango’s on-chain Central Limit Order Book (CLOB) with periodic batch auctions eliminates both problems.

  3. Terrible UX

    DeFi onboarding is notoriously difficult: manage private keys, pay gas in native tokens, bridge assets across chains, juggle multiple wallets. Dango introduces Smart Accounts — a keyless system where accounts are secured by passkeys (biometrics) instead of seed phrases. Gas is paid in USDC.

  4. Developer Inflexibility

    EVM and Cosmos SDK give developers limited control over gas mechanics, scheduling, and account logic. Dango’s Grug execution environment gives developers programmable gas fees, on-chain cron jobs, and customizable account logic — without hard forks.

Key Stats

| Metric | Value |
|---|---|
| X / Twitter followers | ~111,000 |
| Testnet unique users | 180,000+ |
| Testnet transactions | 1.75M+ |
| Seed funding raised | $3.6M |
| Alpha Mainnet launch | January 2026 |

What Makes Dango Different

Most chains compete on speed (TPS). Dango competes on product design — specifically by building its own execution environment (Grug) co-designed with the application layer. This “app-driven infra development” enables features impossible or prohibitively expensive on EVM chains:

  • On-chain CLOB with sub-second batch settlement
  • Protocol-native cron jobs for automatic funding rate calculation
  • Smart account architecture enabling biometric signing
  • Zero gas fees
  • Unified cross-collateral margin for all trading products

Margin

1. Overview

All trader margin is held internally in the perps contract as a USD value on each user’s userState.

The internal logic of the perps contract uses USD amounts exclusively. Token conversion only happens at two boundaries:

  • Deposit — the user sends settlement currency (USDC) to the perps contract; the token amount is converted to USD (at a fixed $1 per unit, see §2) and credited to userState.margin.
  • Withdraw — the user requests a USD amount; it is converted to settlement currency tokens at the same rate (floor-rounded) and transferred out.

2. Trader Deposit

The user sends settlement currency as attached funds. The perps contract:

  1. Values the settlement currency at a fixed $1 per unit (no oracle lookup).
  2. Converts the token amount to USD: $\text{usd} = \text{amount} \times \$1$.
  3. Increments userState.margin by $\text{usd}$.

The tokens remain in the perps contract’s bank balance.

3. Trader Withdraw

The user specifies how much USD margin to withdraw. The perps contract:

  1. Computes the available margin (see §8), clamped to zero.
  2. Ensures the requested amount does not exceed the available margin.
  3. Deducts the amount from userState.margin.
  4. Converts USD to settlement currency tokens at the fixed $1 rate (floor-rounded to base units).
  5. Transfers the tokens to the user.
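The two token/USD boundaries above can be sketched as follows. This is a minimal illustration, assuming 6-decimal USDC valued at a fixed $1 per unit; the function and variable names are illustrative, not the contract's API:

```python
# Sketch of the deposit/withdraw boundaries. Assumes USDC with 6 decimals
# valued at a fixed $1 per unit; names are illustrative.

USDC_DECIMALS = 6

def deposit(margin_usd: float, token_base_units: int) -> float:
    """Credit attached USDC to the user's USD margin at $1 per token."""
    usd = token_base_units / 10**USDC_DECIMALS  # token amount -> USD at $1
    return margin_usd + usd

def withdraw(margin_usd: float, available_usd: float, amount_usd: float):
    """Deduct USD margin and return the token amount to transfer out."""
    if amount_usd > available_usd:
        raise ValueError("withdrawal exceeds available margin")
    tokens = int(amount_usd * 10**USDC_DECIMALS)  # floor-rounded base units
    return margin_usd - amount_usd, tokens

margin = deposit(0.0, 1_500_000)            # deposit 1.5 USDC -> $1.50 margin
margin, out = withdraw(margin, margin, 1.25)
```

Note that only the boundaries touch token amounts; everything in between is USD arithmetic, matching the overview above.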

4. Equity

A user’s equity (net account value) is:

$$\text{equity} = \text{margin} + \sum_i \text{uPnL}_i - \sum_i \text{accruedFunding}_i$$

where $\text{margin}$ is the USD value of the user’s deposited margin (userState.margin).

Per-position unrealised PnL is:

$$\text{uPnL}_i = \text{size}_i \times (p_{\text{oracle}} - p_{\text{entry},i})$$

and accrued funding is:

$$\text{accruedFunding}_i = \text{size}_i \times (F_{\text{cum}} - F_{\text{entry},i})$$

Positive accrued funding is a cost to the trader (subtracted from equity). Refer to Funding for details on the funding rate.
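The equity computation can be sketched numerically. A minimal illustration with made-up names, using a long of 1 BTC entered at $50,000 and marked at $47,500 with no accrued funding:

```python
# Sketch of equity = margin + sum(unrealised PnL) - sum(accrued funding).
# Positions are tuples (size, entry_price, funding_entry); names illustrative.

def unrealised_pnl(size: float, entry: float, oracle: float) -> float:
    return size * (oracle - entry)          # signed size: long > 0, short < 0

def accrued_funding(size: float, f_entry: float, f_cum: float) -> float:
    return size * (f_cum - f_entry)         # positive = cost to the holder

def equity(margin: float, positions, oracle: float, f_cum: float) -> float:
    return margin + sum(
        unrealised_pnl(s, e, oracle) - accrued_funding(s, fe, f_cum)
        for (s, e, fe) in positions
    )

# $3,000 margin, long 1 BTC from $50,000, oracle now $47,500, no funding:
eq = equity(3_000, [(1.0, 50_000, 0.0)], 47_500, 0.0)   # -> 500.0
```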

5. Initial margin (IM)

$$\text{IM} = \sum_i \lvert\text{size}_i\rvert \times p_{\text{oracle}} \times \text{imr}_i$$

where $\text{imr}_i$ is the per-pair initial margin ratio. IM is the minimum equity required to open or hold positions. It is used in two places:

  • Pre-match margin check — verifies the taker can afford the worst-case 100 % fill (see Order matching §5).
  • Available margin calculation — determines how much can be withdrawn or committed to new limit orders (see §8 below).

When checking a new order the IM is computed with a projected size: the user’s current position in that pair is replaced by the hypothetical post-fill position ($\text{size} + \text{orderSize}$). Positions in other pairs use their actual sizes.

6. Maintenance margin (MM)

$$\text{MM} = \sum_i \lvert\text{size}_i\rvert \times p_{\text{oracle}} \times \text{mmr}_i$$

where $\text{mmr}_i$ is the per-pair maintenance margin ratio (always $\text{mmr}_i < \text{imr}_i$). A user becomes eligible for liquidation when:

$$\text{equity} < \text{MM}$$
See Liquidation for details.

7. Reserved margin

When a GTC limit order is placed, margin is reserved for the worst-case scenario (the entire order is opening):

$$\text{reservedMargin} = \lvert\text{size}\rvert \times p_{\text{limit}} \times \text{imr}$$

The user’s total reserved margin is the sum across all resting orders. Reserved margin is released proportionally as orders fill and fully released on cancellation. Reduce-only orders reserve zero margin (they can only close).

See Order matching §10 for when reservation occurs.

8. Available margin

$$\text{availableMargin} = \text{equity} - \text{IM}_{\text{positions}} - \text{reservedMargin}$$

where $\text{IM}_{\text{positions}}$ is the IM of current positions (§5 formula applied to actual sizes, without any projection). This determines how much can be withdrawn (§3) or committed to new limit orders.
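Putting §5 and §7–§8 together as a numeric sketch. This assumes available margin is equity minus position IM minus total reserved margin, clamped to zero, as described above; names are illustrative:

```python
# Sketch of IM over actual positions and the resulting available margin.
# positions: {pair: (signed_size, imr)}; names illustrative.

def initial_margin(positions, oracle_prices) -> float:
    return sum(abs(s) * oracle_prices[pair] * imr
               for pair, (s, imr) in positions.items())

def available_margin(equity: float, im_positions: float,
                     reserved: float) -> float:
    return max(0.0, equity - im_positions - reserved)   # clamped to zero

positions = {"BTC/USD": (1.0, 0.10)}        # long 1 BTC, 10 % imr
oracles = {"BTC/USD": 50_000.0}
im = initial_margin(positions, oracles)             # ~5,000
avail = available_margin(12_000.0, im, 1_000.0)     # ~6,000
```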

Order Matching

This chapter describes how orders are submitted, matched, filled, and settled in the on-chain perpetual futures order book.

1. Order types

An order can be one of two types:

  • Market — immediate-or-cancel (IOC). Specifies a max_slippage relative to the oracle price. Any unfilled remainder after matching is discarded (unless nothing filled at all, which is an error).
  • Limit — good-till-cancelled (GTC). Specifies a limit_price. Any unfilled remainder is stored as a resting order on the book. If post_only is set, the order is rejected if it would cross the best price on the opposite side — it takes a fast path and never enters the matching engine.

Resting orders on the book are stored as:

| Field | Description |
|---|---|
| user | Owner address |
| size | Signed quantity (positive = buy, negative = sell) |
| reduce_only | If true, can only close an existing position |
| reserved_margin | Margin locked for this order |

The pair ID, order ID, and limit price are part of the storage key.

2. Order decomposition

Before matching, every fill is decomposed into a closing and an opening portion based on the user’s current position:

| Order direction | Current position | Closing size | Opening size |
|---|---|---|---|
| Buy (+) | Short (−) | $\min(\lvert\text{order}\rvert, \lvert\text{position}\rvert)$ | $\lvert\text{order}\rvert - \lvert\text{closing}\rvert$ |
| Sell (−) | Long (+) | $\min(\lvert\text{order}\rvert, \lvert\text{position}\rvert)$ | $\lvert\text{order}\rvert - \lvert\text{closing}\rvert$ |
| Same direction | Any | $0$ | full order size |

(Sizes in the table are magnitudes; the sign follows the order.)

Both closing and opening carry the same sign as the original order size (or are zero). For reduce-only orders, the opening portion is forced to zero — if the resulting fillable size is zero, the transaction is rejected.

3. Target price

The target price defines the worst acceptable execution price for the taker:

Market orders (bid/buy):

$$p_{\text{target}} = p_{\text{oracle}} \times (1 + \text{maxSlippage})$$

Market orders (ask/sell):

$$p_{\text{target}} = p_{\text{oracle}} \times (1 - \text{maxSlippage})$$

Limit orders: $p_{\text{target}} = p_{\text{limit}}$ (oracle price is ignored).

A price constraint is violated when:

  • Bid: $p_{\text{maker}} > p_{\text{target}}$
  • Ask: $p_{\text{maker}} < p_{\text{target}}$

4. Matching engine

The matching engine iterates the opposite side of the book in price-time priority:

  • A bid (buy) walks the asks in ascending price order (cheapest first).
  • An ask (sell) walks the bids in descending price order (most expensive first). Bids are stored with bitwise-NOT inverted prices so that ascending iteration over storage keys yields descending real prices.

At each resting order the engine checks two termination conditions:

  1. $\text{takerRemaining} = 0$ — the taker is fully filled.
  2. The resting order’s price violates the taker’s price constraint.

If neither condition is met, the fill size is:

$$\text{fillSize} = \min(\text{takerRemaining},\; \lvert\text{makerSize}\rvert)$$

After each fill the maker order is updated: reserved margin is released proportionally, and if fully filled the order is removed from the book and open_order_count is decremented.
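The price-time-priority walk can be sketched as follows, for the bid side. This is an illustrative model (list-of-lists book, hypothetical names), not the on-chain storage layout:

```python
# Sketch of the matching walk: a taker bid walks the asks in ascending price
# order, stopping when fully filled or when the next maker price violates
# the taker's target price.

def match_bid(taker_size: float, target_price: float, asks):
    """asks: list of [price, size], sorted ascending (price-time priority)."""
    fills, remaining = [], taker_size
    for maker in asks:
        price, maker_size = maker
        if remaining <= 0:            # termination 1: taker fully filled
            break
        if price > target_price:      # termination 2: price constraint hit
            break
        fill = min(remaining, maker_size)
        fills.append((price, fill))
        maker[1] -= fill              # maker reduced (removed if zero on-chain)
        remaining -= fill
    return fills, remaining

book = [[100.0, 2.0], [101.0, 1.0], [105.0, 5.0]]
fills, rest = match_bid(4.0, 101.0, book)
# fills 2 @ 100 and 1 @ 101; 105 violates the constraint, so 1.0 is unfilled
```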

5. Pre-match margin check

Before matching begins, the taker’s margin is verified (skipped for reduce-only orders). The check ensures the user can afford the worst case — a 100 % fill:

$$\text{equity} \geq \text{IM}_{\text{projected}}$$

where $\text{IM}_{\text{projected}}$ is the initial margin assuming the full order fills (see Margin §5).

This prevents a taker from submitting orders they cannot collateralise.

6. Self-trade prevention

The exchange uses EXPIRE_MAKER mode. When the taker encounters their own resting order on the opposite side:

  1. The maker (resting) order is cancelled (removed from the book).
  2. The taker’s open_order_count and reserved_margin are decremented.
  3. The taker continues matching deeper in the book — no fill occurs for the self-matched order.

7. Fill execution

Each fill between taker and maker is executed as follows:

7a. Funding settlement

Accrued funding is settled on the user’s existing position before the fill:

$$\text{accruedFunding} = \text{size} \times (F_{\text{cum}} - F_{\text{entry}})$$

The negated accrued funding is added to the user’s PnL (positive accrued funding is a cost to longs).

7b. Closing PnL

For the closing portion of the fill:

Long closing (selling to close):

$$\text{PnL} = \lvert\text{closing}\rvert \times (p_{\text{fill}} - p_{\text{entry}})$$

Short closing (buying to close):

$$\text{PnL} = \lvert\text{closing}\rvert \times (p_{\text{entry}} - p_{\text{fill}})$$

The position size is reduced by the closing amount. If the position is fully closed, it is removed from state.

7c. Opening position

For the opening portion of the fill:

  • New position: entry price is set to the fill price.
  • Existing position (same direction): entry price is blended as a weighted average:

$$p_{\text{entry}}' = \frac{\lvert\text{oldSize}\rvert \times p_{\text{entry}} + \lvert\text{fill}\rvert \times p_{\text{fill}}}{\lvert\text{oldSize}\rvert + \lvert\text{fill}\rvert}$$
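The weighted-average blend is a one-liner; a quick sketch with illustrative names:

```python
# Sketch of the entry-price blend when an opening fill adds to an existing
# same-direction position.

def blend_entry(old_size: float, old_entry: float,
                fill_size: float, fill_price: float) -> float:
    total = abs(old_size) + abs(fill_size)
    return (abs(old_size) * old_entry + abs(fill_size) * fill_price) / total

# Long 1 BTC @ $50,000, buy 1 more @ $52,000 -> blended entry $51,000
entry = blend_entry(1.0, 50_000.0, 1.0, 52_000.0)
```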

7d. OI update

Open interest is updated per side:

  • Closing a long: $\text{OI}_{\text{long}} \mathrel{-}= \lvert\text{closing}\rvert$
  • Closing a short: $\text{OI}_{\text{short}} \mathrel{-}= \lvert\text{closing}\rvert$
  • Opening a long: $\text{OI}_{\text{long}} \mathrel{+}= \lvert\text{opening}\rvert$
  • Opening a short: $\text{OI}_{\text{short}} \mathrel{+}= \lvert\text{opening}\rvert$

8. Trading fees

Fees are charged on every fill:

$$\text{fee} = \lvert\text{fillSize}\rvert \times p_{\text{fill}} \times \text{feeRate}$$

The fee rate differs by role:

| Role | Rate | Example value |
|---|---|---|
| Taker | taker_fee_rate | 0.1 % |
| Maker | maker_fee_rate | 0 % |

Fees are always positive (absolute value of fill size is used). They are routed to the vault via the settlement loop described below.

9. PnL settlement

After all fills in an order are complete, PnLs and fees are settled atomically as in-place USD margin adjustments. No token conversions occur during settlement — all values are pure UsdValue arithmetic.

9a. Fee loop

For each non-vault user with a non-zero fee:

$$\text{margin}_{\text{user}} \mathrel{-}= \text{fee}, \qquad \text{margin}_{\text{vault}} \mathrel{+}= \text{fee}$$

Fees from the vault to itself are skipped (no-op). Processing fees first ensures collected fees augment the vault’s margin before any vault losses are absorbed.

9b. PnL loop

Non-vault users:

$$\text{margin}_{\text{user}} \mathrel{+}= \text{pnl}_{\text{user}}$$

A user’s margin can go negative temporarily — the outer function handles bad debt (see Liquidation).

Vault:

$$\text{margin}_{\text{vault}} \mathrel{+}= \text{pnl}_{\text{vault}}$$

A negative $\text{margin}_{\text{vault}}$ represents a deficit (bad debt not yet recovered via ADL).

10. Unfilled remainder

After matching completes:

  • Market orders: the unfilled remainder is silently discarded. If nothing was filled at all, the transaction reverts with “no liquidity at acceptable price”.
  • Limit orders (GTC): the unfilled remainder is stored as a resting order. Storage requires:
    • open_order_count < max_open_orders
    • Price is aligned to the pair’s tick size
    • Sufficient available margin (skipped for reduce-only orders) — see below

Margin reservation (non-reduce-only):

The unfilled portion’s margin requirement is computed and checked against available margin (see Margin §7–§8):

$$\lvert\text{unfilled}\rvert \times p_{\text{limit}} \times \text{imr} \;\leq\; \text{availableMargin}$$

If the check passes, reserved_margin is increased by this amount and open_order_count is incremented. This is the 0 %-fill scenario check — it ensures the user can afford the order even if nothing fills immediately.

Post-only limit orders take a fast path that bypasses the matching engine entirely. They are rejected if they would cross the best price on the opposite side:

  • Buy: rejected if $p_{\text{limit}} \geq \text{bestAsk}$
  • Sell: rejected if $p_{\text{limit}} \leq \text{bestBid}$

If the opposite book is empty, the order always succeeds.
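The post-only check reduces to a single comparison; a sketch with illustrative names:

```python
# Sketch of the post-only fast path: reject if the order would cross the best
# opposite price; succeed unconditionally when the opposite book is empty.

def post_only_ok(side: str, limit_price: float, best_opposite) -> bool:
    if best_opposite is None:                # empty opposite book: always ok
        return True
    if side == "buy":
        return limit_price < best_opposite   # buy must stay below best ask
    return limit_price > best_opposite       # sell must stay above best bid

assert post_only_ok("buy", 99.0, 100.0)      # rests below best ask
assert not post_only_ok("buy", 100.0, 100.0) # would cross: rejected
assert post_only_ok("sell", 101.0, None)     # empty opposite book: ok
```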

11. Open interest constraint

Each pair has a maximum open-interest parameter enforcing a per-side cap:

  • Long opening: $\text{OI}_{\text{long}} + \lvert\text{opening}\rvert \leq \text{OI}_{\max}$
  • Short opening: $\text{OI}_{\text{short}} + \lvert\text{opening}\rvert \leq \text{OI}_{\max}$

The constraint is checked before matching and does not apply to reduce-only orders (which have zero opening size). Long and short OI limits are independent but share the same parameter.

12. Order cancellation

Single cancel

A user can cancel any individual resting order by its order ID.

On cancellation:

  1. The order is removed from the book.
  2. reserved_margin is released (subtracted from the user’s total).
  3. open_order_count is decremented.
  4. If the user state is now empty (no positions, no open orders, no pending unlocks), it is deleted from storage.

Bulk cancel

A user can cancel all of their resting orders across both sides of the book in a single transaction. The contract iterates the user’s resting orders, removing each one and releasing margin. The same cleanup logic applies — if the user state becomes empty after all orders are removed, it is deleted.

Funding

Funding payments are periodic transfers between longs and shorts that anchor the perpetual contract price to the oracle. When the market trades above the oracle, longs pay shorts; when below, shorts pay longs. This mechanism discourages persistent deviations from the spot price without requiring contract expiry.

1. Premium

Each funding cycle begins with measuring how far the order book deviates from the oracle. The contract computes two impact prices by walking the book:

  • Impact bid — the volume-weighted average price (VWAP) obtained by selling $S_{\text{impact}}$ worth of base asset into the bid side.
  • Impact ask — the VWAP obtained by buying $S_{\text{impact}}$ worth from the ask side.

The premium is then:

$$\text{premium} = \frac{\max(0,\; p_{\text{impactBid}} - p_{\text{oracle}}) - \max(0,\; p_{\text{oracle}} - p_{\text{impactAsk}})}{p_{\text{oracle}}}$$

If either side has insufficient depth to fill $S_{\text{impact}}$, its term contributes zero. When both sides lack depth, the premium is zero.

2. Sampling

A cron job runs frequently (e.g. every minute). Each invocation samples the premium for every active pair and accumulates it into the pair’s state:

$$\text{premiumSum} \mathrel{+}= \text{premium}, \qquad \text{sampleCount} \mathrel{+}= 1$$
Sampling more frequently than collecting gives the average premium resilience against momentary spikes — a single large order cannot dominate the rate.

3. Collection

When funding_period has elapsed since the last collection, the same cron invocation finalises the funding rate:

  1. Average premium:

     $$\bar{p} = \frac{\text{premiumSum}}{\text{sampleCount}}$$

  2. Clamp to the configured bounds:

     $$\bar{p}_{\text{clamped}} = \text{clamp}(\bar{p},\; -r_{\max},\; +r_{\max})$$

  3. Funding delta — scale by the actual elapsed interval and oracle price:

     $$\Delta F = \bar{p}_{\text{clamped}} \times \frac{\Delta t}{\text{fundingPeriod}} \times p_{\text{oracle}}$$

  4. Accumulate into the pair-level running total:

     $$F_{\text{cum}} \mathrel{+}= \Delta F$$

  5. Reset accumulators: $\text{premiumSum} = 0$, $\text{sampleCount} = 0$, $\text{lastCollection} = \text{now}$.

4. Position-level settlement

Accrued funding is settled on a position whenever it is touched — during a fill, liquidation, or ADL event:

$$\text{accruedFunding} = \text{size} \times (F_{\text{cum}} - F_{\text{entry}})$$

After settlement the entry point is reset:

$$F_{\text{entry}} = F_{\text{cum}}$$
Sign convention: positive accrued funding is a cost to the holder (longs pay when the rate is positive, shorts pay when it is negative). The negated accrued funding is added to the user’s realised PnL. See Order matching §7a and Vault §4 for how this integrates with fill execution and vault accounting.
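The settlement step can be sketched directly from the sign convention above (illustrative names):

```python
# Sketch of position-level funding settlement: accrued = size x (F_cum -
# F_entry); the negation is credited to realised PnL, then the entry resets.

def settle_funding(size: float, f_entry: float, f_cum: float):
    accrued = size * (f_cum - f_entry)   # positive = cost to the holder
    realised_pnl = -accrued              # cost is subtracted from PnL
    new_f_entry = f_cum                  # entry point reset after settlement
    return realised_pnl, new_f_entry

# Long 2 BTC, cumulative funding rose by $15 per unit since entry:
pnl, f_entry = settle_funding(2.0, 5.0, 20.0)   # holder pays 2 x 15 = $30
```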

5. Parameters

| Field | Type | Description |
|---|---|---|
| funding_period | Duration | Minimum time between funding collections. |
| impact_size | UsdValue | Notional depth walked on each side of the book to compute impact prices. |
| max_abs_funding_rate | FundingRate | Symmetric clamp applied to the average premium before scaling to a delta. Prevents runaway rates during prolonged skew. |

Liquidation & Auto-Deleveraging (ADL)

This document describes how the perpetual futures exchange protects itself from under-collateralised accounts and socialises losses via auto-deleveraging and the insurance fund.

1. Liquidation trigger

Every account has an equity and a maintenance margin (MM):

$$\text{MM} = \sum_i \lvert\text{size}_i\rvert \times p_{\text{oracle}} \times \text{mmr}_i$$

where $\text{mmr}_i$ is the per-pair maintenance-margin ratio. An account becomes liquidatable when

$$\text{equity} < \text{MM}$$

Strict inequality: an account whose equity exactly equals its MM is still safe. An account with no open positions is never liquidatable regardless of its equity.

2. Close schedule

When an account is liquidatable, the system computes the minimum set of position closures needed to restore it above maintenance margin.

  1. For every open position, compute its MM contribution:

     $$\text{MM}_i = \lvert\text{size}_i\rvert \times p_{\text{oracle}} \times \text{mmr}_i$$

  2. Sort positions by MM contribution descending (largest first).

  3. Walk the sorted list and close just enough to cover the deficit $\text{MM} - \text{equity}$:

    • For each position: schedule a closure (up to the full size) and subtract its MM contribution from the remaining deficit.
    • If the remaining deficit $\leq 0$: stop.

This produces a vector of entries. Each has the opposite sign of the existing position (a long is closed with a sell, a short with a buy). Only positions that contribute to the deficit are touched and they may be partially closed when the deficit is small relative to the position.
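A sketch of the close-schedule construction. The deficit-reduction model here — closing q units frees q × oracle × mmr of maintenance margin — is an assumption for illustration, and all names are hypothetical:

```python
# Sketch of the close schedule: sort positions by MM contribution (largest
# first), close just enough to cover the deficit, partially when possible.

def close_schedule(positions, oracles, mmrs, deficit: float):
    """positions: {pair: signed size}. Returns [(pair, close_size), ...]."""
    contrib = sorted(
        ((abs(s) * oracles[p] * mmrs[p], p, s) for p, s in positions.items()),
        reverse=True)
    schedule, remaining = [], deficit
    for mm_i, pair, size in contrib:
        if remaining <= 0:
            break
        frac = min(1.0, remaining / mm_i)        # partial close when enough
        schedule.append((pair, -size * frac))    # opposite sign closes
        remaining -= mm_i * frac
    return schedule

sched = close_schedule({"BTC": 1.0, "ETH": 10.0},
                       {"BTC": 46_000.0, "ETH": 3_000.0},
                       {"BTC": 0.05, "ETH": 0.05}, 2_500.0)
# BTC contributes $2,300 of MM and is closed fully; ETH is partially closed
```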

3. Position closure

Each entry in the close schedule is executed in two phases:

3a. Order book matching

The close is submitted as a market order against the on-chain order book. It matches resting limit orders at price-time priority. Any filled amount is settled normally (mark-to-market PnL between the entry price and the fill price).

3b. Auto-deleveraging (ADL)

If any quantity remains unfilled after the order book is exhausted, the system automatically deleverages against counter-parties. The unfilled remainder is closed against the most profitable counter-positions at the liquidated user’s bankruptcy price.

Counter-party selection: Positions are indexed by the tuple . For a long being liquidated (selling), the system finds shorts with the highest entry price (most profitable) first. For a short being liquidated (buying), it finds longs with the lowest entry price first.1

Bankruptcy price: The fill price at which the liquidated user’s total equity would be exactly zero:

$$p_{\text{bankruptcy}} = p_{\text{oracle}} - \frac{\text{equity}}{\text{size}}$$
Since equity is typically negative for liquidatable users, the bankruptcy price is worse than the oracle price — the counter-party receives a favourable fill. The counter-party’s resting limit orders are not affected by ADL; only their position is force-reduced.
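The bankruptcy price is a one-line computation; the sketch below reuses Example 2’s numbers (long 1 BTC, equity −$1,000 at oracle $46,000):

```python
# Sketch of the bankruptcy price: the fill price at which the liquidated
# user's equity lands exactly at zero.

def bankruptcy_price(oracle: float, equity: float, size: float) -> float:
    """Solve margin + size * (p - entry) = 0 for the fill price p."""
    return oracle - equity / size

p_b = bankruptcy_price(46_000.0, -1_000.0, 1.0)   # -> 47,000
```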

Liquidation fills (both order-book and ADL) carry zero trading fees for both taker and maker.

4. Liquidation fee

After all positions in the schedule are closed, a one-time liquidation fee is charged:

$$\text{fee}_{\text{liq}} = \text{liquidationFeeRate} \times \sum_i \lvert\text{closedSize}_i\rvert \times p_{\text{oracle}}$$

The fee is deducted from the user’s margin and routed to the insurance fund (not the vault). It is capped at the remaining margin so the fee itself never creates bad debt.

5. PnL settlement

All PnL from the liquidation fills (user, book makers, ADL counter-parties) is settled atomically as in-place USD margin adjustments — no token transfers occur. Both user and maker PnL are applied via the same settlement logic described in Order matching §9.

6. Bad debt

After PnL and fee settlement, if the user’s margin is negative the absolute value is bad debt. The margin is floored to zero and the bad debt is subtracted from the insurance fund:

$$\text{insuranceFund} \mathrel{-}= \lvert\text{margin}\rvert, \qquad \text{margin} \leftarrow 0$$

The insurance fund may go negative. A negative insurance fund represents unresolved bad debt — future liquidation fees will replenish it.

Note: when positions are fully ADL’d at the bankruptcy price, the user’s equity is zeroed by construction. Bad debt from ADL fills is therefore zero. Bad debt arises only from book fills at prices worse than the bankruptcy price (e.g., thin order books with deep bids/asks far from oracle).

7. Insurance fund

The insurance fund is a separate pool from the vault that absorbs bad debt and is funded by liquidation fees.

Funding: Every liquidation fee (§4) is credited to the insurance fund.

Usage: Every bad debt event (§6) is debited from the insurance fund.

Negative balance: The insurance fund may go negative when accumulated bad debt exceeds accumulated fees. This is the simplest approach — no special trigger or intervention is needed. Future liquidation fees will naturally replenish the fund.

The vault’s margin is never touched for bad debt or liquidation fees. This isolates liquidity providers from liquidation losses.

Examples

All examples use:

| Parameter | Value |
|---|---|
| Pair | BTC / USD |
| Maintenance-margin ratio (mmr) | 5 % |
| Liquidation-fee rate | 0.1 % |
| Settlement currency | USDC at $1 |

Example 1 — Clean liquidation on book (no bad debt)

Setup

| | Alice | Bob (maker) |
|---|---|---|
| Direction | Long 1 BTC | Bid 1 BTC @ $47,500 |
| Entry price | $50,000 | — |
| Margin | $3,000 | $10,000 |

BTC drops to $47,500

Alice’s account

$$\text{equity} = 3{,}000 + 1 \times (47{,}500 - 50{,}000) = \$500$$
$$\text{MM} = 1 \times 47{,}500 \times 5\,\% = \$2{,}375$$

Since $\$500 < \$2{,}375$, Alice is liquidatable.

Close schedule

Alice has one position; the full 1 BTC long is scheduled for closure.

Execution

The long is closed (sold) into Bob’s resting bid at $47,500.

Liquidation fee

Settlement (margin arithmetic)

Alice’s margin starts at $3,000.

Final margin is positive — no bad debt.
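Example 1 end to end, as straight arithmetic (using the formulas from the Margin and Liquidation chapters; variable names are illustrative):

```python
# Example 1 as a script: equity check, close into the bid, fee, settlement.
entry, oracle, size = 50_000.0, 47_500.0, 1.0
margin, mmr, liq_fee_rate = 3_000.0, 0.05, 0.001

equity = margin + size * (oracle - entry)   # 500: below MM, liquidatable
mm = abs(size) * oracle * mmr               # ~2,375
assert equity < mm

closing_pnl = size * (oracle - entry)       # sold into Bob's bid at $47,500
fee = liq_fee_rate * abs(size) * oracle     # ~47.50
final_margin = margin + closing_pnl - fee   # ~452.50, no bad debt
```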

Example 2 — ADL at bankruptcy price (no book liquidity)

Setup

| | Charlie | Dana |
|---|---|---|
| Direction | Long 1 BTC | Short 1 BTC |
| Entry price | $50,000 | $55,000 |
| Margin | $3,000 | $10,000 |

BTC drops to $46,000

Charlie’s account

$$\text{equity} = 3{,}000 + 1 \times (46{,}000 - 50{,}000) = -\$1{,}000$$
$$\text{MM} = 1 \times 46{,}000 \times 5\,\% = \$2{,}300$$

Since $-\$1{,}000 < \$2{,}300$, Charlie is liquidatable.

Close schedule

Charlie’s full 1 BTC long is scheduled for closure.

Order book matching

No bids on the book — the full 1 BTC is unfilled.

ADL

Bankruptcy price for Charlie’s long:

$$p_b = 46{,}000 - \frac{-1{,}000}{1} = \$47{,}000$$

Dana holds the most profitable short (entry $55,000, current oracle $46,000). Her position is force-closed at $47,000.

Charlie’s PnL at the bankruptcy price:

$$1 \times (47{,}000 - 50{,}000) = -\$3{,}000 \quad\Rightarrow\quad \text{margin} = 3{,}000 - 3{,}000 = \$0$$

Dana’s PnL at the bankruptcy price:

$$1 \times (55{,}000 - 47{,}000) = +\$8{,}000 \quad\Rightarrow\quad \text{margin} = 10{,}000 + 8{,}000 = \$18{,}000$$

Liquidation fee

Charlie’s remaining margin is $0, so the fee (capped at remaining margin) is $0.

No bad debt, no insurance fund impact. Dana receives the full PnL at the bankruptcy price — $1,000 less than a close at the oracle price would have yielded, the difference exactly covering Charlie’s deficit.

Final state

| | Balance |
|---|---|
| Charlie | $0 (fully liquidated) |
| Dana | $18,000 (profit at bankruptcy price) |
| Insurance fund | unchanged |

Example 3 — Book fill creates bad debt

Setup

| | Charlie | Bob (maker) |
|---|---|---|
| Direction | Long 1 BTC | Bid 1 BTC @ $46,000 |
| Entry price | $50,000 | — |
| Margin | $3,000 | $50,000 |

Insurance fund: $500

BTC drops to $46,000

Charlie’s liquidation

Same equity and MM as Example 2. Liquidatable.

Order book matching

The bid at $46,000 fills Charlie’s full 1 BTC sell.

Liquidation fee

$$\text{fee} = 0.1\,\% \times 1 \times 46{,}000 = \$46$$

The fee is capped at Charlie’s remaining margin, which is already negative, so $0 is actually charged.

Bad debt

Charlie’s margin after PnL: $3{,}000 + 1 \times (46{,}000 - 50{,}000) = -\$1{,}000$.

The margin is floored to zero and the $1,000 shortfall is subtracted from the insurance fund:

$$\text{insuranceFund} = 500 - 1{,}000 = -\$500$$

The insurance fund goes negative. Future liquidation fees will replenish it.

Final state

| | Balance |
|---|---|
| Charlie | $0 (fully liquidated) |
| Insurance fund | −$500 (unresolved bad debt) |
| Vault | unchanged (isolated from losses) |

  1. This does not perfectly rank by total PnL since it ignores accumulated funding fees, but is a reasonable and efficient approximation.

Vault

1. Overview

The vault is the passive market maker for the perpetual futures exchange. It continuously quotes bid/ask orders around the oracle price on every pair, earning the spread.

Liquidity providers (LPs) deposit settlement currency into the vault and receive vault shares credited to their account.

2. Liquidity provision

Adding liquidity follows an ERC-4626 virtual shares pattern to prevent the first depositor inflation attack.

Constants

| Name | Value |
|---|---|
| Virtual shares | 1,000,000 |
| Virtual assets | $1 |

Share minting

The LP specifies a USD margin amount $a$ to transfer from their trading margin to the vault. Shares minted:

$$\text{shares} = \left\lfloor a \times \frac{\text{supply} + \text{virtualShares}}{\text{equity} + \text{virtualAssets}} \right\rfloor$$

Floor rounding protects the vault from rounding exploitation. A minimum-shares parameter lets depositors revert if slippage is too high.

First depositor protection

The virtual terms dominate when real supply and equity are small. An attacker cannot inflate the share price to steal from subsequent depositors because the initial share price is effectively $\$1 / 1{,}000{,}000$ per share.
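A sketch of the virtual-shares mint math with the constants above, showing why the first-depositor attack fails (function name illustrative):

```python
# Sketch of ERC-4626 virtual-shares minting with 1,000,000 virtual shares
# and $1 virtual assets; floor rounding on mint.

VIRTUAL_SHARES = 1_000_000
VIRTUAL_ASSETS = 1.0

def shares_for_deposit(usd: float, supply: int, equity: float) -> int:
    return int(usd * (supply + VIRTUAL_SHARES) / (equity + VIRTUAL_ASSETS))

# Empty vault: the first $1 mints ~1,000,000 shares, so the initial share
# price is pinned near $0.000001 and cannot be inflated by a tiny donation.
first = shares_for_deposit(1.0, 0, 0.0)          # -> 1,000,000
later = shares_for_deposit(1.0, first, 1.0)      # second $1 gets a fair price
```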

3. Liquidity withdrawal

The LP specifies how many vault shares to burn. The USD value to release is computed:

$$\text{usd} = \left\lfloor \text{shares} \times \frac{\text{equity} + \text{virtualAssets}}{\text{supply} + \text{virtualShares}} \right\rfloor$$

The fund is not released immediately. A cooldown is initiated, with the ending time computed as:

$$t_{\text{end}} = t_{\text{now}} + \text{cooldown}$$

Once $t_{\text{end}}$ is reached, the contract credits the released USD value back to the LP’s trading margin.

4. Vault equity

The vault has its own user state (positions acquired from market-making fills). Its equity follows the same formula as any user:

$$\text{equity}_{\text{vault}} = \text{margin}_{\text{vault}} + \sum_i \text{uPnL}_i - \sum_i \text{accruedFunding}_i$$

where $\text{margin}_{\text{vault}}$ is the vault’s internal USD margin (updated in-place during settlement), and the sums run over all of the vault’s open positions.

If $\text{equity}_{\text{vault}}$ is non-positive the vault is in catastrophic loss and both deposits and withdrawals are disabled.

5. Market making policy

The vault uses its margin to market make in the order book. For now, it does so following a naïve policy. We expect to optimize this in the future.

Each block, after the oracle update, the vault cancels all existing quotes and recomputes bid/ask orders for every pair.

Margin allocation

Total vault margin is split across pairs by weight:

$$\text{margin}_p = \text{margin}_{\text{vault}} \times \frac{w_p}{\sum_q w_q}$$

where $w_p$ is the pair’s vault_liquidity_weight.

Quote size

Each side receives half the allocated margin, capped by a per-pair maximum:

$$\text{size} = \min\!\left(\frac{\text{margin}_p / 2}{p_{\text{oracle}} \times \text{imr}},\; \text{maxQuoteSize}\right)$$

where $\text{imr}$ is the initial margin ratio and $\text{maxQuoteSize}$ is vault_max_quote_size.

Bid price

Snap down to the nearest tick:

$$p_{\text{bid}} = \left\lfloor \frac{p_{\text{oracle}} \times (1 - \text{halfSpread})}{\text{tick}} \right\rfloor \times \text{tick}$$

Book-crossing prevention: if $p_{\text{bid}} \geq \text{bestAsk}$, clamp to $\text{bestAsk} - \text{tick}$.

Skip if $p_{\text{bid}} = 0$ or the notional is below the minimum order size.

Ask price

Snap up to the nearest tick (ceiling):

$$p_{\text{ask}} = \left\lceil \frac{p_{\text{oracle}} \times (1 + \text{halfSpread})}{\text{tick}} \right\rceil \times \text{tick}$$

Book-crossing prevention: if $p_{\text{ask}} \leq \text{bestBid}$, clamp to $\text{bestBid} + \text{tick}$.

Skip if the notional is below the minimum order size.
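The snap-and-clamp quote computation can be sketched as follows (illustrative names; the clamping mirrors the book-crossing prevention described above):

```python
# Sketch of the vault's quote pricing: snap bid down / ask up to the tick,
# then clamp to avoid crossing the opposite best price.
import math

def quote_prices(oracle: float, half_spread: float, tick: float,
                 best_bid=None, best_ask=None):
    bid = math.floor(oracle * (1 - half_spread) / tick) * tick
    ask = math.ceil(oracle * (1 + half_spread) / tick) * tick
    if best_ask is not None and bid >= best_ask:
        bid = best_ask - tick          # book-crossing prevention
    if best_bid is not None and ask <= best_bid:
        ask = best_bid + tick
    return bid, ask

bid, ask = quote_prices(50_012.0, 0.001, 10.0)
# 50,012 x 0.999 = 49,961.99 -> bid 49,960; x 1.001 = 50,062.01 -> ask 50,070
```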

Per-pair parameters

| Parameter | Role |
|---|---|
| vault_half_spread | Half the bid-ask spread around oracle price |
| vault_max_quote_size | Maximum size per side |
| vault_liquidity_weight | Weight for margin allocation across pairs |
| tick_size | Price granularity for snapping |
| initial_margin_ratio | Used to compute margin-constrained size |
| min_order_size | Minimum notional to place an order |

If any of vault_half_spread, vault_max_quote_size, vault_liquidity_weight, tick_size, or the allocated margin is zero, the vault skips quoting for that pair.

Referral

The referral system lets existing traders recruit new users and earn a share of the trading fees generated by their referrals. When a referred user trades, a portion of the fee — after the protocol treasury has taken its cut — is distributed to the direct referrer and up to four additional upstream referrers in the referral chain.

1. Overview

Three roles participate in a referral commission:

  • Referee — the user who was referred and is paying trading fees.
  • Direct referrer (level 1) — the user who referred the referee. Earns a commission on the referee’s fees and may share a portion of it back with the referee.
  • Upstream referrers (levels 2–5) — referrers further up the chain. Each receives only the marginal increase in commission rate beyond what lower levels already captured.

Commissions are taken from the trading fee after the protocol treasury has claimed its share. The system can be disabled globally by setting referral_active = false in the referral parameters, which causes the commission pass to be skipped entirely.

Referral commissions are applied whenever an order is filled and trading fees are collected. Exception: liquidation fills (both for the taker and the maker) use zero trading fees, so no referral commissions occur during liquidation.

2. Key concepts

Two rates govern how referral fees are distributed:

  • Commission rate ($c$) — the fraction of the post-protocol-cut fee that the referral system distributes at a given level. This rate is tiered: it increases as the referrer’s direct referees accumulate more 30-day rolling trading volume (see §6b). The chain owner can also set a per-user override (see §6a).

  • Fee share ratio ($s$) — the fraction of the level-1 commission that the direct referrer gives back to the referee as a rebate. For example, if the commission rate is 20 % and the share ratio is 50 %, the referee receives 10 % and the referrer keeps 10 %. The share ratio is capped at 50 % and can only increase once set.

3. Registration

3a. Becoming a referrer

A user opts in as a referrer by calling SetFeeShareRatio with a desired share ratio. The share ratio determines what fraction of the level-1 commission the referrer gives back to the referee (see §5a).

Eligibility: the user must have accumulated enough lifetime trading volume:

$$\text{volume}_{\text{lifetime}} \geq \text{min\_referrer\_volume}$$

Users who have a commission rate override (see §6a) bypass this volume requirement.

Constraints:

  • $s \leq \text{MAX\_FEE\_SHARE\_RATIO}$ (50 %) — the maximum share ratio a referrer can set.
  • The share ratio can only increase once set. A subsequent call must supply a value $\geq$ the current ratio.

3b. Registering a referee

A referee is linked to a referrer through one of two paths:

  1. During account creation — the RegisterUser message on the account factory accepts an optional referrer field. If provided, the factory forwards a SetReferral message to the perps contract.
  2. After account creation — the referee (or an account they own) calls SetReferral directly on the perps contract.

Constraints:

  • A user cannot refer themselves ($\text{referrer} \neq \text{referee}$).
  • The referrer must already have a fee share ratio set (i.e. has opted in as a referrer).
  • The referral relationship is immutable once stored — a referee can never change or remove their referrer.

When a referral is registered, a per-referee statistics record is initialised for the (referrer, referee) pair, and the referrer’s referee_count is incremented in today’s cumulative data bucket.

4. Fee split recap

For every fill, a trading fee is computed per Order matching §8:

$$\text{fee} = \lvert\text{fillSize}\rvert \times p_{\text{fill}} \times \text{feeRate}$$

The fee is then split between the protocol treasury and the vault:

$$\text{fee}_{\text{protocol}} = \text{fee} \times \text{protocolFeeRate}, \qquad \text{fee}_{\text{vault}} = \text{fee} - \text{fee}_{\text{protocol}}$$

The protocol fee is routed to the treasury and is not affected by referrals. Referral commissions are computed against $\text{fee}_{\text{vault}}$ — i.e. the remainder of the fee after the protocol has taken its cut.

5. Commission distribution

After PnL settlement and fee collection, the contract distributes referral commissions for every fee-paying user who has a referrer. Commissions are drawn from the post-protocol-cut fee and credited to the recipients’ margins.

5a. Level 1 — direct referrer

Let $c_1$ be the commission rate of the direct referrer (see §6) and $s$ be that referrer’s fee share ratio.

The referee (fee payer) receives:

$$\text{rebate} = \text{fee}_{\text{vault}} \times c_1 \times s$$

The direct referrer receives:

$$\text{commission}_1 = \text{fee}_{\text{vault}} \times c_1 \times (1 - s)$$

Equivalently, the total level-1 commission is $\text{fee}_{\text{vault}} \times c_1$, split between referee and referrer by the share ratio.

5b. Levels 2–5 — upstream referrers

The algorithm walks up the referral chain from the direct referrer. At each level $\ell$ ($2 \leq \ell \leq 5$), let $c_\ell$ be the commission rate of the $\ell$-th referrer and $c_{\max}$ be the maximum commission rate seen at any prior level (initialised to $c_1$):

$$\text{commission}_\ell = \text{fee}_{\text{vault}} \times \max(0,\; c_\ell - c_{\max})$$

After computing $\text{commission}_\ell$, update:

$$c_{\max} \leftarrow \max(c_{\max},\; c_\ell)$$

If $c_\ell \leq c_{\max}$, the referrer at level $\ell$ receives nothing. The chain walk stops early if a referrer at level $\ell$ has no referrer of their own, or after level 5.

Upstream referrers do not use a share ratio — the entire marginal commission goes to the upstream referrer.
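The level-1 split plus the marginal-rate walk can be sketched as follows (illustrative names; the numbers reproduce the worked example in §5d):

```python
# Sketch of referral commission distribution: level-1 split by share ratio,
# then upstream levels 2-5 earn only the marginal rate above the running max.

def distribute(fee_after_protocol: float, chain, share_ratio: float):
    """chain: [(user, commission_rate), ...] from direct referrer upward."""
    payouts = {}
    user1, c1 = chain[0]
    payouts["referee_rebate"] = fee_after_protocol * c1 * share_ratio
    payouts[user1] = fee_after_protocol * c1 * (1 - share_ratio)
    c_max = c1
    for user, c in chain[1:5]:                 # upstream levels 2-5
        if c > c_max:
            payouts[user] = fee_after_protocol * (c - c_max)
            c_max = c
    return payouts

out = distribute(1_000.0, [("C", 0.15), ("B", 0.20), ("A", 0.30)], 0.40)
# C keeps ~$90, the referee gets a ~$60 rebate, B earns ~$50, A earns ~$100
```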

5c. Vault deduction

After processing all fee-paying users, the total of all commissions is deducted from the fee that would otherwise have accrued to the vault:

$$\text{fee}_{\text{vault}} \mathrel{-}= \sum \text{commissions}$$

5d. Worked example

Setup. Five users form a referral chain, each with a commission rate override. User C has a fee share ratio of 40 %:

| User | Commission rate ($c$) | Fee share ratio ($s$) | Referrer |
|---|---|---|---|
| A | 30 % | — | — |
| B | 20 % | — | A |
| C | 15 % | 40 % | B |
| D | — | — | C |
| E | 40 % | — | D |

Trade. User D trades $10 m taker volume, pays $1,000 in fees. Assume the protocol fee rate is zero, so the full $1,000 is subject to referral commissions.

| Level | User | Rate applied | Receives |
|---|---|---|---|
| 1 (referee) | D | $15\,\% \times 40\,\%$ | $60 |
| 1 (referrer) | C | $15\,\% \times 60\,\%$ | $90 |
| 2 | B | $20\,\% - 15\,\% = 5\,\%$ | $50 |
| 3 | A | $30\,\% - 20\,\% = 10\,\%$ | $100 |

Total referral commissions = $300, equal to the highest commission rate in the chain (A’s 30 %) applied to the fee after the protocol cut.

Counter-example. Now User F signs up under User E (40 % commission rate) and trades. Since E’s 40 % exceeds every upstream referrer, no upstream commissions are paid — only E and F split the level-1 commission of $40\,\%$ of the post-protocol-cut fee.

6. Commission rate

The commission rate for a referrer determines the fraction of the post-protocol-cut fee that the referral system distributes at that level.

6a. Override

The chain owner can set (or remove) a per-user override via SetCommissionRateOverride. When present, this value is used directly, bypassing the volume-tiered lookup. Users with an override also bypass the requirement when calling SetFeeShareRatio.

6b. Volume-tiered lookup

When no override exists, is derived from the referrer’s direct referees’ 30-day rolling trading volume:

  1. Load the referrer’s latest cumulative referral data; let $V_{\text{now}}$ be its referees_volume field.

  2. Load the cumulative data at $\text{now} - 30\ \text{days}$; let $V_{\text{then}}$ be its referees_volume field.

  3. Compute the windowed volume:

     $$V_{30d} = V_{\text{now}} - V_{\text{then}}$$

  4. Walk the tiers map and select the entry with the highest volume threshold $\leq V_{30d}$.

  5. If no tier qualifies, use the base rate.

Cumulative data is bucketed by day (see §7a), so the lookup loads the nearest bucket at or before the start of the window.
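The bucket differencing and tier selection can be sketched as follows (illustrative names; buckets are keyed by day index for simplicity):

```python
# Sketch of the 30-day windowed volume lookup over cumulative daily buckets,
# then highest-qualifying-tier selection with a base fallback.

def commission_rate(buckets: dict, today: int, tiers: dict, base: float,
                    lookback: int = 30) -> float:
    """buckets: {day: cumulative referees_volume}. tiers: {threshold: rate}."""
    v_now = buckets[max(buckets)]
    window_start = today - lookback
    prior = [d for d in buckets if d <= window_start]   # nearest bucket at or
    v_then = buckets[max(prior)] if prior else 0.0      # before window start
    windowed = v_now - v_then
    qualifying = [t for t in tiers if t <= windowed]
    return tiers[max(qualifying)] if qualifying else base

buckets = {0: 0.0, 70: 1_000_000.0, 95: 9_000_000.0}
tiers = {1_000_000: 0.10, 5_000_000: 0.20}
rate = commission_rate(buckets, today=100, tiers=tiers, base=0.05)
# windowed volume = 9,000,000 - 1,000,000 = 8,000,000 -> the 20 % tier
```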

7. Data tracking

7a. Cumulative daily buckets

Each user has a UserReferralData record keyed by (user, day). The day is the block timestamp rounded down to midnight. Fields are cumulative (monotonically increasing), so a rolling window is computed by differencing two buckets.

| Field | Type | Description |
|---|---|---|
| volume | UsdValue | User’s own cumulative trading volume. |
| commission_shared_by_referrer | UsdValue | Total commission shared by this user’s referrer. |
| referee_count | u32 | Number of direct referees. |
| referees_volume | UsdValue | Cumulative trading volume of direct referees. |
| commission_earned_from_referees | UsdValue | Total commission earned from direct referees’ trades. |
| cumulative_active_referees | u32 | Cumulative count of daily active direct referees. Difference two buckets to get a windowed count. |

When a referred user trades:

  • The referee’s bucket: volume and commission_shared_by_referrer increment.
  • The direct referrer’s bucket: referees_volume and commission_earned_from_referees increment.
  • Upstream referrers: only commission_earned_from_referees increments (and only if they received a non-zero commission).

7b. Per-referee statistics

For every (referrer, referee) pair, a RefereeStats record tracks:

| Field | Type | Description |
| --- | --- | --- |
| registered_at | Timestamp | When the referral was established. |
| volume | UsdValue | Referee’s total trading volume. |
| commission_earned | UsdValue | Commission earned by referrer from this referee. |
| last_day_active | Timestamp | Last day (rounded to midnight) the referee traded. |

These records are multi-indexed for sorted queries by registered_at, volume, or commission_earned.

7c. Daily active direct referees

On the first trade of each day by a given direct referee, the referrer’s cumulative_active_referees field in today’s cumulative bucket is incremented. Subsequent trades by the same referee on the same day do not increment it again. This is tracked via the last_day_active field on RefereeStats: if last_day_active is earlier than the current day, it is a new active day.

8. Parameters

These fields are part of the top-level Param struct (not a separate nested struct):

| Field | Type | Description |
| --- | --- | --- |
| referral_active | bool | Master switch. When false, referral commissions are skipped entirely. |
| min_referrer_volume | UsdValue | Minimum lifetime trading volume to call SetFeeShareRatio. Bypassed for users with a commission rate override. |
| referrer_commission_rates | RateSchedule | Volume-tiered commission rates. base = fallback rate; tiers = map of 30-day referees volume threshold → rate. Highest qualifying tier wins. |

Constants:

| Name | Value | Description |
| --- | --- | --- |
| MAX_FEE_SHARE_RATIO | 50 % | Maximum share ratio a referrer can set. |
| MAX_REFERRAL_CHAIN_DEPTH | 5 | Maximum levels of upstream referrers walked during commission distribution. |
| COMMISSION_LOOKBACK_DAYS | 30 | Rolling-window length (days) for the volume-tiered commission lookup. |

Risk Parameters

This chapter describes how to choose the risk parameters that govern the perpetual futures exchange — the global Param fields and per-pair PairParam fields defined in the perps contract. The goal is a systematic, reproducible calibration workflow that balances capital efficiency against tail-risk protection.

1. Margin ratios

The initial margin ratio (IMR) sets maximum leverage (max leverage = 1 / IMR). The maintenance margin ratio (MMR) sets the liquidation threshold. Both are per-pair.

1.1 Volatility-based derivation

Start from the asset’s historical daily return distribution:

  1. Collect at least 1 year of daily log-returns.

  2. Compute the 99.5th-percentile absolute daily return r_99.5.

  3. Apply a liquidation-delay factor f (typically 2–3) to account for the time between the price move and the liquidation execution: MMR = f × r_99.5.

  4. Set IMR as a multiple of MMR: IMR = m × MMR.

A higher m gives more buffer between position entry and liquidation, reducing bad-debt risk at the cost of lower leverage.
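
As a quick numeric sketch of this derivation (the price series, delay factor, and IMR multiple below are illustrative assumptions, not recommended values):

```python
import math

def calibrate_margins(daily_prices, f=2.0, m=2.0):
    """Derive (MMR, IMR) from a daily price series (sketch).

    f: liquidation-delay factor (typically 2-3).
    m: IMR as a multiple of MMR.
    """
    # Step 1: absolute daily log-returns.
    rets = [abs(math.log(b / a)) for a, b in zip(daily_prices, daily_prices[1:])]
    # Step 2: 99.5th-percentile absolute daily return.
    rets.sort()
    idx = min(len(rets) - 1, int(0.995 * len(rets)))
    r_tail = rets[idx]
    # Step 3: apply the liquidation-delay factor.
    mmr = f * r_tail
    # Step 4: IMR as a multiple of MMR.
    imr = m * mmr
    return mmr, imr
```

The returned pair should then be cross-checked against the peer benchmarks and invariants below.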

1.2 Peer benchmarks

| Asset | Hyperliquid max leverage | Hyperliquid IMR | dYdX IMR |
| --- | --- | --- | --- |
| BTC | 40× | 2.5 % | 5 % |
| ETH | 25× | 4 % | 5 % |
| SOL | 20× | 5 % | 10 % |
| HYPE | 10× | 10 % | n/a |

1.3 Invariants

The following must hold for every pair:

  1. IMR > MMR
  2. MMR > taker_fee_rate + liquidation_fee_rate

The second constraint ensures a liquidated position can always cover the taker fee and liquidation fee from the maintenance margin cushion.

2. Fee rates

Three fee rates apply globally (not per-pair):

| Parameter | Role |
| --- | --- |
| maker_fee_rate | Charged on limit-order fills; revenue to the vault |
| taker_fee_rate | Charged on market / crossing fills; revenue to the vault |
| liquidation_fee_rate | Charged on liquidation notional; revenue to insurance fund |

2.1 Sizing principles

  • Taker fee should exceed the typical half-spread of the most liquid pair so the vault earns positive expected value on every fill against a taker.
  • Maker fee can be zero, slightly positive, or negative (rebate). A zero maker fee attracts resting liquidity; a negative maker fee pays the maker on every fill. The absolute value of the maker fee rate must not exceed the taker fee rate, otherwise the exchange loses money on each trade.
  • Liquidation fee must satisfy the invariant in §1.3. It should be large enough to fund the insurance pool but small enough that a liquidated user retains some margin when possible.

2.2 Industry benchmarks

| Exchange | Maker | Taker |
| --- | --- | --- |
| Hyperliquid | 0.015% | 0.045% |
| dYdX | 0.01% | 0.05% |
| GMX | 0.05% | 0.07% |

3. Funding parameters

Funding anchors the perp price to the oracle. Two per-pair parameters and one global parameter control its behaviour (see Funding for mechanics):

| Parameter | Scope | Calibration guidance |
| --- | --- | --- |
| funding_period | Global | 1–8 hours. Shorter periods track the premium more tightly but increase gas cost. |
| max_abs_funding_rate | Per-pair | See §3.1. |
| impact_size | Per-pair | See §3.2. |

3.1 Max funding rate

The max daily funding rate limits how much a position can be charged per day. A useful rule of thumb:

max_abs_funding_rate ≈ IMR / N

where N is the number of days it should take sustained max-rate funding to liquidate a fully leveraged position. For N = 100 days and IMR = 5%: max_abs_funding_rate ≈ 0.05% per day.

3.2 Impact size

The impact_size determines how deep the order book is walked to compute the premium. Set it to a representative trade size — large enough that the premium reflects real depth, small enough that thin books don’t produce zero premiums too often. A good starting point is 1–5% of the target max OI.

4. Capacity parameters

4.1 Max open interest

The maximum OI per side caps the exchange’s aggregate exposure:

max_abs_oi (notional) ≈ (w_i × E_vault) / (S × MMR)

where w_i is the pair’s weight fraction, E_vault is the vault’s equity, and S (2–5) is a safety multiplier reflecting how many times maintenance margin the vault could lose in a tail event.

Start conservatively — it is easy to raise OI caps but dangerous to lower them (existing positions above the cap cannot be force-closed).

4.2 Min order size

Prevents dust orders. Set to a notional value that covers at least 2× the gas cost of processing the order. Typical values: $10–$100.

4.3 Tick size

The minimum price increment. Too small increases book fragmentation; too large creates implicit spread. Rule of thumb:

tick_size ≈ 10⁻⁴ × reference price

For BTC at $60,000: tick sizes of $1–$10 are reasonable.

5. Vault parameters

The vault’s market-making policy is controlled by three per-pair parameters and two global parameters (see Vault for mechanics):

5.1 Half-spread

The half-spread should be calibrated to short-term intraday volatility so the vault earns a positive edge:

vault_half_spread ≳ σ_block

where σ_block is the standard deviation of intra-block price changes. A larger spread protects against adverse selection but reduces fill probability.

5.2 Max quote size

Caps the vault’s resting order size per side per pair. It should be consistent with max_abs_oi — the vault should not be able to accumulate more exposure than the system can handle, so keep vault_max_quote_size a small fraction of max_abs_oi.

5.3 Liquidity weight

Determines what fraction of total vault margin is allocated to each pair. Higher-volume, lower-risk pairs should receive higher weights. The sum of all weights equals vault_total_weight.

5.4 Cooldown period

Prevents LPs from front-running known losses. Should exceed the funding period and be long enough that vault positions cannot be manipulated by short-term deposit/withdraw cycles. Typical values: 7–14 days.

6. Operational limits

| Parameter | Calibration guidance |
| --- | --- |
| max_unlocks | Number of concurrent withdrawal requests per user. 5–10 is typical; prevents griefing with many small unlocks. |
| max_open_orders | Maximum resting limit orders per user across all pairs. 50–200; prevents order-book spam. |

7. Calibration workflow

The following checklist produces a complete parameter set from scratch:

  1. Collect data — Gather ≥ 1 year of daily and hourly OHLCV data for each asset.

  2. Compute volatility — For each asset, compute r_99.5 (the daily 99.5th-percentile absolute return) and σ (the hourly return standard deviation).

  3. Set margin ratios — Derive MMR from r_99.5 (§1.1), then IMR as a multiple of MMR. Cross-check against peer benchmarks (§1.2).

  4. Set fees — Choose maker/taker/liquidation fee rates satisfying §2.1 and the invariant in §1.3.

  5. Set funding — Pick funding_period, derive max_abs_funding_rate (§3.1), and calibrate impact_size (§3.2).

  6. Size exposure — Set max_abs_oi from vault equity and tail-risk tolerance (§4.1).

  7. Set order constraints — Choose min_order_size and tick_size (§4.2, §4.3).

  8. Configure vault — Set vault_half_spread, vault_max_quote_size, and vault_liquidity_weight per pair (§5), and vault_cooldown_period globally.

  9. Backtest — Replay historical price data through the parameter set. Verify:

    • Liquidations occur before bad debt in > 99% of cases.
    • Vault PnL is positive over the test period.
    • Funding rates do not hit the clamp for more than 5% of periods.
  10. Deploy conservatively — Launch with the conservative profile (lower leverage, higher fees, lower OI caps). Tighten parameters toward the aggressive profile as the system proves stable and liquidity deepens.

API Reference

This chapter documents the complete API for the Dango perpetual futures exchange. All interactions with the chain go through a single GraphQL endpoint that supports queries, mutations, and WebSocket subscriptions.

1. Transport

1.1 HTTP

All queries and mutations use a standard GraphQL POST request.

Endpoint: See §11. Constants.

Headers:

| Header | Value |
| --- | --- |
| Content-Type | application/json |

Request body:

{
  "query": "query { ... }",
  "variables": { ... }
}

Example — query chain status:

curl -X POST https://<host>/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ queryStatus { block { blockHeight timestamp } chainId } }"}'

Response:

{
  "data": {
    "queryStatus": {
      "block": {
        "blockHeight": 123456,
        "timestamp": "2026-01-15T12:00:00"
      },
      "chainId": "dango-1"
    }
  }
}

1.2 WebSocket

Subscriptions (real-time data) use WebSocket with the graphql-ws protocol.

Endpoint: See §11. Constants.

Connection handshake:

{
  "type": "connection_init",
  "payload": {}
}

Subscribe:

{
  "id": "1",
  "type": "subscribe",
  "payload": {
    "query": "subscription { perpsTrades(pairId: \"perp/btcusd\") { fillPrice fillSize } }"
  }
}

Messages arrive as:

{
  "id": "1",
  "type": "next",
  "payload": {
    "data": {
      "perpsTrades": { ... }
    }
  }
}

1.3 Pagination

List queries use cursor-based pagination (Relay Connection specification).

| Parameter | Type | Description |
| --- | --- | --- |
| first | Int | Return the first N items |
| after | String | Cursor — return items after this |
| last | Int | Return the last N items |
| before | String | Cursor — return items before this |
| sortBy | Enum | BLOCK_HEIGHT_ASC or BLOCK_HEIGHT_DESC |

Response shape:

{
  "pageInfo": {
    "hasNextPage": true,
    "hasPreviousPage": false,
    "startCursor": "abc...",
    "endCursor": "xyz..."
  },
  "nodes": [ ... ]
}

Use first + after for forward pagination, last + before for backward.
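
A forward-pagination loop can be wrapped in a small helper. This is a sketch: fetch_page stands in for whatever GraphQL client function you use, and must return the {pageInfo, nodes} shape shown above.

```python
def paginate_forward(fetch_page, page_size=50):
    """Yield every node by walking first/after cursors (Relay-style)."""
    after = None
    while True:
        page = fetch_page(first=page_size, after=after)
        yield from page["nodes"]
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            break
        after = info["endCursor"]
```

Backward pagination is symmetric: swap in last and before, and follow startCursor while hasPreviousPage is true.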

2. Authentication and transactions

2.1 Transaction structure

Every write operation is wrapped in a signed transaction (Tx):

{
  "sender": "0x1234...abcd",
  "gas_limit": 1500000,
  "msgs": [
    {
      "execute": {
        "contract": "PERPS_CONTRACT",
        "msg": { ... },
        "funds": {}
      }
    }
  ],
  "data": { ... },
  "credential": { ... }
}

| Field | Type | Description |
| --- | --- | --- |
| sender | Addr | Account address sending the transaction |
| gas_limit | u64 | Maximum gas units for execution |
| msgs | [Message] | Non-empty list of messages to execute atomically |
| data | Metadata | Authentication metadata (see §2.2) |
| credential | Credential | Cryptographic proof of sender authorization |

Messages execute atomically — either all succeed or all fail.

2.2 Metadata

The data field contains authentication metadata:

{
  "user_index": 0,
  "chain_id": "dango-1",
  "nonce": 42,
  "expiry": null
}

| Field | Type | Description |
| --- | --- | --- |
| user_index | u32 | The user index that owns the sender account |
| chain_id | String | Chain identifier (prevents cross-chain replay) |
| nonce | u32 | Replay protection nonce |
| expiry | Timestamp \| null | Optional expiration (nanoseconds since epoch); null = no expiry |

Nonce semantics: Dango uses unordered nonces with a sliding window of 20, similar to the approach used by Hyperliquid. The account tracks the 20 most recently seen nonces. A transaction is accepted if its nonce is newer than the oldest seen nonce, has not been used before, and is not greater than the newest seen nonce + 100. This means transactions may arrive out of order without being rejected. SDK implementations should track the next available nonce client-side by querying the account’s seen nonces and choosing the next integer above the maximum.

2.3 Message format

The primary message type for interacting with contracts is execute:

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "trade": {
        "submit_order": {
          "pair_id": "perp/btcusd",
          "size": "0.100000",
          "kind": {
            "market": {
              "max_slippage": "0.010000"
            }
          },
          "reduce_only": false
        }
      }
    },
    "funds": {}
  }
}

| Field | Type | Description |
| --- | --- | --- |
| contract | Addr | Target contract address |
| msg | JSON | Contract-specific execute message (snake_case keys) |
| funds | Coins | Tokens to send with the message: {"<denom>": "<amount>"} or {} if none |

The funds field is a map of denomination to amount string. For example, depositing 1000 USDC:

{
  "funds": {
    "bridge/usdc": "1000000000"
  }
}

USDC uses 6 decimal places in its base unit (1 USDC = 1000000 base units). All bridged tokens use the bridge/ prefix.
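
A small conversion helper for building the funds map (a sketch; usdc_funds is a hypothetical helper name, while the decimals and denom follow the note above):

```python
from decimal import Decimal

USDC_DECIMALS = 6  # 1 USDC = 10^6 base units

def usdc_funds(amount: str) -> dict:
    """Convert a human-readable USDC amount to a funds map in base units."""
    base_units = int(Decimal(amount) * 10**USDC_DECIMALS)
    return {"bridge/usdc": str(base_units)}
```

For example, usdc_funds("1000") produces the {"bridge/usdc": "1000000000"} map shown above.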

2.4 Signing methods

The credential field wraps a StandardCredential or SessionCredential. A StandardCredential identifies the signing key and contains the signature:

Passkey (Secp256r1 / WebAuthn):

{
  "standard": {
    "key_hash": "a1b2c3d4...64hex",
    "signature": {
      "passkey": {
        "authenticator_data": "<base64>",
        "client_data": "<base64>",
        "sig": "0102...40hex"
      }
    }
  }
}
  • sig: 64-byte Secp256r1 signature (hex-encoded)
  • client_data: base64-encoded WebAuthn client data JSON (challenge = base64url of SHA-256 of SignDoc)
  • authenticator_data: base64-encoded WebAuthn authenticator data

Secp256k1:

{
  "standard": {
    "key_hash": "a1b2c3d4...64hex",
    "signature": {
      "secp256k1": "0102...40hex"
    }
  }
}
  • 64-byte Secp256k1 signature (hex-encoded)

EIP-712 (Ethereum wallets):

{
  "standard": {
    "key_hash": "a1b2c3d4...64hex",
    "signature": {
      "eip712": {
        "typed_data": "<base64>",
        "sig": "0102...41hex"
      }
    }
  }
}
  • sig: 65-byte signature (64-byte Secp256k1 + 1-byte recovery ID; hex-encoded)
  • typed_data: base64-encoded JSON of the EIP-712 typed data object

2.5 Session credentials

Session keys allow delegated signing without requiring the master key for every transaction.

{
  "session": {
    "session_info": {
      "session_key": "02abc...33bytes",
      "expire_at": "1700000000000000000"
    },
    "session_signature": "0102...40hex",
    "authorization": {
      "key_hash": "a1b2c3d4...64hex",
      "signature": { ... }
    }
  }
}

| Field | Type | Description |
| --- | --- | --- |
| session_info | SessionInfo | Session key public key + expiration |
| session_signature | ByteArray<64> | SignDoc signed by the session key (hex-encoded) |
| authorization | StandardCredential | SessionInfo signed by the user’s master key |

2.6 SignDoc

The SignDoc is the data structure that gets signed. It mirrors the transaction but replaces the credential with the structured Metadata:

{
  "data": {
    "chain_id": "dango-1",
    "expiry": null,
    "nonce": 42,
    "user_index": 0
  },
  "gas_limit": 1500000,
  "messages": [ ... ],
  "sender": "0x1234...abcd"
}

Signing process:

  1. Serialize the SignDoc to canonical JSON (fields sorted alphabetically).
  2. Hash the serialized bytes with SHA-256.
  3. Sign the hash with the appropriate key.

For Passkey (WebAuthn), the SHA-256 hash becomes the challenge in the WebAuthn request. For EIP-712, the SignDoc is mapped to an EIP-712 typed data structure and signed via eth_signTypedData_v4.
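
Steps 1–2 can be expressed in Python. This is a sketch: it assumes canonical JSON means alphabetically sorted keys with compact separators; verify the exact canonicalization against the official SDK before signing real transactions.

```python
import hashlib
import json

def sign_doc_digest(sign_doc: dict) -> bytes:
    """Serialize a SignDoc to canonical JSON and hash it with SHA-256.

    The returned 32-byte digest is what gets signed (or, for WebAuthn,
    becomes the challenge).
    """
    canonical = json.dumps(sign_doc, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).digest()
```

Because keys are sorted before hashing, the digest is independent of the insertion order of the SignDoc fields.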

2.7 Signing flow

The full transaction lifecycle:

  1. Compose messages — build the contract execute message(s).
  2. Fetch metadata — query chain ID, account’s user_index, and next available nonce.
  3. Simulate — send an UnsignedTx to estimate gas (see §2.8).
  4. Set gas limit — use the simulation result, adding ~770,000 for signature verification overhead.
  5. Build SignDoc — assemble {sender, gas_limit, messages, data}.
  6. Sign — sign the SignDoc with the chosen method.
  7. Broadcast — submit the signed Tx via broadcastTxSync (see §2.9).

2.8 Gas estimation

Use the simulate query to dry-run a transaction:

query Simulate($tx: UnsignedTx!) {
  simulate(tx: $tx)
}

Variables:

{
  "tx": {
    "sender": "0x1234...abcd",
    "msgs": [
      {
        "execute": {
          "contract": "PERPS_CONTRACT",
          "msg": {
            "trade": {
              "deposit": {}
            }
          },
          "funds": {
            "bridge/usdc": "1000000000"
          }
        }
      }
    ],
    "data": {
      "user_index": 0,
      "chain_id": "dango-1",
      "nonce": 42,
      "expiry": null
    }
  }
}

Response:

{
  "data": {
    "simulate": {
      "gas_limit": null,
      "gas_used": 750000,
      "result": {
        "ok": [ ... ]
      }
    }
  }
}

Simulation skips signature verification. Add 770,000 gas (Secp256k1 verification cost) to gas_used when setting gas_limit in the final transaction.
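
Putting the two numbers together (a sketch; the 770,000 figure is from the note above, while the safety buffer is my addition, not part of the protocol):

```python
SIG_VERIFY_GAS = 770_000  # Secp256k1 signature verification cost

def gas_limit_from_simulation(gas_used: int, buffer: float = 1.1) -> int:
    """Gas limit = simulated usage (padded by an assumed safety buffer)
    plus the signature-verification overhead skipped during simulation."""
    return int(gas_used * buffer) + SIG_VERIFY_GAS
```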

2.9 Broadcasting

Submit a signed transaction:

mutation BroadcastTx($tx: Tx!) {
  broadcastTxSync(tx: $tx)
}

Variables:

{
  "tx": {
    "sender": "0x1234...abcd",
    "gas_limit": 1500000,
    "msgs": [ ... ],
    "data": {
      "user_index": 0,
      "chain_id": "dango-1",
      "nonce": 42,
      "expiry": null
    },
    "credential": {
      "standard": {
        "key_hash": "...",
        "signature": { ... }
      }
    }
  }
}

The mutation returns the transaction outcome as JSON.

3. Account management

Dango uses smart accounts instead of externally-owned accounts (EOAs). A user profile is identified by a UserIndex and may own 1 master account and 0-4 subaccounts. Keys are associated with the user profile, not individual accounts.

3.1 Register user

Creating a new user profile is a two-step process:

Step 1 — Register. Call register_user on the account factory: use the account factory address itself as sender, and null for the data and credential fields.

{
  "sender": "ACCOUNT_FACTORY_CONTRACT",
  "gas_limit": 1500000,
  "msgs": [
    {
      "execute": {
        "contract": "ACCOUNT_FACTORY_CONTRACT",
        "msg": {
          "register_user": {
            "key": {
              "secp256r1": "02abc123...33bytes_hex"
            },
            "key_hash": "a1b2c3d4...64hex",
            "seed": 12345,
            "signature": {
              "passkey": {
                "authenticator_data": "<base64>",
                "client_data": "<base64>",
                "sig": "0102...40hex"
              }
            }
          }
        },
        "funds": {}
      }
    }
  ],
  "data": null,
  "credential": null
}

| Field | Type | Description |
| --- | --- | --- |
| key | Key | The user’s initial public key (see §10.3) |
| key_hash | Hash256 | Client-chosen hash identifying this key |
| seed | u32 | Arbitrary number for address variety |
| signature | Signature | Signature over {"chain_id": "dango-1"} proving key ownership |

A master account is created in the inactive state (for the purpose of spam prevention). The new account address is returned in the transaction events.

Step 2 — Activate. Send at least the minimum_deposit (10 USDC = 10000000 bridge/usdc on mainnet) to the new master account address. The transfer can either come from an existing Dango account, or from another chain via Hyperlane bridging. Upon receipt, the account activates itself and becomes ready to use.

3.2 Register subaccount

Create an additional account for an existing user (maximum 5 accounts per user):

{
  "execute": {
    "contract": "ACCOUNT_FACTORY_CONTRACT",
    "msg": {
      "register_account": {}
    },
    "funds": {}
  }
}

Must be sent from an existing account owned by the user.

3.3 Update key

Associate or disassociate a key with the user profile.

Add a key:

{
  "execute": {
    "contract": "ACCOUNT_FACTORY_CONTRACT",
    "msg": {
      "update_key": {
        "key_hash": "a1b2c3d4...64hex",
        "key": {
          "insert": {
            "secp256k1": "03def456...33bytes_hex"
          }
        }
      }
    },
    "funds": {}
  }
}

Remove a key:

{
  "execute": {
    "contract": "ACCOUNT_FACTORY_CONTRACT",
    "msg": {
      "update_key": {
        "key_hash": "a1b2c3d4...64hex",
        "key": "delete"
      }
    },
    "funds": {}
  }
}

3.4 Update username

Set the user’s human-readable username (one-time operation):

{
  "execute": {
    "contract": "ACCOUNT_FACTORY_CONTRACT",
    "msg": {
      "update_username": "alice"
    },
    "funds": {}
  }
}

Username rules: 1–15 characters, lowercase a-z, digits 0-9, and underscore _ only.

The username is cosmetic only — used for human-readable display on the frontend. It is not used in any business logic of the exchange.

3.5 Query user

query {
  user(userIndex: 0) {
    userIndex
    createdBlockHeight
    createdAt
    publicKeys {
      keyHash
      publicKey
      keyType
      createdBlockHeight
      createdAt
    }
    accounts {
      accountIndex
      address
      createdBlockHeight
      createdAt
    }
  }
}

The keyType enum values are: SECP256R1, SECP256K1, ETHEREUM.

3.6 Query accounts

query {
  accounts(userIndex: 0, first: 10) {
    nodes {
      accountIndex
      address
      createdBlockHeight
      createdTxHash
      createdAt
      users {
        userIndex
      }
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}

Filter by userIndex to get all accounts for a specific user, or by address for a specific account.

4. Market data

4.1 Global parameters

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        param: {}
      }
    }
  })
}

Response:

{
  "max_unlocks": 5,
  "max_open_orders": 50,
  "maker_fee_rates": {
    "base": "0.000000",
    "tiers": {}
  },
  "taker_fee_rates": {
    "base": "0.001000",
    "tiers": {}
  },
  "protocol_fee_rate": "0.100000",
  "liquidation_fee_rate": "0.010000",
  "funding_period": "3600000000000",
  "vault_total_weight": "10.000000",
  "vault_cooldown_period": "604800000000000",
  "referral_active": true,
  "min_referrer_volume": "0.000000",
  "referrer_commission_rates": {
    "base": "0.000000",
    "tiers": {}
  }
}

| Field | Type | Description |
| --- | --- | --- |
| max_unlocks | usize | Max concurrent vault unlock requests per user |
| max_open_orders | usize | Max resting limit orders per user (all pairs) |
| maker_fee_rates | RateSchedule | Volume-tiered maker fee rates |
| taker_fee_rates | RateSchedule | Volume-tiered taker fee rates |
| protocol_fee_rate | Dimensionless | Fraction of trading fees routed to treasury |
| liquidation_fee_rate | Dimensionless | Insurance fund fee on liquidations |
| funding_period | Duration | Interval between funding collections (nanoseconds) |
| vault_total_weight | Dimensionless | Sum of all pairs’ vault liquidity weights |
| vault_cooldown_period | Duration | Waiting time before vault withdrawal release (nanoseconds) |
| referral_active | bool | Whether the referral commission system is active |
| min_referrer_volume | UsdValue | Minimum lifetime volume to become a referrer |
| referrer_commission_rates | RateSchedule | Volume-tiered referrer commission rates |

A RateSchedule has two fields: base (the default rate) and tiers (a map of volume threshold to rate; highest qualifying tier wins).

For fee mechanics, see Order matching §8.

4.2 Global state

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        state: {}
      }
    }
  })
}

Response:

{
  "last_funding_time": "1700000000000000000",
  "vault_share_supply": "500000000",
  "insurance_fund": "25000.000000",
  "treasury": "12000.000000"
}

| Field | Type | Description |
| --- | --- | --- |
| last_funding_time | Timestamp | Last funding collection time |
| vault_share_supply | Uint128 | Total vault share tokens |
| insurance_fund | UsdValue | Insurance fund balance |
| treasury | UsdValue | Accumulated protocol fees |

4.3 Pair parameters

All pairs:

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        pair_params: {
          start_after: null,
          limit: 30
        }
      }
    }
  })
}

Single pair:

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        pair_param: {
          pair_id: "perp/btcusd"
        }
      }
    }
  })
}

Response (single pair):

{
  "tick_size": "1.000000",
  "min_order_size": "10.000000",
  "max_abs_oi": "1000000.000000",
  "max_abs_funding_rate": "0.000500",
  "initial_margin_ratio": "0.050000",
  "maintenance_margin_ratio": "0.025000",
  "impact_size": "10000.000000",
  "vault_liquidity_weight": "1.000000",
  "vault_half_spread": "0.001000",
  "vault_max_quote_size": "50000.000000",
  "bucket_sizes": ["1.000000", "5.000000", "10.000000"]
}

| Field | Type | Description |
| --- | --- | --- |
| tick_size | UsdPrice | Minimum price increment for limit orders |
| min_order_size | UsdValue | Minimum notional value (reduce-only exempt) |
| max_abs_oi | Quantity | Maximum open interest per side |
| max_abs_funding_rate | FundingRate | Daily funding rate cap |
| initial_margin_ratio | Dimensionless | Margin to open (e.g. 0.05 = 20x max leverage) |
| maintenance_margin_ratio | Dimensionless | Margin to stay open (liquidation threshold) |
| impact_size | UsdValue | Notional for impact price calculation |
| vault_liquidity_weight | Dimensionless | Vault allocation weight for this pair |
| vault_half_spread | Dimensionless | Half the vault’s bid-ask spread |
| vault_max_quote_size | Quantity | Maximum vault resting size per side |
| bucket_sizes | [UsdPrice] | Price bucket granularities for depth queries |

For the relationship between margin ratios and leverage, see Risk §1.

4.4 Pair state

All pairs:

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        pair_states: {
          start_after: null,
          limit: 30
        }
      }
    }
  })
}

Single pair:

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        pair_state: {
          pair_id: "perp/btcusd"
        }
      }
    }
  })
}

Response:

{
  "long_oi": "12500.000000",
  "short_oi": "10300.000000",
  "funding_per_unit": "0.000123"
}

| Field | Type | Description |
| --- | --- | --- |
| long_oi | Quantity | Total long open interest |
| short_oi | Quantity | Total short open interest |
| funding_per_unit | FundingPerUnit | Cumulative funding accumulator |

For funding mechanics, see Funding.

4.5 Order book depth

Query aggregated order book depth at a given price bucket granularity:

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        liquidity_depth: {
          pair_id: "perp/btcusd",
          bucket_size: "10.000000",
          limit: 20
        }
      }
    }
  })
}

| Parameter | Type | Description |
| --- | --- | --- |
| pair_id | PairId | Trading pair |
| bucket_size | UsdPrice | Price aggregation granularity (must be in bucket_sizes) |
| limit | u32? | Max number of price levels per side |

Response:

{
  "bids": {
    "64990.000000": {
      "size": "12.500000",
      "notional": "812375.000000"
    },
    "64980.000000": {
      "size": "8.200000",
      "notional": "532836.000000"
    }
  },
  "asks": {
    "65010.000000": {
      "size": "10.000000",
      "notional": "650100.000000"
    },
    "65020.000000": {
      "size": "5.500000",
      "notional": "357610.000000"
    }
  }
}

Each level contains:

| Field | Type | Description |
| --- | --- | --- |
| size | Quantity | Absolute order size in the bucket |
| notional | UsdValue | USD notional (size × price) |

4.6 Pair statistics

All pairs:

query {
  allPerpsPairStats {
    pairId
    currentPrice
    price24HAgo
    volume24H
    priceChange24H
  }
}

Single pair:

query {
  perpsPairStats(pairId: "perp/btcusd") {
    pairId
    currentPrice
    price24HAgo
    volume24H
    priceChange24H
  }
}

| Field | Type | Description |
| --- | --- | --- |
| pairId | String! | Pair identifier |
| currentPrice | BigDecimal | Current price (nullable) |
| price24HAgo | BigDecimal | Price 24 hours ago (nullable) |
| volume24H | BigDecimal! | 24h trading volume in USD |
| priceChange24H | BigDecimal | 24h price change percentage (e.g. 5.25 = +5.25%) |

4.7 Historical candles

query {
  perpsCandles(
    pairId: "perp/btcusd",
    interval: ONE_HOUR,
    laterThan: "2026-01-01T00:00:00Z",
    earlierThan: "2026-01-02T00:00:00Z",
    first: 24
  ) {
    nodes {
      pairId
      interval
      open
      high
      low
      close
      volume
      volumeUsd
      timeStart
      timeStartUnix
      timeEnd
      timeEndUnix
      minBlockHeight
      maxBlockHeight
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}

| Parameter | Type | Description |
| --- | --- | --- |
| pairId | String! | Trading pair (e.g. "perp/btcusd") |
| interval | CandleInterval! | Candle interval |
| laterThan | DateTime | Candles after this time (inclusive) |
| earlierThan | DateTime | Candles before this time (exclusive) |

CandleInterval values: ONE_SECOND, ONE_MINUTE, FIVE_MINUTES, FIFTEEN_MINUTES, ONE_HOUR, FOUR_HOURS, ONE_DAY, ONE_WEEK.

PerpsCandle fields:

| Field | Type | Description |
| --- | --- | --- |
| open | BigDecimal | Opening price |
| high | BigDecimal | Highest price |
| low | BigDecimal | Lowest price |
| close | BigDecimal | Closing price |
| volume | BigDecimal | Volume in base units |
| volumeUsd | BigDecimal | Volume in USD |
| timeStart | String | Period start (ISO 8601) |
| timeStartUnix | Int | Period start (Unix timestamp) |
| timeEnd | String | Period end (ISO 8601) |
| timeEndUnix | Int | Period end (Unix timestamp) |
| minBlockHeight | Int | First block in this candle |
| maxBlockHeight | Int | Last block in this candle |

5. User state and orders

5.1 User state

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        user_state: {
          user: "0x1234...abcd"
        }
      }
    }
  })
}

Response:

{
  "margin": "10000.000000",
  "vault_shares": "0",
  "positions": {
    "perp/btcusd": {
      "size": "0.500000",
      "entry_price": "64500.000000",
      "entry_funding_per_unit": "0.000100"
    }
  },
  "unlocks": [],
  "reserved_margin": "500.000000",
  "open_order_count": 2
}

| Field | Type | Description |
| --- | --- | --- |
| margin | UsdValue | Deposited margin (USD) |
| vault_shares | Uint128 | Vault liquidity shares owned |
| positions | Map<PairId, Position> | Open positions by pair |
| unlocks | [Unlock] | Pending vault withdrawals |
| reserved_margin | UsdValue | Margin reserved for resting limit orders |
| open_order_count | usize | Number of resting limit orders |

Position:

| Field | Type | Description |
| --- | --- | --- |
| size | Quantity | Position size (positive = long, negative = short) |
| entry_price | UsdPrice | Average entry price |
| entry_funding_per_unit | FundingPerUnit | Funding accumulator at last modification |
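
Given a Position plus the pair's current funding_per_unit and a mark price, unrealized PnL and accrued funding can be estimated. This is a sketch based on the field descriptions; the funding sign convention is an assumption, so confirm it against the Funding chapter.

```python
from decimal import Decimal

def position_pnl(size, entry_price, mark_price):
    """Unrealized PnL: positive size = long, negative = short."""
    return Decimal(size) * (Decimal(mark_price) - Decimal(entry_price))

def accrued_funding(size, entry_funding_per_unit, funding_per_unit):
    """Funding accrued since the position was last modified.

    Sign convention assumed here: a positive result is owed by the position.
    """
    return Decimal(size) * (
        Decimal(funding_per_unit) - Decimal(entry_funding_per_unit)
    )
```

Using the example response above (0.5 BTC long from $64,500 at a $65,000 mark), position_pnl returns $250.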

Unlock:

| Field | Type | Description |
| --- | --- | --- |
| end_time | Timestamp | When cooldown completes |
| amount_to_release | UsdValue | USD value to release |

Returns null if the user has no state.

Enumerate all user states (paginated):

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        user_states: {
          start_after: null,
          limit: 10
        }
      }
    }
  })
}

Returns: { "<address>": <UserState>, ... }

5.2 Open orders

Query all resting limit orders and conditional (TP/SL) orders for a user:

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        orders_by_user: {
          user: "0x1234...abcd"
        }
      }
    }
  })
}

Response:

{
  "42": {
    "pair_id": "perp/btcusd",
    "size": "0.500000",
    "kind": {
      "limit": {
        "limit_price": "63000.000000",
        "reduce_only": false,
        "reserved_margin": "1575.000000"
      }
    },
    "created_at": "1700000000000000000"
  },
  "43": {
    "pair_id": "perp/btcusd",
    "size": "-0.500000",
    "kind": {
      "conditional": {
        "trigger_price": "70000.000000",
        "trigger_direction": "above"
      }
    },
    "created_at": "1700000100000000000"
  }
}

The response is a map of OrderId → order details. The kind field is either limit or conditional.

5.3 Single order

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        order: {
          order_id: "42"
        }
      }
    }
  })
}

Response:

{
  "user": "0x1234...abcd",
  "pair_id": "perp/btcusd",
  "size": "0.500000",
  "kind": {
    "limit": {
      "limit_price": "63000.000000",
      "reduce_only": false,
      "reserved_margin": "1575.000000"
    }
  },
  "created_at": "1700000000000000000"
}

Returns null if the order does not exist.

5.4 Trading volume

query {
  queryApp(request: {
    wasmSmart: {
      contract: "PERPS_CONTRACT",
      msg: {
        volume: {
          user: "0x1234...abcd",
          since: null
        }
      }
    }
  })
}
| Parameter | Type | Description |
|---|---|---|
| user | Addr | Account address |
| since | Timestamp? | Start time (nanoseconds); null for lifetime volume |

Returns a UsdValue string (e.g. "1250000.000000").

5.5 Trade history

Query historical perps events such as fills, liquidations, and order lifecycle:

query {
  perpsEvents(
    userAddr: "0x1234...abcd",
    eventType: "order_filled",
    pairId: "perp/btcusd",
    first: 50,
    sortBy: BLOCK_HEIGHT_DESC
  ) {
    nodes {
      idx
      blockHeight
      txHash
      eventType
      userAddr
      pairId
      data
      createdAt
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
| Parameter | Type | Description |
|---|---|---|
| userAddr | String | Filter by user address |
| eventType | String | Filter by event type (see §9) |
| pairId | String | Filter by trading pair |
| blockHeight | Int | Filter by block height |

The data field contains the event-specific payload as JSON. For example, an order_filled event:

{
  "order_id": "42",
  "pair_id": "perp/btcusd",
  "user": "0x1234...abcd",
  "fill_price": "65000.000000",
  "fill_size": "0.100000",
  "closing_size": "0.000000",
  "opening_size": "0.100000",
  "realized_pnl": "0.000000",
  "fee": "6.500000"
}

6. Trading operations

Each message is wrapped in a Tx as described in §2 and broadcast via broadcastTxSync.

6.1 Deposit margin

Deposit USDC into the trading margin account:

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "trade": {
        "deposit": {}
      }
    },
    "funds": {
      "bridge/usdc": "1000000000"
    }
  }
}

The deposited USDC is converted to USD at a fixed rate of $1 per USDC and credited to user_state.margin. In this example, 1000000000 base units = 1,000 USDC = $1,000.

6.2 Withdraw margin

Withdraw USD from the trading margin account:

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "trade": {
        "withdraw": {
          "amount": "500.000000"
        }
      }
    },
    "funds": {}
  }
}
| Field | Type | Description |
|---|---|---|
| amount | UsdValue | USD amount to withdraw |

The USD amount is converted to USDC at the fixed rate of $1 per USDC (floor-rounded) and transferred to the sender.
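The two conversions above can be sketched as follows, assuming USDC uses 6 decimal places (so 1 USDC = 1,000,000 base units); this is an illustrative sketch, not code from the Dango source:

```rust
// Deposit: USDC base units -> USD, at the fixed $1-per-USDC rate.
fn deposit_to_usd(usdc_base_units: u128) -> f64 {
    usdc_base_units as f64 / 1_000_000.0
}

// Withdraw: USD -> USDC base units, floor-rounded as noted above.
fn withdraw_to_base_units(usd_amount: f64) -> u128 {
    (usd_amount * 1_000_000.0).floor() as u128
}

fn main() {
    // 1,000,000,000 base units = 1,000 USDC = $1,000.
    assert_eq!(deposit_to_usd(1_000_000_000), 1_000.0);
    // Withdrawing $500 transfers 500,000,000 base units to the sender.
    assert_eq!(withdraw_to_base_units(500.0), 500_000_000);
}
```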

6.3 Submit market order

Buy or sell at the best available prices with a slippage tolerance:

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "trade": {
        "submit_order": {
          "pair_id": "perp/btcusd",
          "size": "0.100000",
          "kind": {
            "market": {
              "max_slippage": "0.010000"
            }
          },
          "reduce_only": false
        }
      }
    },
    "funds": {}
  }
}
| Field | Type | Description |
|---|---|---|
| pair_id | PairId | Trading pair (e.g. "perp/btcusd") |
| size | Quantity | Contract size — positive = buy, negative = sell |
| max_slippage | Dimensionless | Maximum slippage as a fraction of oracle price (0.01 = 1%) |
| reduce_only | bool | If true, only the position-closing portion executes |

Market orders execute immediately (IOC behavior). Any unfilled remainder is discarded. If nothing fills, the transaction reverts.

For order matching mechanics, see Order matching.
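As a rough illustration of how a client might interpret `max_slippage` (a hypothetical sketch, not the exchange's matching code), the worst acceptable fill price is the oracle price shifted by the slippage fraction:

```rust
// Worst acceptable fill price for a market order, given the oracle
// price and the `max_slippage` fraction (0.01 = 1%).
fn worst_fill_price(oracle_price: f64, max_slippage: f64, is_buy: bool) -> f64 {
    if is_buy {
        // A buy tolerates prices up to oracle * (1 + slippage).
        oracle_price * (1.0 + max_slippage)
    } else {
        // A sell tolerates prices down to oracle * (1 - slippage).
        oracle_price * (1.0 - max_slippage)
    }
}

fn main() {
    // 1% slippage on a $65,000 oracle price.
    assert!((worst_fill_price(65_000.0, 0.01, true) - 65_650.0).abs() < 1e-6);
    assert!((worst_fill_price(65_000.0, 0.01, false) - 64_350.0).abs() < 1e-6);
}
```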

6.4 Submit limit order

Place a resting order on the book:

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "trade": {
        "submit_order": {
          "pair_id": "perp/btcusd",
          "size": "-0.500000",
          "kind": {
            "limit": {
              "limit_price": "65000.000000",
              "post_only": false
            }
          },
          "reduce_only": false
        }
      }
    },
    "funds": {}
  }
}
| Field | Type | Description |
|---|---|---|
| limit_price | UsdPrice | Limit price — must be aligned to tick_size |
| post_only | bool | If true, rejected if it would match immediately (maker-only) |
| reduce_only | bool | If true, only position-closing portion is kept |

Limit orders are GTC (good-till-cancelled). The matching portion fills immediately; any unfilled remainder is stored on the book. Margin is reserved for the unfilled portion.

6.5 Cancel order

Cancel a single order:

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "trade": {
        "cancel_order": {
          "one": "42"
        }
      }
    },
    "funds": {}
  }
}

Cancel all orders:

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "trade": {
        "cancel_order": "all"
      }
    },
    "funds": {}
  }
}

Cancellation releases reserved margin and decrements open_order_count.

6.6 Submit conditional order (TP/SL)

Place a take-profit or stop-loss order that triggers when the oracle price crosses a threshold:

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "trade": {
        "submit_conditional_order": {
          "pair_id": "perp/btcusd",
          "size": "-0.100000",
          "trigger_price": "70000.000000",
          "trigger_direction": "above",
          "max_slippage": "0.020000"
        }
      }
    },
    "funds": {}
  }
}
| Field | Type | Description |
|---|---|---|
| pair_id | PairId | Trading pair |
| size | Quantity | Size to close — sign must oppose the position |
| trigger_price | UsdPrice | Oracle price that activates this order |
| trigger_direction | TriggerDirection | "above" or "below" (see below) |
| max_slippage | Dimensionless | Slippage tolerance for the market order at trigger |

Trigger direction:

| Direction | Triggers when | Use case |
|---|---|---|
| above | oracle_price >= trigger_price | Take-profit for longs, stop-loss for shorts |
| below | oracle_price <= trigger_price | Stop-loss for longs, take-profit for shorts |

Conditional orders are always reduce-only with zero reserved margin. When triggered, they execute as market orders.
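The trigger rule from the table above can be stated in a few lines of code (a minimal sketch, not the contract's implementation):

```rust
// "above" fires when the oracle price is at or above the trigger;
// "below" fires when it is at or below.
#[derive(Clone, Copy)]
enum TriggerDirection {
    Above,
    Below,
}

fn is_triggered(dir: TriggerDirection, oracle: f64, trigger: f64) -> bool {
    match dir {
        TriggerDirection::Above => oracle >= trigger,
        TriggerDirection::Below => oracle <= trigger,
    }
}

fn main() {
    // Take-profit for a long at 70,000 fires once oracle >= 70,000.
    assert!(is_triggered(TriggerDirection::Above, 70_000.0, 70_000.0));
    assert!(!is_triggered(TriggerDirection::Above, 69_999.0, 70_000.0));
    // Stop-loss for a long at 60,000 fires once oracle <= 60,000.
    assert!(is_triggered(TriggerDirection::Below, 59_000.0, 60_000.0));
}
```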

6.7 Cancel conditional order

Cancel a single conditional order:

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "trade": {
        "cancel_conditional_order": {
          "one": "43"
        }
      }
    },
    "funds": {}
  }
}

Cancel all conditional orders:

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "trade": {
        "cancel_conditional_order": "all"
      }
    },
    "funds": {}
  }
}

6.8 Liquidate (permissionless)

Force-close all positions of an undercollateralized user. This message can be sent by anyone (liquidation bots):

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "maintain": {
        "liquidate": {
          "user": "0x5678...ef01"
        }
      }
    },
    "funds": {}
  }
}

The transaction reverts if the user is not below the maintenance margin. Unfilled positions are ADL’d against counter-parties at the bankruptcy price. For mechanics, see Liquidation & ADL.

7. Vault operations

The counterparty vault provides liquidity for the exchange. Users can deposit margin into the vault to earn trading fees, and withdraw with a cooldown period.

7.1 Add liquidity

Transfer margin from the trading account to the vault:

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "vault": {
        "add_liquidity": {
          "amount": "1000.000000",
          "min_shares_to_mint": "900000"
        }
      }
    },
    "funds": {}
  }
}
| Field | Type | Description |
|---|---|---|
| amount | UsdValue | USD margin amount to transfer to the vault |
| min_shares_to_mint | Uint128? | Revert if fewer shares are minted (slippage guard) |

Shares are minted proportionally to the vault’s current NAV. For vault mechanics, see Vault.
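Proportional minting typically follows the standard pro-rata formula; the sketch below illustrates it under that assumption (including an illustrative bootstrap rule for an empty vault — it is not taken from the Dango source):

```rust
// shares_minted = deposit * total_shares / vault_nav, so that the
// depositor's share of the vault equals their share of the new NAV.
fn shares_to_mint(deposit_usd: f64, vault_nav_usd: f64, total_shares: f64) -> f64 {
    if total_shares == 0.0 {
        // First depositor: 1 share per USD (illustrative choice).
        deposit_usd
    } else {
        deposit_usd * total_shares / vault_nav_usd
    }
}

fn main() {
    // Vault holds $10,000 NAV with 8,000 shares outstanding.
    // A $1,000 deposit mints 800 shares.
    assert_eq!(shares_to_mint(1_000.0, 10_000.0, 8_000.0), 800.0);
}
```

A client would set `min_shares_to_mint` slightly below this computed value to guard against NAV changes between quoting and execution.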

7.2 Remove liquidity

Request a withdrawal from the vault (initiates cooldown):

{
  "execute": {
    "contract": "PERPS_CONTRACT",
    "msg": {
      "vault": {
        "remove_liquidity": {
          "shares_to_burn": "500000"
        }
      }
    },
    "funds": {}
  }
}
| Field | Type | Description |
|---|---|---|
| shares_to_burn | Uint128 | Number of shares to burn |

Shares are burned immediately. The corresponding USD value enters a cooldown queue. After vault_cooldown_period elapses, funds are automatically credited back to the user’s trading margin.

8. Real-time subscriptions

All subscriptions use the WebSocket transport described in §1.2.

8.1 Perps candles

Stream OHLCV candlestick data for a perpetual pair:

subscription {
  perpsCandles(pairId: "perp/btcusd", interval: ONE_MINUTE) {
    pairId
    interval
    open
    high
    low
    close
    volume
    volumeUsd
    timeStart
    timeStartUnix
    timeEnd
    timeEndUnix
    minBlockHeight
    maxBlockHeight
  }
}

Pushes updated candle data as new trades occur. Fields match the PerpsCandle type in §4.7.

8.2 Perps trades

Stream real-time trade fills for a pair:

subscription {
  perpsTrades(pairId: "perp/btcusd") {
    orderId
    pairId
    user
    fillPrice
    fillSize
    closingSize
    openingSize
    realizedPnl
    fee
    createdAt
    blockHeight
    tradeIdx
  }
}

Behavior: On connection, cached recent trades are replayed first, then new trades stream in real-time.

| Field | Type | Description |
|---|---|---|
| orderId | String | Order ID that produced this fill |
| pairId | String | Trading pair |
| user | String | Account address |
| fillPrice | String | Execution price |
| fillSize | String | Filled size (positive = buy, negative = sell) |
| closingSize | String | Portion that closed existing position |
| openingSize | String | Portion that opened new position |
| realizedPnl | String | PnL realized from the closing portion |
| fee | String | Trading fee charged |
| createdAt | String | Timestamp (ISO 8601) |
| blockHeight | Int | Block in which the trade occurred |
| tradeIdx | Int | Index within the block |

8.3 Contract query polling

Poll any contract query at a regular block interval:

subscription {
  queryApp(
    request: {
      wasmSmart: {
        contract: "PERPS_CONTRACT",
        msg: {
          user_state: {
            user: "0x1234...abcd"
          }
        }
      }
    },
    blockInterval: 5
  ) {
    response
    blockHeight
  }
}
| Parameter | Type | Default | Description |
|---|---|---|---|
| request | GrugQueryInput | (required) | Any valid queryApp request |
| blockInterval | Int | 10 | Push updates every N blocks |

Common use cases:

  • User state — monitor margin, positions, and order counts.
  • Order book depth — track bid/ask levels.
  • Pair states — monitor open interest and funding.

8.4 Block stream

Subscribe to new blocks as they are finalized:

subscription {
  block {
    blockHeight
    hash
    appHash
    createdAt
  }
}

8.5 Event stream

Subscribe to events with optional filtering:

subscription {
  events(
    sinceBlockHeight: 100000,
    filter: [
      {
        type: "order_filled",
        data: [
          {
            path: ["user"],
            checkMode: EQUAL,
            value: ["0x1234...abcd"]
          }
        ]
      }
    ]
  ) {
    type
    method
    eventStatus
    data
    blockHeight
    createdAt
  }
}
| Filter field | Type | Description |
|---|---|---|
| type | String | Event type name |
| data | [FilterData] | Conditions on the event’s JSON data |
| path | [String] | JSON path to the field |
| checkMode | CheckValue | EQUAL (exact match) or CONTAINS (substring) |
| value | [JSON] | Values to match against |

9. Events reference

The perps contract emits the following events. These can be queried via perpsEvents (§5.5) or streamed via the events subscription (§8.5).

Margin events

| Event | Fields | Description |
|---|---|---|
| deposited | user, amount | Margin deposited |
| withdrew | user, amount | Margin withdrawn |

Vault events

| Event | Fields | Description |
|---|---|---|
| liquidity_added | user, amount, shares_minted | LP deposited to vault |
| liquidity_unlocking | user, amount, shares_burned, end_time | LP withdrawal initiated (cooldown) |
| liquidity_released | user, amount | Cooldown completed, funds released |

Order events

| Event | Fields | Description |
|---|---|---|
| order_filled | order_id, pair_id, user, fill_price, fill_size, closing_size, opening_size, realized_pnl, fee | Order partially or fully filled |
| order_persisted | order_id, pair_id, user, limit_price, size | Limit order placed on book |
| order_removed | order_id, pair_id, user, reason | Order removed from book |

Conditional order events

| Event | Fields | Description |
|---|---|---|
| conditional_order_placed | order_id, pair_id, user, trigger_price, trigger_direction, size, max_slippage | TP/SL order created |
| conditional_order_triggered | order_id, pair_id, user, trigger_price, oracle_price | TP/SL triggered by price move |
| conditional_order_removed | order_id, pair_id, user, reason | TP/SL removed |

Liquidation events

| Event | Fields | Description |
|---|---|---|
| liquidated | user, pair_id, adl_size, adl_price, adl_realized_pnl | Position liquidated in a pair |
| deleveraged | user, pair_id, closing_size, fill_price, realized_pnl | Counter-party hit by ADL |
| bad_debt_covered | liquidated_user, amount, insurance_fund_remaining | Insurance fund absorbed bad debt |

ReasonForOrderRemoval

| Value | Description |
|---|---|
| filled | Order fully filled |
| canceled | User voluntarily canceled |
| position_closed | Position was closed (conditional orders only) |
| self_trade_prevention | Order crossed user’s own order on the opposite side |
| liquidated | User was liquidated |
| deleveraged | User was hit by auto-deleveraging |
| slippage_exceeded | Conditional order triggered but could not fill within slippage |

For liquidation and ADL mechanics, see Liquidation & ADL.

10. Types reference

10.1 Numeric types

All numeric types are signed fixed-point decimals with 6 decimal places, built on dango_types::Number. They are serialized as strings:

| Type alias | Dimension | Example usage | Example value |
|---|---|---|---|
| Dimensionless | (pure scalar) | Fee rates, margin ratios, slippage | "0.050000" |
| Quantity | quantity | Position size, order size, OI | "-0.500000" |
| UsdValue | usd | Margin, PnL, notional, fees | "10000.000000" |
| UsdPrice | usd / quantity | Oracle price, limit price, entry price | "65000.000000" |
| FundingPerUnit | usd / quantity | Cumulative funding accumulator | "0.000123" |
| FundingRate | per day | Funding rate cap | "0.000500" |

Additional integer types:

| Type | Encoding | Description |
|---|---|---|
| Uint128 | String | Large integer (e.g. vault shares) |
| u64 | Number or String | Gas limit, timestamps |
| u32 | Number | User index, account index, nonce |

10.2 Identifiers

| Type | Format | Example |
|---|---|---|
| PairId | perp/&lt;base&gt;&lt;quote&gt; | "perp/btcusd", "perp/ethusd" |
| OrderId | Uint64 (string) | "42" |
| ConditionalOrderId | Uint64 (shared counter) | "43" |
| Addr | Hex address | "0x1234...abcd" |
| Hash256 | 64-char hex | "a1b2c3d4e5f6..." |
| UserIndex | u32 | 0 |
| AccountIndex | u32 | 1 |
| Username | 1–15 chars, [a-z0-9_] | "alice" |
| Timestamp | Nanoseconds since epoch (u64) | "1700000000000000000" |
| Duration | Nanoseconds (u64) | "3600000000000" (1 hour) |

10.3 Enums

OrderKind:

{
  "market": {
    "max_slippage": "0.010000"
  }
}
{
  "limit": {
    "limit_price": "65000.000000",
    "post_only": false
  }
}

TriggerDirection:

"above"
"below"

CancelOrderRequest:

{
  "one": "42"
}
"all"

Key:

{
  "secp256r1": "02abc123...33bytes_hex"
}
{
  "secp256k1": "03def456...33bytes_hex"
}
{
  "ethereum": "0x1234...abcd"
}

Credential:

{
  "standard": {
    "key_hash": "...",
    "signature": { ... }
  }
}
{
  "session": {
    "session_info": { ... },
    "session_signature": "...",
    "authorization": { ... }
  }
}

CandleInterval (GraphQL enum):

ONE_SECOND | ONE_MINUTE | FIVE_MINUTES | FIFTEEN_MINUTES | ONE_HOUR | FOUR_HOURS | ONE_DAY | ONE_WEEK

10.4 Response types

Param (global parameters) — see §4.1 for all fields.

PairParam (per-pair parameters) — see §4.3 for all fields.

PairState:

| Field | Type | Description |
|---|---|---|
| long_oi | Quantity | Total long open interest |
| short_oi | Quantity | Total short open interest |
| funding_per_unit | FundingPerUnit | Cumulative funding accumulator |

State (global state) — see §4.2 for all fields.

UserState — see §5.1 for all fields.

Position:

| Field | Type | Description |
|---|---|---|
| size | Quantity | Positive = long, negative = short |
| entry_price | UsdPrice | Average entry price |
| entry_funding_per_unit | FundingPerUnit | Funding accumulator at last update |

Unlock:

| Field | Type | Description |
|---|---|---|
| end_time | Timestamp | When cooldown completes |
| amount_to_release | UsdValue | USD value to release |

QueryOrderResponse:

| Field | Type | Description |
|---|---|---|
| user | Addr | Order owner |
| pair_id | PairId | Trading pair |
| size | Quantity | Order size |
| kind | LimitOrConditionalOrder | Order type and parameters |
| created_at | Timestamp | Creation time |

LimitOrConditionalOrder:

{
  "limit": {
    "limit_price": "65000.000000",
    "reduce_only": false,
    "reserved_margin": "1575.000000"
  }
}
{
  "conditional": {
    "trigger_price": "70000.000000",
    "trigger_direction": "above"
  }
}

LiquidityDepthResponse:

| Field | Type | Description |
|---|---|---|
| bids | Map&lt;UsdPrice, LiquidityDepth&gt; | Bid-side depth by price |
| asks | Map&lt;UsdPrice, LiquidityDepth&gt; | Ask-side depth by price |

LiquidityDepth:

| Field | Type | Description |
|---|---|---|
| size | Quantity | Absolute order size in bucket |
| notional | UsdValue | USD notional (size × price) |

User (account factory):

| Field | Type | Description |
|---|---|---|
| index | UserIndex | User’s numerical index |
| name | Username | User’s username |
| accounts | Map&lt;AccountIndex, Addr&gt; | Accounts owned (index → address) |
| keys | Map&lt;Hash256, Key&gt; | Associated keys (hash → key) |

Account:

| Field | Type | Description |
|---|---|---|
| index | AccountIndex | Account’s unique index |
| owner | UserIndex | Owning user’s index |

11. Constants

Endpoints

| Network | HTTP | WebSocket |
|---|---|---|
| Mainnet | https://api-mainnet.dango.zone/graphql | wss://api-mainnet.dango.zone/graphql |
| Testnet | https://api-testnet.dango.zone/graphql | wss://api-testnet.dango.zone/graphql |

Chain IDs

| Network | Chain ID |
|---|---|
| Mainnet | dango-1 |
| Testnet | dango-testnet-1 |

Contract addresses

These addresses are the same on both mainnet and testnet.

| Name | Address |
|---|---|
| ACCOUNT_FACTORY_CONTRACT | 0x18d28bafcdf9d4574f920ea004dea2d13ec16f6b |
| PERPS_CONTRACT | 0xd04b99adca5d3d31a1e7bc72fd606202f1e2fc69 |
| ORACLE_CONTRACT | 0xcedc5f73cbb963a48471b849c3650e6e34cd3b6d |

Passive liquidity on Dango DEX

Dango DEX is a fully onchain limit order book (LOB) exchange. It uses frequent batch auctions (FBAs), executed at the end of each block, to match orders. Otherwise, it’s not dissimilar to other LOB exchanges, e.g. Hyperliquid.

A major downside of LOBs vs. AMMs is that market making on LOBs requires a high level of sophistication, making it infeasible for average retail investors. From the perspective of an unsophisticated investor who wishes to provide liquidity completely passively on major spot pairs (BTC-USD, ETH-USD, etc.), as of this time, their only options are Uniswap V3 (full range) and Curve V2. However, LP’ing on these AMMs has proven to be generally unprofitable due to arbitrage trades.

Loss from arbitrage trades, measured by loss-versus-rebalancing (LVR), occurs when there’s another, more liquid venue for trading the same pair, where price discovery primarily takes place. In crypto, this is typically the CEXs: Binance, Coinbase, Kraken, etc. Suppose BTC-USD is trading at 95,000. Then, upon favorable news, it jumps to 96,000 on Binance. However, AMMs are completely passive: they never actively adjust quotes based on the news. As such, an arbitrageur can buy BTC at the stale price of 95,000 from the AMM, then sell on Binance for 96,000. LPs in the AMM take the worse side of the trade. Over time, such losses accumulate and, more often than not, outpace the gains from fees.

The objective

Create a passive liquidity pool that provides liquidity on Dango DEX, with the following properties:

  • It will place limit orders in the LOB following a predefined strategy, such as an oracle-informed AMM curve.
  • It aims to be the backstop liquidity. Meaning, it doesn’t need to quote aggressively with super tight spreads. We anticipate professional MMs will take that role. The pool will quote wider (thus taking less risk), and be the backstop in case a big trade eats up all the orders from MMs.
  • It targets majors (BTC, ETH, SOL, etc.) and should be LVR-resistant. At Dango, we want to maximize the benefit of LPs by discouraging arbitrage flow.
  • It doesn’t aim to be resistant to impermanent loss (IL). However, once we ship perpetual futures trading on Dango, we imagine there will be actively managed “vaults” that combine the LP pool and hedging strategies using perps.

Order placement

Let’s discuss how the pool may determine what orders to place in the LOB, starting with the simplest strategy: the constant product curve (“xyk curve”).

Consider a BTC-USD pool that currently contains x units of BTC (the “base asset”) and y units of USD (the “quote asset”). The state of the pool can be considered a point (x, y) on the curve x · y = k, where k is a constant that quantifies how much liquidity there is in the pool. When a trade happens, the state moves to a different point on the same curve (that is, without considering any fee).

Generally, for any AMM curve f(x, y) = k, we define the concept of marginal price as:

p = (∂f/∂x) / (∂f/∂y)

For the xyk curve, this is:

p = y / x

p is the price, denoted as the units of quote asset per one unit of base asset (that is, y over x), of trading an infinitesimal amount of one asset to the other. On a graph, it is the slope of the tangent line that touches the curve at the point (x, y).


Let’s imagine the pool starts from the state (x, y) = (5, 2); marginal price p = 2/5 = 0.4 USD per BTC.

At this time, if a trader swaps 2 units of USD to 2.5 units of BTC, the state would move to the point (2.5, 4), marginal price p = 4/2.5 = 1.6 USD per BTC.

We interpret this as follows: under the state of (5, 2) and following the strategy defined by the xyk curve, the pool offers to sell 2.5 units of BTC over the price range of 0.4–1.6 USD per BTC.

Translating this to the context of the order book, this means the pool would place SELL orders of sizes totalling 2.5 units of BTC between the prices 0.4 and 1.6.
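For the xyk curve, the size offered between two prices has a closed form: since p = y/x and x·y = k, the base reserve at price p is x(p) = √(k/p), so the total SELL size between prices p1 < p2 is x(p1) − x(p2). The following sketch (our own derivation, not code from Dango) checks the numbers in the example above:

```rust
// Base reserve of an xyk pool as a function of marginal price:
// from p = y/x and x*y = k, we get x(p) = sqrt(k / p).
fn base_reserve_at_price(k: f64, p: f64) -> f64 {
    (k / p).sqrt()
}

// Total size offered for sale between prices p1 < p2.
fn sell_size_between(k: f64, p1: f64, p2: f64) -> f64 {
    base_reserve_at_price(k, p1) - base_reserve_at_price(k, p2)
}

fn main() {
    let k = 10.0; // liquidity constant in the example
    // The pool offers 2.5 units of BTC between 0.4 and 1.6 USD per BTC.
    assert!((sell_size_between(k, 0.4, 1.6) - 2.5).abs() < 1e-9);
}
```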

Following this logic, we can devise the following algorithm to work out all the SELL orders that the pool would place:

  • The pool is parameterized by a spread s and a “bin size” b.
  • Start from the marginal price (we denote this simply as p* from here on).
    • The pool would not place any order here. We say the total order size here is zero.
  • Move on to the “bin” at price p* + s/2 (marginal price plus the half spread).
    • This is the price at which the pool will place its first SELL order.
    • Using the approach discussed above, find the total order size between the prices p* and p* + s/2. This is the size of the order to be placed here.
  • Move on to the next “bin”, at price p* + s/2 + b.
    • Using the approach discussed above, find the total order size between the prices p* and p* + s/2 + b.
    • Subtract the total order size between p* and p* + s/2; the difference is the order size to be placed here.
  • Do the same for p* + s/2 + 2b, p* + s/2 + 3b, … until liquidity runs out (the cumulative order size approaches the pool’s entire holding of the base asset, x).

With the same approach, we can work out all the BUY orders for prices below p*.
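The SELL-side algorithm above can be sketched for the xyk curve as follows. The parameter values here are illustrative, and the code is our own sketch rather than the pool's implementation:

```rust
// Base reserve of an xyk pool at marginal price p: x(p) = sqrt(k / p).
fn base_reserve_at_price(k: f64, p: f64) -> f64 {
    (k / p).sqrt()
}

// Compute (price, size) SELL orders for `n_bins` bins, following the
// algorithm above: bin i sits at p* + s/2 + i*b, and its size is the
// curve's cumulative offer up to that price minus what earlier bins took.
fn sell_orders(k: f64, p_star: f64, s: f64, b: f64, n_bins: usize) -> Vec<(f64, f64)> {
    let mut orders = Vec::new();
    let mut cumulative = 0.0;
    for i in 0..n_bins {
        let price = p_star + s / 2.0 + i as f64 * b;
        // Total size the curve offers between p* and this bin's price.
        let total = base_reserve_at_price(k, p_star) - base_reserve_at_price(k, price);
        let size = total - cumulative;
        cumulative = total;
        if size > 0.0 {
            orders.push((price, size));
        }
    }
    orders
}

fn main() {
    let orders = sell_orders(10.0, 0.4, 0.1, 0.1, 5);
    // Every bin has a positive order size.
    assert!(orders.iter().all(|(_, size)| *size > 0.0));
    // Sizes telescope: their sum equals the curve's total offer
    // between p* and the last bin's price.
    let sum: f64 = orders.iter().map(|(_, size)| size).sum();
    let last_price = orders.last().unwrap().0;
    let expected = base_reserve_at_price(10.0, 0.4) - base_reserve_at_price(10.0, last_price);
    assert!((sum - expected).abs() < 1e-9);
}
```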

For the xyk curve, the orders are visualized as follows (based on the example state and a choice of the parameters s and b):

xyk

We see the pool places orders of roughly the same size across a wide price range. That is, the liquidity isn’t concentrated.

As an example of a concentrated liquidity curve, the Solidly curve (x³y + xy³ = k) results in the following orders:

solidly

As we see, liquidity is significantly more concentrated here.

Tackling arbitrage loss

In order to discourage arbitrage flow, the pool needs to actively adjust its quotes based on the prices trading at other, more liquid venues (the CEXs, in our case).

To achieve this, we simply introduce an oracle price term into the AMM invariant. Suppose the oracle price is p₀. Instead of f(x, y) = k, we simply use the curve:

f(p₀ · x, y) = k

The xyk and Solidly curves become the following, respectively:

(p₀ · x) · y = k

(p₀ · x)³ · y + (p₀ · x) · y³ = k

Following the same example with Solidly above, but with the oracle price set to p₀ = 210 (higher than the marginal price of 200), the orders become:

solidly-price-jump

As we see, the pool now quotes around the price of 210. It places bigger orders on the SELL side than on the BUY side, demonstrating a tendency to reduce its holding of the base asset, so that its inventory moves closer to the 1:210 ratio the oracle indicates.

Oracle risk

The biggest risk of an oracle-informed AMM is that the oracle reports incorrect prices. For example, if BTC is trading at 95,000, but the oracle says the price is 0.0001, then traders are able to buy BTC from Dango at around 0.0001, resulting in almost total loss for our LPs.

To reduce the chance of this happening, we plan to employ the following:

  • Use a low latency oracle, specifically Pyth’s 10 ms or 50 ms feed.
  • Pyth prices come with a confidence range: an interval within which Pyth estimates a 95% probability that the true price lies. Our spread parameter should be configured to be similar to or larger than this range.
  • Make the oracle a part of our block building logic. A block is invalid if it doesn’t contain an oracle price. The block producer must submit the price in a transaction at the top of the block.
  • The LP pool is given priority to adjust its orders in response to the oracle price before anyone else. Specifically, since we use FBA, the LP pool is allowed to adjust its orders prior to the auction.
  • Implement circuit breakers that, if triggered, cause the LP pool to cancel all its orders and do nothing until the situation returns to normal. These can include:
    • Oracle price is too old (older than a given threshold).
    • Oracle price makes too big of a jump (e.g. it goes from 95,000 to 0.0001).
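The two circuit-breaker checks can be sketched as follows. The threshold values are illustrative, not Dango's actual configuration:

```rust
// Returns true if the LP pool should cancel all orders and stand down.
fn should_halt(
    now_ns: u64,
    price_timestamp_ns: u64,
    max_age_ns: u64,
    last_price: f64,
    new_price: f64,
    max_jump: f64, // e.g. 0.2 = a 20% move between consecutive updates
) -> bool {
    // Breaker 1: oracle price is too old.
    let stale = now_ns.saturating_sub(price_timestamp_ns) > max_age_ns;
    // Breaker 2: oracle price makes too big of a jump.
    let jump = (new_price - last_price).abs() / last_price > max_jump;
    stale || jump
}

fn main() {
    // Fresh price, small move: keep quoting.
    assert!(!should_halt(1_000, 900, 500, 95_000.0, 95_100.0, 0.2));
    // A 95,000 -> 0.0001 print trips the jump breaker.
    assert!(should_halt(1_000, 900, 500, 95_000.0, 0.0001, 0.2));
    // A fresh-looking price that is actually stale trips the age breaker.
    assert!(should_halt(2_000, 900, 500, 95_000.0, 95_000.0, 0.2));
}
```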

Open questions

  • A market maker usually doesn’t place orders around the oracle price, but rather computes a “reservation price” based on the oracle price as well as his current inventory. Additionally, he usually doesn’t use an equal spread on both sides, but rather skews the spreads based on inventory. A classic model for computing these is that of Avellaneda and Stoikov. Our model does not do these.
  • Whereas Solidly is the simplest concentrated liquidity curve (simpler than Curve V1 or V2), it’s still quite computationally heavy: we need to solve a quartic (4th-degree polynomial) equation using Newton’s method for each “bin”, every block. We would like to explore simpler concentrated liquidity curves.

Audits

A list of audits we have completed so far:

| Time | Auditor | Subject | Links |
|---|---|---|---|
| 2025-04-07 | Zellic | Hyperlane | report |
| 2025-04-02 | Zellic | Account and authentication system | report |
| 2024-10-25 | Zellic | Jellyfish Merkle Tree (JMT) | report |
| Q4 2024 | Informal Systems | Formal specification of JMT in Quint | blog, spec |

Bounded values

A common situation developers find themselves in is that their contract needs to take a value that must be within certain bounds.

For example, a fee rate should be within the range of 0 to 1; it doesn’t make sense to charge more than a 100% fee. Whenever a fee rate is provided, the contract needs to verify it’s within bounds, throwing an error if not:

#![allow(unused)]
fn main() {
#[grug::derive(Serde)]
struct InstantiateMsg {
    pub fee_rate: Udec256,
}

#[grug::export]
fn instantiate(ctx: MutableCtx, msg: InstantiateMsg) -> anyhow::Result<Response> {
    ensure!(
        Udec256::ZERO <= msg.fee_rate && msg.fee_rate < Udec256::ONE,
        "fee rate is out of bounds"
    );

    Ok(Response::new())
}
}

We call this an imperative approach for working with bounded values.

The problem with this is that the declaration and validation of fee_rate are in two places, often in two separate files. Sometimes developers simply forget to do the validation.

Instead, Grug encourages a declarative approach. We declare the valid range of a value at the time we define it, utilizing the Bounded type and Bounds trait:

#![allow(unused)]
fn main() {
use grug::{Bounded, Bounds};
use std::ops::Bound;

struct FeeRateBounds;

impl Bounds<Udec256> for FeeRateBounds {
    const MIN: Bound<Udec256> = Bound::Inclusive(Udec256::ZERO);
    const MAX: Bound<Udec256> = Bound::Exclusive(Udec256::ONE);
}

type FeeRate = Bounded<Udec256, FeeRateBounds>;

#[grug::derive(Serde)]
struct InstantiateMsg {
    pub fee_rate: FeeRate,
}

#[grug::export]
fn instantiate(ctx: MutableCtx, msg: InstantiateMsg) -> anyhow::Result<Response> {
    // No need to validate the fee rate here.
    // Its bounds are already verified when `msg` is deserialized!

    Ok(Response::new())
}
}

Chain upgrades

There are three dimensions in which to evaluate whether a chain upgrade is a breaking change:

  • Consensus-breaking: a change in the chain’s business logic. Given the same finalized state as of block N - 1 and the same block N, executing the block N using the old and the new software would yield different results, resulting in a consensus failure.
  • State-breaking: a change in the format in which the chain’s state is stored in the DB.
  • API-breaking: a change in the chain’s transaction or query API.

For example, PR #1217 is breaking in all three dimensions; PR #1299 however, is state-breaking, but not consensus- or API-breaking.

Generally speaking, an upgrade that is breaking in any dimension requires a coordinated upgrade, meaning all validating nodes should halt at exactly the same block height, upgrade the software, run the upgrade logic (if any), and resume block production.

Coordinated upgrade

The typical procedure of a coordinated upgrade is as follows, in chronological order:

  1. The chain owner sends a transaction containing a message in the following schema:

    {
      "upgrade": {
        "height": 12345,
        "cargo_version": "1.2.3",
        "git_tag": "v1.2.3",
        "url": "https://github.com/left-curve/left-curve/releases/v1.2.3"
      }
    }
    

    This signals to node operators at which block the chain will be upgraded, and the proper version of node software they should upgrade to. The node operators should not upgrade the software at this point yet.

  2. The chain finalizes the block right before the upgrade height (12344 in this example). At the upgrade height (12345), during FinalizeBlock, Grug app notices the upgrade height is reached, but the chain isn’t using the correct version (1.2.3), so it performs a graceful halt of the chain by returning an error in ABCI FinalizeBlockResponse. The upgrade height (12345) is not finalized, with no state change committed.

  3. The node operator replaces the node software on the server with the correct version (1.2.3), and restarts the service.

  4. CometBFT attempts FinalizeBlock of the upgrade height (12345) again. Grug app notices the upgrade height is reached, and the software is of the correct version. It runs the upgrade logic specified in App::upgrade_handler (if any), and then resumes processing blocks.

Automation

Cosmos SDK chains use a similar approach to coordinate upgrades, via the x/upgrade module. There exists a tool, cosmovisor, that automates step 3 discussed in the previous section, so the node operator doesn’t have to do anything manually. Such a tool doesn’t exist for Grug chains yet, but we’re working on it.

Entry points

Each Grug smart contract exposes several predefined Wasm export functions known as entry points. The state machine (also referred to as the host) executes or queries contracts by calling these functions. Some entry points are mandatory, while others are optional. The Grug standard library provides the #[grug::export] macro, which helps define entry points.

This page lists all supported entry points, in Rust pseudo-code.

Memory

These two are auto-implemented. They are used by the host to load data into the Wasm memory. The contract programmer should not try modifying them.

#![allow(unused)]
fn main() {
#[unsafe(no_mangle)]
extern "C" fn allocate(capacity: u32) -> u32;

#[unsafe(no_mangle)]
extern "C" fn deallocate(region_ptr: u32);
}

Basic

These are basic entry points that pretty much every contract may need to implement.

#![allow(unused)]
fn main() {
#[grug::export]
fn instantiate(ctx: MutableCtx, msg: InstantiateMsg) -> Result<Response>;

#[grug::export]
fn execute(ctx: MutableCtx, msg: ExecuteMsg) -> Result<Response>;

#[grug::export]
fn migrate(ctx: MutableCtx, msg: MigrateMsg) -> Result<Response>;

#[grug::export]
fn receive(ctx: MutableCtx) -> Result<Response>;

#[grug::export]
fn reply(ctx: SudoCtx, msg: ReplyMsg, result: SubMsgResult) -> Result<Response>;

#[grug::export]
fn query(ctx: ImmutableCtx, msg: QueryMsg) -> Result<Binary>;
}

Fee

In Grug, gas fees are handled by a smart contract called the taxman. It must implement the following two exports:

#![allow(unused)]
fn main() {
#[grug::export]
fn withhold_fee(ctx: AuthCtx, tx: Tx) -> Result<Response>;

#[grug::export]
fn finalize_fee(ctx: AuthCtx, tx: Tx, outcome: Outcome) -> Result<Response>;
}

Authentication

These are entry points that a contract needs in order to be able to initiate transactions.

#![allow(unused)]
fn main() {
#[grug::export]
fn authenticate(ctx: AuthCtx, tx: Tx) -> Result<Response>;

#[grug::export]
fn backrun(ctx: AuthCtx, tx: Tx) -> Result<Response>;
}

Bank

In Grug, token balances and transfers are handled by a contract known as the bank. It must implement the following two exports:

#[grug::export]
fn bank_execute(ctx: SudoCtx, msg: BankMsg) -> Result<Response>;

#[grug::export]
fn bank_query(ctx: ImmutableCtx, msg: BankQuery) -> Result<BankQueryResponse>;

Cronjobs

The chain’s owner can appoint a number of contracts to be automatically invoked at regular time intervals. Each such contract must implement the following entry point:

#[grug::export]
fn cron_execute(ctx: SudoCtx) -> Result<Response>;

IBC

Contracts that are to be used as IBC light clients must implement the following entry point:

#[grug::export]
fn ibc_client_query(ctx: ImmutableCtx, msg: IbcClientQuery) -> Result<IbcClientQueryResponse>;

Contracts that are to be used as IBC applications must implement the following entry points:

#[grug::export]
fn ibc_packet_receive(ctx: MutableCtx, msg: IbcPacketReceiveMsg) -> Result<Response>;

#[grug::export]
fn ibc_packet_ack(ctx: MutableCtx, msg: IbcPacketAckMsg) -> Result<Response>;

#[grug::export]
fn ibc_packet_timeout(ctx: MutableCtx, msg: IbcPacketTimeoutMsg) -> Result<Response>;

Extension traits

In Grug, we make use of the extension trait pattern, which is well explained by this video.

To put it simply, a Rust library has two options for shipping a piece of functionality: ship a function, or ship a trait.

For instance, suppose our library needs to ship the functionality of converting Rust values to strings.

Shipping a function

The library exports a function:

pub fn to_json_string<T>(data: &T) -> String
where
    T: serde::Serialize,
{
    serde_json::to_string(data).unwrap_or_else(|err| {
        panic!("failed to serialize to JSON string: {err}");
    })
}

The consumer imports the function:

use grug::to_json_string;

let my_string = to_json_string(&my_data);

Shipping a trait

The library exports a trait, and implements the trait for all eligible types.

The trait is typically named {...}Ext, where “Ext” stands for extension, because the trait effectively extends the functionality of the types that implement it.

pub trait JsonSerExt {
    fn to_json_string(&self) -> String;
}

impl<T> JsonSerExt for T
where
    T: serde::Serialize,
{
    fn to_json_string(&self) -> String {
        serde_json::to_string(self).unwrap_or_else(|err| {
            panic!("failed to serialize to JSON string: {err}");
        })
    }
}

The consumer imports the trait:

use grug::JsonSerExt;

let my_string = my_data.to_json_string();

Extension traits in Grug

We think the consumer’s syntax with extension traits is often more readable than with functions. Therefore we use this pattern extensively in Grug.

In grug-types, we define functionalities related to hashing and serialization with the following traits:

  • Borsh{Ser,De}Ext
  • Proto{Ser,De}Ext
  • Json{Ser,De}Ext
  • HashExt

Additionally, there are the following traits in grug-apps, which provide gas metering capability to storage primitives including Item and Map; they are only for internal use and not exported:

  • MeteredStorage
  • MeteredItem
  • MeteredMap
  • MeteredIterator

Gas

Some thoughts on how we define gas cost in Grug.

The Wasmer runtime provides a Metering middleware that measures how many “points” a Wasm function call consumes.

The question is how to convert Wasmer points into the chain’s gas units.

CosmWasm’s approach

As documented here, CosmWasm’s approach is as follows:

  1. Perform a benchmark to measure how many “points” Wasmer can execute per second. Then, set a target amount of gas per second (they use 10^12 gas per second). Between these two numbers, CosmWasm decides that 1 Wasmer point equals 170 gas units.

  2. Perform another benchmark to measure how much time it takes for the host to execute each host function (e.g. addr_validate or secp256k1_verify). Based on this, assign a proper gas cost for each host function.

  3. Divide CosmWasm gas by a constant factor of 100 to arrive at Cosmos SDK gas.

Our approach

For us, defining gas cost is easier, because we don’t have a Cosmos SDK to deal with.

  1. We skip step 1, and simply set 1 Wasmer point = 1 Grug gas unit.

  2. We perform the same benchmarks to set proper gas costs for host functions.

  3. We skip this step as well.

In summary,

  • 1 Cosmos SDK gas = 100 CosmWasm gas
  • 1 Wasmer point = 170 CosmWasm gas
  • 1 Wasmer point = 1 Grug gas
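These relations can be captured in a pair of conversion helpers. A minimal sketch (the function names are ours, not part of Grug):

```rust
/// 1 Wasmer point = 1 Grug gas, and 1 Wasmer point = 170 CosmWasm gas,
/// so converting Grug gas to CosmWasm gas is a multiplication by 170.
fn grug_gas_to_cosmwasm_gas(grug_gas: u64) -> u64 {
    grug_gas * 170
}

/// 1 Cosmos SDK gas = 100 CosmWasm gas.
fn cosmwasm_gas_to_cosmos_gas(cosmwasm_gas: u64) -> u64 {
    cosmwasm_gas / 100
}

fn main() {
    // 1,000,000 Grug gas corresponds to 170,000,000 CosmWasm gas,
    // which in turn corresponds to 1,700,000 Cosmos SDK gas.
    let cw = grug_gas_to_cosmwasm_gas(1_000_000);
    let cosmos = cosmwasm_gas_to_cosmos_gas(cw);
    println!("{cw} {cosmos}");
}
```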

Benchmark results

Benchmarks were performed on a MacBook Pro with the M2 Pro CPU.

Relevant code can be found in crates/vm/wasm/benches and crates/crypto/benches.

Wasmer points per second

This corresponds to step 1 above. This benchmark is irrelevant for our decision making (as we simply set 1 Wasmer point = 1 Grug gas unit), but we still perform it for good measure.

| Iterations | Points      | Time (ms) |
| ---------- | ----------- | --------- |
| 200,000    | 159,807,119 | 15.661    |
| 400,000    | 319,607,119 | 31.663    |
| 600,000    | 479,407,119 | 47.542    |
| 800,000    | 639,207,119 | 62.783    |
| 1,000,000  | 799,007,154 | 78.803    |

Extrapolating to 1 second, we find that WasmVm executes 10,026,065,176 points per second. Let’s round this to 10^10 points per second for simplicity.

If we were to target 10^12 gas units per second as CosmWasm does (we don’t), this would mean 10^12 / 10^10 = 100 gas units per Wasmer point.

This is roughly in the same ballpark as CosmWasm’s result (170 gas units per Wasmer point). The results are of course not directly comparable because they were done using different CPUs, but the numbers being within one order of magnitude suggests the two VMs are similar in performance.

As said before, we set 1 Wasmer point = 1 gas unit, so we’re doing 10^10 gas per second.
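To sanity-check the extrapolation, dividing points by elapsed time for the largest benchmark run (numbers from the table above) lands close to the 10^10 figure:

```rust
/// Points executed per second, given total points and elapsed milliseconds.
fn points_per_second(points: f64, time_ms: f64) -> f64 {
    points / (time_ms / 1000.0)
}

fn main() {
    // Largest benchmark run: 799,007,154 points in 78.803 ms.
    // The result is roughly 1.01e10, consistent with ~10^10 points per second.
    let pps = points_per_second(799_007_154.0, 78.803);
    println!("{pps:.0}");
}
```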

Single signature verification

Time for verifying one signature:

| Verifier                 | Time (ms) | Gas per verify |
| ------------------------ | --------- | -------------- |
| secp256r1_verify         | 0.188     | 1,880,000      |
| secp256k1_verify         | 0.077     | 770,000        |
| secp256k1_pubkey_recover | 0.158     | 1,580,000      |
| ed25519_verify           | 0.041     | 410,000        |

We have established that 1 second corresponds to 10^10 gas units. Therefore, secp256k1_verify taking 0.077 millisecond means it should cost 0.077 / 1000 × 10^10 = 770,000 gas.

This is comparable to CosmWasm’s value.
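In general, converting a measured time into a gas cost is just a multiplication: at 10^10 gas per second, 1 millisecond corresponds to 10^7 gas. A sketch:

```rust
/// At 10^10 gas per second, 1 ms corresponds to 10^7 gas.
fn gas_cost_from_ms(time_ms: f64) -> u64 {
    (time_ms * 1e7).round() as u64
}

fn main() {
    // Reproducing the table above from the measured times.
    println!("{}", gas_cost_from_ms(0.077)); // secp256k1_verify
    println!("{}", gas_cost_from_ms(0.188)); // secp256r1_verify
    println!("{}", gas_cost_from_ms(0.041)); // ed25519_verify
}
```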

Batch signature verification

ed25519_batch_verify time for various batch sizes:

| Batch Size | Time (ms) |
| ---------- | --------- |
| 25         | 0.552     |
| 50         | 1.084     |
| 75         | 1.570     |
| 100        | 2.096     |
| 125        | 2.493     |
| 150        | 2.898     |

Linear regression shows there’s a flat cost of 0.134 ms (1,340,000 gas) plus 0.0188 ms (188,000 gas) per item.
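The regression is a plain least-squares fit over the table above; this sketch reproduces the flat and per-item costs:

```rust
/// Ordinary least-squares fit of y = intercept + slope * x.
fn linear_fit(xs: &[f64], ys: &[f64]) -> (f64, f64) {
    let n = xs.len() as f64;
    let x_mean = xs.iter().sum::<f64>() / n;
    let y_mean = ys.iter().sum::<f64>() / n;
    let sxy: f64 = xs.iter().zip(ys).map(|(x, y)| (x - x_mean) * (y - y_mean)).sum();
    let sxx: f64 = xs.iter().map(|x| (x - x_mean).powi(2)).sum();
    let slope = sxy / sxx;
    (y_mean - slope * x_mean, slope)
}

fn main() {
    // ed25519_batch_verify timings from the table above.
    let sizes = [25.0, 50.0, 75.0, 100.0, 125.0, 150.0];
    let times = [0.552, 1.084, 1.570, 2.096, 2.493, 2.898];
    let (flat_ms, per_item_ms) = linear_fit(&sizes, &times);
    // flat ≈ 0.134 ms, per item ≈ 0.0188 ms.
    println!("{flat_ms:.3} {per_item_ms:.4}");
}
```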

Hashes

Time (ms) for the host to perform hashes on inputs of various sizes:

| Hasher      | 200 kB | 400 kB | 600 kB | 800 kB | 1,000 kB | Gas per byte |
| ----------- | ------ | ------ | ------ | ------ | -------- | ------------ |
| sha2_256    | 0.544  | 1.086  | 1.627  | 2.201  | 2.718    | 27           |
| sha2_512    | 0.330  | 0.678  | 0.996  | 1.329  | 1.701    | 16           |
| sha3_256    | 0.298  | 0.606  | 0.918  | 1.220  | 1.543    | 15           |
| sha3_512    | 0.614  | 1.129  | 1.719  | 2.328  | 2.892    | 28           |
| keccak256   | 0.312  | 0.605  | 0.904  | 1.222  | 1.534    | 15           |
| blake2s_256 | 0.305  | 0.632  | 0.907  | 1.212  | 1.526    | 15           |
| blake2b_512 | 0.180  | 0.364  | 0.552  | 0.719  | 0.917    | 9            |
| blake3      | 0.105  | 0.221  | 0.321  | 0.411  | 0.512    | 5            |

Generate dependency graph

Dependency relations of the crates in this repository are described by the following Graphviz code:

digraph G {
  node [fontname="Helvetica" style=filled fillcolor=yellow];

  account -> ffi;
  account -> storage;
  account -> types;

  bank -> ffi;
  bank -> storage;
  bank -> types;

  taxman -> bank;
  taxman -> ffi;
  taxman -> storage;
  taxman -> types;

  testing -> app;
  testing -> account;
  testing -> bank;
  testing -> crypto;
  testing -> "db/memory";
  testing -> taxman;
  testing -> types;
  testing -> "vm/rust";

  app -> storage;
  app -> types;

  client -> jmt;
  client -> types;

  "db/disk" -> app;
  "db/disk" -> jmt;
  "db/disk" -> types;

  "db/memory" -> app;
  "db/memory" -> jmt;
  "db/memory" -> types;

  ffi -> types;

  jmt -> storage;
  jmt -> types;

  std -> client;
  std -> ffi;
  std -> macros;
  std -> storage;
  std -> testing;
  std -> types;

  storage -> types;

  "vm/rust" -> app;
  "vm/rust" -> crypto;
  "vm/rust" -> types;

  "vm/wasm" -> app;
  "vm/wasm" -> crypto;
  "vm/wasm" -> types;
}

Install Graphviz CLI on macOS:

brew install graphviz

Generate SVG from a file:

dot -Tsvg input.dot

Generate SVG from stdin:

echo 'digraph { a -> b }' | dot -Tsvg > output.svg

Alternatively, use the online visual editor.

Indexed map

An IndexedMap is a map where each record is indexed not only by the primary key, but also by one or more other indexes.

For example, consider limit orders in an oracle-based perpetual futures protocol. For simplicity, let’s just think about buy orders:

struct Order {
    pub trader: Addr,
    pub limit_price: Udec256,
    pub expiration: Timestamp,
}

For each order, we generate a unique OrderId, which can be an incrementing number, and store orders in a map indexed by the IDs:

const ORDERS: Map<OrderId, Order> = Map::new("orders");

During the block, users submit orders. Then, at the end of the block (utilizing the after_block function), a contract is called to do two things:

  • Find all buy orders with limit prices below the oracle price; execute these orders.
  • Find all orders with expiration time earlier than the current block time; delete these orders.

To achieve this, the orders need to be indexed by not only the order IDs, but also their limit prices and expiration times.

For this, we can convert ORDERS to the following IndexedMap:

#[index_list]
struct OrderIndexes<'a> {
    pub limit_price: MultiIndex<'a, OrderId, Udec256, Order>,
    pub expiration: MultiIndex<'a, OrderId, Timestamp, Order>,
}

const ORDERS: IndexedMap<OrderId, Order, OrderIndexes> = IndexedMap::new("orders", OrderIndexes {
    limit_price: MultiIndex::new(
        |order| order.limit_price,
        "orders",
        "orders__price",
    ),
    expiration: MultiIndex::new(
        |order| order.expiration,
        "orders",
        "orders__exp",
    ),
});

Here we use MultiIndex, which is an index type where multiple records in the map can have the same index. This is the appropriate choice here, since surely it’s possible that two orders have the same limit price or expiration.

However, in cases where indexes are supposed to be unique (no two records shall have the same index), UniqueIndex can be used. It will throw an error if you attempt to save two records with the same index.
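Conceptually, a MultiIndex is just a second map whose key is the index value concatenated with the primary key, so ranging over the index returns records sorted by the indexed field. A std-only sketch of the idea, using BTreeMap in place of Grug’s storage primitives (all names here are hypothetical):

```rust
use std::collections::BTreeMap;

type OrderId = u64;
type Price = u128;

#[derive(Clone)]
struct Order {
    trader: String,
    limit_price: Price,
}

#[derive(Default)]
struct OrderBook {
    // Primary map: order ID -> order.
    orders: BTreeMap<OrderId, Order>,
    // Secondary index: (limit price, order ID) -> (). Because the price comes
    // first in the composite key, ranging over this map visits orders in
    // ascending price order.
    by_price: BTreeMap<(Price, OrderId), ()>,
}

impl OrderBook {
    fn save(&mut self, id: OrderId, order: Order) {
        self.by_price.insert((order.limit_price, id), ());
        self.orders.insert(id, order);
    }

    /// All order IDs with limit price strictly below `oracle_price`, ascending.
    fn fillable(&self, oracle_price: Price) -> Vec<OrderId> {
        self.by_price
            .range(..(oracle_price, 0))
            .map(|(&(_price, id), _)| id)
            .collect()
    }
}

fn main() {
    let mut book = OrderBook::default();
    book.save(1, Order { trader: "alice".into(), limit_price: 100 });
    book.save(2, Order { trader: "bob".into(), limit_price: 90 });
    book.save(3, Order { trader: "carol".into(), limit_price: 110 });
    // Oracle price 105: orders 2 (at 90) and 1 (at 100) are fillable,
    // returned in ascending price order.
    println!("{:?}", book.fillable(105));
}
```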

To find all orders whose limit prices are below the oracle price:

fn find_fillable_orders(
    storage: &dyn Storage,
    oracle_price: Udec256,
) -> StdResult<Vec<(OrderId, Order)>> {
    ORDERS
        .idx
        .limit_price
        // Fully qualified, since the local `Order` struct shadows the
        // iteration order enum.
        .range(storage, None, Some(oracle_price), grug::Order::Ascending)
        .map(|item| {
            // This iterator includes the limit price, which we don't need.
            let (_limit_price, order_id, order) = item?;
            Ok((order_id, order))
        })
        .collect()
}

Similarly, find and purge all orders whose expiration is before the current block time:

fn purge_expired_orders(
    storage: &mut dyn Storage,
    block_time: Timestamp,
) -> StdResult<()> {
    // We need to first collect order IDs into a vector, because the iteration
    // holds an immutable reference to `storage`, while the removal operations
    // require a mutable reference to it, which can't exist at the same time.
    let order_ids = ORDERS
        .idx
        .expiration
        .range(storage, None, Some(block_time), grug::Order::Ascending)
        .map(|item| {
            let (_, order_id, _) = item?;
            Ok(order_id)
        })
        .collect::<StdResult<Vec<OrderId>>>()?;

    for order_id in order_ids {
        ORDERS.remove(storage, order_id);
    }

    Ok(())
}

Liquidity provision

Given a liquidity pool consisting of two assets, A and B, and the invariant \( I = f(a, b) \), where \( a \) and \( b \) are the amounts of the two assets in the pool (the “pool reserve”). For simplicity, we denote this as \( I_1 = f(a_1, b_1) \).

Suppose a user provides liquidity with amounts \( \Delta a \) and \( \Delta b \). After the liquidity is added, the invariant value is \( f(a_1 + \Delta a, b_1 + \Delta b) \). For simplicity, we denote this as \( I_2 = f(a_2, b_2) \).

Suppose before adding the liquidity, the supply of LP tokens is \( L_1 \). We mint the user new LP tokens of the following amount:

\[ \Delta L = (1 - \theta) \left( \frac{I_2}{I_1} - 1 \right) L_1 \]

Here, \( \theta \) is a fee rate we charge on the amount of LP tokens minted. Without this fee, the following exploit would be possible: provide unbalanced liquidity, then immediately withdraw balanced liquidity. This effectively achieves a fee-less swap.
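As a numerical sanity check, assume (for illustration only) an xyk-style invariant \( f(a, b) = \sqrt{ab} \) and a proportional mint rule of the form \( \Delta L = (1 - \theta)(I_2/I_1 - 1) L_1 \). Doubling a balanced pool with zero fee should exactly double the LP supply:

```rust
/// Invariant of an xyk-style pool: f(a, b) = sqrt(a * b).
fn invariant(a: f64, b: f64) -> f64 {
    (a * b).sqrt()
}

/// LP tokens minted under a proportional mint rule with fee rate `theta`:
/// delta_L = (1 - theta) * (I2 / I1 - 1) * L1. (Assumed form, for illustration.)
fn lp_to_mint(a1: f64, b1: f64, da: f64, db: f64, l1: f64, theta: f64) -> f64 {
    let i1 = invariant(a1, b1);
    let i2 = invariant(a1 + da, b1 + db);
    (1.0 - theta) * (i2 / i1 - 1.0) * l1
}

fn main() {
    // Balanced provision doubling a (100, 100) pool with 100 LP supply:
    // the invariant doubles, so 100 new LP tokens are minted (fee = 0).
    println!("{}", lp_to_mint(100.0, 100.0, 100.0, 100.0, 100.0, 0.0));
}
```

A nonzero \( \theta \) simply scales the minted amount down, which is what blocks the provide-then-withdraw exploit described above.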

The fee rate should be a function of \( \Delta a \) and \( \Delta b \), reflecting how unbalanced the user’s liquidity is:

  • If the user’s liquidity is perfectly balanced, that is, \( \Delta a / \Delta b = a_1 / b_1 \), the fee rate should be zero: \( \theta = 0 \).
  • If the user’s liquidity is perfectly unbalanced, that is, one-sided (e.g. \( \Delta a > 0 \) but \( \Delta b = 0 \)), then the fee rate should be a value such that if the attack is carried out, the output is equal to doing a swap normally.

Our objective for the rest of this article is to work out the expression of the fee function \( \theta(\Delta a, \Delta b) \).

Fee rate

Consider the case where the user liquidity is unbalanced. Without loss of generality, let’s suppose \( \Delta a / \Delta b > a_1 / b_1 \). That is, the user provides a more than abundant amount of A, and a less than sufficient amount of B.

Scenario 1

In the first scenario, the user withdraws liquidity immediately after provision. He would get:

\[ a_\mathrm{out} = \phi\, a_2, \qquad b_\mathrm{out} = \phi\, b_2 \]

Here, \( \phi \) is the portion of the pool’s liquidity owned by the user. We can work out its expression as:

\[ \phi = \frac{\Delta L}{L_1 + \Delta L} = \frac{(1 - \theta)(r - 1)}{1 + (1 - \theta)(r - 1)} \]

where \( r = I_2 / I_1 \), which represents how much the invariant increases as a result of the added liquidity.

Scenario 2

In the second scenario, the user does a swap of \( x \) amount of A into \( (1 - \eta)\, y \) amount of B, where \( \eta \) is the swap fee rate, which is a constant. The swap must satisfy the invariant:

\[ f(a_1 + x,\ b_1 - y) = f(a_1, b_1) \]

The user now has \( \Delta a - x \) amount of A and \( \Delta b + (1 - \eta)\, y \) amount of B.

As discussed in the previous section, we must choose a fee rate \( \theta \) such that the two scenarios are equivalent. This means the user ends up with the same amount of A and B in both scenarios:

\[ \phi\, a_2 = \Delta a - x, \qquad \phi\, b_2 = \Delta b + (1 - \eta)\, y \]

We can rearrange these into a cleaner form:

\[ x = \Delta a - \phi\, a_2, \qquad y = \frac{\phi\, b_2 - \Delta b}{1 - \eta} \]

We can use the first equation to work out either \( x \) or \( y \), and put it into the second equation (together with the swap invariant) to get \( \theta \).

Xyk pool

The xyk pool has the invariant:

\[ f(a, b) = \sqrt{a b} \]

Our previous system of equations takes the form:

\[ \phi\, a_2 = \Delta a - x, \qquad \phi\, b_2 = \Delta b + (1 - \eta)\, y, \qquad (a_1 + x)(b_1 - y) = a_1 b_1 \]

TODO…

Margin account: health

The dango-lending contract stores a collateral power for each collateral asset, and a Market for each borrowable asset:

const COLLATERAL_POWERS: Item<BTreeMap<Denom, Udec128>> = Item::new("collateral_power");

const MARKETS: Map<&Denom, Market> = Map::new("market");

  • An asset may be a collateral asset but not a borrowable asset, e.g. wstETH, stATOM, LP tokens. But typically all borrowable assets are also collateral assets, such that when a margin account borrows an asset, this asset counts both as collateral and debt.
  • Collateral powers are to be bounded in the range [0, 1). An asset with lower volatility and more abundant liquidity gets a bigger collateral power, and vice versa.
  • We may store all collateral powers in a single Item<BTreeMap<Denom, Udec128>> if we don’t expect to support too many collateral assets.

Suppose:

  • a margin account holds amount \( c_i \) of each collateral asset \( i \), and owes amount \( d_j \) of each borrowed asset \( j \)
  • the price of an asset is \( p_i \)
  • the collateral power of an asset is \( w_i \)

The account’s utilization is:

\[ u = \frac{\sum_j p_j\, d_j}{\sum_i p_i\, w_i\, c_i} \]

In the backrun function, the margin account asserts \( u \le 1 \). If not true, it throws an error to revert the transaction.

The frontend should additionally have a max_ltv somewhat smaller than 1, such as 95%. It should warn or prevent users from doing anything that results in their utilization exceeding this, so that their account isn’t instantly liquidated.
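The utilization check can be sketched in a few lines (plain Rust with f64 for illustration; the real contract would use Grug’s decimal types and oracle queries):

```rust
/// Utilization: total debt value divided by collateral-power-adjusted
/// collateral value.
fn utilization(
    collaterals: &[(f64, f64, f64)], // (amount, price, collateral power)
    debts: &[(f64, f64)],            // (amount, price)
) -> f64 {
    let adjusted_collateral: f64 = collaterals.iter().map(|(c, p, w)| c * p * w).sum();
    let debt_value: f64 = debts.iter().map(|(d, p)| d * p).sum();
    debt_value / adjusted_collateral
}

fn main() {
    // 10 ETH at $2,000 with collateral power 0.8 backing $10,000 of debt:
    // utilization = 10,000 / 16,000 = 0.625, which passes the u <= 1 check.
    let u = utilization(&[(10.0, 2000.0, 0.8)], &[(10_000.0, 1.0)]);
    println!("{u}");
    assert!(u <= 1.0);
}
```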

Math

Rust’s primitive number types are insufficient for smart contract use cases, for three main reasons:

  1. Rust only provides up to 128-bit integers, while developers often have to deal with 256- or even 512-bit integers. For example, Ethereum uses 256-bit integers to store ETH and ERC-20 balances, so if a chain has bridged assets from Ethereum, their amounts may need to be expressed in 256-bit integers. If the amounts of two such assets are to be multiplied together (which is common in AMMs), 512-bit integers may be necessary.

  2. Rust does not provide fixed-point decimal types, which are commonly used in financial applications (we don’t want to deal with precision issues of floating-point numbers, such as 0.1 + 0.2 = 0.30000000000000004). Additionally, there are concerns over floating-point non-determinism, hence floats are often disabled in blockchains.

  3. Grug uses JSON encoding for data that goes in or out of contracts. However, the JSON specification (RFC 7159) only guarantees support for integer numbers up to 2^53 − 1. Any number type that may exceed this limit needs to be serialized to JSON strings instead.
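The 2^53 − 1 limit comes from JSON numbers being double-precision floats in most implementations; past that point, integers silently lose precision:

```rust
fn main() {
    // 2^53 is the largest power of two at which every integer is still
    // exactly representable as an f64. One past it collapses back onto
    // the same value.
    let max_safe = 2f64.powi(53); // 9,007,199,254,740,992
    assert_eq!(max_safe + 1.0, max_safe); // precision is lost

    // u64 (let alone Uint128 or Uint256) far exceeds this range, which is
    // why such numbers are serialized as JSON strings instead.
    assert!(u64::MAX > max_safe as u64);
    println!("ok");
}
```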

Numbers in Grug

Grug provides several number types for use in smart contracts. They are built on the following two generic types:

| type     | description         |
| -------- | ------------------- |
| `Int<U>` | integer             |
| `Dec<U>` | fixed-point decimal |

It is, however, not recommended to use these types directly. Instead, Grug exports the following type aliases:

| alias   | type        | description                                                 |
| ------- | ----------- | ----------------------------------------------------------- |
| Uint64  | `Int<u64>`  | 64-bit unsigned integer                                     |
| Uint128 | `Int<u128>` | 128-bit unsigned integer                                    |
| Uint256 | `Int<U256>` | 256-bit unsigned integer                                    |
| Uint512 | `Int<U512>` | 512-bit unsigned integer                                    |
| Int64   | `Int<i64>`  | 64-bit signed integer                                       |
| Int128  | `Int<i128>` | 128-bit signed integer                                      |
| Int256  | `Int<I256>` | 256-bit signed integer                                      |
| Int512  | `Int<I512>` | 512-bit signed integer                                      |
| Udec128 | `Dec<u128>` | 128-bit unsigned fixed-point number with 18 decimal places  |
| Udec256 | `Dec<U256>` | 256-bit unsigned fixed-point number with 18 decimal places  |
| Dec128  | `Dec<i128>` | 128-bit signed fixed-point number with 18 decimal places    |
| Dec256  | `Dec<I256>` | 256-bit signed fixed-point number with 18 decimal places    |

where {U,I}{256,512} are from the bnum library.

Traits

The following traits are implemented across the number types (Uint64 through Dec256):

  • Bytable
  • Decimal
  • FixedPoint
  • Fraction
  • Inner
  • Integer
  • IntoDec
  • IntoInt
  • IsZero
  • MultiplyFraction
  • MultiplyRatio
  • NextNumber
  • Number
  • NumberConst
  • PrevNumber
  • Sign
  • Signed
  • Unsigned

Nonces and unordered transactions

A nonce is a mechanism to prevent replay attacks.

Suppose Alice sends 100 coins to Bob on a blockchain that doesn’t employ such a mechanism. An attacker can observe this transaction (tx) being confirmed onchain, then broadcast it again. Although the second broadcast doesn’t come from Alice, the tx does contain a valid signature from Alice, so it will be accepted again. Thus, a total of 200 coins would leave Alice’s wallet, despite her only consenting to sending 100. This can be repeated until all coins are drained from Alice’s wallet.

To prevent this,

  • each tx should include a nonce, and
  • the account should internally track the nonce it expects to see from the next tx.

The first time an account sends a tx, the tx should include a nonce of 0; the second time, 1; and so on. Suppose Alice’s first tx has a nonce of 0. If the attacker attempts to broadcast it again, the tx would be rejected by the mempool, because Alice’s account now expects a nonce of 1.

The above describes a same-chain replay attack. There is also the cross-chain replay attack, where an attacker observes a tx on chain A, and broadcasts it again on chain B. To prevent this, transactions include a chain ID besides the nonce.

The problem

The drawback of this naïve approach to handling nonces is that it enforces a strict ordering of all txs, which doesn’t work well in use cases where users are expected to submit txs at high frequency. Consider this situation:

  • Alice’s account currently expects a nonce of n;
  • Alice sends a tx (let’s call this tx A) with nonce n;
  • Alice immediately sends another tx (B) with nonce n+1;
  • due to network delays, tx B arrives at the block builder earlier than A.

Here, the block builder would reject tx B from entering the mempool, because it expects a nonce of n, while tx B comes with n+1. When tx A later arrives, it will be accepted. The result is Alice submits two txs, but only one makes it into a block.

Imagine Alice is trading on an order book exchange and wants to cancel two active limit orders. These actions are not correlated – there’s no reason we must cancel one first and then the other. So Alice clicks buttons to cancel the two in quick succession. However, only one ends up being canceled; she has to retry canceling the other one. Bad UX!

HyperLiquid’s solution

As described here.

In HyperLiquid, an account can have many session keys, each of which has its own nonce. In our case, to simplify things, let’s just have one nonce for each account (across all session keys).

Instead of tracking a single nonce, the account tracks the most recent nonces it has seen (let’s call these the SEEN_NONCES). HyperLiquid uses a larger window; for simplicity, in the discussion below let’s track the 3 most recent nonces.

Suppose Alice’s account has the following SEEN_NONCES: {2, 4, 5}. Nonce 3 is missing because that tx got lost due to network problems.

Now, Alice broadcasts two txs in quick succession, with nonces 6 and 7. Due to network delays, 7 arrives at the block builder first.

The account will carry out the following logic:

  • accept the tx if its nonce is newer than the oldest nonce in SEEN_NONCES, and not already in SEEN_NONCES;
  • insert the tx’s nonce into SEEN_NONCES.

When 7 arrives first, it’s accepted, and SEEN_NONCES is updated to: {4, 5, 7}. (2 is removed because we only keep the 3 most recent nonces.)

When 6 arrives later, it’s also accepted, with SEEN_NONCES updated to: {5, 6, 7}.

This solves the UX problem we mentioned in the previous section.
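The acceptance rule can be sketched in a few lines of std-only Rust (tracking the 3 most recent nonces, as in the discussion above; the real logic lives in the account contract):

```rust
use std::collections::BTreeSet;

struct SeenNonces {
    nonces: BTreeSet<u64>,
    capacity: usize,
}

impl SeenNonces {
    fn new(capacity: usize) -> Self {
        Self { nonces: BTreeSet::new(), capacity }
    }

    /// Accept the nonce if it hasn't been seen before and is newer than the
    /// oldest tracked nonce (or the set isn't full yet); on acceptance,
    /// insert it and evict the oldest entry if over capacity.
    fn check_and_insert(&mut self, nonce: u64) -> bool {
        if self.nonces.contains(&nonce) {
            return false; // replay
        }
        if self.nonces.len() >= self.capacity {
            let &oldest = self.nonces.iter().next().unwrap();
            if nonce <= oldest {
                return false; // too old
            }
            self.nonces.remove(&oldest);
        }
        self.nonces.insert(nonce);
        true
    }
}

fn main() {
    let mut seen = SeenNonces::new(3);
    for n in [2, 4, 5] {
        seen.check_and_insert(n);
    }
    // Out-of-order arrival: 7 then 6 are both accepted...
    assert!(seen.check_and_insert(7));
    assert!(seen.check_and_insert(6));
    // ...but a replayed nonce is rejected.
    assert!(!seen.check_and_insert(7));
    println!("{:?}", seen.nonces);
}
```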

Transaction expiry

Now suppose the long-lost tx with nonce 3 finally arrives, while its nonce is still newer than the oldest entry in SEEN_NONCES. Since it was created a long while ago, it’s most likely not relevant any more. However, following the account’s logic, it will still be accepted.

To prevent this, we should add an expiry parameter into the tx metadata. If the expiry is earlier than the current block time, the tx is rejected, regardless of the nonce rule.

expiry can be either a block height or timestamp. For Dango’s use case, timestamp probably makes more sense.

Transaction lifecycle

A Grug transaction (tx) is defined by the following struct:

struct Tx {
    pub sender: Addr,
    pub gas_limit: u64,
    pub msgs: Vec<Message>,
    pub data: Json,
    pub credential: Json,
}

Explanation of the fields:

Sender

The account that sends this tx, who will perform authentication and (usually) pay the tx fee.

Gas limit

The maximum amount of gas requested for executing this tx.

If gas of this amount is exhausted at any point, execution is aborted and state changes discarded.1

Messages

A list of Messages to be executed.

They are executed in the specified order and atomically, meaning they either succeed altogether or fail altogether; a single message failing leads to the entire tx being aborted.2

Data

Auxiliary data to attach to the tx.

An example use case of this is if the chain accepts multiple tokens for fee payment, the sender can specify here which denom to use:

{
  "data": {
    "fee_denom": "uatom"
  }
}

The taxman contract, which handles fees, should be programmed to deserialize this data and use appropriate logic to handle the fee (e.g. swap the tokens on a DEX).

Credential

Arbitrary data to prove the tx was composed by the rightful owner of the sender account. Most commonly, this is a cryptographic signature.

Note that credential is an opaque grug::Json (which is an alias of serde_json::Value) instead of a concrete type. This is because Grug does not attempt to interpret or do anything with the credential. It’s all up to the sender account. Different accounts may expect different credential types.

Next we discuss the full lifecycle of a transaction.

Simulation

The user has specified the sender, msgs, and data fields by interacting with a webapp. The next step is to determine an appropriate gas_limit.

For some simple txs, we can make a reasonably good guess of gas consumption. For example, a tx consisting of a single Message::Transfer of a single coin should consume just under 1,000,000 gas (of which 770,000 is for Secp256k1 signature verification).

However, for more complex txs, it’s necessary to query a node to simulate its gas consumption.

To do this, compose an UnsignedTx value:

struct UnsignedTx {
    pub sender: Addr,
    pub msgs: Vec<Message>,
    pub data: Json,
}

which is basically Tx but lacks the gas_limit and credential fields.

Then, invoke the ABCI Query method with the string "/simulate" as path:

  • Using the Rust SDK, this can be done with the grug_sdk::Client::simulate method.
  • Using the CLI, append the --simulate flag to the tx subcommand.

The App will run the entire tx in simulation mode, and return an Outcome value:

struct Outcome {
    pub gas_limit: Option<u64>,
    pub gas_used: u64,
    pub result: GenericResult<Vec<Event>>,
}

This includes the amount of gas used, and if the tx succeeded, the events that were emitted; or, in case the tx failed, the error message.

Two things to note:

  • In simulation mode, certain steps in authentication are skipped, such as signature verification (we haven’t signed the tx yet at this point). This means gas consumption is underestimated. Since we know a Secp256k1 verification costs 770,000 gas, it’s advisable to add this amount manually.
  • The max amount of gas the simulation can consume is the node’s query gas limit, which is an offchain parameter chosen individually by each node. If the node has a low query gas limit (e.g. if the node is not intended to serve heavy query requests), the simulation may fail.
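In practice, the recommended gas limit is the simulated amount plus the skipped signature-verification cost, optionally with a safety buffer. A sketch (the 20% buffer is our own choice, not a protocol constant):

```rust
/// Gas cost of one Secp256k1 signature verification, which is skipped
/// in simulation mode.
const SECP256K1_VERIFY_GAS: u64 = 770_000;

/// Simulated gas, plus the signature check, plus a 20% safety buffer.
fn recommended_gas_limit(simulated: u64) -> u64 {
    (simulated + SECP256K1_VERIFY_GAS) * 120 / 100
}

fn main() {
    // A tx that simulates at 230,000 gas gets a 1,200,000 gas limit.
    println!("{}", recommended_gas_limit(230_000));
}
```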

CheckTx

Now that we know the gas limit, the user signs the tx; we create the Tx value and broadcast it to a node.

Tendermint will now call the ABCI CheckTx method, and decide whether to accept the tx into mempool or not, based on the result.

When serving a CheckTx request, the App doesn’t execute the entire tx. This is because while some messages may fail at this time, they may succeed during FinalizeBlock, as the chain’s state would have changed.

Therefore, instead, the App only performs the first two steps:

  1. Call the taxman’s withhold_fee method. This ensures the tx’s sender has enough funds to afford the tx fee.
  2. Call the sender’s authenticate method in normal (i.e. non-simulation) mode. Here the sender performs authentication (which is skipped in simulation mode).

Tendermint will reject the tx if CheckTx fails (meaning, either withhold_fee or authenticate fails), or if the tx’s gas limit is bigger than the block gas limit (i.e. it can’t fit in a block). Otherwise, it’s inserted into the mempool.

FinalizeBlock

In FinalizeBlock, the entire tx processing flow is performed, which is:

  1. Call taxman’s withhold_fee method.

    This MUST succeed (if it would fail, it should have failed during CheckTx such that the tx is rejected from entering the mempool). If it does fail for some reason (e.g. a previous tx in the block drained the sender’s wallet, so it can no longer afford the fee), the processing is aborted and all state changes discarded.

  2. Call sender’s authenticate method.

    If it fails, discard state changes from step 2 (keeping those from step 1), then jump to step 5.

  3. Loop through the messages, executing them one by one.

    If any fails, discard state changes from steps 2-3, then jump to step 5.

  4. Call sender’s backrun method.

    If it fails, discard state changes from steps 2-4, then jump to step 5.

  5. Call taxman’s finalize_fee method.

    This MUST succeed (the bank and taxman contracts should be programmed in a way that ensures this). If it does fail for some reason, discard all state changes from all previous steps and abort.

TODO: make a flow chart

Summary

|                           | Simulate                | CheckTx             | FinalizeBlock       |
| ------------------------- | ----------------------- | ------------------- | ------------------- |
| Input type                | UnsignedTx              | Tx                  | Tx                  |
| Call taxman withhold_fee  | Yes                     | Yes                 | Yes                 |
| Call sender authenticate  | Yes, in simulation mode | Yes, in normal mode | Yes, in normal mode |
| Execute messages          | Yes                     | No                  | Yes                 |
| Call sender backrun       | Yes                     | No                  | Yes                 |
| Call taxman finalize_fee  | Yes                     | No                  | Yes                 |

  1. Transaction fee is still deducted. See the discussion on fee handling later in the article.

  2. This said, a SubMessage can fail without aborting the tx, if it’s configured as such (with SubMessage::reply_on set to ReplyOn::Always or ReplyOn::Error).

Networks

Dango mainnet, testnets, and devnets.

How to spin up a devnet

Prerequisites

  • Linux (we use Ubuntu 24.04)
  • Docker
  • Rust 1.80+
  • Go

Steps

  1. Compile dango:

    git clone https://github.com/left-curve/left-curve.git
    cd left-curve
    cargo install --path dango/cli
    dango --version
    
  2. Compile cometbft:

    git clone https://github.com/cometbft/cometbft.git
    cd cometbft
    make install
    cometbft version
    
  3. Initialize the ~/.dango directory:

    dango init
    
  4. Initialize the ~/.cometbft directory:

    cometbft init
    
  5. Create genesis state. Provide chain ID and genesis time as positional arguments:

    cd left-curve
    cargo run -p dango-genesis --example build_genesis -- dev-5 2025-02-25T21:00:00Z
    

    Genesis should be written into ~/.cometbft/config/genesis.json

  6. Create systemd service for postgresql:

    [Unit]
    Description=PostgreSQL
    After=network.target
    
    [Service]
    Type=simple
    User=larry
    Group=docker
    WorkingDirectory=/home/larry/workspace/left-curve/indexer
    ExecStart=/usr/bin/docker compose up db
    ExecStop=/usr/bin/docker compose down db
    
    [Install]
    WantedBy=multi-user.target
    

    Save this as /etc/systemd/system/postgresql.service.

    Notes:

    • WorkingDirectory should be the directory where the docker-compose.yml is located.

    • The User should be added to the docker group:

      sudo usermod -aG docker larry
      
  7. Create systemd service for dango:

    [Unit]
    Description=Dango
    After=network.target
    
    [Service]
    Type=simple
    User=larry
    ExecStart=/home/larry/.cargo/bin/dango start
    
    [Install]
    WantedBy=multi-user.target
    

    Save this as /etc/systemd/system/dango.service.

  8. Create systemd service for cometbft:

    [Unit]
    Description=CometBFT
    After=network.target
    
    [Service]
    Type=simple
    User=larry
    ExecStart=/home/larry/.go/bin/cometbft start
    
    [Install]
    WantedBy=multi-user.target
    

    Save this as /etc/systemd/system/cometbft.service.

  9. Refresh systemd:

    sudo systemctl daemon-reload
    
  10. Start postgresql:

    sudo systemctl start postgresql
    
  11. Create database for the indexer:

    cd left-curve/indexer
    createdb -h localhost -U postgres grug_dev
    
  12. Start dango:

    sudo systemctl start dango
    
  13. Start cometbft:

    sudo systemctl start cometbft
    

    Note: when starting, start in this order: postgresql, dango, cometbft. When terminating, do it in the reverse order.

Killing an existing devnet and starting a new one

  1. Stop dango and cometbft services (no need to stop postgresql):

    sudo systemctl stop cometbft
    sudo systemctl stop dango
    
  2. Reset cometbft:

    cometbft unsafe-reset-all
    
  3. Reset dango:

    dango db reset
    
  4. Reset indexer DB:

    dropdb -h localhost -U postgres grug_dev
    createdb -h localhost -U postgres grug_dev
    
  5. Delete indexer saved blocks:

    rm -rfv ~/.dango/indexer
    
  6. Restart the services:

    sudo systemctl start dango
    sudo systemctl start cometbft
    

Test accounts

Each devnet comes with 10 genesis users: owner and user{1-9}. They use Secp256k1 public keys derived from seed phrases with derivation path m/44'/60'/0'/0/0.

Do NOT use these keys in production!!!

Username: owner
Private key: 8a8b0ab692eb223f6a2927ad56e63c2ae22a8bc9a5bdfeb1d8127819ddcce177
Public key: 0278f7b7d93da9b5a62e28434184d1c337c2c28d4ced291793215ab6ee89d7fff8
Mnemonic: success away current amateur choose crystal busy labor cost genius industry cement rhythm refuse whale admit meadow truck edge tiger melt flavor weapon august

Username: user1
Private key: a5122c0729c1fae8587e3cc07ae952cb77dfccc049efd5be1d2168cbe946ca18
Public key: 03bcf89d5d4f18048f0662d359d17a2dbbb08a80b1705bc10c0b953f21fb9e1911
Mnemonic: auction popular sample armed lecture leader novel control muffin grunt ceiling alcohol pulse lunch eager chimney quantum attend deny copper stumble write suggest aspect

Username: user2
Private key: cac7b4ced59cf0bfb14c373272dfb3d4447c7cd5aea732ea6ff69e19f85d34c4
Public key: 02d309ba716f271b1083e24a0b9d438ef7ae0505f63451bc1183992511b3b1d52d
Mnemonic: noodle walk produce road repair tornado leisure trip hold bomb curve live feature satoshi avocado ask pitch there decrease guitar swarm hybrid alarm make

Usernameuser3
Privatecf6bb15648a3a24976e2eeffaae6201bc3e945335286d273bb491873ac7c3141
Public024bd61d80a2a163e6deafc3676c734d29f1379cb2c416a32b57ceed24b922eba0
Mnemonicalley honey observe various success garbage area include demise age cash foster model royal kingdom section place lend frozen loyal layer pony october blush

Usernameuser4
Private126b714bfe7ace5aac396aa63ff5c92c89a2d894debe699576006202c63a9cf6
Public024a23e7a6f85e942a4dbedb871c366a1fdad6d0b84e670125991996134c270df2
Mnemonicfoot loyal damp alien better first glue supply claw author jar know holiday slam main siren paper transfer cram breeze glow forest word giant

Usernameuser5
Privatefe55076e4b2c9ffea813951406e8142fefc85183ebda6222500572b0a92032a7
Public03da86b1cd6fd20350a0b525118eef939477c0fe3f5052197cd6314ed72f9970ad
Mnemoniccliff ramp foot thrive scheme almost notice wreck base naive warfare horse plug limb keep steel tone over season basic answer post exchange wreck

Usernameuser6
Private4d3658519dd8a8227764f64c6724b840ffe29f1ca456f5dfdd67f834e10aae34
Public03428b179a075ff2142453c805a71a63b232400cc33c8e8437211e13e2bd1dec4c
Mnemonicspring repeat dog spider dismiss bring media orphan process cycle soft divorce pencil parade hill plate message bamboo kid fun dose celery table unknown

Usernameuser7
Private82de24ba8e1bc4511ae10ce3fbe84b4bb8d7d8abc9ba221d7d3cf7cd0a85131f
Public028d4d7265d5838190842ada2573ef9edfc978dec97ca59ce48cf1dd19352a4407
Mnemonicindoor welcome kite echo gloom glance gossip finger cake entire laundry citizen employ total aim inmate parade grace end foot truly park autumn pelican

Usernameuser8
Privateca956fcf6b0f32975f067e2deaf3bc1c8632be02ed628985105fd1afc94531b9
Public02a888b140a836cd71a5ef9bc7677a387a2a4272343cf40722ab9e85d5f8aa21bd
Mnemonicmoon inmate unique oil cupboard tube cigar subway index survey anger night know piece laptop labor capable term ivory bright nice during pattern floor

Usernameuser9
Privatec0d853951557d3bdec5add2ca8e03983fea2f50c6db0a45977990fb7b0c569b3
Public0230f93baa8e1dbe40a928144ec2144eed902c94b835420a6af4aafd2e88cb7b52
Mnemonicbird okay punch bridge peanut tonight solar stereo then oil clever flock thought example equip juice twenty unfold reform dragon various gossip design artefact
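As a quick offline sanity check, a compressed public key can be recomputed from its private key by plain secp256k1 point multiplication. The sketch below is not part of Dango; it uses only the Python standard library, the public secp256k1 curve constants, and the owner key pair from the table above:

```python
# Public secp256k1 domain parameters (SEC 2).
P  = 2**256 - 2**32 - 977  # field prime
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def point_add(a, b):
    """Add two points on the curve; None is the point at infinity."""
    if a is None:
        return b
    if b is None:
        return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if a == b:
        m = (3 * x1 * x1) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P     # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def scalar_mult(k, point=(Gx, Gy)):
    """Double-and-add scalar multiplication k * point."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

def compressed_pubkey(priv_hex):
    """33-byte compressed SEC1 encoding of the public key, as hex."""
    x, y = scalar_mult(int(priv_hex, 16))
    prefix = "02" if y % 2 == 0 else "03"
    return prefix + format(x, "064x")

owner_priv = "8a8b0ab692eb223f6a2927ad56e63c2ae22a8bc9a5bdfeb1d8127819ddcce177"
# Compare the result against the "Public key" row for owner above.
print(compressed_pubkey(owner_priv))
```

The same check works for any of the user1 through user9 key pairs.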

dev-1

| Contract | Address |
| -------- | ------- |
| account_factory | `0xc4a812037bb86a576cc7a672e23f972b17d02cfe` |
| amm | `0x1d4789f7ad482ac101a787678321460662e7c4da` |
| bank | `0x420acd39b946b5a7ff2c2d0153a545abed26014a` |
| fee_recipient | `0x1e94e30f113f0f39593f5a509898835720882674` |
| ibc_transfer | `0xf5ade15343d5cd59b3db4d82a3af328d78f68fb5` |
| owner | `0x1e94e30f113f0f39593f5a509898835720882674` |
| taxman | `0x3b999093832cbd133c19fa16fe6d9bbc7fdc3dd3` |
| token_factory | `0x01006b941f3a2fdc6c695d6a32418db19892730d` |
| user1 | `0x64b06061df3518a384b83b56e025cbce1d522ea9` |
| user2 | `0xf675f9827a44764facb06e64c212ad669368c971` |
| user3 | `0x712c90c5eac193cd9ff32d521c26f46e690cde59` |

dev-2

| Contract | Address |
| -------- | ------- |
| account_factory | `0x49713b307b964100357bc58284afe3d267def819` |
| amm | `0x28b4ad941e8c54e5d4096b37770d9416507a3b2d` |
| bank | `0x73414af7dd7af63f0ece9a39fc0a613502893d88` |
| fee_recipient | `0x239c425f1f55ee8c5b41fc4553e2e48736f790be` |
| ibc_transfer | `0xac45408f2c78997a4402fc37b55d75e5364f559b` |
| owner | `0x239c425f1f55ee8c5b41fc4553e2e48736f790be` |
| taxman | `0x6fae8b4dceda6e93fe203d10bd20a531e93ef2c0` |
| token_factory | `0x78f06530a0cc8f68f0e628f7d42943ae08fe66f1` |
| user1 | `0x28aa381993107c2df23c710e7de29dded8ade20f` |
| user2 | `0x9774355e46c76821387e79f1f14d8bd93e8136c4` |
| user3 | `0xf51bd88758d51c67c92ad8ec5abfe3e64df9c954` |

dev-3

We’re no longer using Docker for devnets starting from this one.

| Contract | Address |
| -------- | ------- |
| account_factory | `0x7f3a53d1f240e043a105fb59eac2cc10496bfb92` |
| amm | `0xd32f60aadbd34057dd298dfb6ff2f9c3ee7af25b` |
| bank | `0x929a99d0881f07e03d5f91b5ad2a1fc188f64ea1` |
| ibc_transfer | `0xfd802a93e35647c5cbd3c85e5816d1994490271e` |
| lending | `0x5981ae625871c498afda8e9a52e3abf5f5486578` |
| oracle | `0x9ec674c981c0ec87a74dd7c4e9788d21003a2f79` |
| owner | `0xb86b2d96971c32f68241df04691479edb6a9cd3b` |
| taxman | `0xc61778845039a412574258694dd04976064ec159` |
| token_factory | `0x1cc2f67b1a73e59e1be4d9c4cf8de7a93088ea79` |
| user1 | `0x384ba320f302804a0a03bfc8bb171f35d8b84f01` |
| user2 | `0x0d0c9e26d70fdf9336331bae0e0378381e0af988` |
| user3 | `0x0addd2dd7f18d49ce6261bc4431ad77bd9c46336` |

dev-4


| Contract | Address |
| -------- | ------- |
| account_factory | `0x18d28bafcdf9d4574f920ea004dea2d13ec16f6b` |
| amm | `0xd68b93d22f71d44ee2603d70b8901e00197f601a` |
| bank | `0x2f3d763027f30db0250de65d037058c8bcbd3352` |
| hyperlane/fee | `0x1820557f629fa72caf0cab710640e44c9826deb2` |
| hyperlane/ism | `0xaea4d5d40d19601bb05a49412de6e1b4b403c5a7` |
| hyperlane/mailbox | `0x51e5de0593d0ea0647a93925c91dafb98c36738f` |
| hyperlane/merkle | `0x0f4f47e2bd07b308bd1b3b4b93d72412e874ca8a` |
| hyperlane/warp | `0x6c7bb6ed728a83469f57afa1000ca7ecd67652c3` |
| ibc_transfer | `0x9dab5ef15ecc5ac2a26880ee0a40281745508a74` |
| lending | `0x21a3382e007b2b1bc622ffad0782abaec6cf34c7` |
| oracle | `0x5008fe31cf74b45f7f67c4184804cd6fe75ddeb2` |
| owner | `0x695c7afd829abae6fe6552afec2f8e88d10b65e4` |
| taxman | `0xe14d4b7bfca615e47a0f6addaf166b7fe0816c68` |
| token_factory | `0x62ae059a9e0f15f3899538e2f2b4befc8b35fb97` |
| user1 | `0xcf8c496fb3ff6abd98f2c2b735a0a148fed04b54` |
| user2 | `0x36e8118e115302889d538ae58c111ba88a2a715b` |
| user3 | `0x653d34960867de3c1dbab7052f3e0508d50a8f9c` |
| user4 | `0xf69004c943cbde86bfe636a1e82035c15b81ba23` |
| user5 | `0x10864a72a54c1674f24594ec2e6fed9f256512f5` |
| user6 | `0x9ee6bcecd0a7e9b0b49b1e4049f35cb366f8c42d` |
| user7 | `0x8639d6370570161d9d6f9470a93820da915fa204` |
| user8 | `0x4f60cb4f5f11432f1d825bafd6498986e5f1521b` |
| user9 | `0x5017dae9b68860f36aae72f653efb1d63d632a97` |

dev-5

| Contract | Address |
| -------- | ------- |
| account_factory | `0x18d28bafcdf9d4574f920ea004dea2d13ec16f6b` |
| bank | `0xb75a9c68d94f42c65287e0f9529e387ce133b3dc` |
| dex | `0x8dd37b7e12d36bbe1c00ce9f0c341bfe1712e73f` |
| hyperlane/fee | `0x1820557f629fa72caf0cab710640e44c9826deb2` |
| hyperlane/ism | `0xaea4d5d40d19601bb05a49412de6e1b4b403c5a7` |
| hyperlane/mailbox | `0x51e5de0593d0ea0647a93925c91dafb98c36738f` |
| hyperlane/merkle | `0x0f4f47e2bd07b308bd1b3b4b93d72412e874ca8a` |
| hyperlane/va | `0x938f2cab274baff29ed1515f205df1c58464afc9` |
| lending | `0x53373c59e508bd6cb903e3c00b7b224d2180982f` |
| oracle | `0x37e32bfe0cc6d70bea6d146f6ee10b29c307f68b` |
| owner | `0x33361de42571d6aa20c37daa6da4b5ab67bfaad9` |
| taxman | `0x29ddd3dbf76f09d8a9bc972a3004bf7c6da54176` |
| vesting | `0x69ee3f5f2a8300c96d008865c2b2df4e40ec48cc` |
| user1 | `0x01bba610cbbfe9df0c99b8862f3ad41b2f646553` |
| user2 | `0x0fbc6c01f7c334500f465ba456826c890f3c8160` |
| user3 | `0xf75d080e41925c12bff714eda6ab74330482561b` |
| user4 | `0x5a7213b5a8f12e826e88d67c083be371a442689c` |
| user5 | `0xa20a0e1a71b82d50fc046bc6e3178ad0154fd184` |
| user6 | `0x365a389d8571b681d087ee8f7eecf1ff710f59c8` |
| user7 | `0x2b83168508f82b773ee9496f462db9ebd9fca817` |
| user8 | `0xbed1fa8569d5a66935dea5a179b77ac06067de32` |
| user9 | `0x6f95c4f169f38f598dd571084daa5c799c5743de` |
| warp | `0x00d4f0a556bfeaa12e1451d74830cf483153af91` |
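For scripting against a devnet, the address tables can be kept as plain mappings. A minimal sketch (the entries are copied from the dev-5 table above and truncated to a few contracts; `is_hex_address` is a local helper, not a Dango API):

```python
# A few dev-5 contract addresses, copied from the table above.
DEV5_CONTRACTS = {
    "account_factory": "0x18d28bafcdf9d4574f920ea004dea2d13ec16f6b",
    "bank":            "0xb75a9c68d94f42c65287e0f9529e387ce133b3dc",
    "dex":             "0x8dd37b7e12d36bbe1c00ce9f0c341bfe1712e73f",
    "lending":         "0x53373c59e508bd6cb903e3c00b7b224d2180982f",
    "oracle":          "0x37e32bfe0cc6d70bea6d146f6ee10b29c307f68b",
    "taxman":          "0x29ddd3dbf76f09d8a9bc972a3004bf7c6da54176",
}

def is_hex_address(addr: str) -> bool:
    """True if addr is a 0x-prefixed, 20-byte, lowercase hex string."""
    body = addr[2:]
    return (
        addr.startswith("0x")
        and len(body) == 40
        and all(c in "0123456789abcdef" for c in body)
    )

# Guard against copy-paste errors before using the mapping anywhere.
assert all(is_hex_address(a) for a in DEV5_CONTRACTS.values())
```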