
Reliability Pools

A Reliability Pool is the engine of certainty for a specific network.

It is a logical cluster of RPC nodes that Backpac perpetually vets, prunes, and orchestrates to ensure a zero-failure settlement path.

The Pruning Loop

Backpac operates a continuous vetting circuit. When a settlement intent enters a pool, the following deterministic pruning occurs:

  1. Failure Extraction: The engine immediately evicts any node with a non-zero circuit-breaker state or high slot-lag.
  2. Law Enforcement: The pool-specific algorithm (e.g., Latency EMA) scores the remaining high-fidelity nodes.
  3. Orchestration: Traffic is dispatched to the optimal node with sub-millisecond precision.
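The three steps above can be sketched as a single selection pass. This is an illustrative Python model, not Backpac's actual data structures; the `Node` fields, the lag threshold, and the latency-EMA tiebreak are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    circuit_open: bool      # non-zero circuit-breaker state
    slot_lag: int           # slots behind the network tip
    latency_ema_ms: float   # pool-algorithm score (lower is better)

def prune_and_select(nodes, max_slot_lag=2):
    """Illustrative pruning loop: evict failed or lagging nodes,
    score the survivors, dispatch to the optimal node."""
    # 1. Failure Extraction: drop tripped or high-lag nodes.
    survivors = [n for n in nodes
                 if not n.circuit_open and n.slot_lag <= max_slot_lag]
    if not survivors:
        raise RuntimeError("no healthy node in pool")
    # 2. Law Enforcement: score remaining nodes by latency EMA.
    # 3. Orchestration: return the best-scoring node.
    return min(survivors, key=lambda n: n.latency_ema_ms)
```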

Orchestration

1. Dynamic Latency Profiling

Prioritizes the fastest verifiable path using an Exponential Moving Average (EMA).

  • Exploration Logic: The engine periodically probes "cold" nodes to refresh the latency landscape, ensuring the fastest path is always known.
  • Mandate: Use this for high-frequency READ operations where speed is the primary metric.
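The EMA update itself is one line; here is a sketch with an assumed smoothing factor `alpha` (Backpac's actual weighting is not documented here):

```python
def update_ema(prev_ema, sample_ms, alpha=0.2):
    """Exponential Moving Average of latency samples: recent
    measurements weigh more, older ones decay geometrically."""
    if prev_ema is None:          # first observation seeds the average
        return sample_ms
    return alpha * sample_ms + (1 - alpha) * prev_ema
```

A higher `alpha` reacts faster to latency spikes; a lower one smooths out transient jitter. The periodic probing of "cold" nodes keeps `prev_ema` from going stale for nodes that rarely win selection.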

2. State Pinning (State-Aware)

Eliminates the "Mempool Shadow" by pinning a user's wallet address to a consistent high-fidelity node.

  • Deterministic Hashing: Uses FNV-1a hashing to ensure a user always sees their own transaction state, preventing reorg-induced UI confusion.
  • Mandate: Necessary for dApps where transaction visibility must be atomic and consistent.
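FNV-1a is a public, fully specified hash, so the pinning step can be sketched directly. The `pin_node` helper and the lowercase address normalization are illustrative assumptions, not Backpac's API:

```python
FNV_OFFSET = 0xcbf29ce484222325
FNV_PRIME = 0x100000001b3

def fnv1a_64(data: bytes) -> int:
    """Standard 64-bit FNV-1a hash."""
    h = FNV_OFFSET
    for b in data:
        h ^= b
        h = (h * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

def pin_node(wallet_address: str, nodes: list) -> str:
    """Deterministically map a wallet to one node, so the user
    always reads their own transaction state from the same view."""
    return nodes[fnv1a_64(wallet_address.lower().encode()) % len(nodes)]
```

Because the mapping depends only on the address and the pool size, every worker computes the same pin without coordination.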

3. Solana Slot-Aware

Pins your application to the network tip.

  • Lag Penalization: Nodes are scored by proximity to the latest slot tip (e.g., a penalty of Lag × 20,000 added to the score). This prunes any node that is not "tip-of-line."
  • Mandate: Mandatory for Solana trading and high-fidelity state indexing.
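Using the example penalty above, the scoring rule can be sketched as follows; the function name and the latency term are assumptions for illustration:

```python
LAG_PENALTY = 20_000  # per-slot penalty from the example above

def slot_score(latency_ms: float, node_slot: int, tip_slot: int) -> float:
    """Score = latency plus a heavy per-slot penalty, so any
    laggard loses to a tip-of-line node regardless of latency."""
    lag = max(0, tip_slot - node_slot)
    return latency_ms + lag * LAG_PENALTY
```

With a penalty this large, even a 500 ms tip-of-line node outranks a 1 ms node that is a single slot behind.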

4. Block Height Enforcement (EVM)

Guarantees interaction with the latest global state.

  • Strict Tolerance: Define your block height window (e.g., 0-2 blocks). The engine will prune any node that has fallen behind.
  • Mandate: The standard for EIP-1559 settlement on Ethereum and L2s.
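A sketch of the window check, assuming a simple per-node height map (the node names and the default tolerance are illustrative):

```python
def within_window(node_height: int, max_height: int, tolerance: int = 2) -> bool:
    """A node is eligible only if it sits within `tolerance`
    blocks of the highest observed block height."""
    return max_height - node_height <= tolerance

# Example: node-c has fallen 5 blocks behind and is pruned.
heights = {"node-a": 100, "node-b": 99, "node-c": 95}
tip = max(heights.values())
eligible = [n for n, h in heights.items() if within_window(h, tip)]
```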

The Deterministic Write Path (DIN)

For high-value settlement, pools can be hardened using the Decentralized Infrastructure Network (DIN). Select Sticky Sessions as the Algorithm in your Reliability Pool settings.

  • Transaction Finality: DIN pools enable transaction-level failover. If a provider drops a broadcast mid-flight, Backpac automatically re-propagates the intent through secondary survivors.
  • Zero-Drop Guarantee: This is the infrastructure requirement for institutional money movement.
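Transaction-level failover can be sketched as an ordered retry over the pool's surviving providers. The `send` callable and the error type here are placeholders, not Backpac's actual broadcast interface:

```python
def broadcast_with_failover(tx, providers, send):
    """Try each provider in turn; if a broadcast drops mid-flight,
    re-propagate the same intent through the next survivor."""
    errors = []
    for provider in providers:
        try:
            return send(provider, tx)
        except ConnectionError as exc:
            errors.append((provider, exc))  # provider dropped; fail over
    raise RuntimeError(f"all providers failed: {errors}")
```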

Region Aware Compliance

For regulated institutions operating under the GENIUS Act of 2025 or EU data residency mandates, Reliability Pools support jurisdiction-locked routing via the region_aware algorithm.

Configuration

When creating or updating a Reliability Pool, set the algorithm to region_aware and define your authorized regions:

```json
{
  "algorithm_type": "region_aware",
  "algorithm_config": {
    "allowed_regions": ["eu-west", "eu-central"]
  }
}
```

  • allowed_regions: An array of region identifiers. Only nodes tagged with a matching metadata.region will receive traffic.
  • Default Behavior: If allowed_regions is omitted, the pool defaults to the worker's physical deployment region.
  • Works with DIN: Dynamic DIN providers are filtered at routing time — the Engine evaluates each provider's region metadata before selection, ensuring compliance even with dynamically discovered infrastructure.
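A sketch of routing-time region filtering, assuming nodes carry a `metadata.region` tag; the dictionary shape and the worker-region fallback mirror the default behavior described above but are otherwise illustrative:

```python
def filter_by_region(nodes, allowed_regions=None, worker_region="us-east"):
    """Keep only nodes whose metadata.region is authorized; with no
    explicit allow-list, fall back to the worker's deployment region."""
    allowed = set(allowed_regions) if allowed_regions else {worker_region}
    return [n for n in nodes
            if n.get("metadata", {}).get("region") in allowed]
```

Untagged nodes match no region and are excluded whenever any constraint is active, which is the safe default for a compliance boundary.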

Endpoint Region Tagging

For Sovereign (BYO) endpoints, tag each endpoint with its geographic region:

```json
{
  "metadata": {
    "region": "eu-west"
  }
}
```

The Engine reads this metadata at routing time to enforce the Reliability Pool's allowed_regions constraint.

Strict Compliance

For absolute jurisdictional guarantees, ensure your Reliability Pool contains only endpoints in your authorized regions. This creates a mathematically provable compliance boundary — if no compliant node is healthy, the Engine will reject the request rather than route out-of-jurisdiction.

Global Verification (Health Sync)

Reliability is synchronized across the entire cluster.

  • Health Propagation: When one worker prunes a failed node, it broadcasts the state change via Redis Pub/Sub. All other workers immediately trip their local circuit, preventing "Cascading Failure" across your infrastructure.
  • Circuit States:
    • CLOSED: Optimal health. Traffic flows.
    • OPEN: Failure detected. Path pruned.
    • HALF-OPEN: Controlled recovery testing.
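The three states map onto a small state machine. A minimal sketch with assumed threshold and cooldown parameters (Backpac's real tuning is not documented here):

```python
import time

class CircuitBreaker:
    """Minimal three-state breaker: CLOSED -> OPEN after repeated
    failures, OPEN -> HALF-OPEN after a cooldown, then back to
    CLOSED on a successful probe or OPEN on another failure."""

    def __init__(self, threshold=3, cooldown=30.0):
        self.state = "CLOSED"
        self.failures = 0
        self.threshold = threshold
        self.cooldown = cooldown
        self.opened_at = 0.0

    def record_failure(self, now=None):
        now = time.monotonic() if now is None else now
        self.failures += 1
        if self.failures >= self.threshold or self.state == "HALF-OPEN":
            self.state, self.opened_at = "OPEN", now

    def record_success(self):
        self.state, self.failures = "CLOSED", 0

    def allow_request(self, now=None):
        now = time.monotonic() if now is None else now
        if self.state == "OPEN" and now - self.opened_at >= self.cooldown:
            self.state = "HALF-OPEN"   # controlled recovery probe
        return self.state != "OPEN"
```

In the synchronized design above, the `record_failure` transition would be the event broadcast over Redis Pub/Sub so every worker trips its local copy at once.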

Reliability Pools turn fragmented network noise into a deterministic signal.

Settlement Certainty Layer