lowkey

    Distributed lock service on Raft with fencing tokens for consistency.

    Distributed Systems · Raft · Consensus · Locking
    January 2026
    90% completed
    Yashaswi Mishra

    Tech Stack

    Backend

    Go

    Completion Status

    Project completion: 90%
    This project is still under active development.

    lowkey is a distributed lock service built on Raft consensus. It provides strongly consistent locks with fencing tokens to prevent split-brain scenarios and stale writes.


    Overview

    Distributed locking is hard. Multiple service instances need coordination, but networks partition, processes pause, and clients crash. lowkey solves these problems using:

    1. Raft consensus → Only majority partition can acquire locks
    2. Fencing tokens → Resources reject operations from stale lock holders
    3. Leases → Locks auto-release when clients stop heartbeating

    Core Guarantees

    • Strong consistency - CP in CAP theorem, no split-brain under network partitions
    • Fencing tokens - Monotonically increasing counters prevent stale writes (see the sketch after this list)
    • Automatic cleanup - Lease-based locks release automatically on client failure
    • Observability - Prometheus metrics and Grafana dashboards built-in
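
    The fencing-token guarantee boils down to one rule on the resource side: remember the highest token seen and reject anything older. The Go sketch below is illustrative only; the struct and the way the token reaches the resource are assumptions, not lowkey's API.

    go
    // Minimal sketch of fencing-token enforcement on the resource side.
    // The lock service hands each holder a monotonically increasing token;
    // how that token is delivered to the resource is an assumption here.
    package main

    import (
        "fmt"
        "sync"
    )

    // Resource rejects any write carrying a token older than the newest one seen.
    type Resource struct {
        mu           sync.Mutex
        highestToken uint64
        data         string
    }

    func (r *Resource) Write(token uint64, value string) error {
        r.mu.Lock()
        defer r.mu.Unlock()
        if token < r.highestToken {
            // A newer lock holder already wrote; this client's lock is stale.
            return fmt.Errorf("stale fencing token %d (highest seen %d)", token, r.highestToken)
        }
        r.highestToken = token
        r.data = value
        return nil
    }

    func main() {
        r := &Resource{}
        fmt.Println(r.Write(33, "from holder A")) // <nil>
        fmt.Println(r.Write(34, "from holder B")) // <nil>: newer holder wins
        fmt.Println(r.Write(33, "late write A"))  // rejected: stale token
    }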

    Use Cases

    • Distributed cron jobs - Only one instance executes at a time
    • Database migrations - Ensure single execution across clusters
    • Leader election - Elect primary nodes in distributed systems
    • Critical section protection - Coordinate access across multiple processes

    Architecture

    Consensus Layer

    • Raft protocol for distributed consensus
    • Log replication across cluster nodes
    • Leader election with automatic failover
    • Snapshot & compaction for log management

    Locking Layer

    • Lease-based locks with configurable TTL
    • Fencing tokens for ordering guarantees
    • Lock queuing with FIFO fairness
    • Deadlock prevention with timeout handling
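
    A simplified model of how these pieces (lease TTL, fencing counter, FIFO wait queue) fit together is sketched below. It is single-node and illustrative; lowkey's replicated state machine would apply equivalent transitions through the Raft log so every node agrees on lock ownership.

    go
    // Simplified model of a lease-based lock with a fencing counter and a
    // FIFO wait queue. Illustrative only, not lowkey's actual internals.
    package main

    import (
        "fmt"
        "time"
    )

    type Lock struct {
        Holder       string    // owner_id of the current holder ("" if free)
        FencingToken uint64    // monotonically increasing, bumped on every grant
        LeaseExpires time.Time // lock becomes reclaimable after this instant
        WaitQueue    []string  // FIFO queue of owner_ids waiting for the lock
    }

    // Acquire grants the lock if it is free or its lease has expired;
    // otherwise the caller joins the FIFO queue.
    func (l *Lock) Acquire(owner string, ttl time.Duration, now time.Time) (granted bool, token uint64) {
        if l.Holder == "" || now.After(l.LeaseExpires) {
            l.Holder = owner
            l.FencingToken++ // every new grant gets a strictly larger token
            l.LeaseExpires = now.Add(ttl)
            return true, l.FencingToken
        }
        l.WaitQueue = append(l.WaitQueue, owner)
        return false, 0
    }

    // Release frees the lock and immediately hands it to the next waiter, if any.
    func (l *Lock) Release(owner string, ttl time.Duration, now time.Time) {
        if l.Holder != owner {
            return // only the current holder may release
        }
        l.Holder = ""
        if len(l.WaitQueue) > 0 {
            next := l.WaitQueue[0]
            l.WaitQueue = l.WaitQueue[1:]
            l.Acquire(next, ttl, now)
        }
    }

    func main() {
        now := time.Now()
        lock := &Lock{}
        ok, tok := lock.Acquire("client-1", 60*time.Second, now)
        fmt.Println(ok, tok) // true 1
        ok, _ = lock.Acquire("client-2", 60*time.Second, now)
        fmt.Println(ok, lock.WaitQueue) // false [client-2]
        lock.Release("client-1", 60*time.Second, now)
        fmt.Println(lock.Holder, lock.FencingToken) // client-2 2
    }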

    API Layer

    • gRPC for high-performance RPC
    • HTTP/REST for easy integration
    • Go SDK with ergonomic client library
    • Health checks and readiness probes

    Observability

    Built-in monitoring:

    • Raft metrics (leader elections, log entries, snapshots)
    • Lock metrics (acquisitions, releases, timeouts)
    • Lease metrics (active leases, expirations)
    • System metrics (goroutines, memory, latency)

    Grafana dashboards included for visualization.
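
    For flavor, this is how such counters and gauges are commonly exposed with the Prometheus Go client. The metric names, scrape path, and port below are assumptions for illustration, not lowkey's actual series.

    go
    // Sketch of exposing lock/lease metrics with the Prometheus Go client.
    // Metric names are assumptions, not lowkey's actual metric set.
    package main

    import (
        "log"
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promauto"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    var (
        lockAcquisitions = promauto.NewCounter(prometheus.CounterOpts{
            Name: "lowkey_lock_acquisitions_total",
            Help: "Number of successful lock acquisitions.",
        })
        activeLeases = promauto.NewGauge(prometheus.GaugeOpts{
            Name: "lowkey_active_leases",
            Help: "Leases currently alive (heartbeating).",
        })
        acquireLatency = promauto.NewHistogram(prometheus.HistogramOpts{
            Name:    "lowkey_lock_acquire_seconds",
            Help:    "Latency of lock acquisition requests.",
            Buckets: prometheus.DefBuckets,
        })
    )

    func main() {
        // In the real code paths these would be driven by the lock/lease layer:
        // lockAcquisitions.Inc() on grant, activeLeases.Inc()/Dec() on lease
        // create/expire, acquireLatency.Observe(d.Seconds()) around acquire.
        lockAcquisitions.Inc()
        activeLeases.Set(1)

        // Scrape endpoint for Prometheus; Grafana dashboards read from there.
        http.Handle("/metrics", promhttp.Handler())
        log.Fatal(http.ListenAndServe(":2112", nil))
    }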


    Quick Start

    bash
    # Single node (development)
    ./lowkey --bootstrap --data-dir ./data
    
    # Create lease (60 second TTL)
    curl -X POST http://localhost:8080/v1/lease \
      -d '{"owner_id":"client-1","ttl_seconds":60}'
    
    # Acquire lock with fencing token
    curl -X POST http://localhost:8080/v1/lock/acquire \
      -d '{"lock_name":"my-job","owner_id":"client-1","lease_id":1}'
    
    # Release lock
    curl -X POST http://localhost:8080/v1/lock/release \
      -d '{"lock_name":"my-job","lease_id":1}'

    Why Raft?

    Comparison with alternatives:

    System           Consensus   Fencing Tokens   Split-brain Protection
    lowkey           Raft        Yes              Yes
    etcd             Raft        Not built-in     Yes
    Consul           Raft        Not built-in     Yes
    Redis Redlock    None        No               No

    Testing

    • Unit tests for core logic (see the example after this list)
    • Integration tests for Raft consensus
    • Chaos testing for partition tolerance
    • Benchmark tests for performance validation
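
    As a taste of the unit-test style, the sketch below checks the core fencing property, unique and gap-free tokens under concurrency, against a stand-in token issuer rather than lowkey's real state machine.

    go
    // fencing_test.go - flavor of the unit tests around fencing-token ordering.
    // The issuer here is a stand-in; the real tests exercise the lock layer.
    package lowkey

    import (
        "sort"
        "sync"
        "testing"
    )

    // issuer hands out monotonically increasing tokens, as a lock grant does.
    type issuer struct {
        mu   sync.Mutex
        next uint64
    }

    func (i *issuer) grant() uint64 {
        i.mu.Lock()
        defer i.mu.Unlock()
        i.next++
        return i.next
    }

    func TestFencingTokensAreUniqueAndGapFree(t *testing.T) {
        const goroutines, grants = 10, 1000
        var (
            iss    issuer
            wg     sync.WaitGroup
            mu     sync.Mutex
            tokens []uint64
        )
        for g := 0; g < goroutines; g++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < grants; j++ {
                    tok := iss.grant()
                    mu.Lock()
                    tokens = append(tokens, tok)
                    mu.Unlock()
                }
            }()
        }
        wg.Wait()

        sort.Slice(tokens, func(a, b int) bool { return tokens[a] < tokens[b] })
        for i := 1; i < len(tokens); i++ {
            if tokens[i] == tokens[i-1] {
                t.Fatalf("duplicate fencing token %d", tokens[i])
            }
        }
        if last := tokens[len(tokens)-1]; last != goroutines*grants {
            t.Fatalf("expected highest token %d, got %d", goroutines*grants, last)
        }
    }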

    Technical Deep Dive

    Key challenges solved:

    1. Split-brain prevention - Raft quorum ensures only one partition holds locks
    2. Stale lock detection - Fencing tokens reject outdated operations
    3. Lease expiration - Background workers clean up abandoned locks (sketched after this list)
    4. Network partitions - Reads and writes are served only by the majority partition, so minority partitions fail safe by rejecting requests
    5. Leader failover - Automatic re-election maintains availability
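
    To make the lease-expiration worker concrete, here is a sketch of the sweep loop. Its shape is an assumption; on a real cluster the leader would commit expirations through the Raft log so every replica releases the same locks.

    go
    // Sketch of the lease-expiration background worker (challenge 3 above).
    // Structure is assumed; not lowkey's actual implementation.
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    type Lease struct {
        Owner     string
        ExpiresAt time.Time
    }

    type LeaseTable struct {
        mu     sync.Mutex
        leases map[int]Lease
    }

    // expire removes every lease whose TTL has lapsed and reports their owners
    // so the locking layer can release the locks they held.
    func (t *LeaseTable) expire(now time.Time) []string {
        t.mu.Lock()
        defer t.mu.Unlock()
        var expiredOwners []string
        for id, l := range t.leases {
            if now.After(l.ExpiresAt) {
                expiredOwners = append(expiredOwners, l.Owner)
                delete(t.leases, id)
            }
        }
        return expiredOwners
    }

    func main() {
        table := &LeaseTable{leases: map[int]Lease{
            1: {Owner: "client-1", ExpiresAt: time.Now().Add(1 * time.Second)},
        }}

        // Background worker: scan periodically and release locks of dead clients.
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for now := range ticker.C {
            if owners := table.expire(now); len(owners) > 0 {
                fmt.Println("releasing locks held by:", owners)
                return
            }
        }
    }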

    Repository

    GitHub: https://github.com/pixperk/lowkey
