I've been thinking of building a cluster-wide, approximate rate limiter using Pekko Distributed Data (PNCounterMap).

Basic idea

Why

Questions for the community
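The details above are collapsed, but as a strawman of how a PNCounterMap-backed limiter could behave, here is a plain-Python simulation (not Pekko's API; it models only the grow-only half of a PNCounter, and all class and method names are mine). Each node increments its own slot in a replicated counter keyed by `(key, window)` and decides against its possibly stale merged view, which is exactly what makes the limiter approximate:

```python
class GCounter:
    """Per-node increment counts; merge = elementwise max (CRDT semantics).
    This is the 'P' half of a PNCounter; decrements are omitted for brevity."""
    def __init__(self):
        self.per_node = {}                     # node_id -> local increment count

    def increment(self, node_id, n=1):
        self.per_node[node_id] = self.per_node.get(node_id, 0) + n

    def merge(self, other):
        merged = GCounter()
        for node in set(self.per_node) | set(other.per_node):
            merged.per_node[node] = max(self.per_node.get(node, 0),
                                        other.per_node.get(node, 0))
        return merged

    @property
    def value(self):
        return sum(self.per_node.values())


class NodeLimiter:
    """One node's view: increments locally, decides on the (stale) merged view."""
    def __init__(self, node_id, limit):
        self.node_id, self.limit = node_id, limit
        self.counters = {}                     # (key, window) -> GCounter

    def try_acquire(self, key, window):
        c = self.counters.setdefault((key, window), GCounter())
        if c.value >= self.limit:              # approximate: replication lags
            return False
        c.increment(self.node_id)
        return True

    def gossip_from(self, other):
        """Simulates DData replication: merge the other node's counters in."""
        for k, c in other.counters.items():
            mine = self.counters.setdefault(k, GCounter())
            self.counters[k] = mine.merge(c)
```

Two nodes with limit 3 will each admit requests independently until gossip converges, so the cluster can briefly over-admit; after a merge, the combined count is enforced.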
I think it's better to go with Redis.
On second thought, I may not even need cluster-wide rate limiting. Each serve URL is already HMAC-signed and contains a valid-until timestamp. That means expired URLs are automatically rejected, and replays outside the window can't pass. So I could just accept impressions/clicks within that short validity window, tolerate duplicates there, and rely on probabilistic aggregation downstream (e.g., HyperLogLog) for analytics. This would keep the system much simpler and avoid gossip/tombstone overhead altogether.
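A minimal sketch of that scheme, assuming the signature covers the path plus the expiry (the key, parameter names, and URL layout here are illustrative, not the actual serving code): verification is stateless, so any node can accept or reject without cluster coordination.

```python
import hashlib
import hmac
import time

SECRET = b"demo-secret"  # hypothetical signing key, shared by signer and verifier

def sign_url(path, valid_until, secret=SECRET):
    """Append a valid-until timestamp and an HMAC over path + expiry."""
    msg = f"{path}|{valid_until}".encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return f"{path}?exp={valid_until}&sig={sig}"

def accept(path, exp, sig, now=None, secret=SECRET):
    """Reject expired URLs and bad signatures.
    Duplicates inside the validity window are tolerated by design."""
    now = time.time() if now is None else now
    if now > exp:
        return False                                     # expired: auto-rejected
    msg = f"{path}|{exp}".encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)            # constant-time compare
```

Note that a replay *inside* the window still passes `accept`; deduplicating those is exactly what gets pushed to the downstream probabilistic aggregation.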
I'm experimenting in a dedicated project that uses JMH to see how it performs. I'll share it when I'm done, if anyone is interested.
It's just a simulation harness to see how a DData-based rate limiter could work in practice.
What about a Bloom-filter-based rate limiter? The idea is to implement a distributed rate limiter and replay guard without keeping explicit counters, instead windowing with Bloom filters: each shard entity manages a current and a previous time window, recording whether a given nonce has been seen. If a nonce appears again within the retained windows it is denied, and anything older than the previous window is rejected outright. Grants and denials become a function of time-bucketed membership rather than precise counting, and that scales cleanly across a cluster.

To survive shard passivation or rebalancing, each entity periodically publishes a compact Bloom snapshot into Pekko DData, so a new incarnation can immediately warm-start with the recent windows instead of forgetting its history. The result is a fast, memory-efficient limiter that tolerates rebalances and skew while providing strong deduplication semantics. Each shard manages a disjoint partition of the input key-space (e.g., part = crc32(key) % parts) and therefore holds the dedup state only for its slice.
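The windowing logic above can be sketched in a few lines (a toy, single-shard version: the class and parameter names are mine, the DData snapshot/warm-start part is omitted, and `window` stands for the time bucket derived from the request's timestamp):

```python
import hashlib
import zlib

class BloomWindows:
    """Replay guard over two time windows: 'current' and 'previous'.
    Bit arrays are stored as Python ints for brevity."""
    def __init__(self, bits=1 << 16, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.current, self.previous = 0, 0
        self.window = 0                        # index of the current time window

    def _positions(self, nonce):
        # Derive k bit positions from one SHA-256 digest (4 bytes per hash).
        digest = hashlib.sha256(nonce.encode()).digest()
        for i in range(self.hashes):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.bits

    def _seen(self, filt, nonce):
        return all(filt >> p & 1 for p in self._positions(nonce))

    def try_admit(self, nonce, window):
        if window < self.window - 1:
            return False                       # older than previous window: rejected outright
        if self._seen(self.current, nonce) or self._seen(self.previous, nonce):
            return False                       # replay within retained windows: denied
        for p in self._positions(nonce):
            self.current |= 1 << p             # record membership, no counters kept
        return True

    def roll(self):
        """Window tick: current becomes previous; older state is forgotten."""
        self.previous, self.current = self.current, 0
        self.window += 1

def shard_for(key, parts):
    # Each shard owns a disjoint slice of the key-space: part = crc32(key) % parts.
    return zlib.crc32(key.encode()) % parts
```

One consequence worth noting: Bloom membership can false-positive, so a fresh nonce is occasionally denied, but a replayed nonce within the retained windows is never admitted, which is the asymmetry you want for a replay guard.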
You can test it with this!