Fast ID Generation Part I

Update: Hello HN! Please use this for discussion.

tl;dr

In a distributed storage environment you can no longer generate IDs the traditional way. You need a fast ID-generating service, which, given constraints such as bit size, isn’t obvious to build.

Twitter’s Snowflake does it under its own set of constraints, but given more lenient constraints there are other ways, such as composing an ID from a time component and random jitter. There are also important risks and guarantees to take into consideration.

Parting Ways with Auto Increments

You want to generate IDs when you’re working in a sharded setup, in which none of the shards can take responsibility for generating IDs on its own.

Further, you might want an independent service or component to be responsible for ID generation in complex transactional use cases, where multiple systems might be affected.

In other cases you’ll want to perform intricate tracking, which requires generating IDs on the client side, far away from the server.

ID Generation Constraints

ID generation may seem trivial. Just UUID.new and you’re done, right? Nope.
Let’s take a survey of some important constraints that make this quite complex.

  1. Must use a reasonable number of bits (< 64)
  2. Fast, incremental ID generation, ordered by date
  3. Ability to be deployed to multiple machines
  4. No common storage

When constraint (1) is relaxed, we can go with a TIME+UUID ID; this will work great, given a reliable time service, of course.
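To make that concrete, here’s a minimal sketch (JavaScript, not taken from any of the projects mentioned here) of such a TIME+UUID composite; it assumes a Node version that provides crypto.randomUUID:

var crypto = require('crypto');

// Sketch only: prefix a random UUID with the current time in hex so that
// IDs sort roughly by creation time. The result is well over 64 bits,
// which is exactly why constraint (1) must be relaxed for this approach.
function timeUuid() {
  var millis = Date.now().toString(16);      // time component, hex-encoded
  return millis + '-' + crypto.randomUUID(); // random component
}

console.log(timeUuid());
// e.g. "13c9d4a1b20-0f8fad5b-d9cb-469f-a165-70867728950e"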

When constraint (4) is relaxed, we can use a common storage for synchronization. A Redis-based solution is benchmarked in the next post.
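As a teaser, here’s a sketch of what such a storage-backed approach could look like. Redis’s INCR is atomic, so a single shared counter yields unique, increasing IDs; the key name and client setup below are my own assumptions (node-redis v4), not the benchmark code from the next post:

var redis = require('redis');
var client = redis.createClient();     // the shared Redis instance is the common storage

// Every caller, on every machine, gets a unique, increasing number.
function nextId() {
  return client.incr('ids:counter');   // 'ids:counter' is an arbitrary key name
}

async function main() {
  await client.connect();
  console.log(await nextId());
  await client.quit();
}

main();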

But hey! If you’re kinda lost, you might want to become familiar with what is used in the wild, like Twitter’s Snowflake, Flickr’s ticket servers, and Boundary’s Flake.

In this post I’m going to relax constraint (1), as Boundary also did; Twitter enforced it in Snowflake for internal reasons of their own, as they claim.

Challenges

Let’s take a look at a handful of challenges that aren’t initially obvious.

  1. Time sync within a single machine. NTP can adjust the clock on its own, which is a Good Thing, but for this use case it may be dangerous
  2. Time sync across machines
  3. Observation: when running on multiple cores (multiple workers/threads), the same challenges exist as with multiple machines
  4. Fast generation, at least 1000 IDs/sec
  5. IDs must be strongly unique (no margin for error) across machines, with no synchronization mechanism other than NTP (no storage)
  6. IDs should be time-(k)-sortable

Risks

  • With a single machine – a single point of failure (SPOF)
  • With multiple machines – duplicate IDs.

How Snowflake-like Services Work

Here is how ID composition works in principle (number of bits and algorithm simplified for brevity).

var millis = Date.now();                  // system clock in milliseconds

var timePart = millis * Math.pow(2, 12);  // shift the time left by 12 bits
// current ID:
// "[111011100000001100111110011010001111][000000000000]"
// -> time bits + 12 free bits. 48 total bits.

var nodeId = 3;                           // this node's id
var nodePart = nodeId * Math.pow(2, 8);   // shift the node-id left by 8 bits
// current ID:
// "[111011100000001100111110011010001111][0011][00000000]"
// ->  8 free bits. 48 total bits. our node-id is 0011 (3)
//     4 bits allow us to have up to 16 node values.

var counter = 6;                          // per-clock sequence counter
var uid = timePart + nodePart + counter;
// current ID:
// "[111011100000001100111110011010001111][0011][00000110]"
// ->  0 free bits. 48 total bits. our node-id is 0011 (3)
//     the 8 bits allow us to keep 256 values for clock
//     resolution.

Each generating node can be allocated a ‘node-id’ from the 4-bit space. This ID can be assigned ‘by hand’ per process at startup.
In Twitter’s Snowflake, for example, ZooKeeper is used to manage correct node-id allocation across the cluster.
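For the ‘by hand’ variant, the simplest sketch is to pass the node-id in through an environment variable when the process starts (NODE_ID is an assumed name, not something from Snowflake or this post):

// 4 bits of node-id allow values 0..15.
var nodeId = parseInt(process.env.NODE_ID || '0', 10);
if (isNaN(nodeId) || nodeId < 0 || nodeId > 15) {
  throw new Error('NODE_ID must be an integer between 0 and 15');
}

// usage: NODE_ID=3 node generator.js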

Clock Resolution

An issue that isn’t quite trivial until you bump into it: since the system clock ticks in milliseconds (or perhaps microseconds), a high-performance server may get several requests within a single clock tick. For this purpose, we keep a counter which we increment “manually”.

The next problem is detecting multiple requests coming in on a single clock tick. That is done by keeping the clock value that the last request saw; if the next request sees the same value, we’re in a multiple-requests-per-clock scenario and should increment the internal counter. Sadly, we only have 8 bits for the counter, which allows just 256 increments; if we bump into more requests within the same tick, we’re going to reject the request.
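Putting the pieces together, here’s a hedged sketch of that detection logic, following the simplified 36/4/8-bit layout from the example above; the variable names and the custom epoch are my own assumptions, not the post’s actual code:

var EPOCH = Date.UTC(2013, 0, 1);  // assumed custom epoch, keeps the time field small
var nodeId = 3;                    // assumed to be assigned externally (0..15)
var lastMillis = -1;
var counter = 0;

function nextId() {
  var millis = Date.now() - EPOCH;
  if (millis === lastMillis) {
    // Same clock tick as the previous request: bump the per-tick counter.
    counter += 1;
    if (counter > 255) {
      // 8-bit counter exhausted for this tick: reject the request.
      throw new Error('too many IDs requested within a single clock tick');
    }
  } else {
    // New clock tick: remember it and reset the counter.
    lastMillis = millis;
    counter = 0;
  }
  // Plain arithmetic instead of bitwise ops keeps us within JS's 53-bit integer range.
  return millis * Math.pow(2, 12) + nodeId * Math.pow(2, 8) + counter;
}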

System Performance

The above issue (Clock Resolution) clearly shows that we can only serve X requests per clock tick. From this it follows that we can only serve K * X requests per second, where K is the number of ticks per second. It is important to see that this ceiling has nothing to do with the application’s ability to optimize code or serve requests faster.
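To put numbers on it, assuming millisecond ticks and the 8-bit counter from above:

// Back-of-envelope throughput ceiling for a single node.
var K = 1000;           // clock ticks per second (millisecond resolution)
var X = 256;            // IDs per tick (8-bit counter)
console.log(K * X);     // => 256000 IDs/sec per node, no matter how fast the app is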

Multi Core / Multi Instance Service

We keep an “id” per node; this is true for multicore systems as well, where each worker or thread counts as a node. This id field discriminates between the facilities generating IDs and keeps the generated ID spaces physically (in bits) apart, so IDs stay unique.

Picking a Solution & Benchmarks

I’ll publish another post within a few days with reasoning behind the solutions (with code), benchmarks, and notes about k-sortability.