-- Based off IETF draft, https://datatracker.ietf.org/doc/draft-peabody-dispatch-new-uuid-format/
create or replace function uuid_generate_v7()
returns uuid
as $$
begin
  -- use random v4 uuid as starting point (which has the same variant we need)
  -- then overlay timestamp
  -- then set version 7 by flipping the 2 and 1 bit in the version 4 string
  return encode(
    set_bit(
      set_bit(
        overlay(uuid_send(gen_random_uuid())
                placing substring(int8send(floor(extract(epoch from clock_timestamp()) * 1000)::bigint) from 3)
                from 1 for 6
        ),
        52, 1
      ),
      53, 1
    ),
    'hex')::uuid;
end
$$
language plpgsql
volatile;
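For context, a minimal usage sketch (the table and column names are made up, not part of the gist): the function works well as a server-side default for time-ordered primary keys.
-- Hypothetical table using uuid_generate_v7() as a primary-key default.
create table if not exists events (
    id      uuid primary key default uuid_generate_v7(),
    payload jsonb
);
insert into events (payload) values ('{"kind": "example"}') returning id;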
-- Generate a custom UUID v8 with microsecond precision
create or replace function uuid_generate_v8()
returns uuid
as $$
declare
  timestamp    timestamptz;
  microseconds int;
begin
  timestamp    = clock_timestamp();
  -- microsecond remainder within the current millisecond (0..999), scaled onto
  -- the 12-bit field (0..4095), hence the factor 4096/1000 = 4.096
  microseconds = (cast(extract(microseconds from timestamp)::int - (floor(extract(milliseconds from timestamp))::int * 1000) as double precision) * 4.096)::int;
  -- use random v4 uuid as starting point (which has the same variant we need)
  -- then overlay timestamp
  -- then set version 8 and add microseconds
  return encode(
    set_byte(
      set_byte(
        overlay(uuid_send(gen_random_uuid())
                placing substring(int8send(floor(extract(epoch from timestamp) * 1000)::bigint) from 3)
                from 1 for 6
        ),
        6, (b'1000' || (microseconds >> 8)::bit(4))::bit(8)::int
      ),
      7, microseconds::bit(8)::int
    ),
    'hex')::uuid;
end
$$
language plpgsql
volatile;
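As a rough sketch (not from the gist), the embedded 48-bit millisecond timestamp can be read back out of either function's output, which is handy for spot-checking ordering:
-- Decode the leading unix_ts_ms field of a freshly generated value.
select to_timestamp(
           ('x' || left(replace(uuid_generate_v7()::text, '-', ''), 12))::bit(48)::bigint / 1000.0
       ) as embedded_timestamp;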
PERFORMANCE: Move from pgcrypto to built-in gen_random_uuid():
    Curtis Summers (https://github.com/csummers)
PERFORMANCE: Use set_bit to upgrade v4 to v7, not set_byte:
PERFORMANCE: Reduce local variable use while still being maintainable:
    Rolf Timmermans (https://github.com/rolftimmermans)
Copyright 2023 Kyle Hubert <[email protected]> (https://github.com/kjmph)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Thanks @kjmph for the summary and sorry for not making this clearer in my comment. I was surprised to see that removing local variables made a small but significant difference on our production PostgreSQL 15 server, but of course YMMV.
And thanks for clearing up that removing floor() is not standards compliant. Keep the changes you like and definitely feel free to ignore the rest. :)
Thanks @rolftimmermans, it is incorporated now. Cheers!
(EDIT: So much for test machines... The same thing happened to me on production; I see a marked difference when removing local variables. I took that change as well. Let's see what everyone else thinks.)
I made the switch to pg_uuidv7 since I saw it is supported by the database provider I use (Neon), and I thought I'd share my strategy here with other UUIDv7 enthusiasts for migrating to that extension, if desired.
I got a conflict while enabling the extension with CREATE EXTENSION IF NOT EXISTS pg_uuidv7;, since the extension adds the same function signature we all have in the PL/pgSQL function uuid_generate_v7().
In case you just use this as a default value, here's the quickest way to migrate:
-- Rename the old function to a new name
ALTER FUNCTION uuid_generate_v7() RENAME TO uuid_generate_v7_fn;
-- Creating the extension should no longer conflict
CREATE EXTENSION IF NOT EXISTS pg_uuidv7;
-- Replace uuid_generate_v7_fn with a wrapper that now calls the extension's uuid_generate_v7
CREATE OR REPLACE FUNCTION uuid_generate_v7_fn() RETURNS uuid AS $$
BEGIN
    RETURN uuid_generate_v7();
END;
$$ LANGUAGE plpgsql;
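As a quick sanity check after the migration (sketch only; the table name below is hypothetical), an existing default bound to the old name should still resolve through the wrapper, and both paths should return v7 values:
-- Hypothetical pre-existing table whose default referenced the old function name.
CREATE TABLE IF NOT EXISTS demo_migration (
    id   uuid PRIMARY KEY DEFAULT uuid_generate_v7_fn(),
    note text
);
INSERT INTO demo_migration (note) VALUES ('after migration') RETURNING id;
SELECT uuid_generate_v7_fn() AS via_wrapper, uuid_generate_v7() AS via_extension;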
Here's also a gist with a Dockerfile I've set up that adds some custom extensions (e.g. pg_uuidv7) to the Alpine-based Postgres containers I use for local development:
https://gist.github.com/ItsWendell/af2e2b4c93bb2f5d73f34b87406af435
Thanks for sharing @ItsWendell! I chose uuid_generate_v7 since it followed the convention. Not surprised other projects are also using that name. I appreciate you posting this for people who want to back out and switch to a C implementation. Not everyone can use an extension, and some will benefit from a PL/pgSQL version, so I'll continue to leave this up.
(Note: great Dockerfile too)
With regard to performance, once the function is reduced to a one-liner in PL/pgSQL, it should be converted to the SQL language instead of plpgsql. This avoids the overhead of the PL/pgSQL interpreter. The contents of SQL functions are typically inlined into the calling query (see https://wiki.postgresql.org/wiki/Inlining_of_SQL_functions).
In a quick test generating 1 million values, with Postgres 15, the following version appears to be 13-15% faster than the plpgsql version:
Code:
create or replace function uuid_generate_v7()
returns uuid
as $$
-- use random v4 uuid as starting point (which has the same variant we need)
-- then overlay timestamp
-- then set version 7 by flipping the 2 and 1 bit in the version 4 string
select encode(
set_bit(
set_bit(
overlay(uuid_send(gen_random_uuid())
placing substring(int8send(floor(extract(epoch from clock_timestamp()) * 1000)::bigint) from 3)
from 1 for 6
),
52, 1
),
53, 1
),
'hex')::uuid;
$$
language SQL
volatile;
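One rough way to check whether inlining actually kicks in on a given server (a quick sketch): look at EXPLAIN VERBOSE output. If the body was inlined, the plan's output list shows the expanded encode(set_bit(...)) expression rather than a call to uuid_generate_v7().
explain (verbose, costs off)
select uuid_generate_v7()
from generate_series(1, 3);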
Thanks @dverite! You all are so great, this gist is amazing. Can I ask what everyone is using to benchmark? I'm running \timing with this query:
$ select true from (select uuid_generate_v7() from generate_series(1, 1000000000)) a limit 1;
However, for this change I don't see a 13-15% improvement. Maybe I'm missing the inlining advantage. Could you, @dverite, please share what your benchmark is?
Hi @kjmph, I'm using select count(uuid_generate_v7()) from generate_series(1,1000000); in psql with \timing on.
On my desktop PC (Ubuntu 22.04, Postgres 15.4, AMD Ryzen 7 5800X3D), executing this query 4 consecutive times, I typically get these durations:
SQL version:
Time: 1667.722 ms (00:01.668)
Time: 1666.580 ms (00:01.667)
Time: 1662.223 ms (00:01.662)
Time: 1666.470 ms (00:01.666)
plpgsql version:
Time: 2109.898 ms (00:02.110)
Time: 2087.937 ms (00:02.088)
Time: 2089.504 ms (00:02.090)
Time: 2090.521 ms (00:02.091)
On my laptop PC (Ubuntu 20.04, Postgres 15.4, Intel Core i5-8265U), I get these:
SQL version:
Time: 2958,489 ms (00:02,958)
Time: 2969,462 ms (00:02,969)
Time: 2961,334 ms (00:02,961)
Time: 2971,450 ms (00:02,971)
plpgsql version:
Time: 3587,606 ms (00:03,588)
Time: 3429,490 ms (00:03,429)
Time: 3379,035 ms (00:03,379)
Time: 3402,240 ms (00:03,402)
Great, thanks for sharing @dverite. It took a bit of digging, but it appears the SQL version of the function is more performant for a large number of invocations, and less performant for a small number of invocations. The cross-over on my machine is ~50 invocations. Thus, would you agree that the predominant use case is inserting a new record with a UUIDv7? If so, it seems better to bias towards the single-invocation case and keep the current PL/pgSQL version.
Looking at the explain analyze output shows that the reason I wasn't seeing a performance improvement with my benchmark query was most likely because of the subquery scan with the limit. I see that the aggregate (count) performs better at those larger invocation counts.
If you want to try to reproduce, use the bench function located at this site. Try executing:
$ select * from bench('select uuid_generate_v7_dverite()', 100000); -- the version posted by @dverite
$ select * from bench('select uuid_generate_v7_kjmph()', 100000); -- the version posted in the gist
$ select * from bench('select uuid_generate_v7_dverite() from generate_series(1, 10)', 50000);
$ select * from bench('select uuid_generate_v7_kjmph() from generate_series(1, 10)', 50000);
$ select * from bench('select uuid_generate_v7_dverite() from generate_series(1, 100)', 10000);
$ select * from bench('select uuid_generate_v7_kjmph() from generate_series(1, 100)', 10000);
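In case that helper isn't handy, a minimal stand-in could look like the sketch below (an assumption: the real bench() reports more detail such as min/avg/max timings; this one just runs the query the given number of times and returns the total elapsed time).
create or replace function bench(query text, iterations int default 100)
returns interval
as $$
declare
    started timestamptz := clock_timestamp();
begin
    for i in 1..iterations loop
        execute query;   -- result rows are discarded
    end loop;
    return clock_timestamp() - started;
end
$$
language plpgsql;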
If you prefer to work up an example query for a typical insert load, we can explore that as well.
Thoughts?
The given uuidv7 implementations here are not sorted in my tests. Isn't uuidv7 supposed to be sorted?
Simply running the following query and checking the "generate_series" integer column reveals it.
select uuid_generate_v7(), generate_series from generate_series(1,200) order by uuid_generate_v7 asc
On the other hand, the function from the gist I linked works correctly.
@ardabeyazoglu If you need that level of granularity you should use UUIDv8, which has microsecond precision. UUIDv7 uses milliseconds only. The functions given here easily generate 200 UUIDs within the same millisecond, which will cause UUIDv7s to be in random order with respect to each other.
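A rough way to see this in practice (a sketch): group 200 generated values by their 48-bit timestamp prefix, i.e. the first 13 characters of the text form, and count how many land in the same millisecond.
select left(uuid_generate_v7()::text, 13) as ms_prefix, count(*) as per_millisecond
from generate_series(1, 200)
group by 1
order by 2 desc;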
@rolftimmermans I see. However, the one provided in the gist I sent fulfils that level of granularity. It gives me 500 rows instantly and sorted. But there is a slight performance drop compared to this one.
@ardabeyazoglu I guess the function in your link does not implement UUIDv7 correctly (to the extent we can call any implementation of a draft RFC "correct"). There is an implementation of uuid_generate_v8() given in this gist. I suggest using that; it's fast and has microsecond precision.
Edit: Seems like the draft RFC is changing the level of precision allowed in UUIDv7: https://www.ietf.org/archive/id/draft-ietf-uuidrev-rfc4122bis-14.html#name-uuid-version-7
Hello @ardabeyazoglu, it is a bit of a subtle answer. Earlier drafts of UUIDv7 contained sub-second precision bits in the format that an implementation MAY use. UUIDv8 was for all custom usage that was implementation controlled. Current versions of the draft made UUIDv7 millisecond precision only, and all sub-millisecond precision was moved to UUIDv8 as a custom format. The UUIDv7 implementation in this gist conforms to the current drafts, while the UUIDv8 in this gist conforms to the old UUIDv7 with microsecond precision.
The gist you linked to was an old UUIDv7 implementation with microsecond precision. If you want to compare apples to apples, please compare uuid_generate_v8 in this gist to the other implementation for performance analysis.
Note, uuid_generate_v7 in this gist is sorted in your example query; it generates so many UUIDs per millisecond that they look unordered in the test query, as Rolf indicated.
Thanks @rolftimmermans for answering these questions; I thought I would provide more color in case it's helpful.
Thanks for the detailed clarification @kjmph, I also saw the difference after reading the code carefully.
@kjmph, FYI
According to the current RFC (it was RFC 4122; it is now RFC 9562, Universally Unique IDentifiers (UUIDs)), UUIDv7 may contain:
- An OPTIONAL sub-millisecond timestamp fraction (12 bits at maximum) as per Section 6.2 (Method 3).
IMHO, it's OK to fill rand_a with 12 bits of microseconds in UUIDv7.
Ah, my apologies, the floor wasn't left over from the earlier draft, it is there for two reasons.
Thus, we can't remove the floor for correctness reasons, unless there is a faster way to only retrieve the millisecond bits from the clock_timestamp. However, good news, it seems all the performance gains on my test machine were due to change #3, which is the set_bit instead of the set_byte calls. So, I still think that change should be accepted.
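A small illustration of the correctness point (with made-up numbers): without floor(), the cast to bigint rounds rather than truncates, which can bump the encoded timestamp into the next millisecond.
select floor(1700000000000.7)::bigint as with_floor,    -- 1700000000000
       1700000000000.7::bigint        as without_floor; -- 1700000000001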