
Myths, GUID vs Autoincrement

Mar. 10th, 2007 | 05:56 pm

I am a fan of using GUIDs as primary keys. They are database
independent, object independent, and I can create them either in my
application or in the database (MySQL has had a UUID() function since
5.0, but since it does not replicate I don't normally use it... this
has been fixed in 5.1 though).
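
For the record, here is roughly what the two approaches look like side by side. This is just a sketch; the table and column names are made up for illustration.

-- Autoincrement primary key: the database hands out the value.
CREATE TABLE t_auto (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name VARCHAR(64),
  PRIMARY KEY (id)
) ENGINE=InnoDB;

-- GUID primary key: generate it in the application, or let MySQL do it.
CREATE TABLE t_guid (
  id CHAR(36) NOT NULL,
  name VARCHAR(64),
  PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO t_auto (name) VALUES ('example');
INSERT INTO t_guid (id, name) VALUES (UUID(), 'example');
-- ...or supply a value the application generated itself:
INSERT INTO t_guid (id, name) VALUES ('6ccd780c-baba-1026-9564-5b8c656024db', 'example');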

What I have heard for years is that InnoDB doesn't handle them very
well. I have had my doubts, but I had never taken the time to see
whether this was true.

This week I finally got around to putting this to a test.

./mysqlslap --concurrency=1,50,100,150,200,250,300 --iterations=10 \
    --number-int-cols=2 --number-char-cols=3 --auto-generate-sql \
    --auto-generate-sql-add-autoincrement --auto-generate-sql-load-type=update \
    --auto-generate-sql-execute-number=10000 --auto-generate-sql-unique-write-number=10 \
    --auto-generate-sql-write-number=100000 --csv=/tmp/innodb-auto.csv \
    --engine=blackhole,innodb

So 10K rows, with increasing numbers of threads making modifications.
Unlike a lot of the tests I do, this is a growth test... i.e., this is
what you would see as a site grew. Normally when I look at how SMP
machines handle a problem I am more interested in seeing how it splits
across more threads and more CPUs. In this case I wanted to look at
just how the database would behave as a site grew.

The test I am looking at is primary key focused, and it's all updates.
So a read occurs and a write follows it.
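
To give a feel for the shape of that load, each client is basically hammering the table with statements like the ones below. This is only my approximation of what mysqlslap generates (the table, column and key values are invented), not its literal output.

UPDATE t1 SET intcol1 = 42 WHERE id = 12345;                                   -- autoincrement key
UPDATE t1 SET intcol1 = 42 WHERE id = '6ccd780c-baba-1026-9564-5b8c656024db';  -- GUID key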

As you can tell from the graph, the overhead, as seen via blackhole,
is nearly identical whether UUID() or autoincrement was used. The only
issue I ran into was that InnoDB started getting deadlocks when I
cranked the connections up to 300 on my quad PPC running Ubuntu. I am
going to play with this a bit further and see whether InnoDB has
similar issues on a dual processor. The binlog was not enabled during
any of these tests, so the row locks that we see for updates when
running with statement-based replication won't show up (but I really
want to see how that affects things!).
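
If you want to check that part of the setup on your own server, this is roughly how I would look at it (binlog_format only exists as a server variable starting in 5.1):

SHOW VARIABLES LIKE 'log_bin';        -- OFF for every run above
SHOW VARIABLES LIKE 'binlog_format';  -- STATEMENT is where the extra locking shows up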

Just to confirm the results, I am going to run the same test but with
just key lookups and see if any difference pops up. I doubt it will,
but you never know until you try :)
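
That follow-up run will be dominated by straight primary key lookups, i.e. queries shaped roughly like these (again just a sketch with invented names, not mysqlslap's literal SQL):

SELECT intcol1, charcol1 FROM t1 WHERE id = 12345;                                   -- autoincrement key
SELECT intcol1, charcol1 FROM t1 WHERE id = '6ccd780c-baba-1026-9564-5b8c656024db';  -- GUID key
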
[Graph: Update Auto vs GUID Innodb.jpg]


Comments {32}

Re: Don't use UUID... use SHA1

from: burtonator
date: Mar. 11th, 2007 10:52 am (UTC)

Seriously, you have tables with 4B rows?

If you're not worried about UUID collision then there's no reason you should worry about hash collision.

At least with hashes you can route the data.

And yes... with LiveJournal and Technorati it's enough. Google uses this technique.

But yeah.... using a centralized sequence generator is a REALLY bad idea.


Brian "Krow" Aker

Re: Don't use UUID... use SHA1

from: krow
date: Mar. 11th, 2007 04:50 pm (UTC)

Yes, I do, and yes it works. I would need to double check this, but if you look at archive_test in storage/archive you will see that it generates this many rows on its long test.

I can think of a couple of MySQL customers who hit the 4 billion mark some time ago (since I remember fixing the bugs a few years ago!). I've never tested MySQL to the full 2^64 though. The crash-me framework might; I've not looked at it in a while.

I believe LJ uses a central key generator; I'm not sure, but I thought I remembered this being the case.

You really think that 2^32 is enough? Well, the one thing with that is that you won't be able to mix your data sets from multiple machines without fearing a clash.
