independent, object independent, and I can either create them in my
application or in the database (MySQL has had a UUID() function since
5.0, but since it does not replicate I don't normally use it... this
has been fixed in 5.1, though).
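Since the server-side UUID() function doesn't replicate safely in 5.0, generating the id in the application is the other option mentioned above. A minimal sketch in Python (Python is just for illustration here; the same idea applies in any client language):

```python
# Application-side UUID generation: no round trip to the database,
# and nothing replication-unsafe in the statement that gets logged.
import uuid

new_id = str(uuid.uuid4())  # random (version 4) UUID as a 36-char string
# new_id can then be used directly as the primary key value in an INSERT.
```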
What I have heard for years was that Innodb didn't handle them very
well. I have had my doubts, but never taken the time to see if this
was true or not.
This week I finally got around to putting this to a test.
./mysqlslap --concurrency=1,50,100,150,200,250,300 --iterations=10 --number-int-cols=2 --number-char-cols=3 --auto-generate-sql --auto-generate-sql-add-autoincrement --auto-generate-sql-load-type=update --write-number=10 --auto-generate-sql-write-number=100000 --csv=/tmp/
So 10K rows, with an increasing number of threads making
modifications. Unlike a lot of tests I do, this is a growth test...
i.e. this is what you would see as a site grows. Normally when I
look at how SMP machines handle problems I am more interested in
seeing how they split the problem across more threads and more CPUs.
In this case I wanted to look at just how the database would behave
as a site grew.
The test I am looking at is primary key focused, and it's all updates.
So each operation is a read followed by a write.
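To make the workload concrete, here is a minimal sketch of the update-by-primary-key pattern, using Python with SQLite purely as a stand-in (the table and column names are made up, not what mysqlslap actually generates):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id TEXT PRIMARY KEY, intcol INTEGER)")

# Populate with application-generated UUID primary keys.
ids = [str(uuid.uuid4()) for _ in range(100)]
conn.executemany("INSERT INTO t VALUES (?, 0)", [(i,) for i in ids])

# The update workload: each statement locates the row via the primary
# key index (the read), then modifies it (the write that follows up).
for i in ids:
    conn.execute("UPDATE t SET intcol = intcol + 1 WHERE id = ?", (i,))

row = conn.execute("SELECT intcol FROM t WHERE id = ?", (ids[0],)).fetchone()
```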
As you can tell from the graph, the overhead, as seen via Blackhole,
is nearly identical whether UUID() or an autoincrement key was used.
The only issue I ran into was that Innodb started getting
deadlocks when I cranked the connections up to 300 on my quad PPC
running Ubuntu. I am going to play with this a bit further and see if
Innodb would have similar issues on a dual-processor machine. The
binlog was not enabled during any of these tests, so the row locks
that we see for updates when running with statement-based replication
won't show up (but I really want to see how that affects things!).
Just to confirm the results, I am going to run the same test with just
key lookups and see if any difference pops up. I doubt it will, but
you never know until you try :)