Looking through his blog entry, I thought I would comment a bit on some of his points in the context of Drizzle:
LOCK_mdl We never had this lock in the first place. It must have been added in the last year. I suspect it is some sort of metadata lock. If we had a good MVCC in-memory engine sitting around, I suspect you could get rid of this lock and similar cache locks (Drizzle's design doesn't currently require this sort of shared information, but we are looking for an in-memory style engine for when we do).
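To make the cost of a lock like this concrete, here is a toy Python sketch (my own illustration, not Drizzle or MySQL code) of the difference between one global metadata mutex and a striped set of locks keyed by table name. The names `mdl_lock_for` and `N_STRIPES` are invented for the example.

```python
import threading
import zlib

# One coarse lock serializes every metadata operation in the server:
global_mdl = threading.Lock()

# A striped alternative: hash the object name onto a small array of
# locks so that sessions touching unrelated tables don't contend.
N_STRIPES = 16
stripes = [threading.Lock() for _ in range(N_STRIPES)]

def mdl_lock_for(table_name: str) -> threading.Lock:
    """Pick the stripe lock guarding this table's metadata."""
    h = zlib.crc32(table_name.encode())
    return stripes[h % N_STRIPES]

# Two sessions touching different tables will usually land on
# different stripes and proceed in parallel, instead of both
# queuing on the single global mutex.
a = mdl_lock_for("orders")
b = mdl_lock_for("users")
```

An MVCC in-memory engine takes the same idea further: readers see a consistent snapshot without taking any mutex at all.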
LOCK_plugin We don't have this lock. We also cannot load new modules into the database and then eject them while the server is online. Since none of the current modules can do this... we will wait to add this sort of functionality until we think it is needed. Given the way we expect to be deployed, I am not so sure this matters. I have a hard time seeing people load new code onto production servers on the fly. I would think you would plan these sorts of upgrades on your network (and if you have scaled at all, you can probably pick times for pieces to come online and go offline). A good feature for doing development of modules, though; I just don't see why we would want to incur this cost for production systems.
LOCK_event_loop We have this one. Recently we took our scheduler and turned it into a plugin. Want to get rid of this lock? Write your own solution. We have abstracted out this problem now so you can focus on solving it if you want. While our solution is a little more refined than what is in the original server, we know there is a long way to go.
LOCK_open This one is better currently, but we are on the road to removing it completely. This sort of lock exists because of the lack of clear boundaries between some of the legacy engines, like MyISAM, and the rest of the kernel. We have stripped a lot of this from the server, but it took a long time to put in... so it is taking quite some effort to toss out.
It's good that he brings up tcmalloc; we compile with it by default.
MyISAM... what to do with MyISAM is a question I ask a lot. There is still a group of folks who want it, but its design is pretty old at this point. We have thought about relegating it to internal temporary tables only, but for the moment it is still user accessible. We aren't sure when Maria will be done, and we may have to take some action on MyISAM before then. Its current design does not fit well with our move toward using a value object in place of the current Field object design. We also need to push its locking design out of the kernel and down into the engine.
The HEAP engine needs a rewrite. We breathed a little more life into ours by using a combination of Google- and eBay-derived patches, but the truth is... it needs to be replaced. Paul from PBXT has had a number of good ideas on this, and I am hoping he explores them.
When it comes to pool of threads, I think the current design misses the point of using libevent. Currently it does not yield when it blocks on IO, so in essence all it is doing is keeping you from overwhelming the operating system's scheduler and providing a completion for a given action. For small queries this is fine, but for longer running queries it is not very good (though... most queries we see are pretty short, so this part is not a huge concern). It needs to be redesigned to make better use of IO, and this is something we will work on soon. Our replication will not run on the same port as our clients, so we don't have the sort of IO usage clash Mark is referring to in MySQL (we will be using a different port for our replication, which is a structured set of Google Protobufs, BTW, with its own library for communication).
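As a rough illustration of what "yield on IO block" means, here is a toy Python echo loop in the libevent style (my own sketch, nothing to do with the actual pool-of-threads code): a handler only runs when its descriptor is actually readable, so a slow client parks in the event loop instead of pinning a worker thread.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def on_readable(conn):
    # The loop only calls us when data is ready, so this recv
    # will not block a worker while the client dawdles.
    data = conn.recv(4096)
    if not data:
        sel.unregister(conn)
        conn.close()
        return
    conn.sendall(data.upper())      # echo; stand-in for query work

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)         # key step: never block on this fd
    sel.register(conn, selectors.EVENT_READ, on_readable)

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

def run_once(timeout=0.1):
    # One turn of the event loop: dispatch whatever became ready.
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)
```

The criticism above is that the current design dispatches a connection to a thread and then lets that thread block; the sketch shows the alternative, where blocking points hand control back to the loop.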
I am not really sure what sort of point he is making about GET_LOCK() or LOCK TABLES. Yes, they lock... that is the point of both of them. Personally I would prefer to just get rid of LOCK TABLES, but that is because I have never had much use for it (I've been using InnoDB ever since we helped Heikki get it stable for Slashdot). GET_LOCK() was a cute hack for turning a database into a network locking daemon. Personally? I would rather pick an open source locking daemon and use it (and no... I am not one of the people who are fans of adding this to Memcached... but maybe Gearman...).
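For readers who haven't seen the hack, this is roughly what GET_LOCK() gives you: named advisory locks with a timeout. Here it is as an in-process Python toy (invented names, deliberately simplified; real GET_LOCK is per-session, enforced by the server, and visible over the network):

```python
import threading

_locks: dict[str, threading.Lock] = {}
_registry_guard = threading.Lock()

def get_lock(name: str, timeout: float) -> int:
    """Acquire the named advisory lock; 1 on success, 0 on timeout,
    mirroring the return convention of MySQL's GET_LOCK()."""
    with _registry_guard:
        lk = _locks.setdefault(name, threading.Lock())
    return 1 if lk.acquire(timeout=timeout) else 0

def release_lock(name: str) -> int:
    """Release the named lock; 1 on success, 0 if it wasn't held."""
    lk = _locks.get(name)
    if lk is None or not lk.locked():
        return 0
    lk.release()
    return 1
```

A dedicated locking daemon does this same job without tying up a database connection per held lock, which is the argument for moving it out of the database entirely.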
It is good to see more people talking about the problems around scaling open source databases. The current state of open source databases running on SMP hardware is poor (with Postgres being the only open source production database doing it well right now).
We are at an inflection point for open source databases, where all of the projects will need to learn to evolve quickly to really work well on the hardware that will be delivered in the next few years.