
Parallel Restore with mysqlimport

May. 23rd, 2007 | 12:21 am

From Peter Z's Blog: http://www.mysqlperformanceblog.com/2007/05/22/wishes-for-mysqldump/

I noticed this:

Parallel restore: This is absolutely required if time is an issue, as serious systems may perform much better in such cases.


In 5.1 I added the "--use-threads" option to mysqlimport.

With it you can now tell mysqlimport to use multiple threads while importing a backup made with mysqldump in tab format. I don't believe I have a graph anywhere showing the speedup, but it's pretty damn impressive.
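
The basic flow with a tab-format dump would look roughly like this (paths, credentials and the thread count are illustrative; --tab needs the FILE privilege and a dump directory the server itself can write to):

# dump: one tbl.sql (schema) and one tbl.txt (tab-delimited data) per table
mysqldump -u user -p --tab=/backups/mydb mydb

# restore: recreate the tables first (mysqlimport only loads data),
# then load the .txt files with several threads in parallel
cat /backups/mydb/*.sql | mysql -u user -p mydb
mysqlimport --use-threads=4 -u user -p mydb /backups/mydb/*.txt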

Oh, and what is especially nifty about this?

Just take a copy of mysqlimport from the 5.1 distribution and use it on your previous versions of MySQL. I can't promise it will work with anything earlier than 3.23, but I suspect it will :)


Comments {2}

from: awfief
date: May. 23rd, 2007 11:52 pm (UTC)

BTW, the 2nd point, "Dump each table in its own file", isn't hard. Using the information_schema is a trivial solution. But I simply use a shell script wrapper -- make arguments to the wrapper (or set them in the file to run unattended) for

USER
PASS
HOST
DB
(and PORT if you prefer)

and then run:
# loop over every table in the database (grep strips the header line from "show tables")
for i in `mysql -u $USER -p$PASS -h $HOST $DB -e "show tables" | grep -v Tables_in_$DB`
do
    mysqldump -u $USER -p$PASS -h $HOST $DB $i > $i.sql
done

Of course you can also add a SAVETOPATH variable for the $i.sql files. You should probably use full paths for all the binaries in there.
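
For the information_schema route mentioned above, a query like the following would produce the same table list (the -N flag suppresses the column header, so no grep is needed):

mysql -u $USER -p$PASS -h $HOST -N -e \
    "select table_name from information_schema.tables where table_schema='$DB'"

and each name can be fed to mysqldump exactly as in the loop above.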


from: peter_zaitsev
date: May. 24th, 2007 09:12 pm (UTC)

Yes, I'm not saying it is hard; this is obviously what we ended up doing.
But instead of reinventing the wheel, it would be a good thing to have in the core tool set.
