
Building a Storage Engine: Writing Data

May. 7th, 2007 | 02:26 pm

While there is quite a bit that can be done with a read-only engine, writing data is far more fun :)

For our next lesson we are going to do exactly that. Unless you are writing a blob-store-only engine, you will need to deal with fields, aka the columns you declare in a CREATE TABLE. You must place these fields into some on-disk format, and different engines implement different formats. Most transactional engines work in block formats, while stream designs are common for engines that need high write performance.

There is no one right way to implement storage on disk; every design has trade-offs.

Let us look at XML. It is slow to parse and slow to read. XML, though, is adored by many because it is a simple format that can be read by many applications.

For our example we are going to put together an XML storage engine, and to that end we will use the libxml2 library.

We will implement a very simple XML schema based upon the schema output by the mysqldump application.
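To make the target format concrete, here is roughly what a two-row table comes out as (the column values are invented for illustration). A NULL column is written as an empty FIELD element, matching the write_row() code later in this post:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<TABLE>
 <ROW>
  <FIELD>1</FIELD>
  <FIELD>apple</FIELD>
 </ROW>
 <ROW>
  <FIELD>2</FIELD>
  <FIELD/>
 </ROW>
</TABLE>
```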

Implementing INSERT for MySQL means implementing the write_row() method. The example engine will also implement the optional start_bulk_insert() and end_bulk_insert() methods. We won't worry about concurrent reads and writes just yet, so we will leave table-level locks in place.

To store the XML, we will update the "share" with a filename that we will use for reading and writing. The share concept is found throughout almost all of MySQL's engines. The folks at Nitro Security have made available an OOP version of it which you can find on MySQL Forge, but for our example we will stick with the common C structure form.

The idea behind the SHARE is simple: when multiple handlers need the same resource, they communicate through a central piece of shared memory. A great number of engines support this through get_share() and free_share() functions. These are not part of the handler class, just a common naming convention. Each call to get_share() either inserts a new SHARE, representing the shared memory, into a hash keyed on the table name, or increments a use count integer in the existing SHARE.

For our needs we have extended the skeleton share to hold data_file_name, a character string with the path to the file we will use.


typedef struct st_skeleton_share {
  char *table_name;
  char data_file_name[FN_REFLEN];
  uint table_name_length, use_count;
  pthread_mutex_t mutex;
  THR_LOCK lock;
} SKELETON_SHARE;


Then we have updated get_share() to store the path to the file in data_file_name. The mysys function fn_format() will make sure that the path is correctly set.



fn_format(share->data_file_name, table_name, "", ".XML",
          MY_REPLACE_EXT | MY_UNPACK_FILENAME);



While end_bulk_insert() only gives you a chance to clean up after a bulk insert (which we will use to write out the XML file), start_bulk_insert() gives you an estimate of the number of rows that will be inserted. We will use it to create an in-memory container for our XML, which write_row() will then write into.


void ha_skeleton::start_bulk_insert(ha_rows rows)
{
  DBUG_ENTER("ha_skeleton::start_bulk_insert");

  xmlbuf= xmlBufferCreate();
  writer= xmlNewTextWriterMemory(xmlbuf, 0);

  xmlTextWriterStartDocument(writer, NULL, MY_ENCODING, NULL);
  xmlTextWriterStartElement(writer, BAD_CAST "TABLE");
  xmlTextWriterSetIndent(writer, 1);

  DBUG_VOID_RETURN;
}


The write_row() method takes one parameter: the raw row in its in-memory, aka UNIREG, format. While some engines do operate on this directly, it is considered best practice not to. Instead we use the Field objects to write data into the XML file. Our XML engine supports NULLs by placing empty FIELD tags into the XML file.


int ha_skeleton::write_row(byte * buf)
{
  DBUG_ENTER("ha_skeleton::write_row");

  char content_buffer[1024];
  String content(content_buffer, sizeof(content_buffer),
                 &my_charset_bin);
  content.length(0);

  xmlTextWriterStartElement(writer, BAD_CAST "ROW");
  xmlTextWriterSetIndent(writer, 2);

  for (Field **field= table->field ; *field ; field++)
  {
    if ((*field)->is_null())
    {
      xmlTextWriterStartElement(writer, BAD_CAST "FIELD");
      xmlTextWriterEndElement(writer);
    }
    else
    {
      (*field)->val_str(&content);
      xmlTextWriterWriteElement(writer, BAD_CAST "FIELD",
                                BAD_CAST content.c_ptr_safe());
    }
  }
  xmlTextWriterEndElement(writer);

  DBUG_RETURN(0);
}


Now finally we use end_bulk_insert() to actually write the XML file to disk:


int ha_skeleton::end_bulk_insert()
{
  File writer_fd;
  DBUG_ENTER("ha_skeleton::end_bulk_insert");

  xmlTextWriterEndDocument(writer);
  xmlFreeTextWriter(writer);

  /* O_TRUNC so a smaller rewrite doesn't leave stale bytes behind */
  writer_fd= my_open(share->data_file_name, O_WRONLY|O_CREAT|O_TRUNC, MYF(0));
  my_write(writer_fd, (byte*)xmlbuf->content, xmlbuf->use, MYF(0));
  my_close(writer_fd, MYF(0));

  xmlBufferFree(xmlbuf);

  DBUG_RETURN(0);
}


Now we have written some XML to disk!

If you look at the chapter03 version of the skeleton engine you will also find that I have updated rnd_next() to now read the XML file.

http://hg.tangent.org/writing_engines_for_mysql

So what could be done to extend this? For one, it does not convert all of the data being passed into the XML file to UTF-8. Also, this interface assumes bulk insert; it should be extended to append to the XML file, not overwrite it. And very little has been done to protect against a corrupted XML file.

The previous entries in this series:
Getting The Skeleton to Compile
Reading Data
