Happy 4th Birthday, Motive Force!

Today marks 4 years since Motive Force LLC was officially founded. I’m proud of my baby growing up! There are various stats about survival rates for nascent companies, but in general they say that about 50% die off in the first year, and then 50% of the remaining companies die each year after. If those stats are true, that puts Motive Force at about the 94th percentile so far. Way to go, kiddo!

It’s been a blast running it all this time, and by any measure they’ve certainly been four of the most eventful years of my life.

LyricWiki's last year and a half of growth*

Although Motive Force has released numerous sites and products, its most visible success so far has been LyricWiki. I felt this would be an appropriate time to share a graph of some of the traffic growth since I don’t often get a chance to do so – and we all like to show how our progeny have progressed!

Happy Birthday, Motive Force!

* For the curious and/or mathematically inclined: an exponential trend line has a slightly better R-squared value than a linear trend line for the current data (0.89 versus 0.86). This implies that the growth is likely exponential, but there isn’t quite enough data to be sure, in my unprofessional opinion.

Quick Tip: Do huge MySQL queries in batches when using PHP

When using PHP to make MySQL queries, it is significantly better for performance to break one extremely large query into a series of smaller queries. In my testing, there was a query which returned 1 million rows and took 19,275 seconds (about 5 hours and 21 minutes) to traverse the results. By breaking that query up into a series of queries that each had about 10,000 results, the total time dropped to 152 seconds… yeah… less than 3 minutes.

While MySQL provides LIMIT and OFFSET functionality to do batching, if you have numeric ids on the table(s) you’re querying, I’d recommend using your own system for offsets (code example below) rather than the default MySQL functionality, since the hand-rolled method is much faster. See the table in the next section for performance comparisons.
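
To make the difference concrete, here is a quick sketch of the two styles of batch query side by side. It uses the wiki_page table from the full example below, and the batch size and offset shown are just placeholder values:

<?php
// Sketch of the two batching styles (placeholder batch size and offset).

// LIMIT/OFFSET batching: MySQL still has to walk past the first 500,000 rows
// before it can return the batch, so each successive batch gets slower.
$limitQuery = "SELECT page_title FROM wiki_page ORDER BY page_id LIMIT 10000 OFFSET 500000";

// Hand-rolled id-range batching: the primary-key index lets MySQL jump straight
// to the start of the batch, so every batch costs roughly the same.
$idRangeQuery = "SELECT page_title FROM wiki_page WHERE page_id > 500000 AND page_id <= 510000";
?>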

Timing table

I’ll provide some example code below to show you how I did the batching (based on potentially sparse, unique, numeric ids). To start, here is a table of query batch size vs. total runtime for my particular loop. All timings are for traversing approximately 1,000,000 rows.

Query Batch Size | Hand-rolled method | MySQL “LIMIT” syntax
1,000,000+       | 19,275 seconds     | 19,275 seconds
10,000           | 152 seconds        | 1,759 seconds
5,000            | 102 seconds        | 1,828 seconds
1,000            | 43 seconds         | ?
750              | 40 seconds         | ?

In the end, it was pretty clear that no more data was needed to demonstrate that the LIMIT method was slow. Each of those runs was taking about half an hour, and about halfway through the 1,000-row test for the LIMIT method, queries started backing up on the database. Since this was on a live production system, I decided to stop before it caused any lag for users.

Example Code

This code is an example of querying for all of the pages in a MediaWiki database. I used similar code to make a list of all of the pages (and redirects) in LyricWiki. In the code, you’ll notice that doing the offsets manually based on id instead of using the MySQL “LIMIT” syntax doesn’t guarantee that each batch is the same size, since ids might be sparse (i.e., some may be missing if rows were deleted). That’s completely fine in this case, and there is a significant performance boost from using this method. This test code just writes out a list of all of the “real” pages in a wiki (where “real” means that they are not redirects and they are in the main namespace, as opposed to Talk pages, Help pages, Categories, etc.).


<?php

$QUERY_BATCH_SIZE = 10000;
$titleFilenamePrefix = "wiki_pageTitles";

// Configure these database settings to use this example code.
$db_host = "localhost";
$db_name = "";
$db_user = "";
$db_pass = "";

$db = mysql_connect($db_host, $db_user, $db_pass);
mysql_select_db($db_name, $db);

$TITLE_FILE = fopen($titleFilenamePrefix."_".date("Ymd").".txt", "w");
$offset = 0;
$done = false;
$startTime = time();
while(!$done){
	// Grab the next batch by page_id range. Note: this assumes there is no gap
	// in page_id values larger than $QUERY_BATCH_SIZE; a completely empty range
	// would end the loop early.
	$queryString = "SELECT page_title, page_is_redirect FROM wiki_page".
		" WHERE page_namespace=0 AND page_id > $offset AND page_id <= ".($offset+$QUERY_BATCH_SIZE);
	if($result = mysql_query($queryString, $db)){
		if(($numRows = mysql_num_rows($result)) && $numRows > 0){
			for($cnt=0; $cnt < $numRows; $cnt++){
				$title = mysql_result($result, $cnt, "page_title");
				$isRedirString = mysql_result($result, $cnt, "page_is_redirect");
				$isRedirect = ($isRedirString != "0");
				if(!$isRedirect){
					fwrite($TITLE_FILE, "$title\n");
				}
			}
			$offset += $QUERY_BATCH_SIZE;
			print "\tDone with $offset rows.\n";
		} else {
			$done = true;
		}
		mysql_free_result($result);
	} else {
		// Bail out if the query fails rather than looping forever.
		print "Query failed: ".mysql_error($db)."\n";
		$done = true;
	}
}
$endTime = time();
print "Total time to cache results: ".($endTime - $startTime)." seconds.\n";
fclose($TITLE_FILE);

?>
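
If you’re on a setup where the old mysql_* functions aren’t available (they were eventually removed from PHP), here is a rough, untested sketch of the same id-based batching loop using mysqli instead. It makes the same assumptions about the MediaWiki wiki_page schema as the example above, and the connection settings are placeholders.

<?php
// Hypothetical mysqli version of the id-based batching loop; an untested sketch,
// not a drop-in replacement. Assumes the mysqlnd driver (for get_result()).
$QUERY_BATCH_SIZE = 10000;
$db = new mysqli("localhost", "db_user", "db_pass", "db_name");

$TITLE_FILE = fopen("wiki_pageTitles_".date("Ymd").".txt", "w");
$offset = 0;
$done = false;

// Prepare the batched query once; the id range is bound as parameters.
$stmt = $db->prepare(
	"SELECT page_title, page_is_redirect FROM wiki_page".
	" WHERE page_namespace=0 AND page_id > ? AND page_id <= ?"
);

while(!$done){
	$upper = $offset + $QUERY_BATCH_SIZE;
	$stmt->bind_param("ii", $offset, $upper);
	$stmt->execute();
	$result = $stmt->get_result();

	if($result->num_rows > 0){
		while($row = $result->fetch_assoc()){
			// Only write out non-redirect ("real") pages.
			if($row['page_is_redirect'] == 0){
				fwrite($TITLE_FILE, $row['page_title']."\n");
			}
		}
		$offset += $QUERY_BATCH_SIZE;
	} else {
		$done = true;
	}
}
$stmt->close();
fclose($TITLE_FILE);
$db->close();
?>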

Hope that helps!