“What gets measured gets managed”

That quote is a common business adage attributed to so many different people that I’ll just consider it public domain. The reason for this post is that I’d like to draw attention to it, and maybe as you read you can think of how it can be used to help your own productivity and success.

As with any business advice, this tends to be more digestible with some anecdotes, so I’ll give a couple of examples of how this has helped me lately with running LyricWiki.org.

Example Uno: Server Uptime

Without a doubt, the biggest problem with LyricWiki up until recently had been uptime. The site was constantly slow or completely unavailable from its inception. The reason is that we were always short on servers: the company had minimal capital, and setting up new servers took a great deal of time and fell on my shoulders during a period of my life when I always had at least one full-time responsibility other than LyricWiki.

When I cut back my other job a bit to give me more time to devote to LyricWiki, one of the first things I set out to fix was the reliability of the site. In order to know if it was improving, I would need to know how long the site was down each month so I could track whether the number of minutes was going up or down.

I created a small spreadsheet to track outages, and each time the site went down I logged when the outage was, the duration (in minutes) of the outage, which server(s) had the problems, the apparent cause and whether or not the cause was resolved. Much to my surprise, I didn’t really need to graph this over the months. After the first couple of days, it became very apparent that there was one huge problem still lingering and it would be worth my time to automate a fix to it instead of responding ad-hoc each time the problem cropped up.

I whipped up the code to solve the problem while I was on a layover in Philadelphia, uploaded it when I got home – and the site stayed up for almost a month straight! That’s pretty huge; I don’t think that had ever happened in LyricWiki’s history until now. Very cool.

Example Dos: Intractable To-do Lists

A major personal time-management problem I had recently was that I couldn’t tell if I was just spinning my wheels or if I was actually making progress on my backlog of LyricWiki tasks. It felt as though I was getting emailed so much work that I barely ever got to break out into the tasks on my talk page let alone the “Mid-term Actions” list I made which was my conscious plan of how to make LyricWiki rock your socks off in the mid-term.

To solve this problem of not having a grasp on my tasks, I whipped up a little widget which you can see in the side-column of the blog, near the bottom. The results were much better than I expected: I instantly felt like I was more in control, I felt comfortable removing tasks that were duplicated across lists, and I can finally tell when I’m moving fast enough to make forward progress.

The widget tracks the size of all of my various to-do lists and updates that number hourly. There are two reasons for the caching: 1) checking the lists requires hitting the sites and that takes at least 3 seconds total 2) if it updated in real-time, I’d sit there refreshing the thing all day!
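For the curious, the hourly caching can be sketched in a few lines. This is a hypothetical illustration of the idea (not the widget’s actual source); the function and variable names are mine:

```javascript
// Hypothetical sketch of the hourly caching described above: only
// re-fetch the to-do counts when the cached value is over an hour old.
var ONE_HOUR_MS = 60 * 60 * 1000;
var cache = { counts: null, fetchedAt: 0 };

function getCounts(fetchCounts, now) {
  var stale = (now - cache.fetchedAt) > ONE_HOUR_MS;
  if (cache.counts === null || stale) {
    cache.counts = fetchCounts(); // the slow part: hits every site (~3 seconds)
    cache.fetchedAt = now;
  }
  return cache.counts; // cheap on every other call within the hour
}
```

The side effect mentioned above falls out naturally: since the number only changes once an hour, there’s no payoff in refreshing it all day.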

The widget actually has some neat hidden features. Here is the to-do list widget on its own page which also tracks the number of tasks I’ve had at the end of each day and charts my progress (using the Google Charts API). If anyone is interested, maybe I’ll write another post where I give out the source-code to the widget.
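Charting with the Google Charts API mostly amounts to building a URL. Here’s a rough sketch of how the daily task counts could be turned into a line chart; the sample data and function name are made up for illustration:

```javascript
// Hypothetical sketch of building an image-chart URL with the Google
// Charts API; "cht=lc" requests a line chart.
function buildTaskChartUrl(dailyCounts) {
  var maxCount = Math.max.apply(null, dailyCounts);
  return 'http://chart.apis.google.com/chart' +
    '?cht=lc' +                        // chart type: line chart
    '&chs=250x100' +                   // chart size in pixels
    '&chds=0,' + maxCount +            // scale the y-axis to the data
    '&chd=t:' + dailyCounts.join(','); // the data series itself
}

// Made-up end-of-day task counts for one week:
var url = buildTaskChartUrl([42, 40, 37, 38, 33]);
```

Dropping that URL into an img tag is all it takes to render the progress chart.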

Conveniently, the widget is narrow enough to both fit in a blog sidebar and be displayed on a smartphone. It’s currently the homepage on my blackberry and I don’t see that changing any time soon!


The quote “what gets measured gets managed” always seems to ring true for me. I think of it again and again – usually after-the-fact when I’ve just saved time by being OCD about something. I strongly recommend that you take a second (right now) and think of an area of your life or your work which you feel isn’t getting sufficient attention and consider tracking meaningful statistics about it. Please share any similar successful experiences in the comments!

How to restore WordPress categories

After upgrading to the latest version of WordPress, the text of all of the categories was deleted. It appears this may have been my fault for not disabling all of the plugins before doing the upgrade (oops). They’re back now, though. It was somewhat annoying to fix, but as far as data-loss errors go, it wasn’t very bad at all.

I noticed a bunch of other people online had similar problems but nobody mentioned a solution so I figured I might as well throw up a quick description of what I did.

I used Google to find a cached version of my page which listed all of the categories in the sidebar. That gave me all of the category names, but I didn’t know which id in the database matched which category. I found that out by running this query:
SELECT post_title, term_taxonomy_id FROM wp_posts,wp_term_relationships WHERE wp_term_relationships.object_id=wp_posts.ID ORDER BY term_taxonomy_id;
That query shows which posts were assigned to each category. After reading through the list and figuring it out, I was able to fix the text with queries along the lines of:
UPDATE wp_terms SET name='Motive Force', slug='motive-force' WHERE term_id=8;

Since I brushed over it above:
To find a cached version of your site on google, just search for “site:YOURDOMAINNAMEHERE.com”, then click on “Cached” next to any of the results. They clear those out after a while, so don’t put it off!

Quick Tip: Beep when a long process is complete over SSH

When you’re running a long command on a terminal over SSH, you may end up wasting a great deal of time checking back repeatedly to see if the process is complete. A quick alternative is to make the shell beep when it’s complete. Assuming you are running a script called “longScript.sh”, then simply typing a line like:
> longScript.sh; printf '\a'
will cause most SSH clients to beep after longScript.sh is finished running. (The quotes around '\a' matter: left unquoted, the shell strips the backslash and printf just prints the letter “a”.)

Hope that helps!

Must-have feature for web-apps: auto-update.

It’s been a pet-peeve of mine for some time now that creators of web-apps (like myself) haven’t been holding ourselves to the same standards as creators of desktop apps. One of the major justifications for calling our products “applications” (as opposed to just “really good web sites”) is that we’re claiming you get the same level of functionality as a comparable downloadable desktop version.

Currently this just isn’t the case. When was the last time you went through the following process to update a program on your desktop:

  1. Check the creator’s website frequently to see if there are updates
  2. If there is an update, go through a directory of a bunch of different versions and figure out which one you want, since “most-recent” and “stable” are almost always different versions
  3. Download the files
  4. Unzip the files into a separate folder
  5. Make backups of your existing installation if desired
  6. Copy the files from the new folder into your existing installation

Seriously? This may look absurd to most users (I hope it does), but this is exactly the status quo for updating web-apps. I don’t know of a single web-app with acceptable update abilities. The closest thing is the DreamHost one-click-installs/updates of applications, but that’s something they have to write themselves for each app and isn’t part of the applications themselves.

I recently found out that I’m not alone in desiring this: it is currently the most-requested idea on the WordPress Ideas site (which is like Motive Suggest for WordPress).

It’d certainly be complex to write an auto-updater since there are so many different systems the apps can be installed on. However, desktop installers were never easy to write either and we’ve come to expect them. My suggestion is that web-apps have one setting that allows complete auto-updates (but this option is disabled by default). The more normal use-case would be that the app checks a webservice to figure out if it’s up to date. If not, a very visible “upgrade now” button would be shown to the administrators when they log in. This button could be highlighted yellow for normal updates and red for security-critical updates. For extreme security-crucial updates, the app could actually email its administrators to notify them that there is a critical update and should have a link to kick off the upgrade. Since each version of the app itself is sending this alert, the administrators don’t have to join anyone’s mailing list to get these alerts (so they maintain their privacy).
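The check-then-prompt flow could look something like the sketch below. This is purely illustrative: the dotted version format and the severity names are my assumptions, not any real web-app’s API.

```javascript
// Hypothetical sketch of the update-check flow described above.
function compareVersions(current, latest) {
  var a = current.split('.').map(Number);
  var b = latest.split('.').map(Number);
  for (var i = 0; i < Math.max(a.length, b.length); i++) {
    var diff = (b[i] || 0) - (a[i] || 0);
    if (diff !== 0) { return diff; } // positive: an update is available
  }
  return 0;
}

// Decide how loudly to prompt the administrators when they log in.
function updatePrompt(current, latest, severity) {
  if (compareVersions(current, latest) <= 0) { return 'up-to-date'; }
  if (severity === 'critical') { return 'email-admins'; } // red button + email the admins
  if (severity === 'security') { return 'red-button'; }
  return 'yellow-button'; // routine update
}
```

The app would call a webservice to learn the latest version and its severity, then run something like updatePrompt() to pick which button (or email) the administrators see.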

Just a thought. I’m anxious to release a downloadable framework now just so I could include updating and hopefully start a trend (OffhandWay, MotiveSuggest, SiloSync?). Also, if you see instances of this type of feature in the wild, please let me know!

Quick 3-question SiloSync Survey

SiloSync is a sizable undertaking and there are a number of different potential places to start from. I want to make sure I have a decent idea of where the demand is, so I’ve put together a quick 3-question survey. Please take a minute to fill it out for me! (you can be anonymous if you’d like)

To answer, please just leave a comment. I’ll leave my own answers in a comment as an example.

Question 1: What would you be most anxious to use SiloSync for?
A. Syncing up data & friendships between profiles on different sites so that they are all up-to-date.
B. Backing up data (photos, etc.) & friendship connections so that they never get lost and are all in one place.
C. Changing services if one of them does something unacceptable (along the lines of the Facebook Beacon debacle).
D. Quickly joining new services w/o the trouble of re-finding everyone and re-typing everything.

For this, please just type all of the letters you are interested in, from highest to lowest priority.

Question 2: What services would you most like to be able to pull data into SiloSync from?
(examples: do you want to pull your data from Facebook, Flickr, LiveJournal, WordPress, Twitter, MySpace?)
Again, please type the most-desired first.

Question 3: What services are most important to export data to?
(examples: do you want to send-data-to/sync-data-with Facebook, LinkedIn, Twitter, or a bunch of new and exciting social networks we don’t know about yet?)

Hopefully that was quick! Thanks for taking some time to help me out :)

Quick Tip: Delete old log-files if you use mySQL replication

There was a bit of a mess over on LyricWiki off and on for a few days. The culprit was a known bug in mySQL which messes up master-slave replication if you run out of hard-disk space (which you will if you’re using master-slave replication).

The preventative solution is to set up a daily cron-job which will find out what log the slave is using and delete all of the binary log-files that are older than that file. The alternative is an immense pile of unneeded files which will eventually cause you to run out of space and completely break your replication. To give you some idea, we filled up 100 GB of log-files from LyricWiki (which has hundreds of times as many reads as writes) in about 2.5 months.

Hope that helps!

UPDATE: I just wrote this script and figured I’d release it publicly to save others some time. You can get the code from deleteOldBinLogs.txt (that’s just a .txt so you can view the code… save it as a “.pl” file). Once you’ve filled out the “configuration” part at the top and have uploaded the file to the “/root” directory on your Master database server, add this line to your crontab file (by typing “crontab -e” into the command line):
0 4 * * * perl /root/deleteOldBinLogs.pl
That will make the script run at 4am each morning.

Pitt talk was fun

The talk I gave this week on SiloSync at Pitt was a fun venue. Their Lunch-and-Learn series is a really cool idea and sounds like it’s getting even more interesting. Next month’s talk is going to be done by a VP from Sun Microsystems. Prior to presenting, I jumped back into the SiloSync code and wrote the beginnings of the importer for Facebook.

As a side-note: one of the things that’s fascinating about this project is that I get to see all of the half-implemented security that different sites use. LiveJournal had a secure way of sending passwords, but shockingly stores passwords as plain-text (a big security faux-pas). Similarly, I saw some left-over fields in Facebook’s login form, but it appears that they just punted and used https (a secure web connection using SSL encryption) to just encrypt the whole login.

Back to the crux of this post: I’ve been rather tempted lately to actually finish SiloSync – which I had previously shelved in hopes that Open Social and other big-name initiatives would fix the problem (they didn’t). Google, Facebook, and MySpace have all announced fake data portability initiatives in the last week or so, which shows that if we want our data to be free, we’re going to have to take it (see my previous post on freeing the social graph for why this is important).

I decided it would be best to make a habit of posting my slide-decks when I present (I appreciate it when other people do that), so here are the PowerPoint and Open Document (OpenOffice) versions. In the process of making the presentation, I ended up creating a visual representation of SiloSync which I think does a great job of summing up the whole idea for someone who hasn’t been exposed to it yet. That’s the picture above and to the right… click it to see the full-size version.

Interestingly, with these effectively useless announcements from the major Social Networks, a lot of non-technical press has been declaring that data is now free. Okay, cool, let’s all go home.

Fortunately, most of the technical press is calling them on it. Everyone from TechCrunch to David Recordon (of OpenId fame) is telling it like it is.

If you are interested in seeing SiloSync pushed to fruition (more than you’re interested in seeing Motive Suggest or doItLater v2.0), let me know so that I can weigh the interest among the several projects competing for my time. Also, feel free to leave comments with your thoughts on the various “fake” data-portability announcements. This seems to be the topic which always gets the most vocal response on my blog.

New kind of spam: Invite-Spam

I’ve noticed over the past week or so that spammers have a new trick up their sleeves. Within the last week I’ve gotten invites to iLike.com, IMVU, and myYearbook from people I don’t know, sent to an email address that I don’t really use (it’s forwarded to the same place like all the rest, but I don’t give it to anyone).


I’ve had to fight spammers quite a bit on LyricWiki.org, and I’m beginning to understand a little more about why things work the way they do. As far as I can tell, the state of the art in spamming is that tech-criminals build up bot-nets and then sell them as spamming machines. They use the zombies to attack popular technology in ways that use other people’s web-servers to send out spam. This way, they can exploit the reputation of these servers to ensure higher delivery rates, and they can count on the people running the servers to try to keep their reputation with spam-filters as high as possible.

For a little more background for the uninformed: a bot-net is a vast array of hacked computers (zombies) that can be controlled remotely. Basically these are just everyday people who have been infected and are none-the-wiser. Years ago when your computer got infected, you generally got viruses that caused a ton of popups and eventually you sought help to remove the viruses. But with today’s bot-nets, the infected user generally has no knowledge of the problem and therefore doesn’t clean off their computer. When the bot-herder (who runs the bot-net) wants to do something, they use Trojan Horses which they’ve installed on the computer to send updates with what the computer should silently do.

For instance, I run MediaWiki on LyricWiki.org, and many bots have been trained to vandalize pages with random letters (I’m assuming it’s random… it might actually be a tracking-code) which they later come back and check for. If the wiki is not well-patrolled, then they come back and spam these pages with links. This way, they don’t have to reveal what product they are promoting unless they know it is some small wiki potentially with low resources – this prevents them from being tracked down by huge companies and reported to authorities. An added bonus of the bot-net approach is that each computer has a different IP address, so it’s hard to block all of them.


In this new flavor of spam, it appears bot-nets are signing up for profiles at social networking sites, and sending out invites to victims. This is a great way to use other sites’ reputable servers to send out spam that is highly likely to get delivered and also to make it through contextual spam filtering (since they look like any other invite).

This creates an interesting conflict for the sites being used to send the spam: on the one hand, these bots are out promoting them for free, getting new users to sign up out of curiosity (“Do I know this person? The name sounds vaguely familiar…”). On the other hand, these are ill-gotten users, and the spam being sent out probably moves their servers onto more and more blacklists. Both options are a mixed bag, and in the end I feel that it’s always best in business to do the right thing without immolating yourself. You didn’t earn these new users, so just take a stand and try to solve the spamming issue if you can. Aye, there’s the rub: often, a startup’s rarest asset is time. How much time should a company devote to trying to fix a problem like this? They could be out promoting their site, adding new features, or fixing bugs. They’re always understaffed, and there is always more work to be done.

This is a hard problem to deal with since you’re either protecting strangers from a bunch of spam that’s coming from your servers (which you really had nothing to do with), or you’re adding features for your users. A tough call to make. Hopefully some of these companies can co-operate to come up with a technical solution that they can share amongst each other to make it practicable for them all to implement it. The three companies whose servers spammed me aren’t even direct competitors – one is chat (IMVU), one is youth social-networking (myYearbook), and one is music-focused (iLike).

I’ve emailed a friend at one of the companies and explained the situation. It will be interesting to see how they respond.

PS: Please don’t comment about just adding a CAPTCHA. Those things are horribly useless against talented programmers and have an inherent “economic” flaw. I’ll probably write more about it later, but to put it simply, every time I see a site using “ReCAPTCHA” in a place where they should have actual decent Turing-test security, I cringe. It doesn’t do that!

QuickTip: Numerical comparisons in JavaScript

This is one of those things I use really rarely, then forget about and have to re-figure-out every time. Never again!

If you are using JavaScript and you have a number stored in a variable as a string, the comparison operators will do a lexicographic (character-by-character) string comparison rather than a numerical one, so you may get some weird results.

For instance:

var nine = '9'; // the quotes make this a string
var ten = '10';
if(ten > nine){
    alert('This makes sense, but will NOT get displayed');
} else {
    alert('"10" is less than "9" because JS uses a string-comparison ' +
          'that stops after it finds the "1" character being ' +
          'less than the "9" character');
}

… will actually display the second alert. Looks crazy, huh?

To work around this, we can just use arithmetic and subtract the numbers from each other and look at the difference. This chunk of code shows how we can fix this to make more sense:

var nine = '9';
var ten = '10';
if((ten - nine) > 0){
    alert('This makes more sense! ' +
          'Since the difference of the first var minus the ' +
          'second var is greater than 0, the first var must ' +
          'be greater than the second.');
} else {
    alert('This will NOT be displayed because it would make no sense');
}
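As a side note, the subtraction trick works because the “-” operator coerces both strings to numbers. An arguably clearer fix is to do that conversion explicitly; a minimal sketch:

```javascript
// Alternative sketch: convert the strings to real numbers up front with
// Number(), so the ordinary comparison operators behave as expected.
var nine = Number('9');   // 9 as an actual number
var ten = Number('10');   // 10 as an actual number
var tenIsBigger = (ten > nine); // true, a numeric comparison this time
```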

Hope that helps someone!