Archive for the ‘Technology’ Category

Goodbye Java

Sunday, July 1st, 2012

It has been several years since I touched any Java code, and since it’s unlikely that I’ll work in the new Cobol again, I donated all my ancient reference volumes to the local used bookstore today.

Yokaben Macaronics: Read Write Learn

Sunday, June 10th, 2012

Update: November 16, 2012 — Yokaben is now Macaronics

Several years ago, I passed level two (N2) of the 日本語能力試験 or Japanese Language Proficiency Test. N2 means I know (or knew, at the time I took the test, anyway) at least 1,023 written kanji characters and 5,035 vocabulary words. So in theory, I should be able to read 90% of the text in a typical newspaper article. Still, when I visited Japan recently, I had trouble getting through even the simplest article.

I was traveling through Hiroshima on a Sunday afternoon, on my way to Miyajima, but most people on my train were dressed in Hiroshima Carp colors, and got off at the station directly in front of the stadium.

They proved to be a boisterous bunch that day, and after I got back to the hotel, I wondered how their team had done.

It turned out they won, but there was very little in the English-language sources about the game, in stark contrast to the local Japanese press, which included some fanciful word play about the player who hit a home run.[1]

As Lost in Translation comically exaggerated, having access to the original source makes a difference.

Machine translation was somewhat helpful, but those results left a lot to be desired, especially when dealing with nuance and context (it was interesting, for example, to see that Bing correctly translated バカ as “moron”, but Google rendered it as “docile child” instead).

What I really needed was a human editor, someone at least partially bilingual, who could fill in the gaps and clean up the obvious errors.

Crowd-sourcing, or more specifically, human-based computation, is a possible solution, though it needs hundreds, thousands, or more editors to make it work.

If it does reach that critical mass, it would open up an even larger audience: people would be able to read original texts in full, regardless of whether they are literate in the source language or not, and even if they have no desire to learn that language in the first place.

Yokaben[2] Macaronics[3] is an experiment to see whether or not it can be done.

[1] One way to pronounce the numbers “2” and “9” together is “niku”, which is roughly how the Japanese say the first name of Nick Stavinoha. The author speculated that since the 29th is “Nick’s Day”, fans can expect a similar result on May 29 and on the 29th of every month for the rest of the season.

[2] I didn’t know what to call it, but when I was thinking up names, I heard someone talking about PubSub, which is a contraction of the words “publish” and “subscribe”.

Since what I was building was a way to “Read Write Learn”, I tried similar contractions. While it didn’t work in English, I got some unique syllables from the corresponding Japanese words:

Read : 読む (yomu) → yo
Write : 書く (kaku) → ka
Learn : 勉強 (benkyou) → ben

(Yes, I know that 勉強 really means study, and 学ぶ is a better translation of learn, but “yokamana” or “yokabu” didn’t quite have the same ring to it.)

[3] While researching names for another project, I came across the adjective macaronic, whose dictionary meaning seemed perfect for this, especially since I’d like to see it go beyond just two languages.

Also, yokaben as I’d constructed it (読書勉) is too close to dokusho (読書) and thus potentially confusing for native Japanese speakers.

Using Microsoft’s Translator API with Python

Monday, May 7th, 2012

Before Macaronics, I experimented with automated machine translation.

Microsoft provides a Translator API which performs machine translation on any natural language text.

Unlike Google’s Translation API, which is paid-only, Microsoft’s offers a free tier of up to 2 million characters per month.

I found the signup somewhat confusing, though, since I had to create more than one account and register for a couple of different services:

  1. I had to register for a Windows Live ID
  2. While logged in with my Live ID, I needed to create an account at the Azure Data Market
  3. Next, I had to go to the Microsoft Translator Data Service and pick a plan (I chose the free, 2 million characters per month option)
  4. Finally, I had to register an Azure Application (since I was testing, I didn’t want to use a public url, and fortunately that form accepted ‘localhost’, though it insisted on my using ‘https’ in the definition)

The last form, i.e., the Azure Application registration, provides two critical fields for API access:

  • Client ID — this is any old string I want to use as an identifier (i.e., I choose it)
  • Client Secret — this is provided by the form and cannot be changed

With all the registrations out of the way, it was time to try a few translations.

The technical docs were well-written, but since there was nothing for Python, I’ve included an example for accessing the HTTP Interface.

My code is based on Doug Hellmann’s article on urllib2, enhanced with Michael Foord’s examples for error-handling urllib2 requests.

Here’s a simple usage example from Japanese to English, in the Python REPL:

>>> import msmt
>>> token = msmt.get_access_token(MY_CLIENT_ID, MY_CLIENT_SECRET)
>>> msmt.translate(token, 'これはペンです', 'en', 'ja')
<string xmlns="">This is a pen</string>

The API returns XML, so a final processing step for a real program would be to use something like lxml to parse out the translation result.

Here’s a snippet for getting just the translated result out of the XML object returned by the API.
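This sketch uses the standard library’s xml.etree.ElementTree, since lxml’s etree exposes the same fromstring interface:

```python
import xml.etree.ElementTree as etree

def get_translation(api_response):
    # the API wraps the result in a single <string> element,
    # so the translation is just that element's text
    return etree.fromstring(api_response).text

print(get_translation('<string xmlns="">This is a pen</string>'))  # This is a pen
```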

In the case of the example above, this is just the classic[1] phrase:

This is a pen

[1] It’s classic in that “This is a pen” is the first English sentence Japanese students learn in school (or so I’m told)

Rediscovering LaTeX

Thursday, January 5th, 2012

I first used LaTeX while an intern at a very old-school software company that ran only unix workstations.

When I needed to write a letter (that had to be printed on paper and signed, for some bureaucratic task), I was told "try this".

At first, the idea of writing in markup, then compiling it to get a final document, seemed strange, but I quickly came to love using it. Pretty soon, anything that I used to do in Word I would do in LaTeX instead.

I got away from it entirely these last few years, as most things that used to require a printed letter or memo have succumbed to email, web forms, and the like.

But recently I had the need again, for a new project, and thought: why not?

The only difference now is that instead of printing to paper, I would be sending pdf files by email.

Fortunately, the Ghostscript ps2pdf utility makes that simple, and it was already installed on my computer.

Likewise, LaTeX itself was already installed and available, thanks to the TeX Live package.

The only remaining annoyance was all the commands I needed to run for each document:

$ latex test.tex
$ dvips test.dvi
$ ps2pdf test.ps

and, to clean-up all the intermediate files those commands generated:

$ rm test.aux test.dvi test.log test.ps

So I wrote this latex2pdf shell script:


#!/bin/sh

if [ $# -ne 1 ]; then
    echo "usage: [file(.tex)]"
    exit 1
fi

# split $1 on / to get the path and filename
path=`echo ${1%/*}`
file=`echo ${1##*/}`
if [ "$path" = "$file" ]; then
    # no directory part was given, so use the current dir
    path="."
fi

# check if the file already has the .tex ext
suffix=`echo $file | grep ".tex$" | wc -l`
if [ $suffix -eq 0 ]; then
    f=`echo "$file.tex"`
else
    f=`echo "$file"`
fi

# define the filename base string w/o the .tex ext
# (what the .aux, .dvi, .ps, .log files will be named)
s=`echo "$f" | sed -e 's/\.tex$//'`

# compile the .tex file and convert to pdf
latex "$path/$f"
dvips "$s.dvi"
ps2pdf "$s.ps"

# clean up the intermediate files
rm -f "$s.aux"
rm -f "$s.dvi"
rm -f "$s.log"
rm -f "$s.ps"

Now, with a single command, I can build and view the result immediately:

$ ./latex2pdf test.tex; xpdf test.pdf &

Who needs WYSIWYG?

Splitting and Extracting MPEG video files with MEncoder

Monday, January 2nd, 2012

One of the nice things about MythTV is that it lets me save any broadcast as an unencrypted, DRM-free mpeg file.

I recently found out how to use MEncoder to split and trim those mpeg files into single or multiple clips.

MEncoder is a good tool to use because it’s free (as in both freedom and beer), and runs on all major platforms (there are even pre-built binaries for Mac OSX).

MEncoder has two command line options, -ss and -endpos, which let you define the start or end position of the clip you want to extract.

Unfortunately, the default command doesn’t work with mpeg files.

The work-around is to convert the mpeg file to avi format first:

$ mencoder original.mpeg -ovc lavc -oac lavc -o original.avi

Then, create a copy starting or ending at a given point in time, defined as hour:minute:second using either the -ss or -endpos options.

For example, to extract a clip from the 17 minute 50 second mark to the 57 minute 47 second mark from a one-hour file, these two commands will do the trick:

$ mencoder -ss 00:17:50 -oac copy -ovc copy original.avi -o clip_start.avi
$ mencoder -endpos 00:39:57 -oac copy -ovc copy clip_start.avi -o clip.avi

Note that the -endpos was recalculated for the second command as 39:57, not 57:47.

That’s because the clip_start.avi file is 17 minutes and 50 seconds shorter than the original, and so we need to recalculate the clip end position in terms of the new length.
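The arithmetic is simple enough to script. Here’s a small convenience helper of my own (not part of MEncoder) that converts an absolute end position into the one to pass on the second command:

```python
def clip_offset(start, end):
    """Return the -endpos to use on the trimmed file, given the original
    -ss start and -endpos end positions (both as hh:mm:ss strings)."""
    def to_seconds(t):
        h, m, s = [int(x) for x in t.split(':')]
        return h * 3600 + m * 60 + s
    total = to_seconds(end) - to_seconds(start)
    return '%02d:%02d:%02d' % (total // 3600, (total % 3600) // 60, total % 60)

print(clip_offset('00:17:50', '00:57:47'))  # 00:39:57
```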

The file clip.avi contains the clip from 17:50 to 57:47 extracted from the original file, and we can discard the intermediate clip_start.avi file.

It takes two commands because MEncoder seems to ignore the second -ss or -endpos option it finds, and uses just the first one.

It would be nice if it would just let us do this instead:

$ mencoder -ss 00:17:50 -endpos 00:57:47 -oac copy -ovc copy original.avi -o clip.avi


The Forgotten E-Book Reader: OLPC

Monday, December 26th, 2011

With the plethora of e-book reader devices available these days, it’s easy to overlook perhaps one of the better choices for mobile e-reading: the OLPC.

While it’s a bit heavier than most tablets (but still relatively light at just over 3 pounds), and lacks the “instant-on” feature of other devices (the OLPC is technically a netbook computer, so it needs time to boot), the built-in Read Activity (app) supports several types of file formats, including text, tiff, djvu, pdf, and epub.

At 6 inches x 4.5 inches, the OLPC’s color screen is bigger than most dedicated e-book readers, and almost as large as the iPad. The screen folds flat, which hides the keyboard and makes reading easier, and the screen also remains easy to read, even in sunlight.

So why isn’t it more popular as an e-book reader?

One problem is that it’s not clear how to add new content for the Read app to find.

By default, the Read app can open e-book files in the Journal, but the documentation doesn’t fully explain how to copy new files into the Journal.

In theory, you can drag-and-drop files from a mounted usb stick or external drive, but I found the graphical environment choppy and unreliable.

Fortunately, there’s a built-in Terminal script called copy-to-journal created just for this purpose.

Here’s an example of how to copy a pdf file from a memory stick to the Journal:

copy-to-journal "/media/my-usb-stick/My Book.pdf" -m application/pdf -t "My Book"

The first parameter is the full path to the file (wrapping in quotes is good practice, since it will work for files with spaces in their names and without), the second parameter (-m) specifies the mimetype, and the third parameter (-t) defines the title of the book as it appears in the Journal (it can be completely different from the filename).

Epub files work the same way, except the mimetype is different:

copy-to-journal "/media/my-usb-stick/My Book.epub" -m application/epub+zip -t "My Book"

The script can also attempt to guess the mimetype, using the -g switch instead of -m:

copy-to-journal "/media/my-usb-stick/My Book.epub" -g -t "My Book"


A quick guide to DIY animated videos

Friday, December 9th, 2011

Now that is out in public beta, I wanted to see how difficult it would be to make an intro video, similar to what Google did for its Voice service.

The idea is based on the observation that most people don’t read web pages and would rather watch a video than skim even a brief description.

So with no background in video animation (and no cash to pay anyone to do it for me), I set out to see how far I could get on my own, using free software tools.

I wrote a script consisting of a few frames of stop-motion animation, which I thought would be the simplest to do.

The script starts with someone planning a project, surrounded by a few gantt charts and similar project management paraphernalia. Soon, though, the various charts and forms he needs to process multiply until he’s overwhelmed, and the screen fades to black.

From out of the darkness, a bright light appears, and the logo emerges.

That’s just part one. The next step would be to explain how it works, but part one was enough on its own to keep me busy for a while.

Fortunately, there are several free tools available for this kind of production.

Free as in Beer

I started with GIMP, the GNU community’s answer to PhotoShop.

GIMP let me create all the images I needed for part one: the charts and forms smothering our hero are easily done incrementally, by just adding more junk on top and saving each edit as a separate file.

Going from dark to light was also fairly simple, since GIMP has a nice selection of effects filters, one of which, Supernova, let me create a small sunburst in the middle of the black field, then expand it slowly, until the field was white.

Looking back at the first draft, I see that I rushed it a bit too much, but that is a problem with stop-motion: updating changes frame-by-frame is tedious, and there’s always the risk of jumping ahead too much in any given snapshot.

Next, I used Pencil to put the individual frames together with sound and create a single movie file.

Pencil is capable of exporting to QuickTime’s .mov format at a default 851×715 screen resolution, so to keep things simple, I made all my GIMP images 851 pixels wide by 715 pixels tall.

It’s not an ideal aspect ratio for YouTube, though, and I noticed black filler bands on both sides after uploading, but it doesn’t get in the way of comprehending the video.

Pencil also let me add a soundtrack and preview the entire composition of moving frames and sound, but somewhat annoyingly, it didn’t export with sound.

This is a long-time bug, apparently, but I was able to get around it using MEncoder (more on that later).

Finally, I used the speech synthesizer built in to Mac OSX to produce the voice-over.

Say, a free tool for capturing the Mac’s Text-to-Speech output as a file, was invaluable for this task.

Say produces .aiff format sound files, which can be imported as-is into Pencil.

As nice as it is to write a script and have it turned into speech immediately, the sound of a computer-generated voice-over is less than ideal.

“Alex”, by far the best-sounding of the synthesized voices, still came out clunky and awkward.

It seems for the final video I need to bite the bullet and use a real human voice.

The Final Draft Cut

Once I was happy with the sequence of still frames in Pencil, and I made sure the sound synched (more or less) with the video, I created a .mov file of the project.

I could play the .mov file in QuickTime, but, as noted earlier, there was no sound.

That’s where MEncoder comes in, since it’s able to add a sound layer to any video file, using a single command line instruction:

$ mencoder project.mov -o final.avi \
  -ovc copy -oac copy -audiofile sound.mp3

The only hitch is that it didn’t work with my .aiff file, so I had to convert it to .mp3 format first.

Fortunately, ffmpeg makes this easy:

$ ffmpeg -i sound.aiff -f mp3 -ab 192 \
  -ar 44100 sound.mp3

Here’s the first draft in all its (10 seconds of) glory:

Re-creating Mailinator in Python

Friday, November 11th, 2011

Update: February 21, 2012

I’ve extended this concept into a framework for creating an intelligent email-based agent server, whereby email sent to designated inboxes gets dynamic, custom replies.

It’s the same logic used by the web service and I’ve decided to open source it on github:

Paul Tyma, the creator of Mailinator, once wrote about its architecture. He said that after starting with sendmail, he found it necessary to write his own SMTP server from scratch.

While he never released the Java source code of his server, I wanted to see if I could re-create it using Python, since I also wanted to understand how state machines work in that language.

The Basic Server

To start, I needed some code that would listen on a specific port, and read and respond to clients.

Python’s SocketServer module makes this simple.

Here, in a few lines, is a multi-threaded TCP server that listens on port 8888 of the local machine and echoes back what a connected client sends to it:

import SocketServer

cr_lf = "\r\n"

class SMTPRequestHandler (SocketServer.StreamRequestHandler):
    def handle (self):
        try:
            while 1:
                client_msg = self.rfile.readline()
                self.wfile.write(client_msg.rstrip()+cr_lf) # a simple echo
        except Exception, e:
            print e

# server hostname and port to listen on
server_config = ('localhost', 8888)

if __name__ == '__main__':
    tcpserver = SocketServer.ThreadingTCPServer(server_config, SMTPRequestHandler)
    tcpserver.serve_forever()

Start it from a command line prompt (if the port number you choose is less than 1025, then you need to do this as root):

$ python

And test it using telnet:

$ telnet localhost 8888
Connected to localhost.
Escape character is '^]'.
This is an echo
This is an echo
Ok, I get it
Ok, I get it
What next?
What next?

Handling SMTP

Now I needed to be able to understand and reply to SMTP requests. The protocol is fairly simple, with only a handful of commands.

Each command consists of four letters, appears at the start of the stream sent by the client, and is terminated with “\r\n”.

SMTP commands

Tyma did not, however, implement the full list of SMTP commands, since RSET (Reset), VRFY (Verify), NOOP (No operation), and others are used by spammers to abuse or even take over a server, and are rarely required by legitimate email clients.

The server needs to be able to handle the basic interaction, so HELO (Hello) / EHLO (Extended Hello), MAIL (Mail from), RCPT TO (Recipient To), and DATA all need to be supported.

At first glance, it’s tempting to try to implement it like this:

class SMTPRequestHandler (SocketServer.StreamRequestHandler):
    def handle (self):
        try:
            data = {}
            while 1:
                client_msg = self.rfile.readline()
                if client_msg.startswith('MAIL FROM:'):
                    data['sender'] = get_email_address(client_msg)
                elif client_msg.startswith('RCPT TO:'):
                    data['recipient'] = get_email_address(client_msg)
                elif client_msg.startswith('QUIT'):
                    break
        except Exception, e:
            print e

Where get_email_address() is defined as, for example, something like this:

def get_email_address (s):
    """Parse out the first email address found in the string and return it"""
    for token in s.split():
        if token.find('@') > -1:
            # token will be in the form:
            # 'FROM:' or 'TO:'
            # and with or without the <>
            for email_part in token.split(':'): 
                if email_part.find('@') > -1:
                    return email_part.strip('<>')

But this gets messy in a hurry. While some commands fit within the neat single-line /^CMND rest of data\r\n/ pattern, others do not.

RCPT, for example, can be repeated multiple times, and once DATA is seen, every subsequent line must be collected until the final /^\.$/ appears.

State Machines to the rescue

A state machine provides a much better way of handling SMTP requests. In his excellent article, David Mertz defines a state machine as:

a directed graph, consisting of a set of nodes and a corresponding set of transition functions. The machine “runs” by responding to a series of events. Each event is in the domain of the transition function belonging to the “current” node, where the function’s range is a subset of the nodes. The function returns the “next” (perhaps the same) node. At least one of these nodes must be an end-state. When an end-state is reached, the machine stops.

And that corresponds exactly to what happens when a client interacts with an SMTP server:

SMTP State Diagram

Brass Tacks

Creating a state machine in Python is simple, since Python allows you to pass functions as higher-order objects. The implementation in Mertz’s article was done in just a few lines of code.
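For reference, here is a minimal version of such a class, modeled on the one in Mertz’s article (a sketch in his spirit, not his exact code):

```python
class StateMachine:
    def __init__(self):
        self.handlers = {}      # state name -> transition function
        self.start_state = None
        self.end_states = []

    def add_state(self, name, handler, end_state=0):
        self.handlers[name] = handler
        if end_state:
            self.end_states.append(name)

    def set_start(self, name):
        self.start_state = name

    def run(self, cargo):
        # call each state's transition function in turn; each returns the
        # name of the next state, until an end-state is reached
        handler = self.handlers[self.start_state]
        while 1:
            (new_state, cargo) = handler(cargo)
            if new_state in self.end_states:
                break
            handler = self.handlers[new_state]
```

Each transition function receives the cargo, does its work, and returns the name of the next state along with the (possibly updated) cargo.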

To handle each SMTP node, I defined a series of functions, one for each server response or command.

Here are the function prototypes, where the cargo parameter is a tuple containing both the stream from which requests are read (and to which responses are written), and a dict of data collected from the request:

def greeting (cargo):
def helo (cargo):
def mail (cargo):
def rcpt (cargo):
def data (cargo):
def process (cargo):

The state machine is defined within the SMTPRequestHandler class like this:

class SMTPRequestHandler (SocketServer.StreamRequestHandler):
    def handle (self):
        try:
            m = StateMachine()
            m.add_state('greeting', greeting)
            m.add_state('helo', helo)
            m.add_state('mail', mail)
            m.add_state('rcpt', rcpt)
            m.add_state('data', data)
            m.add_state('process', process)
            m.add_state('done', None, end_state=1)
            m.set_start('greeting')
            m.run((self, {}))
        except Exception, e:
            print e

So that each function knows how to recognize its assigned command, I defined and compiled these regular expressions. They are created as globals, since it’s more efficient to compile them once and have each subsequent call reuse the existing version.

import re
helo_pattern = re.compile('^HELO', re.IGNORECASE)
ehlo_pattern = re.compile('^EHLO', re.IGNORECASE)
mail_pattern = re.compile('^MAIL', re.IGNORECASE)
rcpt_pattern = re.compile('^RCPT', re.IGNORECASE)
data_pattern = re.compile('^DATA', re.IGNORECASE)
end_pattern = re.compile('^\.$')

The greeting() function, which begins the interaction with the client, sends a simple message and passes control to the helo() function. It looks like this:

def greeting (cargo):
    stream = cargo[0]
    stream.wfile.write('220 localhost SMTP'+cr_lf)
    return ('helo', cargo)

Later in the sequence, the mail() function, which is the first node from which data is collected (in this case, the email address of the sender), is the first to save information in the cargo’s dict. It looks like this:

def mail (cargo):
    stream = cargo[0]
    client_msg = stream.rfile.readline()
    if mail_pattern.match(client_msg):
        sender = get_email_address(client_msg)
        if sender is None:
            stream.wfile.write(bad_request+cr_lf)
            return ('done', cargo)
        else:
            email_data = cargo[1]
            email_data['sender'] = sender
            stream.wfile.write('250 Ok'+cr_lf)
            return ('rcpt', (stream, email_data))
    else:
        stream.wfile.write(bad_request+cr_lf)
        return ('done', cargo)

Here, if the request is not recognized or invalid, the client sees the bad_request message, and the connection is closed, since control passes to the done end-state.

I followed Tyma’s example and defined bad_request as “550 No such user” (which, as he notes, is ironic, since Mailinator accepts email sent to any user).

It also doesn’t conform to the protocol, since I’m supposed to give different error messages at different nodes, but since clients are always disconnected after any type of invalid request, it hardly matters what they see in that scenario.
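The data() node works the same way. It isn’t shown above, so here is my own reconstruction (the 354 continuation response is an assumption on my part): it acknowledges the DATA command, then collects every line until the lone “.” terminator:

```python
import re

cr_lf = "\r\n"
data_pattern = re.compile('^DATA', re.IGNORECASE)
end_pattern = re.compile(r'^\.$')

def data (cargo):
    stream, email_data = cargo
    client_msg = stream.rfile.readline()
    if data_pattern.match(client_msg):
        # tell the client to start sending the message contents
        stream.wfile.write('354 End data with <CR><LF>.<CR><LF>'+cr_lf)
        lines = []
        while 1:
            line = stream.rfile.readline()
            if not line or end_pattern.match(line.rstrip()):
                break
            lines.append(line)
        email_data['data'] = ''.join(lines)
        return ('process', (stream, email_data))
    stream.wfile.write('550 No such user'+cr_lf)
    return ('done', cargo)
```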

If a client is well-behaved, the final method called is process() which decides what to do with the client’s email. The data dict will contain three parameters: ‘sender’ (the email address of the sender), ‘recipients’ (a list of email addresses), and ‘data’ (the contents which followed the DATA command ahead of the final ‘.’).

def process (cargo):
    email_data = cargo[1]
    # do something with the email_data dict here
    return ('done', cargo)

Basically, this is where the data can be saved to disk/db (so that it can be served by a web browser later, e.g.), MIME-parsed (to remove attachments, etc.), or just trashed (if you have reason to believe the sender is a spambot or zombie network, e.g.).
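The MIME parsing, for example, can be handled with the standard library’s email module; parse_email_data below is a hypothetical helper of mine, not part of the server above:

```python
import email

def parse_email_data(raw_data):
    # raw_data is the string collected after the DATA command
    msg = email.message_from_string(raw_data)
    if msg.is_multipart():
        # keep only the plain-text parts, dropping attachments
        body = ''.join(part.get_payload() for part in msg.walk()
                       if part.get_content_type() == 'text/plain')
    else:
        body = msg.get_payload()
    return msg['Subject'], body
```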

Tyma describes various measures for dealing with attacks from spambots and zombies which I haven’t implemented here, but would be relatively easy to add to both the data() and process() functions.

Obtaining the ip address of the client is done using the stream.client_address[0] attribute.


Node.js and MongoDB: A Simple Example

Thursday, September 22nd, 2011

Update, October 14, 2012:
If you are interested in creating highly scalable applications with mongoDB, you should really consider using Go (#golang) instead, and check out the Go version of this post.

I’ve been learning Node.js. After getting through an excellent basic tutorial, I wanted to experiment with connecting to MongoDB, since it would be helpful for some enhancements I have planned for Stealth Mode Watch and BookHunch.

There are several drivers available, but the most commonly used and recommended one is node-mongodb-native.

Despite the examples, though, it wasn’t clear how to collect and return the results of a single query.

Both the simple example and the queries example just dump the result to the console within the cursor, and the best answer on the normally reliable StackOverflow site was problematic, too, because the server code was written in a blocking style without callbacks, which defeats the purpose of using Node.js in the first place.

Fortunately, though, there is a terrific article about control flow on How to Node which explains how to do multiple asynchronous tasks (either in parallel or serially), and collect all the results.

The examples in the control flow article dealt with accessing files from different folders, which was simple enough to change to running MongoDB queries instead.

The first step is to install node-mongodb-native using npm:

npm config set loglevel info
npm install mongodb

The first line essentially sets npm in a verbose mode, to let you know what it’s doing as it runs (a tip from the guys at, and the second actually installs the node-mongodb-native module (it seems that the native parser is obsolete, so there is no need to use the --mongodb:native switch in the second line).

Here’s how to execute multiple queries and collect their results.

var Db = require('./node_modules/mongodb').Db,
    Connection = require('./node_modules/mongodb').Connection,
    Server = require('./node_modules/mongodb').Server;

var host = process.env['MONGO_NODE_DRIVER_HOST'] != null ? 
           process.env['MONGO_NODE_DRIVER_HOST'] : 'localhost';
var port = process.env['MONGO_NODE_DRIVER_PORT'] != null ?
           process.env['MONGO_NODE_DRIVER_PORT'] : Connection.DEFAULT_PORT;

These lines just load the db module and prepare for a connection; no connection has been opened yet.

This function executes the query in the db, collects the result documents in a list, and passes them to a callback function when it’s done iterating through the query cursor:

function runQuery (db, myCollection, query, nextFn) {
    // perform the {query} on the collection and invoke the nextFn when done
    db.open(function(err, db) {
	db.collection(myCollection, function(err, collection) {
	    collection.find(query, function(err, cursor) {
		cursor.toArray(function(err, docs) {
		    console.log("Found " + docs.length + " documents");
		    var queryResults = [];
		    for(var i=0; i<docs.length; i++) {
			queryResults[queryResults.length] = docs[i];
		    }
		    db.close();
		    nextFn(queryResults);
		});
	    });
	});
    });
}

So that leaves opening the connection, defining the query or queries, and calling runQuery().

Suppose we have a database consisting of two collections, people and companies, and that we want to search for a given string anywhere among a person’s name, a company’s name, or even a company’s address.

Our search function would look like this:

function search (personName, companyName, address, nextFn) {
    var data = [], count = 3;
    var doneFn = function(results) {
	data = data.concat(results);
	count -= 1;
	if( count <= 0 ) {
	    // all the queries are done: de-dup the combined results by _id
	    var uniqueResults = [];
	    for(var i=0; i<data.length; i++) {
		if( ! uniqueResults[data[i]['_id']] ) {
		    uniqueResults[uniqueResults.length] = data[i];
		    uniqueResults[data[i]['_id']] = true;
		}
	    }
	    nextFn(uniqueResults);
	}
    };
    runQuery(new Db('mydb', new Server(host, port, {})),
	     'people',
	     {'name':new RegExp('^'+personName, 'i')},
	     doneFn);
    runQuery(new Db('mydb', new Server(host, port, {})),
	     'companies',
	     {'name':new RegExp('^'+companyName, 'i')},
	     doneFn);
    runQuery(new Db('mydb', new Server(host, port, {})),
	     'companies',
	     {'address':new RegExp(address, 'i')},
	     doneFn);
}

Just as in the control flow article, we use a counter in our callback function — doneFn — to determine when there are no more queries to run; once that’s the case, we iterate through the combined list of results, using each document’s ObjectId to ensure the list is unique.

That unique list of results is then passed to the callback function given to search(), i.e., nextFn, which ultimately decides what to do with the combined results.

Each call to runQuery() needs its own db connection, opened like this:

new Db('mydb', new Server(host, port, {}))

because each query will run asynchronously and independently of each other.

It’s also possible to define a single connection, once, before each of the runQuery() calls happen:

var db = new Db('mydb', new Server(host, port, {}));

and just pass the db variable to runQuery(), but then the db.close() line would have to be removed from inside the runQuery() function, and placed inside the doneFn instead.

As for the queries themselves, the beautiful thing about using Javascript and MongoDB together is that any native Javascript object doubles as the query input to MongoDB.

So any query that works in the MongoDB command line, will work in Javascript without much or any transformation necessary, unlike the hoops you have to jump through in Python+pymongo.

Finally, the function that calls search() has to decide what to do with the results.

If we’re writing an API and want the results sent back over HTTP as json, all we have to do is this (within the larger context of an http server in Node.js, of course):

search('Jones', 'Jones', 'California', function(results) {
    var replyJson = {"warning":"No matches found"};
    if( results.length > 0 ) {
	replyJson = {"result":{ "matched":results.length, "matches":results}};
    }
    response.writeHead(200, {"Content-Type": "application/json"});
    response.end(JSON.stringify(replyJson));
});

This example searches the database for people or companies named Jones, or companies located in California.

The final callback here gets passed the unique list of results from the search.doneFn — i.e., it is the nextFn passed into search() — and generates an http response in json format.


How Startup Products Evolve

Thursday, August 11th, 2011
[ Edit: Reading this back, it occurred to me that what I described could also be considered a pivot, which is very much en vogue among the lean startup crowd. Since, however, there is some confusion as to what pivoting really means, I'll stick with my evolution analogy. ]

Just over a year ago, I was working on a new marketplace for ebooks, mostly because independent authors were being under-served by Amazon and Apple’s iBookstore.

They still are, actually, but every author I came into contact with really, really wanted to see their work sold there.

As one author put it to me, “If it’s not in iTunes, it’s not real.”

So the site struggled to get content, and without that, it didn’t attract any readers, either.

Would you shop here?

Listening to Clients

The problem (in my mind, anyway) was that the site’s clients (i.e., the authors) were asking for something that didn’t fit into my preconceived notions of what the marketplace should be:

Can you get my book in Amazon?
Why did Apple reject my book?
Do you have connections to Amazon and iTunes?
Can you help me format my book?

Eventually, though, those messages did manage to reach into my brain, and I realized something important.

Brain shown actual size

People want to sell on Amazon and iTunes, but they don’t have the technical know-how to do it.

Both Amazon and iTunes require authors to submit epub files that pass validation.

Since, however, the people who wrote the epub spec insisted on an uncompromisingly strict approach, they created a situation where producing an epub file is easy, but getting it to validate is hard.

Mutation #1:

So while the original marketplace didn’t survive, it demonstrated a problem that needed a solution.

That led me to build a different type of site, one where authors could create valid ebooks without knowing how to program.

As I’ve written before, it’s not only popular with users, but also generates revenue.

Those Pesky Off-Topic Requests

But once again, I started getting a stream of suggestions that went against my neat worldview of what was supposed to be.

This time, it was a variation of “Can you help me sell my book?”

Unlike before, though, I didn’t have a clear vision of how I could help, or even if I could (and it’s not a problem for just independent writers: mainstream authors and established publishing companies struggle with this as well).

One author who has solved this problem is Amanda Hocking.

She admits to being baffled about why she succeeds while other, similar authors fail, and suggests a combination of “good covers” with “similar [cheap] prices”.

But Amanda’s books stand out to me for a different reason: her books have hundreds of reviews on Amazon, in stark contrast to other ebook-only titles.

And she has a strong presence on both Twitter and Facebook.

So would it be possible to mobilize people to read, review, and tweet about a given book?

Does Harry Potter do this to you, too?

Mutation #2:

Enter, which I recently opened to a small set of beta users.

The basic premise is that authors and publishers submit new or pre-release books to a community of book lovers, who in turn read, review, and share their opinions.

Readers get points for participation, which determine their level of access.

Points mean privileges, including special access to content, and, eventually real-world rewards in the form of gift cards or donations to favorite charities.

And all reading is social: readers can invite their friends, with whom they can make and share notes about the book, right alongside the text.

It’s also possible just to read the book and ignore all the community aspects of the site, so even digital hermits are welcome.

On the other side, authors and publishers not only get social media exposure and explicit feedback, but also analytical reports of readers’ implicit behavior: how many pages people read, where they stopped reading, how long they took on a particular chapter, etc.

And implicit behavior is interesting, in that it probably has more to tell an author than all the written reviews and notes do.

As this notable study of Netflix habits shows, people tend to claim that they want to watch highbrow films, but when it comes to choosing what to watch right now, they usually wind up with something less refined.

More likely to survive and reproduce?

The initial response for invite requests has been encouraging, and I’ve already gotten some useful suggestions.

Here’s a preview article on the Digital Reader blog.