Archive for the ‘Technology’ Category

How to extract just the text from html page articles

Saturday, October 27th, 2012

One of the reasons I keep going back to Python is because of the lxml library.

Not only is it terrific in terms of handling xml, it can do wonders with html of all flavors, even badly-formed and specification-invalid html data.

A common task I have these days is to grab the text from an html page or article (e.g., in curating content for Macaronics).

As this gist shows, lxml makes this dead simple, using xpath and the “descendant-or-self::” axis selector.

The only real work is understanding each site’s page structure and writing the correct xpath expression for it (the readability algorithm is essentially a collection of such rules), and then monitoring the site for changes over time so that the expression can be updated accordingly.

Another bonus is that it works with foreign language sites, too, provided the parser is passed the same encoding as defined in the target page’s Content-Type meta tag.

Here’s an example of grabbing the text from a web article by Facta, a Japanese business magazine, and saving it as a text file, so I can add it to the list of articles in Macaronics:

>>> import urllib, text_grabber
>>> data=urllib.urlopen('http://facta.co.jp/article/201211043-print.html').read()
>>> t=text_grabber.facta_print(data)
>>> import codecs
>>> f=codecs.open('facta-201211043-print.txt', 'w', 'utf-8'); f.write(t); f.close()
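The facta_print() function itself lives in the gist; here is a rough sketch of what such a text_grabber function might look like (the xpath expression, element id, and default encoding are illustrative assumptions, not necessarily Facta’s actual markup):

# text_grabber.py (sketch)
from lxml import html

def facta_print (data, encoding='utf-8'):
    """Return just the article text from a print-view page"""
    # pass the page's declared encoding to the parser
    parser = html.HTMLParser(encoding=encoding)
    tree = html.fromstring(data, parser=parser)
    # collect every text node at or below the element holding the article body
    chunks = tree.xpath('//div[@id="print-body"]/descendant-or-self::text()')
    return u'\n'.join(chunk.strip() for chunk in chunks if chunk.strip())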

Go (#golang) and MongoDB using mgo

Sunday, October 14th, 2012

After working in node.js last year, I’ve switched to learning Go instead, and I wanted to reprise my "Node.js and MongoDB: A Simple Example" post in Go.

Of all the Go drivers available for MongoDB, mgo is the most advanced and well-maintained.

The example on the mgo main page is easy to understand:

  1. Create a struct which matches the BSON documents in the database collection you want to access
  2. Obtain a session using the Dial function, which creates a connection object
  3. Use the connection object to access a particular collection in your database:
    • Searches load documents from the database into the struct
    • Inserts and updates take data defined in a struct and create/update documents in the database

So for a collection named “Person”, where a typical document looks like this:

{
        "_id" : ObjectId("502fbbd6fec1300be858767e"),
        "lastName" : "Seba",
        "firstName" : "Jun",
        "inserted" : ISODate("2012-08-18T15:59:18.646Z")
}

The corresponding Go struct would be:

type Person struct {
    Id         bson.ObjectId   "_id,omitempty"
    FirstName  string          "firstName"
    MiddleName string          "middleName,omitempty"
    LastName   string          "lastName"
    Inserted   time.Time       "inserted"
}

It turns out the third element on each line, the struct tag (a string literal that is normally optional in Go), is required here, because without it mgo won’t map the struct fields to the corresponding fields in the database documents.
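To make steps 2 and 3 from the list above concrete, here is a minimal sketch (the database name, collection name, and values are illustrative assumptions, not from the original example), using the Person struct just defined along with the labix.org/v2/mgo and labix.org/v2/mgo/bson packages:

session, err := mgo.Dial("localhost")
if err != nil {
    panic(err)
}
defer session.Close()

// use the session to access one collection in the database
c := session.DB("myDB").C("person")

// an insert takes the data defined in the struct and creates a document
err = c.Insert(&Person{Id: bson.NewObjectId(), FirstName: "Jun", LastName: "Seba", Inserted: time.Now()})
if err != nil {
    panic(err)
}

// a search loads the matching document from the database into the struct
result := Person{}
err = c.Find(bson.M{"lastName": "Seba"}).One(&result)
if err != nil {
    panic(err)
}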

It’s also possible to convert database results directly into json, which is useful for creating API services that output json.

In that case, it’s necessary to define both a bson tag and a json one, surrounded by backticks:

type Person struct {
    Id         bson.ObjectId   `bson:"_id,omitempty" json:"-"`
    FirstName  string          `bson:"firstName" json:"firstName"`
    MiddleName string          `bson:"middleName,omitempty" json:"middleName,omitempty"`
    LastName   string          `bson:"lastName" json:"lastName"`
    Inserted   time.Time       `bson:"inserted" json:"-"`
}

The json tag follows the conventions of the built-in Go json package: “-” means ignore, “omitempty” will exclude the field if its value is empty, etc.
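For example, here is an illustrative sketch (assuming the struct above plus the encoding/json, fmt, and time imports) of what marshaling a Person with only a first and last name set produces:

p := Person{FirstName: "Jun", LastName: "Seba", Inserted: time.Now()}
b, err := json.Marshal(p)
if err != nil {
    panic(err)
}
// Id and Inserted are dropped by their "-" tags, and middleName is
// omitted because it is empty, leaving only the two name fields:
fmt.Println(string(b)) // {"firstName":"Jun","lastName":"Seba"}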

So far so good.

But accessing different collections in a database means that each one needs its own struct definition, its own connection with the collection name specified, and its own access functions (Find, Insert, Remove, etc.) to marshal and unmarshal the results.

And the last step in particular can lead to a lot of code repetition.

Inspired by Alexander Luya’s post on mgo-users, I’ve created a framework that allows for multiple access functions with a minimum of repetition.

First, this getSession() function, which calls Dial() only once and returns a clone of the resulting session on every call (this is very similar to what Alex posted):

var (
    mgoSession     *mgo.Session
    databaseName = "myDB"
)

func getSession () *mgo.Session {
    if mgoSession == nil {
        var err error
        mgoSession, err = mgo.Dial("localhost")
        if err != nil {
             panic(err) // no, not really
        }
    }
    return mgoSession.Clone()
}

Next, a higher-order function which takes a collection name and an access function prepared to act on that collection:

func withCollection(collection string, s func(*mgo.Collection) error) error {
    session := getSession()
    defer session.Close()
    c := session.DB(databaseName).C(collection)
    return s(c)
}

The withCollection() function takes the name of the collection, along with a function that expects the connection object to that collection, and can execute access functions on it.

Here’s how the “Person” collection can be searched, using the withCollection() function:

func SearchPerson (q interface{}, skip int, limit int) (searchResults []Person, searchErr string) {
    searchErr     = ""
    searchResults = []Person{}
    query := func(c *mgo.Collection) error {
        // a negative limit means "no limit": run the query once, without Limit()
        if limit < 0 {
            return c.Find(q).Skip(skip).All(&searchResults)
        }
        return c.Find(q).Skip(skip).Limit(limit).All(&searchResults)
    }
    search := func() error {
        return withCollection("person", query)
    }
    err := search()
    if err != nil {
        searchErr = "Database Error"
    }
    return
}

The skip and limit parameters are effectively optional: setting skip to zero starts from the first result, and setting limit to a negative integer means no limit is applied in the query that runs inside withCollection().

So with that framework in place, making a variety of different queries on the "Person" collection reduces to writing simple (often one-line) BSON queries, as in the following examples.

(1) Get all people whose last name begins with a particular string:

func GetPersonByLastName (lastName string, skip int, limit int) (searchResults []Person, searchErr string) {
    searchResults, searchErr = SearchPerson(bson.M{"lastName": bson.RegEx{"^"+lastName, "i"}}, skip, limit)
    return
}

(2) Get all people whose last name is exactly the given string:

func GetPersonByExactLastName (lastName string, skip int, limit int) (searchResults []Person, searchErr string) {
    searchResults, searchErr = SearchPerson(bson.M{"lastName": lastName}, skip, limit)
    return
}

(3) Find people whose first and last names begin with the given strings:

func GetPersonByFullName (lastName string, firstName string, skip int, limit int) (searchResults []Person, searchErr string) {
    searchResults, searchErr = SearchPerson(bson.M{
        "lastName": bson.RegEx{"^"+lastName, "i"},
        "firstName": bson.RegEx{"^"+firstName, "i"}}, skip, limit)
    return
}

(4) Find people whose first and last names exactly match the given strings:

func GetPersonByExactFullName (lastName string, firstName string, skip int, limit int) (searchResults []Person, searchErr string) {
    searchResults, searchErr = SearchPerson(bson.M{"lastName": lastName, "firstName": firstName}, skip, limit)
    return
}

Et cetera.

As far as code repetition goes, however, this framework is still not ideal, in that each collection requires its own Search[Collection]() function, where the only difference between them is the type of the searchResults variable.

It would be tempting to write something like this:

func Search (collectionName string, q interface{}, skip int, limit int) (searchResults []interface{}, searchErr string) {
    searchErr = ""
    query := func(c *mgo.Collection) error {
        fn := c.Find(q).Skip(skip).Limit(limit).All(&searchResults)
        if limit < 0 {
            fn = c.Find(q).Skip(skip).All(&searchResults)
        }
        return fn
    }
    search := func() error {
        return withCollection(collectionName, query)
    }
    err := search()
    if err != nil {
        searchErr = "Database Error"
    }
    return
}

Except this is where Go's strong typing gets in the way: "there's no magic that would turn an interface{} into a Person", and so each Search[Collection]() function has to be written separately.

The right way to use setInterval() and setTimeout() in Javascript

Saturday, July 14th, 2012

Most of the tutorials and examples for using setInterval() and setTimeout() describe the first parameter (which represents the function to execute) as a string, like this,

setTimeout("count()",1000);

Even the normally reliable O’Reilly men do it this way, too. Stephen Chapman is one of the few who gets it right.

While this technique works, it has two problems.

First, if the function you want to pass has parameters of its own, escaping and formatting them into a string properly is a mess, even in a simple example like this one,

setTimeout('window.alert(\'Hello!\')', 2000);

and it can get even more complicated.

Second, this technique uses eval() to execute the function, which is evil, and to be avoided.[1]

Using a javascript closure is a better approach:

setTimeout(function () {
    // do some stuff here
  }, 1000);

This makes sending parameters to the underlying function easy, since the closure simply captures them from the enclosing scope,

setTimeout(function () {
    count(a, b, c); // a, b, and c come from the enclosing scope
  }, 1000);

and it avoids using eval() entirely.
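The same closure approach works for setInterval(); for example (a sketch with illustrative names), here is a repeating counter that shuts itself off after ten ticks:

var counter = 0;
var timer = setInterval(function () {
    counter += 1; // captured from the enclosing scope
    console.log('tick ' + counter);
    if (counter === 10) {
        clearInterval(timer);
    }
}, 1000);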

[1] While not exactly related, there’s more in this vein at the hilarious (the (axis-of (eval))) blog, which I found on news.lispnyc.org recently.

Goodbye Java

Sunday, July 1st, 2012

It has been several years since I touched any Java code, and since it’s unlikely that I’ll work in the new Cobol again, I donated all my ancient reference volumes to the local used bookstore today.

Yokaben Macaronics: Read Write Learn

Sunday, June 10th, 2012

Update: November 16, 2012 — Yokaben is now Macaronics


Several years ago, I passed level two (N2) of the 日本語能力試験 or Japanese Language Proficiency Test. N2 means I know (or knew, at the time I took the test, anyway) at least 1,023 kanji written characters, and 5,035 vocabulary words. So in theory, I should be able to read 90% of the text in a typical newspaper article. Still, when I visited Japan recently, I had trouble getting through even the simplest article.

I was traveling through Hiroshima on a Sunday afternoon, on my way to Miyajima, but most people on my train were dressed in Hiroshima Carp colors, and got off at the station directly in front of the stadium.

They proved to be a boisterous bunch that day, and after I got back to the hotel, I wondered how their team had done.

It turned out they won, but there was very little about the game in the English-language sources, in stark contrast to the local Japanese press, which included some fanciful word play about the player who hit a home run.[1]

As Lost in Translation comically exaggerated, having access to the original source makes a difference.

Machine translation was somewhat helpful, but those results left a lot to be desired, especially when dealing with nuance and context (it was interesting, for example, to see that Bing correctly translated バカ as “moron”, but Google rendered it as “docile child” instead).

What I really needed was a human editor, someone at least partially bilingual, who could fill in the gaps and clean up the obvious errors.

Crowd-sourcing, or more specifically, human-based computation, is a possible solution, though it needs hundreds, thousands, or more editors to make it work.

If it does reach that critical mass, it would open up an even larger audience: people would be able to read original texts in full, regardless of whether they are literate in the source language or not, and even if they have no desire to learn that language in the first place.

Yokaben[2] Macaronics[3] is an experiment to see whether or not it can be done.


[1] One way to pronounce the numbers “2” and “9” together is “niku” which is roughly how the Japanese say the first name of Nick Stavinoha. The author speculated that since the 29th is “Nick’s Day”, fans can expect a similar result on May 29 and through the rest of the season on the 29th of every month.

[2] I didn’t know what to call it, but when I was thinking up names, I heard someone talking about PubSub, which is a contraction of the words “publish” and “subscribe”.

Since what I was building was a way to “Read Write Learn”, I tried similar contractions. While it didn’t work in English, I got some unique syllables from the corresponding Japanese words:

Read : 読む (yomu) → yo
Write : 書く (kaku) → ka
Learn : 勉強 (benkyou) → ben

(Yes, I know that 勉強 really means study, and 学ぶ is a better translation of learn, but “yokamana” or “yokabu” didn’t quite have the same ring to it.)

[3] While researching names for another project, I came across the adjective macaronic, whose dictionary meaning seemed perfect for this, especially since I’d like to see it go beyond just two languages.

Also, yokaben as I’d constructed it (読書勉) is too close to dokusho (読書) and thus potentially confusing for native Japanese speakers.

Using Microsoft’s Translator API with Python

Monday, May 7th, 2012

Before Macaronics, I experimented with automated machine translation.

Microsoft provides a Translator API which performs machine translation on any natural language text.

Unlike Google’s Translation API, which is paid-only, Microsoft’s includes a free tier of up to 2 million characters per month.

I found the signup somewhat confusing, though, since I had to create more than one account and register for a couple of different services:

  1. I had to register for a Windows Live ID
  2. While logged in with my Live ID, I needed to create an account at the Azure Data Market
  3. Next, I had to go to the Microsoft Translator Data Service and pick a plan (I chose the free, 2 million characters per month option)
  4. Finally, I had to register an Azure Application (since I was testing, I didn’t want to use a public url, and fortunately that form accepted ‘localhost’, though it insisted on my using ‘https’ in the definition)

The last form, i.e., the Azure Application registration, provides two critical fields for API access:

  • Client ID — this is any old string I want to use as an identifier (i.e., I choose it)
  • Client Secret — this is provided by the form and cannot be changed

With all the registrations out of the way, it was time to try a few translations.

The technical docs were well-written, but since there was nothing for Python, I’ve included an example for accessing the HTTP Interface.

My code is based on Doug Hellmann’s article on urllib2, enhanced with Michael Foord’s examples for error-handling urllib2 requests.

Here’s a simple usage example from Japanese to English, in the Python REPL:

>>> import msmt
>>> token = msmt.get_access_token(MY_CLIENT_ID, MY_CLIENT_SECRET)
>>> msmt.translate(token, 'これはペンです', 'en', 'ja')
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">This is a pen</string>

The API returns XML, so a final processing step for a real program would be to use something like lxml to parse out the translation result.

Here’s a snippet for getting just the translated result out of the XML object returned by the API.
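The snippet itself is short; a minimal sketch using lxml (assuming the raw XML string returned by msmt.translate() above) might look like this:

from lxml import etree

def extract_translation (xml_response):
    """Return just the translated text from the API's XML response"""
    # the response is a single <string> element (in Microsoft's serialization
    # namespace) whose text content is the translation itself
    root = etree.fromstring(xml_response)
    return root.text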

In the case of the example above, this is just the classic[1] phrase:

This is a pen

[1] It’s classic in that “This is a pen” is the first English sentence Japanese students learn in school (or so I’m told)

Rediscovering LaTeX

Thursday, January 5th, 2012

I first used LaTeX while an intern at a very old-school software company that ran only unix workstations.

When I needed to write a letter (that had to be printed on paper and signed, for some bureaucratic task), I was told "try this".

At first, the idea of writing in markup, then compiling it to get the final document, seemed strange, but I quickly came to love using it. Pretty soon, anything I used to do in Word I would do in LaTeX instead.

I got away from it entirely these last few years, as most things that used to require a printed letter or memo have succumbed to email, web forms, and the like.

But recently I had the need again, for a new project, and thought: why not?

The only difference now is that instead of printing to paper, I would be sending pdf files by email.

Fortunately, the Ghostscript ps2pdf utility makes that simple, and it was already installed on my computer.

Likewise, LaTeX itself was already installed and available, thanks to the TeX Live package.
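As a concrete (purely illustrative) example, a minimal test.tex using the letter document class might look like this:

% test.tex -- a minimal letter to exercise the latex -> dvips -> ps2pdf chain
\documentclass{letter}
\begin{document}
\begin{letter}{To Whom It May Concern}
\opening{Dear Sir or Madam,}
This letter was typeset with \LaTeX{} and converted to a pdf file.
\closing{Sincerely,}
\end{letter}
\end{document}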

The only remaining annoyance was all the commands I needed to run for each document:

$ latex test.tex
$ dvips test.dvi
$ ps2pdf test.ps

and, to clean up all the intermediate files those commands generated:

$ rm test.aux test.dvi test.log test.ps

So I wrote this latex2pdf shell script:

#!/bin/sh

if [ $# -ne 1 ]
then
    echo "usage: latex2pdf.sh [file(.tex)]"
else
    # split $1 on / to get the path and filename
    path=`echo ${1%/*}`
    file=`echo ${1##*/}`
    if [ "$path" = "$file" ]
    then
        path=`pwd`
    fi

    # check if the file already has the .tex ext
    suffix=`echo $file | grep ".tex$" | wc -l`
    if [ $suffix -eq 0 ]
    then
        f=`echo "$file.tex"`
    else
        f=`echo "$file"`
    fi

    # define the filename base string w/o the .tex ext
    # (what the .aux, .dvi, .ps, .log files will be named)
    s=`echo "$f" | sed -e 's/\.tex$//'`

    # compile the .tex file and convert to pdf
    latex "$path/$f"
    dvips "$s.dvi"
    ps2pdf "$s.ps"
    rm -f "$s.aux"
    rm -f "$s.dvi"
    rm -f "$s.log"
    rm -f "$s.ps"
fi

Now, with a single command, I can build and view the result immediately:

$ ./latex2pdf.sh test.tex; xpdf test.pdf &

Who needs WYSIWYG?

Splitting and Extracting MPEG video files with MEncoder

Monday, January 2nd, 2012

One of the nice things about MythTV is that it lets me save any broadcast as an unencrypted, DRM-free mpeg file.

I recently found out how to use MEncoder to split and trim those mpeg files into single or multiple clips.

MEncoder is a good tool to use because it’s free (as in both freedom and beer), and runs on all major platforms (there are even pre-built binaries for Mac OS X).

MEncoder has two command line options, -ss and -endpos, which let you define the start or end position of the clip you want to extract.

Unfortunately, the default command doesn’t work with mpeg files.

The work-around is to convert the mpeg file to avi format first:

$ mencoder original.mpeg -ovc lavc -oac lavc -o original.avi

Then, create a copy starting or ending at a given point in time, defined as hour:minute:second using either the -ss or -endpos options.

For example, to extract a clip from the 17 minute 50 second mark to the 57 minute 47 second mark from a one-hour file, these two commands will do the trick:

$ mencoder -ss 00:17:50 -oac copy -ovc copy original.avi -o clip_start.avi
$ mencoder -endpos 00:39:57 -oac copy -ovc copy clip_start.avi -o clip.avi

Note that the -endpos was recalculated for the second command as 39:57, not 57:47.

That’s because the clip_start.avi file is 17 minutes and 50 seconds shorter than the original, so the clip end position has to be recalculated in terms of the new file’s length (57:47 - 17:50 = 39:57).

The file clip.avi contains the clip from 17:50 to 57:47 extracted from the original file, and we can discard the intermediate clip_start.avi file.

It takes two commands because MEncoder seems to ignore the second -ss or -endpos option it finds, and uses just the first one.

It would be nice if it would just let us do this instead:

$ mencoder -ss 00:17:50 -endpos 00:57:47 -oac copy -ovc copy original.avi -o clip.avi


The Forgotten E-Book Reader: OLPC

Monday, December 26th, 2011

With the plethora of e-book reader devices available these days, it’s easy to overlook perhaps one of the better choices for mobile e-reading: the OLPC.

While it’s a bit heavier than most tablets (but still relatively light at just over 3 pounds), and lacks the “instant-on” feature of other devices (the OLPC is technically a netbook computer, so it needs time to boot), the built-in Read Activity (app) supports several types of file formats, including text, tiff, djvu, pdf, and epub.

At 6 x 4.5 inches, the OLPC’s color screen is bigger than those of most dedicated e-book readers, and almost as large as the iPad’s. The display folds flat over the keyboard, which makes reading easier, and it remains legible even in sunlight.

So why isn’t it more popular as an e-book reader?

One problem is that it’s not clear how to add new content for the Read app to find.

By default, the Read app can open e-book files in the Journal, but the documentation doesn’t fully explain how to copy new files into the Journal.

In theory, you can drag-and-drop files from a mounted usb stick or external drive, but I found the graphical environment choppy and unreliable.

Fortunately, there’s a built-in Terminal script called copy-to-journal created just for this purpose.

Here’s an example of how to copy a pdf file from a memory stick to the Journal:

copy-to-journal "/media/my-usb-stick/My Book.pdf" -m application/pdf -t "My Book"

The first parameter is the full path to the file (wrapping it in quotes is good practice, since it works for filenames with or without spaces), the second parameter (-m) specifies the mimetype, and the third parameter (-t) defines the title of the book as it appears in the Journal (it can be completely different from the filename).

Epub files work the same way, except the mimetype is different:

copy-to-journal "/media/my-usb-stick/My Book.epub" -m application/epub+zip -t "My Book"

The script can also attempt to guess the mimetype, using the -g switch instead of -m:

copy-to-journal "/media/my-usb-stick/My Book.epub" -g -t "My Book"


Re-creating Mailinator in Python

Friday, November 11th, 2011

Update: February 21, 2012

I’ve extended this concept into a framework for creating an intelligent email-based agent server, whereby email sent to designated inboxes gets a dynamic or custom reply.

It’s the same logic used by the TeamWork.io web service and I’ve decided to open source it on github: https://github.com/dpapathanasiou/intelligent-smtp-responder


Paul Tyma, the creator of Mailinator, once wrote about its architecture. He said that after starting with sendmail, he found it necessary to write his own SMTP server from scratch.

While he never released the Java source code of his server, I wanted to see if I could re-create it using Python, since I also wanted to understand how state machines work in that language.

The Basic Server

To start, I needed some code that would listen on a specific port, and read and respond to clients.

Python’s SocketServer module makes this simple.

Here, in a few lines, is a multi-threaded TCP server that listens on port 8888 of the local machine and echoes back what a connected client sends to it:

#!/usr/bin/python
import SocketServer
cr_lf = "\r\n"
class SMTPRequestHandler (SocketServer.StreamRequestHandler):
    def handle (self):
        try:
            while 1:
                client_msg = self.rfile.readline()
                self.wfile.write(client_msg.rstrip()+cr_lf) # a simple echo
        except Exception, e:
            print e
# server hostname and port to listen on
server_config = ('localhost', 8888) 
if __name__ == '__main__':
    tcpserver = SocketServer.ThreadingTCPServer(server_config, SMTPRequestHandler) 
    tcpserver.serve_forever() 

Start it from a command line prompt (if the port number you choose is less than 1025, then you need to do this as root):

$ python server.py

And test it using telnet:

$ telnet localhost 8888
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
This is an echo
This is an echo
Ok, I get it
Ok, I get it
What next?
What next?

Handling SMTP

Now I needed to be able to understand and reply to SMTP requests. The protocol is fairly simple, with only a handful of commands.

Each command consists of four letters, which appear at the start of the line sent by the client, terminated with “\r\n”.

[Table of SMTP commands]

Tyma did not, however, implement the full list of SMTP commands, since RSET (Reset), VRFY (Verify), NOOP (No operation), and others are used by spammers to abuse or even take over a server, and are rarely required by legitimate email clients.

The server needs to be able to handle the basic interaction, so HELO (Hello) / EHLO (Extended Hello), MAIL (Mail from), RCPT TO (Recipient To), and DATA all need to be supported.

At first glance, it’s tempting to try to implement it like this:

class SMTPRequestHandler (SocketServer.StreamRequestHandler):
    def handle (self):
        try:
            data = {}
            while 1:
                client_msg = self.rfile.readline()
                if client_msg.startswith('MAIL FROM:'):
                    data['sender'] = get_email_address(client_msg)
                elif client_msg.startswith('RCPT TO:'):
                    data['recipient'] = get_email_address(client_msg)
                ...
                elif client_msg.startswith('QUIT'):
                    break
        except Exception, e:
            print e

Where get_email_address() is defined as, for example, something like this:

def get_email_address (s):
    """Parse out the first email address found in the string and return it"""
    for token in s.split():
        if token.find('@') > -1:
            # token will be in the form:
            # 'FROM:<address>' or 'TO:<address>'
            # with or without the <>
            for email_part in token.split(':'): 
                if email_part.find('@') > -1:
                    return email_part.strip('<>')

But this gets messy in a hurry. While some commands fit within the neat single-line /^CMND rest of data\r\n/ pattern, others do not.

RCPT, for example, can be repeated multiple times, and once DATA is seen, every subsequent line must be collected until the final /^\.$/ appears.

State Machines to the rescue

A state machine provides a much better way of handling SMTP requests. In his excellent article, David Mertz defines a state machine as:

a directed graph, consisting of a set of nodes and a corresponding set of transition functions. The machine “runs” by responding to a series of events. Each event is in the domain of the transition function belonging to the “current” node, where the function’s range is a subset of the nodes. The function returns the “next” (perhaps the same) node. At least one of these nodes must be an end-state. When an end-state is reached, the machine stops.

And that corresponds exactly to what happens when a client interacts with an SMTP server:

[Diagram: the SMTP session as a state machine]

Brass Tacks

Creating a state machine in Python is simple, since Python treats functions as first-class objects that can be passed around. The statemachine.py implementation in Mertz’s article was done in just a few lines of code.
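His StateMachine class is not reproduced here, but a sketch consistent with how it is used below (adapted from Mertz’s design, not his verbatim code) would be:

class StateMachine:
    def __init__ (self):
        self.handlers = {}
        self.start_state = None
        self.end_states = []

    def add_state (self, name, handler, end_state=0):
        self.handlers[name] = handler
        if end_state:
            self.end_states.append(name)

    def set_start (self, name):
        self.start_state = name

    def run (self, cargo):
        handler = self.handlers[self.start_state]
        while 1:
            (new_state, cargo) = handler(cargo)
            if new_state in self.end_states:
                break
            else:
                handler = self.handlers[new_state]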

To handle each SMTP node, I defined a series of functions, one for each server response or command.

Here are the function prototypes, where the cargo parameter is a tuple containing both the stream (from which requests are read and to which responses are written) and a dict of the data collected from the request so far:

def greeting (cargo):
def helo (cargo):
def mail (cargo):
def rcpt (cargo):
def data (cargo):
def process (cargo):

The state machine is defined within the SMTPRequestHandler class like this:

class SMTPRequestHandler (SocketServer.StreamRequestHandler):
    def handle (self):
        try:
            m = StateMachine()
            m.add_state('greeting', greeting)
            m.add_state('helo', helo)
            m.add_state('mail', mail)
            m.add_state('rcpt', rcpt)
            m.add_state('data', data)
            m.add_state('process', process)
            m.add_state('done', None, end_state=1)
            m.set_start('greeting')
            m.run((self, {}))
        except Exception, e:
            print e

So that each function knows how to recognize its assigned command, I defined and compiled these regular expressions. They are created as globals, since it’s more efficient to compile them once and have each subsequent call reuse the existing objects.

import re
helo_pattern = re.compile('^HELO', re.IGNORECASE)
ehlo_pattern = re.compile('^EHLO', re.IGNORECASE)
mail_pattern = re.compile('^MAIL', re.IGNORECASE)
rcpt_pattern = re.compile('^RCPT', re.IGNORECASE)
data_pattern = re.compile('^DATA', re.IGNORECASE)
end_pattern = re.compile(r'^\.$') # the lone "." that terminates the DATA block

The greeting() function, which begins the interaction with the client, sends a simple message and passes control to the helo() function. It looks like this:

def greeting (cargo):
    stream = cargo[0]
    stream.wfile.write('220 localhost SMTP'+cr_lf)
    return ('helo', cargo)

Later in the sequence, the mail() function is the first node to collect data from the client (in this case, the email address of the sender) and save it in the cargo’s dict. It looks like this:

def mail (cargo):
    stream = cargo[0]
    client_msg = stream.rfile.readline()
    if mail_pattern.search(client_msg):
        sender = get_email_address(client_msg)
        if sender is None:
            stream.wfile.write(bad_request+cr_lf)
            return ('done', cargo)
        else:
            email_data = cargo[1]
            email_data['sender'] = sender
            return ('rcpt', (stream, email_data))
    else:
        stream.wfile.write(bad_request+cr_lf)
        return ('done', cargo)        

Here, if the request is not recognized or invalid, the client sees the bad_request message, and the connection is closed, since control passes to the done end-state.

I followed Tyma’s example and defined bad_request as “550 No such user” (which, as he notes, is ironic, since Mailinator accepts email sent to any user).

It also doesn’t conform to the protocol, since I’m supposed to give different error messages at different nodes, but since clients are always disconnected after any type of invalid request, it hardly matters what they see in that scenario.
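The rcpt() and data() nodes follow the same pattern as mail(); as one example (a sketch, not the original post’s code), data() might collect everything after the DATA command until the terminating “.” line, then hand control to process():

def data (cargo):
    stream = cargo[0]
    client_msg = stream.rfile.readline()
    if data_pattern.search(client_msg):
        # per the protocol, tell the client how to terminate the message body
        stream.wfile.write('354 End data with <CR><LF>.<CR><LF>'+cr_lf)
        lines = []
        while 1:
            line = stream.rfile.readline()
            if end_pattern.search(line.strip()):
                break
            lines.append(line)
        email_data = cargo[1]
        email_data['data'] = ''.join(lines)
        return ('process', (stream, email_data))
    else:
        stream.wfile.write(bad_request+cr_lf)
        return ('done', cargo)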

If a client is well-behaved, the final method called is process(), which decides what to do with the client’s email. By then the cargo’s data dict will contain three keys: ‘sender’ (the email address of the sender), ‘recipients’ (a list of email addresses), and ‘data’ (the contents that followed the DATA command, up to the final ‘.’).

def process (cargo):
    email_data = cargo[1]
    # do something with the email_data dict here
    return ('done', cargo)

Basically, this is where the data can be saved to disk/db (so that it can be served by a web browser later, e.g.), MIME-parsed (to remove attachments, etc.), or just trashed (if you have reason to believe the sender is a spambot or zombie network, e.g.).
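As one illustration (a sketch; the mailbox folder and file naming are assumptions, not part of the original), process() could parse the message with the standard library and write it to a per-recipient folder on disk:

import email, os

def process (cargo):
    email_data = cargo[1]
    # parse the raw message text collected by the data() node
    msg = email.message_from_string(email_data['data'])
    # write the original message to a folder named after each recipient,
    # so that a web front-end could serve it later
    for recipient in email_data['recipients']:
        folder = os.path.join('/tmp/mailboxes', recipient)
        if not os.path.exists(folder):
            os.makedirs(folder)
        message_file = os.path.join(folder, msg.get('Message-Id', 'no-id').strip('<>') + '.txt')
        f = open(message_file, 'w')
        f.write(email_data['data'])
        f.close()
    return ('done', cargo)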

Tyma describes various measures for dealing with attacks from spambots and zombies which I haven’t implemented here, but would be relatively easy to add to both the data() and process() functions.

Obtaining the ip address of the client is done using the stream.client_address[0] attribute.