cross-domain ajax with easyXDM

While hacking around with easyXDM recently, I learned a few things I thought were worth noting/sharing. I wanted to replace something like a jQuery ajax call, eg
$.ajax({"url":"http://localhost/resource.json", "success":function(data){...}})
with a cross-domain equivalent, but it wasn’t immediately obvious where/how easyXDM would fit in. It was all in the documentation (see the code sample in the shipped /cors/ interface section of the readme file), but not phrased in the way I expected.  Here are the steps I went through to get it working:

  1. Upload the src/cors/index.html easyXDM support file to the domain I wanted to make available to cross-domain requests. For example, I wanted localhost to be the provider of data, so I made the file available at http://localhost/easyXDM/src/cors/index.html.
  2. Edit src/cors/index.html file to set useAccessControl to false, eg var useAccessControl = false;. The code comments state that this stops the code from using response headers to determine access control.  Setting this to false seems like a bad idea, but it’s what I had to do to get it working. I definitely want to learn more about how to establish access control safely.
  3. Edit src/cors/index.html to pull easyXDM.debug.js and json2.js from the provider’s domain.
  4. Wherever I wanted to make an ajax call, I needed to include easyXDM.debug.js and json2.js, and then drop in this code:
  var rpc = new easyXDM.Rpc({
      remote: "http://localhost/easyXDM/src/cors/index.html"
  }, {
      remote: {
          request: {}
      }
  });

  rpc.request({
      url: "http://localhost/resource.json",
      method: "GET"
  }, function(response){
      // the response body arrives as a string in
      var data = JSON.parse(;
      // ...the equivalent of the jQuery success callback goes here
  });

Here are some resources I found helpful:

To conclude, if you’d like to learn more about honey badgers, and you don’t mind profanity, this is worth watching:

Running YETI tests automatically with Watchr

YETI (YUI Easy Testing Interface) provides an easy, automated way to run YUI 3 tests. Watchr provides an easy way to run arbitrary Ruby based on file system events. Putting the two together, we get an easy way to run YETI when a YUI 3 script is saved.


Environment:

  • Mac OS X 10.6.4.  What follows may work elsewhere, but I haven’t tried it yet

Set up:

  1. Install Watchr. Please refer to the readme file in Watchr’s github repository for installation instructions. I wrote a post the other day about getting started with Watchr on Mac 10.6.4.
  2. Install Node.js.  YETI requires Node.js.  Please refer to the Node.js documentation for downloading and building Node
  3. Install npm.  YETI can be installed easily with npm.  Please refer to the readme file in npm’s github repository for installation instructions.  The Joyent blog also has an informative post on Installing Node and npm.
  4. Install YETI: npm install yeti
  5. Create the following directories: test and lib.  These directory names are completely arbitrary, but they match the watchr script introduced below.  If you want to use different names, please update the watchr script accordingly
  6. Create a file called autotest.watchr and put the following ruby into it:
  7. Create a file called test_example.html in the test directory and put the following html in it:
  8. Create one last file called example.js in the lib directory and put the following javascript in it:

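For reference, here’s a sketch of what the autotest.watchr script in step 6 could contain, reconstructed from the test/lib naming convention this post relies on (it assumes the yeti binary is on your PATH):

```ruby
# autotest.watchr -- a sketch, not necessarily the original script.
# Run a test page through YETI when it changes, and run the matching
# test page when a lib file changes.
watch( 'test/test_.*\.html' ) {|md| system("yeti #{md[0]}") }
watch( 'lib/(.*)\.js' )       {|md| system("yeti test/test_#{md[1]}.html") }
```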
You should now have a file structure like this:

    – autotest.watchr
    – lib/
      – example.js
    – test/
      – test_example.html

Let’s run this rig:

  1. In your terminal, launch Watchr: watchr autotest.watchr
  2. Edit /lib/example.js so Y.example is no longer set to "foo", e.g., Y.example = "bar";
  3. Save /lib/example.js and view your terminal.  You should see YETI’s output of the failing test results
    Screen shot of YUI test failure
  4. Edit /lib/example.js resetting Y.example to "foo", save, and note YETI’s output of the successful test results
    Screen shot of YETI output showing YUI tests passing
  5. Kill watchr (when you’re ready): Ctrl+C

Going forward:

Using the autotest.watchr script above, any file named /test/test_{lib name}.html will be run when /lib/{lib name}.js is edited.  The test file will also be run when it is edited.  If you add a new lib but do not define a corresponding test file, watchr will fail silently; likewise if you add a test file but don’t put YUI tests in it.  In short, add libs and YUI tests in pairs, and you’re all good.

In closing, here’s one of my favorite songs from Drive Like Jehu:

Getting started with Watchr (and trying again to install Node.js on Mac 10.6.4)

I recently started exploring testing options for Node.js. Yesterday, I wrote about my experiences with nodeunit. Today, I found Christian Johansen’s blog post Unit testing node.js apps. (Thanks for the write-up, Christian!) Although I was looking for unit testing options, what really got me excited was his mention of Watchr.

Watchr provides a way to run tests automatically in response to file system events, e.g., when a file is saved, much like Autotest. I had fallen in love with Autotest’s functionality after learning about it in Michael Hartl’s nice Ruby on Rails tutorial. According to Watchr’s docs, Autotest leaves something to be desired, but in any case I very much want my tests to run without my having to think about it.

Git-ting (ha!) Watchr was easy enough, but to run Node tests on my Mac (an idea I’m hung up on for some reason) I need Node, and to date I haven’t been able to build Node on my Mac (10.6.4) successfully. So this was my challenge. After searching here and there, I found an archived thread from the Node mailing list that seemed promising. It mentions that MacPorts can break if you upgrade to Snow Leopard without upgrading MacPorts, which I had done, and that this can prevent Node from compiling. After clicking through to the MacPorts migration docs and following the steps outlined there, I was able to build Node like this:

  1. I had tried and failed to build Node multiple times, so I blew away the build directory: rm -rf build
  2. ./configure
  3. Clean things up to be thorough: make clean
  4. make
  5. Run tests just in case: make test
  6. sudo make install

Ok, on to the testing. Here’s my folder structure:

    – autotest.watchr
    – lib/
      – example.js
    – test/
      – test_example.js

My autotest.watchr file is a blend of the one on Christian’s blog, and Watchr’s tests.watchr prepackaged script. It contains

watch( 'test/test_.*\.js' )  {|md| system("node #{md[0]}") }
watch( 'lib/(.*)\.js' )      {|md| system("node test/test_#{md[1]}.js") }

# --------------------------------------------------
# Signal Handling
# --------------------------------------------------
# Ctrl-\ runs every test file
Signal.trap('QUIT') do
  puts " --- Running all tests ---\n\n"
  Dir['test/test_*.js'].each { |file| system("node #{file}") }
end

# Ctrl-C
Signal.trap('INT') { abort("\n") }

example.js contains

exports.foo = 'bar';

test_example.js contains

var assert = require("assert");
var example = require('../lib/example');

assert.strictEqual(, 'bar', 'var foo should be "bar"');

I fire up watchr like this: watchr autotest.watchr

Watchr then captures the terminal until I enter Ctrl+C. Saving either example.js or test_example.js causes test_example.js to run. At this point the tests are crude, so my output is nothing if the test passes, or an assertion error (e.g., AssertionError: var foo should be "bar") if the test fails.

I think this is a good start. Time to listen to some Bonobo and call it a day.

Installing Nginx on Ubuntu 10.04

I want to install Nginx on an Ubuntu 10.04 64-bit server. Luckily, someone named Sam Kleinman put together a great tutorial (“Host Websites with nginx on Ubuntu 10.04 LTS (Lucid)”) over on Linode’s site.  Following his instructions, I was able to install Nginx without a hitch.  That was easy.  Thanks, Sam!

To keep things light, I like to wrap up tech posts w/ non-tech content.  Here’s a video of one of my favorite artists, El Mac, painting a mural freehand(!) with another artist, Kofie.

generating webdav propfind xml from yql

E4X support makes YQL a great XML-generation engine. Here’s some code to create the response xml for a WebDAV PROPFIND request for a directory called webdav containing an empty file called foo.txt.

Note: to initially get a handle on what XML WebDAV outputs, I turned on WebDAV support in apache and made a curl request to it like this:
curl -X PROPFIND --header "Depth:1" {user}:{pass}@{your ip address}/webdav/

You can run the code below in the YQL console.

<?xml version="1.0" encoding="UTF-8"?>
<table xmlns="">
        <meta>
                <author>Erik Eldridge</author>
        </meta>
        <bindings>
                <select produces="XML">
                        <inputs>
                                <key id="method" type="xs:string" paramType="variable"/>
                                <key id="path" type="xs:string" paramType="variable"/>
                        </inputs>
                        <execute><![CDATA[
                                response.object = function () {
                                    var xml = <D:multistatus xmlns:D="DAV:">
                                        <D:response xmlns:lp1="DAV:" xmlns:lp2="">
                                            <D:href>/webdav/</D:href>
                                            <D:propstat>
                                                <D:prop>
                                                    <lp1:getlastmodified>Sat, 02 Jan 2010 19:43:01 GMT</lp1:getlastmodified>
                                                </D:prop>
                                                <D:status>HTTP/1.1 200 OK</D:status>
                                            </D:propstat>
                                        </D:response>
                                        <D:response xmlns:lp1="DAV:" xmlns:lp2="">
                                            <D:href>/webdav/foo.txt</D:href>
                                            <D:propstat>
                                                <D:prop>
                                                    <lp1:getlastmodified>Sat, 02 Jan 2010 19:43:01 GMT</lp1:getlastmodified>
                                                </D:prop>
                                                <D:status>HTTP/1.1 200 OK</D:status>
                                            </D:propstat>
                                        </D:response>
                                    </D:multistatus>;
                                    return xml;
                                }();
                        ]]></execute>
                </select>
        </bindings>
</table>

standard stack v1: git


we’ll use git to facilitate the process of pushing code to the vm.  because there’s a cardinal rule about not serving files from a repo, we’ll need to create a bare hub repo and use a git hook to update the web root when code is pushed to it.  i’m using the terms hub and prime introduced by Joe Maller in his post A web-focused Git workflow.

i don’t have a cool picture of the concept, like Maller did, but here’s one of a cute red panda (credit: tambako) to set the mood before we get started:

ok, here we go:


terms:

  • prime is the copy of the repo accessible by the web server
  • hub is the bare source of truth repository
  • project refers to the prime/hub pair
  • vm is the vmware vm running centos
  • laptop is the development computer you ultimately want to push files from


environment:

  • mac os x 10.5.8
  • vmware 2.0.5
  • centos 5.4
  • git


  1. set up
    1. on the vm, install git as root:
      yum install git
    2. on the vm, create a user to handle git-related activity:
      useradd git
    3. on the vm, get its inet ip address using ifconfig (you’ll need it in the steps below)
    4. on the vm, copy your rsa public key (you’ll be pushing git updates over ssh) from your laptop into the git user’s .ssh/authorized_keys file on the vm
    5. on the vm, make sure the correct permissions are set on the authorized_keys file and .ssh dir:
      chmod 700 /home/git/.ssh; chmod 644 /home/git/.ssh/authorized_keys
    6. on your laptop, run a sanity check by logging into the vm via public key. note: if you’re using an alternate ssh port and/or a different pub key file name, define these in your laptop’s .ssh/config file:
      ssh git@{ip address}
    7. on the vm, in /var/www/, as root, create a directory that git can push content to (note: if the dir isn’t owned by git or isn’t world-writable, git throws an “error: cannot open .git/FETCH_HEAD: Permission denied” error):
      mkdir /var/www/git/; chown git:git /var/www/git/
    8. on the vm, cd into the /var/www/git/ directory and su to the git user:
      cd /var/www/git/; su git
  2. create a new project
    1. on the vm, create a new directory {proj name} for the prime repo and cd into it:
      mkdir proj; cd proj
    2. on the vm, initialize a git repo:
      git init
    3. on the vm, create and add a file so we can clone prime later (git disallows cloning an empty repo):
      touch readme;
      git add readme;
      git commit -m 'initial commit'

      Note: if you haven’t already told git who you are, run:
      git config "{your name}"
      git config "{your email}"
    4. on the vm, define a remote repository for the soon-to-be-created hub:
      git remote add origin /home/git/proj
    5. on the vm, cd into git user’s home directory:
      cd ~
    6. on the vm, create the hub repo by cloning the newly created repo using the --bare flag (that’s a double ‘-‘ before bare). give the clone an explicit target so it lands at /home/git/proj rather than the default proj.git:
      git clone --bare /var/www/git/proj proj
    7. on the vm, create a post-update hook in the hub repo to update the web directory when an update is pushed.  open /home/git/proj/hooks/post-update, make sure it’s executable (chmod +x /home/git/proj/hooks/post-update), and add the following:
              #!/bin/sh
              # jump into the web dir (the prime repo)
              cd /var/www/git/proj
              # w/o this, git throws "fatal: Not a git repository: '.'" error
              unset GIT_DIR
              # pull in the updates
              git pull origin master
  3. start working
    1. on the laptop, open a terminal on whatever machine you’re going to develop on and clone the new hub repo:
      git clone git@{ip address}:proj
    2. on the laptop, edit the readme file in the repo, check in the change and observe in the output the results of the hook-initiated pull
    3. on the laptop, view http://{ip address}/readme to confirm the new code is displaying


setting up nginx and mochiweb on centos 5

  1. Install nginx on centos using cyberciti’s tutorial
  2. update the default iptables config (/etc/sysconfig/iptables) to allow http traffic, then load the new rules with service iptables restart:
    # Firewall configuration written by system-config-securitylevel
    # Manual customization of this file is not recommended.
    *filter
    :INPUT ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    :RH-Firewall-1-INPUT - [0:0]
    -A INPUT -j RH-Firewall-1-INPUT
    -A FORWARD -j RH-Firewall-1-INPUT
    -A RH-Firewall-1-INPUT -i lo -j ACCEPT
    -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
    -A RH-Firewall-1-INPUT -p 50 -j ACCEPT
    -A RH-Firewall-1-INPUT -p 51 -j ACCEPT
    -A RH-Firewall-1-INPUT -p udp --dport 5353 -d -j ACCEPT
    -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
    -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
    -A RH-Firewall-1-INPUT -m tcp -p tcp --dport 80 -j ACCEPT
    -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
    -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
    COMMIT
  3. install mochiweb using BeeBole’s tutorial.  For ease of use while testing, launch the dev server in a separate screen session, as the mochiweb shell will otherwise own the terminal used to launch it, and add the following line to iptables so we can hit the server directly:
    -A RH-Firewall-1-INPUT -m tcp -p tcp --dport 8000 -j ACCEPT # allow access to mochiweb

    Test that mochiweb is available to localhost by running the following from the command line on the server (the dev server listens on port 8000 by default):

    curl http://localhost:8000/

    You should get something back like:

    <title>It Worked</title>
    MochiWeb running.

  4. Configure nginx to proxy api calls to mochiweb.  Put this in /etc/nginx/nginx.conf:
    user              nginx;
    worker_processes  1;
    error_log         /var/log/nginx/error.log;
    pid               /var/run/;
    events {
        worker_connections  1024;
    }
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile        on;
        keepalive_timeout  65;
        include /etc/nginx/conf.d/*.conf;
        server {
            listen       80;
            server_name  localhost;
            location ~ api { # <-- pass requests for 'api...' to mochiweb
                proxy_pass http://;
            }
            location / {
                root   /usr/share/nginx/html;
                index  index.html index.htm;
            }
            error_page  404              /404.html;
            location = /404.html {
                root   /usr/share/nginx/html;
            }
            error_page   500 502 503 504  /50x.html;
            location = /50x.html {
                root   /usr/share/nginx/html;
            }
        }
    }

    As per BeeBole’s tutorial, edit the mochiweb request handler to handle requests for 'api':

    %% @author author <>
    %% @copyright YYYY author.
    %% @doc Web server for myapp.

    -module(myapp_web).
    -author('author <>').

    -export([start/1, stop/0, loop/2]).

    %% External API

    start(Options) ->
        {DocRoot, Options1} = get_option(docroot, Options),
        Loop = fun (Req) ->
                       ?MODULE:loop(Req, DocRoot)
               end,
        mochiweb_http:start([{name, ?MODULE}, {loop, Loop} | Options1]).

    stop() ->

    loop(Req, DocRoot) ->
        "/" ++ Path = Req:get(path),
        case Req:get(method) of
            Method when Method =:= 'GET'; Method =:= 'HEAD' ->
                case Path of
                    "api" -> Req:ok({"text/html", [],["<h1>Congratulation</h1>"]}); % <-- the 'api' request handler
                    _ -> Req:serve_file(Path, DocRoot)
                end;
            'POST' ->
                case Path of
                    _ ->
                        Req:not_found()
                end;
            _ ->
                Req:respond({501, [], []})
        end.

    %% Internal API

    get_option(Option, Options) ->
        {proplists:get_value(Option, Options), proplists:delete(Option, Options)}.

    As per James Gardner’s post Streaming File Upload with Erlang and Mochiweb Multipart Post, rebuild the request handler by running make in the myapp directory. The mochiweb server will automatically restart

  5. confirm the proxy is working by hitting http://domain/ and http://domain/api.  The former should return the nginx install confirmation page, and the latter should return the simple “Congratulation” page.

steps for merging changes from a remote clone of a git repo

I’m a fan of github, but I don’t know how to apply changes made to a clone of my repo, usually announced via a pull request. The goal of this post, then, is to define these steps. Note: the steps below pulled in the changes as desired, but also auto-committed them despite the --no-commit flag, so these steps need refinement.


  • a git repo named origin
  • committer has issued a pull request. For this example, I’ll use a committer named FooBaz


  1. add committer’s repo as a remote
    • copy clone url for pull requester’s repo, eg git://
    • define remote repo: git remote add FooBaz git://
    • view list of remotes as sanity check: git remote show
  2. pull in FooBaz’s changes:
    • run: git pull --no-commit FooBaz master
    • note: this actually committed the changes for me 😐
  3. push changes to origin repo: git push origin master


attempt to restart/stop yaws failed

I’ve got yaws (git hash 5f35f5b7451ea4388c53df9f4e00caad0caa6b45) running on CentOS 5.3.  I just added a virtual host entry in yaws.conf and tried to restart yaws, but the restart failed:

[me@mymachine /]$ sudo etc/init.d/yaws restart
Stopping yaws:                                             [FAILED]

After hunting around a bit, it seems yaws will fail to restart (and stop) if the docroot doesn’t point to an actual directory.  In my case, I added the virtual host entry before actually creating the docroot directory.  Making the directory fixed the problem.