The joy of top-down rendering.


I want to present data, ideally as view = render(data).


I really like the view mechanics provided by choo/yo-yo/bel.

const html = require('bel')
const nanobus = require('nanobus')
const yo = require('yo-yo')

const bus = nanobus()
const render = yo.update.bind(yo, document.body)
const emit = bus.emit.bind(bus)

bus.on('change', (name) => {
  const state = { name: name.toUpperCase() }
  render(view(state, emit))

function view(state, emit){
  return html`<body>
    Hello, <input value="${ || ''}" placeholder="name" onkeyup=${onKeyUp}>
  </body>`
  function onKeyUp(e){
    emit('change', e.target.value)

render(view({ name: '' }, emit))

Object path


I want to reduce conditional assignment when setting nested keys in an object, ideally:

set(obj, 'a/b/c', value) // obj --> {a: {b: {c: value}}}

This is handy for data manipulation and abstracting path-based tools like LevelDB and Firebase Realtime Database.


Use object-path or lodash’s set/get.

Note: the tools mentioned above interpret numeric path segments as array indices, which may cause unexpected results when inserting arbitrary values, eg:

set(store, 'users.5', 'Kwan') // store.users.length --> 6

If this is an issue, consider:

function set(obj, path, val){
  path.split('/').reduce((parent, key, i, keys) => {
    if (i === keys.length - 1) {
      parent[key] = val
    } else if (typeof parent[key] !== 'object' || parent[key] === null) {
      parent[key] = {}
    }
    return parent[key]
  }, obj)
  return obj
}

function get(obj, path){
  return path.split('/').reduce((parent, key) => {
    return typeof parent === 'object' && parent !== null ? parent[key] : undefined
  }, obj)
}


Inverting an object:

const posts = {1: {tags: {sports: true, news: true}}, 2: {tags: {news: true}}}
const byTag = {}
Object.entries(posts).forEach(([id, post]) => {
  Object.keys(post.tags).forEach(tag => {
    set(byTag, `${tag}/${id}`, true)
  })
})
// byTag --> { sports: { '1': true }, news: { '1': true, '2': true } }

Creating and querying a prefix tree:

const flatten = require('flat')

// populate tree
const emojis = {
  '🙂': 'smile',
  '😀': 'grinning',
  '😁': 'grin'
}
const tree = {}
Object.entries(emojis).forEach(([emoji, name]) => {
  let path = name.split('').join('/') + '/' + emoji
  set(tree, path, true)
})

// lookup prefix
const prefix = 'g'
const path = prefix.split('').join('/')
const subtree = get(tree, path) || {}
const matches = Object.entries(flatten(subtree)).map(([key, val]) => {
  return key.slice(-2)
})
console.log(matches) // --> ["😀", "😁"]

Praise for the humble bus 🚌


This is a stream-of-consciousness gush for a pattern I like. I start by stating some things I like, follow with a pattern that produces those things, and then attempt to state the problem being solved (in case other folks like me appreciate a problem statement).

I’m a fan of the unidirectional event flow first brought to my attention by React/Redux. Prakhar mentioned this is also called the yo-yo pattern (events bubble up, views render down). yo-yo.js provides a delightfully simple implementation. choo completes the yo-yo pattern by building on yo-yo.js and injecting an event bus into the view renderer.

Slightly related, I’m also enamored by the notion of an append-only log, reverently described by Jay Kreps and Martin Kleppmann in The Log and Turning the database inside-out with Apache Samza, respectively. Kleppmann provides additional, wonderful context in Designing Data-Intensive Applications.

In my experience, event logging from a client can be tricky to maintain. A couple helpful patterns: enable stdout-logging close to the event source, and explicitly enumerate events.


In this context, I’ve developed deep appreciation for the simple pubsub pattern, and the notion of an "event bus" through which published events flow to subscribers. Although busses and logs (and indices) frequently appear together, the bus seems most primitive.

This pattern is nothing new, but here’s a simplistic implementation I find easy to reason about:

protocol Event {}
struct LikeEvent : Event {}

protocol Subscriber {
  func onEvent(event: Event)
}

class StdoutSubscriber : Subscriber {
  func onEvent(event: Event) {
    print(event)
  }
}

class Bus {
  var subscribers: [String: Subscriber] = [:]
  func sub(_ subscriber: Subscriber){
    subscribers[key(subscriber)] = subscriber
  }
  func unsub(_ subscriber: Subscriber){
    subscribers[key(subscriber)] = nil
  }
  func pub(_ event: Event){
    for subscriber in subscribers.values {
      subscriber.onEvent(event: event)
    }
  }
  func key(_ subscriber: Subscriber) -> String {
    return String(describing: type(of: subscriber))
  }
}

let bus = Bus()
bus.sub(StdoutSubscriber())
// ... on "like" button tap
bus.pub(LikeEvent())

Events are first-class in Node, so an easy equivalent to the above would be:

var EventEmitter = require('events')
var bus = new EventEmitter()
function stdoutSubscriber(event){
  console.log(event)
}
bus.on('event', stdoutSubscriber)
bus.emit('event', 'like')


Given all the above, I think the problem I find the bus solving is: reduce complexity in a distributed system by allowing event sources to publish, and event processors to subscribe, as plainly as possible.


I think decoupling event production from processing does have a cost. We lose locality, which complicates reasoning. In cases where production/consumption can be colocated, eg async operations on a thread that’s safe to block (Finagle’s use of Scala’s composable futures is a great example), I think it’s worth considering.


Node’s event emitter supports the notion of a "channel". Kafka calls them "topics". This concept reminds me of Objective C’s KVO, and Firebase’s realtime database, which allow me to subscribe to the stream of changes for a given "key" (or "path").

cross-domain ajax with easyXDM

While hacking around with easyXDM recently, I learned a few things I thought were worth noting/sharing. I wanted to replace something like a jQuery ajax call, eg
$.ajax({"url":"http://localhost/resource.json", "success":function(data){...}})
with a cross-domain equivalent, but it wasn’t immediately obvious where/how easyXDM would fit in. It was all in the documentation (see the code sample in the shipped /cors/ interface section of the readme file), but not phrased in the way I expected.  Here are the steps I went through to get it working:

  1. Upload the src/cors/index.html easyXDM support file to the domain I wanted to make available to cross-domain requests. For example, I wanted localhost to be the provider of data, so I made the file available at http://localhost/easyXDM/src/cors/index.html.
  2. Edit src/cors/index.html file to set useAccessControl to false, eg var useAccessControl = false;. The code comments state that this stops the code from using response headers to determine access control.  Setting this to false seems like a bad idea, but it’s what I had to do to get it working. I definitely want to learn more about how to establish access control safely.
  3. Edit src/cors/index.html file to pull easyXDM.debug.js and json2.js from the provider’s domain
  4. Wherever I wanted to make an ajax call, I needed to include easyXDM.debug.js and json2.js, and then drop in this code:
  var rpc = new easyXDM.Rpc({
      remote: "http://localhost/easyXDM/src/cors/index.html"
  }, {
      remote: {
          request: {}
      }
  });

  rpc.request({
      url: "http://localhost/resource.json",
      method: "GET"
  }, function(response){
      console.log(JSON.parse(;
  });

Here are some resources I found helpful:

To conclude, if you’d like to learn more about honey badgers, and you don’t mind profanity, this is worth watching:

children of the node

Suppose you’d like to traverse through a family tree in JavaScript, printing each generation of children on a single line. Why? Who knows, but let’s suppose you’re so possessed by the idea that you’re losing sleep over it.

The tree looks like this:



          0
       /     \
      1       2
     / \       \
    3   4       6


The correct output would look like this:


0

1 2

3 4 6

This sounds a lot like a breadth-first search to me, but let’s forget for a moment that Wikipedia exists and think through this.

A verbal walk-through of the problem might sound like this:

  1. Visit root node
  2. Print root node and <br> tag
  3. Print all the children of the root node and another <br> tag
  4. Print all the children of each child node and another <br> tag …

Ok, that’s a mess.  Seems like recursion might help simplify things, but then I’d end up with a stack-based traversal due to the call stack, an idea I find amazing.  But what I want is something more like a queue; first in, first out; root in, root out, children in children out, children’s children in, children’s children out … breath in, breath out.  I feel like I’m in yoga class.  So soothing.  Here’s a Buddha by a koi pond:

"goldie the fish is blessed by the garden buddha"
golden buddha by Paul Moody

Verbal walk through part deux:

  1. Enqueue root node
  2. Dequeue node, print, and enqueue each child of the node
  3. Repeat from step 2

Suppose we have a queue, Q.  We can depict the tree, T, in code like this:

var T = [
    { left: 1, right: 2 },
    { left: 3, right: 4 },
    { left: null, right: 6 },
    { left: null, right: null },
    { left: null, right: null },
    { left: null, right: null },
    { left: null, right: null }
];

Following the second approach, we’d get

  1. Q = [node 0]
  2. “node 0”, Q = [node 1, node 2]
  3. “node 1”, Q = [node 2, node 3, node 4]
  4. “node 2”, Q = [node 3, node 4, node 6]
  5. “node 3”, Q = [node 4, node 6]
  6. “node 4”, Q = [node 6]
  7. “node 6”

which would be correct, but the line breaks are off.  We need to print all the children of a generation before printing a line break.

Verbal walk through take three:

  1. Enqueue root node
  2. While there are nodes in the queue, dequeue node, print node, and enqueue children of the node
  3. Print a line break
  4. Repeat from step 2

Following the third approach, we’d get

  1. Q = [node 0]
  2. “node 0”, Q = [node 1, node 2]
  3. “node 1 node 2”, Q = [node 3, node 4, node 6]
  4. “node 3 node 4 node 6”, Q = []

That’s it!  Nice.  Here’s some code:

function printTree(tree){

    var queue = [];

    // enqueue root
    queue.push( 0 );

    do {

        var len = queue.length;

        // for each node in the queue
        for( var i = 0; i < len; i++ ){
            // dequeue
            var index = queue.shift();

            // print node
            document.writeln( index );

            var node = tree[ index ];

            // enqueue children of the node
            if( node.left !== null ) {
                queue.push( node.left );
            }
            if( node.right !== null ) {
                queue.push( node.right );
            }
        }

        // print a line break
        document.writeln( '<br>' );

    // repeat
    } while( 0 !== queue.length );
}

// run it
printTree( T );
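
document.writeln only exists in the browser; a Node-runnable version of the same traversal, collecting each generation into its own line, might look like:

```javascript
var T = [
  { left: 1, right: 2 },
  { left: 3, right: 4 },
  { left: null, right: 6 },
  { left: null, right: null },
  { left: null, right: null },
  { left: null, right: null },
  { left: null, right: null }
]

function treeLines(tree){
  var lines = []
  var queue = [0] // enqueue root

  do {
    var generation = []
    var len = queue.length

    // dequeue every node currently in the queue; these are one generation
    for (var i = 0; i < len; i++) {
      var index = queue.shift()
      generation.push(index)

      var node = tree[index]
      if (node.left !== null) queue.push(node.left)
      if (node.right !== null) queue.push(node.right)
    }

    lines.push(generation.join(' '))
  } while (queue.length !== 0)

  return lines
}

console.log(treeLines(T).join('\n'))
// 0
// 1 2
// 3 4 6
```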

re-scoping javascript callback functions using call and apply

jQuery assigns the this object in an event handler to the element that the event handler is bound to. For example, the following will log the element with the id arbitrary-id:

$("#arbitrary-id").click(function (e) { console.log(this); });

But what if we don’t want this to be bound to the element we just clicked on? We can get the element anyway by referring to e.currentTarget. What if we want to assign this to an arbitrary element?

Develop a function, accessible as a method on any function, that will allow us to set the element this refers to inside the function. For example, this code will log the element with the id my-foo-div:

$("#arbitrary-id").click(function (e) { console.log(this); }.bind( $("#my-foo-div") ));

We need this method to be available on any function. We can add it to the Function prototype to accomplish this.

As an aside, why do we all cringe whenever anyone mentions doing anything to an object’s prototype? Are there really no safe use-cases? It seems like too powerful a tool to ignore completely.

But anyway, here’s the start of a method definition:

Function.prototype.bind = function (element) { console.log(element); };

We can call it like this:

( function () {} ).bind( $("#arbitrary-id") );

How to define the this object inside the callback function called by the jQuery click handler?  Hmm … Well, the callback function is just a function and we did just add a bind method to all functions, so we can start there. What if we just assign this to the element passed in, like so:

Function.prototype.bind = function (element) { this = element; console.log(this); };

Okay, that threw “ReferenceError: Invalid left-hand side in assignment”, so let’s try the call method, which exists to set the scope of a function at the time it’s called.

Function.prototype.bind = function (element) { console.log(this); };
( function () { console.log( this ); } ).call( $("#my-foo-div") ).bind( $("#my-foo-div") );

Things are starting to get seriously weird, and it looks like call doesn’t return a function anyway, which means we can’t chain bind to it. What are we doing again? Oh, yeah, basically, we want to be able to call call on a function, but have the function be runnable later as a callback, something like a deferred call.

We can at least get bind to return a function, so the function’s callable later:

Function.prototype.bind = function (element) { return function () { console.log(element); } };

Now if I run $("#arbitrary-id").click(function (e) { console.log(this); }.bind( $("#my-foo-div") ) );, the callback logs the element with id my-foo-div. So, I have a reference to my-foo-div inside the callback for the event handler attached to arbitrary-id. I think this is progress. I just need to get the callback function to run in the scope of my-foo-div and I think I can do this with call.

Function.prototype.bind = function (element) { return function () { element ); } };

I’m getting an error “Uncaught TypeError: Object # has no method ‘call'”, which makes sense because jQuery is still setting this to the element that the event handler is attached to, which is the whole point of this exercise, but we’re close! I need to get this to refer to the callback function itself, not a dom element. Inside the bind function definition, this refers to the caller, which is the callback function. Let’s cache this in the scope of the callback function, and then refer to the cached object inside the returned function:

Function.prototype.bind = function (element) { var that = this; return function () { console.log( that ); } };

Nice! When I click on the arbitrary-id element, I see “function (e) { console.log(this); }”, which is my stringified callback function, in the log. So now I just need to call that function via the call method, passing in the element I want to bind the scope to:

Function.prototype.bind = function (element) { var that = this; return function () { element ); } };

Yay! It works. Before calling it quits, I’d like to be able to pass arguments to the callback. Fortunately, call‘s cousin apply and the native arguments object make this easy:

Function.prototype.bind = function (element) { var that = this; return function () { that.apply( element, arguments ); } };

This calls for an image to chill out to. Here’s a picture of a toucan doing his thing:

Toucan Three by Rhea Monique

Getting started with Watchr (and trying again to install Node.js on Mac 10.6.4)

I recently started exploring testing options for Node.js. Yesterday, I wrote about my experiences with nodeunit. Today, I found Christian Johansen’s blog post Unit testing node.js apps. (Thanks for the write-up, Christian!) Although I was looking for unit testing options, what really got me excited was his mention of Watchr.

Watchr provides a way to run tests automatically in response to system events, e.g., when a file is saved, much like Autotest. I had fallen in love with Autotest’s functionality after learning about it in Michael Hartl’s nice Ruby on Rails tutorial. According to Watchr’s docs, Autotest leaves something to be desired, but in any case I very much would like my tests to run without my having to think about it.

Git-ting (ha!) Watchr was easy enough, but to run Node tests on my Mac (an idea I’m hung up on for some reason) I need Node, and to date I haven’t been able to build Node on my Mac (10.6.4) successfully. So that’s my challenge. After searching here and there, I found an archived thread from the Node mailing list that seemed promising. It mentions that MacPorts can break if you upgrade to Snow Leopard without upgrading MacPorts (which I had done), and that this can prevent Node from compiling. After clicking through to the MacPorts migration docs, I followed the steps outlined there and was able to build Node like this:

  1. I had tried and failed to build Node multiple times, so I blew away the build directory: rm -rf build
  2. ./configure
  3. Clean things up to be thorough: make clean
  4. make
  5. Run tests just in case: make test
  6. sudo make install

Ok, on to the testing. Here’s my folder structure:

    – autotest.watchr
    – lib/
      – example.js
    – test/
       – test_example.js

My autotest.watchr file is a blend of the one on Christian’s blog, and Watchr’s tests.watchr prepackaged script. It contains

watch( 'test/test_.*\.js' )  {|md| system("node #{md[0]}") }
watch( 'lib/(.*)\.js' )      {|md| system("node test/test_#{md[1]}.js") }

# --------------------------------------------------
# Signal Handling
# --------------------------------------------------
# Ctrl-\
Signal.trap('QUIT') do
  puts " --- Running all tests ---\n\n"
  # run every test file
  Dir['test/test_*.js'].each {|file| system("node #{file}") }
end

# Ctrl-C
Signal.trap('INT') { abort("\n") }

example.js contains = 'bar';

test_example.js contains

var assert = require("assert");
var example = require('../lib/example');

assert.strictEqual(, 'bar', 'var foo should be "bar"');

I fire up watchr like this: watchr autotest.watchr

Watchr then captures the terminal until I enter Ctrl+C. Saving either example.js or test_example.js causes test_example.js to run. At this point the tests are crude, so my output is nothing if the test passes, or an assertion error, e.g., “AssertionError: var foo should be “bar””, if the test fails.

I think this is a good start. Time to listen to some Bonobo and call it a day.

Getting started with unit testing for Node.js

I’m diving into unit testing with Node.js, and my first stop is nodeunit. Luckily, Caolan McMahon wrote an excellent introduction to nodeunit on his blog. Thanks, Caolan.

I installed nodeunit via npm no problem: npm install nodeunit

All the examples in the Installing nodeunit section worked fine, but I needed to add
var events = require('events');
to the first code sample in the Testing asynchronous code section to get those tests to pass. So, the top of my test-doubled.js file looks like:

var doubled = require('../lib/doubled');
var events = require('events');

Farther down in the blog post, in the Shared state and sequential testing section, there’s a code sample with the events include in it, so I think I’m on the right track.

In the Test cases, setUp and tearDown section, I had trouble getting the tests to run. After referencing the project’s README file, I tried adding a callback arg to setUp() and tearDown(), and calling the callback, which worked. So, my code looks like:


var testCase = require('nodeunit').testCase;

module.exports = testCase({
    setUp: function (callback) {
        this._openStdin = process.openStdin;
        this._log = console.log;
        this._calculate = doubled.calculate;
        this._exit = process.exit;

        var ev = this.ev = new events.EventEmitter();
        process.openStdin = function () { return ev; };

        callback();
    },
    tearDown: function (callback) {
        // reset all the overridden functions:
        process.openStdin = this._openStdin;
        process.exit = this._exit;
        doubled.calculate = this._calculate;
        console.log = this._log;

        callback();
    },
    // ... test functions go here ...
});

With the minor tweaks above, I was able to get all tests to pass:
Screenshot of all tests passing


Notes from Ryan Dahl’s talk “On Node.js” at Cloudstock

Here are my notes from Ryan Dahl’s talk “On Node.js” at Salesforce’s Cloudstock event on 12/6:

  • Google put a lot of thought into v8 performance
  • Node is a set of bindings to v8 to allow js to do non-browser things
  • “I/O needs to be done differently”
  • There’s a big difference between dealing w/ stuff from the cache vs from a network.
  • nginx is just 3x better than apache in terms of # concurrent clients x # req/sec, but nginx’s mem usage is nearly constant vs apache’s steep curve towards 40 mb
  • apache uses a thread for each request, whereas nginx uses a single thread w/ an event loop
  • it’s well known that you need to use an event loop if you want to go crazy w/ concurrency
  • but even w/ an event loop, you pay dearly for blocking processes
  • we should be writing all of our i/o using non-blocking calls
  • sleep() is a blocking operation
  • Why “transfer encoding: chunked” by default? because we don’t know the full size of the response, and we can start returning immediately
  • each connection costs 1-2 kb minimum
  • random, humorous paraphrase: … !==, not !=, i hate javascript. Coding on stage is so difficult …
  • questions
    • when is node going to become stable? we have a stable branch (0.2), and 0.3 branch will break backwards compatibility. I want this to be like awk. I want it to be a unix util
    • how’s the hosting landscape looking? joyent is working on a special service for node, which is in its beta. heroku has some stuff. you can always use a general vps w/ an ops layer like monit.
    • interesting applications? node is good for realtime, massive concurrency sorts of things. is a bot on an irc room which geolocates chats btwn people in a room.
    • what do you think about express? express is a web framework for node. it looks cool
    • what do you think about node being tied to v8? untying it doesn’t make sense at this point, but I’m happy w/ v8.

playing with e4x in firefox: loading arbitrary xml as an e4x-ready object

prereq: firefox w/ firebug installed and a server running php w/ simplexml
1) put this code in a php file
2) upload this file to your server
3) run it in firefox
4) look for output in console
var xml = new XML(<?php
	$url = '';
	$sxml = simplexml_load_file($url);
	//strip off the xml declaration because the javascript XML() object expects raw xml
	echo str_replace('<?xml version="1.0"?>', '', $sxml->asXML());
?>);
console.log(xml);