Software engineering notes



This is an organizational pattern I like:

  • 2-5 ppl
  • Cross-functional
  • Focused on a specific goal
  • Weekly demo to squad

I’ve heard this referred to as a “squad”, “swarm”, “e team” and “feature team”.

One of the nice things is the sense of camaraderie from working closely with a small group on a specific goal. Another nice thing is that the group dissolves after the goal is accomplished, giving the members closure and chances to try new things without switching teams. Another benefit is broad awareness of how a system works.

This pattern works well in a larger context:

  • Shared code ownership
  • 10-50 ppl
  • Focused on a specific, but larger goal
  • Fortnightly demos from all squads
  • Shared calendar for all squad coordination meetings

When a squad accomplishes its goal, the members dissolve into the larger group. Individuals can learn about work in the larger group by attending the fortnightly demos and/or sitting in on other squads’ coordination meetings.

For example, a product may be supported by several teams. To avoid exposing the org chart in the product, form a large team whose goal is to make that product excellent. Define a squad for each significant work item. All members of the large group are free to contribute code to the product.

An underlying principle is alignment around customer value over specific products or features. Rather than a group of people owning a code base in perpetuity, regardless of the amount of work required, squads form in response to need.

A counter-example would be several teams supporting a product and that product having disjoint features. Another counter-example would be a large team trying to maintain the interest of all its members in weekly coordination meetings. Yet another counter-example would be lots of individuals working in isolation, especially if they’re doing so to avoid coordination costs.

Written by Erik

September 16, 2020 at 11:39 am

Posted in org, Uncategorized

GitHub Pages redirection



  1. Install the jekyll-redirect-from plugin
  2. Update the front matter on posts to include redirect_to:
  3. Load a post and observe the browser redirect to the new location 👍
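For step 2, the front matter change looks something like this (the title and WordPress URL here are made-up placeholders):

```yaml
---
title: My old post
redirect_to: https://example.wordpress.com/2020/09/13/my-old-post/
---
```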

Problem statement

I was using GitHub Pages (Jekyll) for blogging, but recently switched to WordPress. I didn’t want to break old links, so I needed a way to permanently redirect.

A Stack Overflow answer got me thinking about a meta tag. Is there an efficient way to add this meta tag to posts? Yes. Some old GitHub Enterprise Pages docs recommend using the jekyll-redirect-from plugin. We can confirm it’s supported for non-Enterprise Pages by looking at the list of supported plugins. And I see it works via a meta tag.

Is the redirect permanent? Sort of. The HTTP response from GitHub is 200, but the HTML redirect includes a canonical link, eg:

$ curl -v
< HTTP/2 200 
< server:
< content-type: text/html; charset=utf-8
<!DOCTYPE html>
<html lang="en-US">
  <meta charset="utf-8">
  <link rel="canonical" href="">
  <meta http-equiv="refresh" content="0; url=">
  <meta name="robots" content="noindex">
  <a href="">Click here if you are not redirected.</a>

The redirect page also includes a script tag setting `location.href`, which looks broken out of context, but it’s shorthand for `window.location.href`.

Written by Erik

September 13, 2020 at 10:47 pm

Posted in tool

PlayFab’s LiveOps guide

My experience is largely in features and infrastructure for growing and retaining users, aka “growth”. Recently, I learned the games industry has a comparable concept, “LiveOps”. I’ve found value in using the latter to learn more about the space in general.

PlayFab has an excellent guide to LiveOps. The guide is a brief and accessible reference, so I’ll just jot notes below.


The guide summarizes LiveOps as “Games are shifting from one-off experiences to services that evolve. Developers of successful live games focus on understanding their players, meeting their individual needs, and cultivating long-term relationships”

In growth-speak, I’d phrase this as analytics, personalization and retention. There is some direct association with growth: “We’re investing in games that people play for longer and engage with much more deeply … to drive growth …“

I guess the “live” in LiveOps refers to “services that evolve”: “… live services represented nearly 58% of Electronic Arts’ net revenue …” I can see how this would be a big shift from encoding all logic in a released binary: “save client updates for entirely new features or large assets”

“With a LiveOps game, the real work starts with launch instead of ending there” I think there’s less of a distinction in a non-game app; most apps already pull content from a network.

A summary of LiveOps features:

  • “server-side configurations …”
  • “… content data untethered from client versions …”
  • “… in-depth analytics”

“Content data” refers to “… new experiences and content, thereby extending the lifetime of our games“, which explains the claim that LiveOps can reduce up-front investment.

“… the ‘live’ part of LiveOps goes through three post-launch stages:

  1. Iterating your Game …
  2. Personalizing the Player Experience …
  3. Managing the Community …”

I think all of these apply to apps in general.

I like how the breakdown also indicates infra and talent required in each step:

  1. Iteration requires “build, test-and-deploy pipeline, basic analytics, and content configurations“
  2. Personalization requires “data analysts and product managers to use more sophisticated tools such as recommendation systems and live events managers“
  3. Community management requires “customer support staff, marketing, and community managers … guild systems, user-generated content, and multiplayer services for matchmaking, cross-network play, and communications.”

The guide presents these as sequential steps of maturity. In my experience with growth, 1 and 3 came before 2, since generating per-user state was relatively resource intensive. Also, we could start with relatively naive approaches to 2 and 3, eg friend recommendations by a static topic like “sports”, and then layer on more sophisticated alternatives, eg per-user behavioral predictions.

Connecting to people

LiveOps has a user-centric perspective: “LiveOps developers know that players and communities evolve. When creating a game, we’re not movie directors with a singular vision, but more like TV network program managers … LiveOps games are player-centric and react to player desires and needs …”

I’m a fan of a customer-centric perspective. Differentiating user-centric seems like it should be obvious, but it’s nice to see it emphasized.

My recent experience is in growth as a service, which is why I differentiate “users” from “customers” (customers would be apps that have users/players).


“With LiveOps, acquisition is an ongoing process” I guess this recognizes that people may come and go from a game, although in the terminology I’m familiar with, returning would be “resurrection” or “reactivation”. (“Reactivation” is listed later as an example of acquisition.)

I appreciate the list of common acquisition sources:

  • Store discovery
  • Public relations
  • Advertising
  • Cross-promotion
  • Influencer marketing
  • Social installs, eg shares
  • Reactivation

Helpful tip: “Track player engagement and retention based on source of acquisition and look for trends” Platforms providing acquisition channels should also provide attribution, eg Google Play Store’s campaign attribution passed to Android’s Install Referrer API.

Kind of obvious, but the guide recommends A/B testing reactivation inducements. Later the guide simply recommends testing everything all the time.


Retention is “one of the only data-supported ways to know if players enjoy playing“

Common techniques for increasing retention:

  • Adding content
  • Stickiness due to investment – this comes up later in the “conversion” section
  • Social connections
  • Compelling game mechanics, eg Go has “simple rules that allow for endless new strategies”

Helpful tip: “Try to communicate only what’s interesting and valuable, and mix up rewards so they don’t become background noise” I’ve heard this phrased as “fatigue”. Messaging platforms should provide features to help customers avoid fatiguing users.


The definition of “engagement” or “active” usage is often mysterious to me, so I appreciate the general description: “Active communities engage with a game by playing, providing feedback, and promoting (discussing online or in person, creating fan content, and so on) … Common reporting period intervals include 1-day, 7-day, and 30-day.” An arbitrary post from SurveyMonkey has some context for MAU.

Interesting: “engagement is the only KPI that some studios measure.“

Another relatively obvious tip: “Look at how studios with games like yours engage their community as a baseline for your own engagement efforts”, ie “competitive analysis”. But still, as a general primer, I appreciate the comprehensiveness.


“Your team needs the tools to isolate and identify problems, fix or escalate them … and communicate with players throughout the process.” 👍

Common tools:

  • External-facing ticketing, so internal and external actors can coordinate
  • “ability to look up individual player profiles and make manual changes”. Ideally, a customer can do this themselves.
  • “A way to suspend or ban players”
  • “A way for players to upload crash logs” Seems this could be automatic, eg Crashlytics
  • “Ways to send messages to players” (and customers)

Good tip: “Changes in support contact and resolution rates (e.g. number of support tickets opened and closed) can indicate larger issues.”



I like the list of common metrics:

  • ARPU (Average Revenue Per User) … general business health“. I’m guessing percentiles would be good too
  • ARPPU (Average Revenue Per Paying User) … for gauging monetization strategies, such as store design or DLC“
  • (from the monetization section) “Paying rate is just as important as ARPPU for measuring monetization” I get the impression paying rate refers to the percentage of users who pay for anything
  • Unique Logins …  indicates newly acquired players”
  • Conversion Rate … success at converting free players into paid players.”
  • Retention … how well your game keeps players interested”
  • Average Session Length … how long your gameplay loop stays fun”
  • Session Frequency … how often players engage with the game”
  • LTV (Lifetime Value) … the total number of unique players divided by total revenue generated” (presumably inverted; the usual definition is total revenue divided by unique players)
  • Errors … how stable your game is”
  • Content Logs … popularity, stability, and engagement of specific game content” This seems relatively game-specific, but I guess it could be generalized to feature-specific metrics
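As a sanity check on how the revenue metrics relate, here’s a toy calculation (all numbers invented):

```python
# Toy numbers (invented) illustrating how the revenue metrics relate.
total_revenue = 50_000.0
unique_players = 20_000
paying_players = 1_000

arpu = total_revenue / unique_players          # 2.5, average revenue per user
arppu = total_revenue / paying_players         # 50.0, average revenue per paying user
paying_rate = paying_players / unique_players  # 0.05, share of users who pay

# The three are linked: ARPU == ARPPU * paying rate
assert abs(arppu * paying_rate - arpu) < 1e-9
print(arpu, arppu, paying_rate)  # 2.5 50.0 0.05
```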

Good point: “Some metrics are best reviewed over long periods of time (e.g. Avg. Revenue), while others benefit from constant real-time updates (e.g. Errors)” And this may change over time, eg crash rates while changing a flag value.

Interesting: “Instead of boosting acquisition through marketing or app store advertising, they built traction by focusing on early retention metrics such as daily active users, session length, and crashes“ and “direct player feedback”

Good idea: “implementing direct player feedback through a public Trello community board,  letting users log bugs directly, and holding community votes on what to work on next.“

Good point: “Knowing your retention rate is important, but offers no insight on how to fix it. For that, you need to do a deep drill-down or segment your audience and experiment.”


“Segmenting groups is a necessary step to deliver the best content to the most players”

Good tip: “your analytics toolset should let you define custom segments”

Common use-cases:

  • “Designers segment players based on in-game behavior to understand their needs and develop player-centric content” presumably to increase retention
  • “Monetization teams use segments to understand spending patterns, identify fraudulent behavior, and predict revenue”
  • “Marketers create custom segments and optimize messaging for each to acquire or engage players”

“The most important thing about the testing aspect is the cohort and the segmentation …” 🤔 I guess an example would be identifying a low-spending segment to test a feature to increase spending, as opposed to testing it on everyone, some of whom may already be spending the max.

A basic funnel:

  • New users
  • Non-spenders
  • Spenders
  • High spenders

“Once you [define a funnel like this], it’s easy to track your progress getting players to move through the funnel from one segment to the next.”
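The funnel above could be sketched as segmentation logic; the field names and thresholds below are invented for illustration:

```python
def segment(player):
    """Assign a player record to a funnel segment (thresholds invented)."""
    spend = player.get("lifetime_spend", 0)
    if player.get("days_since_install", 0) < 7:
        return "new user"
    if spend == 0:
        return "non-spender"
    if spend < 100:
        return "spender"
    return "high spender"

players = [
    {"days_since_install": 2, "lifetime_spend": 0},
    {"days_since_install": 30, "lifetime_spend": 0},
    {"days_since_install": 90, "lifetime_spend": 20},
    {"days_since_install": 365, "lifetime_spend": 500},
]
print([segment(p) for p in players])
# ['new user', 'non-spender', 'spender', 'high spender']
```

Tracking the count of players in each segment over time then shows movement through the funnel.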

Good tip: “machine learning can help you automatically segment players”


“Good experiments have a hypothesis or some sort of goal KPI to change” 👍

I’m glad this is stated: “The size of your audience can affect how complex your testing can be. A game with millions of players can easily test subtle changes, but one with a smaller audience will only get significant data from tests with stark variations. The same goes for how many tests you can run simultaneously—a smaller player base means fewer simultaneous tests are statistically reliable.” I’d also say an opinionated approach, direct feedback and/or severely limited test concurrency can be a more efficient guide for a small user base than cluttering code with conditional logic and waiting a long time for significant data. Nice: “monitor user feedback … when player data is in short supply.”

Good tip: “be sure the test encompasses at least a whole week to measure fluctuations between weekday and weekend players” and users in different regions.

Interesting: “Make sure if one player sees something different from another, they can clearly understand why” I wonder if an example would be providing UI to list active experiments.

In-game surveys should “only ask one question at a time”

“Failed experiments are an important part of the process to learn” 👍

Best practices:

  • “Learn which metrics best capture performance for your game’s KPIs, and set appropriate periods to monitor and review them”
  • “Test gameplay mechanics early. It’s harder to test changes … after players have developed expectations” Reminds me of changes to Twitter UX basics, like changing the ⭐️ → ❤️
  • “When players have problems, analyze event history …” which implies an ability to collect and analyze such history is important, which may not be obvious before an issue happens
  • “Use limited-time events to test changes to gameplay—players are often more tolerant of gameplay changes when called out as events” Good idea. Reminds me of sports-based features, eg World Cup. I hadn’t thought of them as an opportunity to experiment w basic mechanics.
  • “Chart out the “funnel” progression for players in your game and experiment with ways to motivate players to move through your funnel”
  • “Ensure your analytics tools let you view KPIs by segment”
  • “Establish a clear success metric to gauge the impact of tests”
  • “Test qualitative factors by polling players with in-game surveys”


“It helps to put together a designated LiveOps team” I’ve also seen feature teams own their launches.

This seems like a launch checklist:

  1. Feedback loop and KPIs
  2. Support channels and data access guidelines
  3. Incident response strategy

Soft launch

Example soft launch: “choose a smaller geographic area, ideally with the same language as your core audience … and run your game for a few months” or “limiting your initial audience with an Early Access or Beta period”. EAPs and betas are things I have more experience with.

Good idea: “pay close attention to the core engagement metrics” for soft launch

Good idea: “During soft launch, confirm that you can update the game without causing disruption to players – and make sure that you can roll back changes if problems arise during deployment”, ie verify LiveOps infra works as expected.

“Many developers are moving away from soft launches in favor of lean launches“ 🤔… “As a small, indie studio, you don’t have the money to do user acquisition for a soft launch”

Lean launch

A lean launch:

  1. deploys an MVP version of the game
  2. connects with a target audience, and then 
  3. tunes the game based on player data and feedback

This requires:
  • reliable data pipeline
  • smaller manageable audience without inflated expectations
  • be able to adapt your game quickly

“Collecting and analyzing your crash data and retention metrics is a must”, which is “ dependent on an effective LiveOps pipeline that allows for developing several pieces of content at once, and agile deployment”

Best practices

  • “Assemble a LiveOps team”
  • “Develop a calendar” to coordinate live updates post-launch
  • “Put validation checks in place” I guess because this approach is premised on making lots of significant changes, so the cost of failure is high
  • “Rehearse key LiveOps tasks”, which is good advice, but kind of contradicts an earlier statement “There’s no such thing as a dry run in live games”
  • “Ensure your team has a way to roll back changes ”
  • “Set roles and permissions”

Game updates

“Game updates aren’t limited to new levels or game mechanics. They can consist of new items for purchase, events, balance patches, bundles, or anything else that encourages a player to come back and play more.”

“Understanding your player base is a key element in designing and delivering relevant updates”

“Frequency and consistency are as important as quality when making updates”

Tip: experiment with time between updates in addition to the update content “to see if they impact engagement or retention.”

“save client updates for entirely new features or large assets … assets such as art and gameplay logic are included in the client, but how those assets are displayed to players is driven by server-side logic … plan your content architecture in advance and move as much of your game logic as possible onto the server or cloud.”

Best practices

  • “Make a list of everything in your game that could be considered ‘content’”
  • Plan how content will get to the client 👈
  • “Think about offline mode” 👍
  • “Vary your updates” between temporary and permanent changes
  • “Consider targeting new content to specific player segments”
  • “Consider cloud streaming or downloading assets in the background during gameplay to reduce friction”


“A live event is any temporary but meaningful change to a game’s content”

“Anything can be an event … Timebox it, reward it, there you go …”

“Successful events often include:

  • A measurable goal …
  • A limited-time period …
  • Engaging themes and content …
  • Surprise and predictability …
  • A sense of community effort …
  • An effective means of communicating with players …”

Reminds me of a “campaign” in other applications of targeted content.

Experiment w event frequency: “ By experimenting with event timing, they were able to settle on an event schedule that raised their baseline engagement while also minimizing lapsed players”

“Consider running repeatable events … Holidays work because players will be more understanding of temporary changes, and often have more time to play”

“Adding a special, limited-time leaderboard for a specific in-game goal is a common event.”

“Events can also run in parallel”


A calendar can help reduce the complexity of orchestrating events and avoid fatiguing users.


“Great player communication is critical to the success of live events”

Push notifications, email and social media are common channels of event communication.

Best practices

  • “Make a list of everything you might want to change as part of an event”
  • “Prepare to run events from the server, without a client update”
  • “Find natural ways to promote upcoming events in-game”
  • “Capture event data in your data warehouse“ for later analysis and segmentation
  • “Let your team be flexible when creating events” This seems like basic team management; micro-managing is bad
  • “Set goals for events” so we can evaluate performance
  • Maintain a calendar for coordination and to avoid fatiguing users
  • “Use events to experiment with ideas”
  • “Establish an event framework” that separates unique and repeatable aspects of an event


“… every discussion about monetization should consider:

  • The kind of game you’re building …
  • … [aligning] player needs with your revenue goals …
  • Ethical guidelines for monetization …
  • How your competition is monetizing … “


Aka “in-app purchases” 👈

Common forms:

  • “Cosmetics are items that affect the physical appearance …”
  • “Account Upgrades are permanent enhancements to a player account …”
  • “Consumables are items that can be used once for a temporary effect …”
  • “VIP Programs are subscription-based programs …”
  • Content Access
  • “Random Boxes (or loot boxes) are items players can purchase without knowing exactly what they’ll receive”

Common “elements of in-game store management:

  • Presentation … should be easy to use …
  • Catalog management … (A good rule of thumb is once a week.) …
  • Pricing …
  • Offers and promotions …
  • Fraud … As soon as you start offering items with real-world currency value, there will be fraud …”

Nice: “Use server-side receipt validation … for added security”


I really like this topic. From the growth perspective, this is part of acquisition.

“two main challenges:

  1. eliminating the barriers to entry
  2. showing your players value”

The first one I’ve come to see as a fundamental product consideration. If we want people to do anything, we need to minimize the cost of doing that thing. I think this also ties into an engineering best practice: keep migrations and changes separate.

Regarding the second point, I think a great counter-example is a paywall before showing any content. “players have more of a propensity to pay once they have a trust relationship with the game and the developer”

How players spend

I don’t have experience with in-app purchases, so this is all interesting.

“Players will have different levels of spending they are comfortable with”

“It’s easy to get caught up focusing on big spenders or trying to sell as much as possible as soon as the game launches. But those methods are often unreliable, unsustainable, and may reflect poorly on your studio” Reminds me of low-quality ads, which eventually drive users off the platform.

“Build a broader, more reliable, and engaged spending base rather than chasing ‘whales’ … A thousand players paying $10 is preferable to ten players paying $1000 because there is more opportunity for repeat purchases.”


“One of the most popular forms is rewarded video—short videos often promoting a different game or app, watched for an in-game reward or more playtime … [beware] players might be lured away by a competitor’s game.”

“As with almost every other LiveOps effort, you need to continuously test different solutions.”

Good idea: ”Many developers segment their audience and only show ads to certain segments, often limiting them to non-paying players.”


“You can usually do an on-the-fly calculation to compare the value per impression of an in-house-ad versus one from an external network, so you can decide what to show for a given player segment.”
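That per-impression comparison might look like the sketch below (fill rate, CPM, and the imputed house-ad value are all invented numbers):

```python
def ecpm(fill_rate, cpm):
    """Effective revenue per 1000 impressions from an external ad network."""
    return fill_rate * cpm

# Invented numbers: value an in-house cross-promo ad by its expected
# LTV uplift, then show whichever source is worth more for this segment.
house_value = 3.0                             # imputed value per 1000 impressions
network_value = ecpm(fill_rate=0.8, cpm=2.5)  # 2.0
show = "house" if house_value >= network_value else "network"
print(show)  # house
```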


“Many games use two virtual currencies: a “soft” currency earned in-game, and a “hard” purchased currency”

“Build a matrix of all the sources and sinks for in-game resources and build a model of the economic activity you can adjust in a tool such as Microsoft Excel, without rolling out updates.” I’ve heard of managing config this way.

“What we want is sustained investment and signs that a player has really perceived value…”

Best practices

  • Choose a strategy
  • Set ethical and quality guidelines
  • Prevent fraud
  • Simplicity and variety
  • Bundle commonly purchased items
  • Pair sales with events ← this reminds me of the growth practice of requesting feedback when engagement is high
  • Incentivize social sharing
  • Diversify ad networks
  • Keep loss aversion in mind
  • Always be testing “Never stop testing your monetization efforts, because your players’ perception of value (both real-world and in-game) will change over time“


“… detailed documentation on multiplayer architecture at”


“As soon as you add a leaderboard in a game, even if it’s a single-player game, players start seeing progress against other people, and people all of a sudden start engaging more” Makes me think there are mechanics for games based on human behavior comparable to those used by growth features. For example, leaderboards increasing engagement highlights a human response to hierarchy.

Filtering makes leaderboards more fun:

  • Geo
  • Platform
  • Mode, eg player-vs-player
  • Option, eg difficulty
  • Level
  • Statistic, eg # wins
  • Time, eg stats for today

“combining the variables Platform, Level, and Statistic you could create a leaderboard for ‘Fastest time (Statistic) to complete Ventura Highway (Level) by PC players (Platform).’”

Leaderboards can also encourage social behavior, eg biggest contributor to team

An ability to reset the leaderboard can encourage participation

Award prizes based on achievements shown in the leaderboard.


“Groups … can get players more invested in a game”

Some group dynamics:

  • Communication
  • Game progress
  • Stats 

I wonder if these can be used for other groups, eg a working group.

Interesting: “Determine how short-term groups are formed based on how much players need to trust teammates to succeed … “

“Long-term groups (such as guilds) have been proven to increase player retention …” Seems like a form of “investment” that makes an app stickier. The fact that it was “proven” makes me think there might be papers to read.

“… how do I provide you the best experience not only within your guild, but when your guild is gone… It comes down to matchmaking … the right aspiration together as a group.” Reminds me of work dynamics.

Managing communities

“A dedicated community manager can help keep players satisfied and foster a positive community …” Reminds me of the dev “advocate” role

Some ways to avoid bad behavior:

  • Limiting communication options
  • Filtering words and phrases
  • Defining a code of conduct

“The team behind Guild Wars 2 reportedly built the whole game around the idea that ‘players should always be happy to see one another.’” 🙂

“The more you can provide a framework for people to operate in, the more likely they are …“


“50% or more of online users will only buy when presented offers in their native language.”

Good idea: give the localization team access to edit strings

“Store as much of the in-game text on the server as possible, so it can be easily edited and localized”

Best practices

  • Consider multiplayer early in development
  • Add multiplayer elements whenever possible
  • Experiment with matchmaking algorithms
  • Plan for multiplayer scaling needs
  • Offer multiple ways to communicate
  • Enable customization of groups, to increase engagement
  • Reset leaderboards on a regular basis
  • Award prizes based on leaderboard stats
  • Enable users to “refresh” game to explicitly load new config
  • Localize communications(!) 

Tools and services

The guide lists PlayFab’s API, but I think it’s more interesting as an overview of useful entities and controls:

  • Auth
  • Content
    • Game content
    • User generated content
  • User data
  • Matchmaking
  • Leaderboards
    • Tournaments
    • Reset schedules
    • Prizes
    • Fraud prevention
  • Communication
    • P2p
    • Text and voice with transcription and translation
    • Accessibility (speech to text and vice versa)
  • Eng controls
    • Config
    • Reporting
    • Events
    • Automation
    • Scheduling
  • Product & community controls
    • Reporting
    • Event log
    • User management
    • Automation
    • Scheduling
    • Segmentation
    • Experimentation
    • Messaging
  • Economics controls
    • Stores
    • Sales
    • Economy
    • Fraud prevention

Written by Erik

August 28, 2020 at 7:37 pm

Posted in perspectives


Software ecology

A documentary about an eco-friendly home near Austin inspired me to think about software systems from an ecological perspective.

The notion of a software or product “ecosystem” isn’t new, but I’d previously only thought about it as fostering healthy interactions in a system; I hadn’t considered the non-human actors. Is the code hard to maintain? Are alerts waking people up unnecessarily? Is the business sustainable? Is there a natural order? Is anything out of place, like an old tire in a stream? Can we achieve our goals in harmony with the natural order?

For example, I worked on a free service that would alert when resources were exhausted. Because it was free, it was natural for consumers to deprioritize efficient usage. Maintainers of the service absorbed the cost in the form of routine alerts. A more balanced system would shift some cost to the consumers.

I think the idea of separating concerns is another example. Decoupling can reduce maintenance cost even if the functionality doesn’t change.

A colleague once remarked that every syntax variation allowed by a language would eventually appear in a code base; a convention could not stop this. Perhaps this was another example of an imbalanced natural order. The cost of enforcement was solely on the reviewer. Shifting this cost to programmatic validation, like a linter, would help restore balance.

Written by Erik

August 14, 2020 at 3:34 pm

Posted in perspectives

Binary prioritization

Prioritizing unplanned work is hard, especially when we’re overwhelmed.

In this context, I’ve found it helpful to split things into two groups: important or not. My team calls this “binary prioritization”. We work on the important stuff and forget everything else; anything misidentified as unimportant will bubble back up.

Brandon Chu’s essay “Ruthless Prioritization” provides a helpful distinction: work within a project vs work between projects. The former benefits from binary prioritization; the latter from a more rigorous process, like OKRs.

This also reminds me of Amazon’s “Bias for action” principle, and the Agile principles. For example, Agile embraces change (“Agile processes harness change for the customer’s competitive advantage”). Binary prioritization enables us to embrace a high volume of change.

Written by Erik

August 7, 2020 at 7:24 pm

Posted in org

Bias for action

This is straight from Amazon’s leadership principles. It also seems related to binary/ruthless prioritization, the Agile principle of “Simplicity — the art of maximizing the amount of work not done”, and customer focus.

An ambiguous problem may have many possible solutions. We risk slowing to a halt investigating all of them. It’s helpful to regularly question if a given task is important. “Analysis paralysis” might describe this, but I prefer Amazon’s phrasing: “Bias for action”.

As a concrete example, in an interview, there may be many possible ways to solve a problem, but time is limited and solving the problem by any means is preferable to identifying all possible solutions without implementing any.

Written by Erik

August 7, 2020 at 7:03 pm

Posted in org


A pattern for reducing ambiguity

Here’s the pattern:

  1. Identify the owner
  2. Have the owner describe the problem and solution in a doc
  3. Invite stakeholders to comment on the doc
  4. If discussions on the doc linger, schedule a meeting to finalize
  5. Document conclusions in the doc, to facilitate communication and closure, and for future reference

This may seem obvious, but I often forget it, especially in cases where a task starts small and grows in complexity. A problem may seem too small to formally “own”. A solution may seem too trivial to document. Stakeholders may attend a meeting without context. A meeting may conclude with stakeholders feeling like they’ve expressed themselves, but there’s no actionable plan to resolve the problem.

Identifying one person from a group of stakeholders to own the project and be responsible for leading work to completion reduces organizational ambiguity.

Documenting the problem and proposed solution in writing reduces ambiguity by capturing ideas from a variety of mediums in a single, relatively objective place that stakeholders can comment on.

Documentation alone may achieve closure, but it may also spawn extensive commentary. Meetings are relatively expensive, but scheduling a meeting to drive closure reduces ambiguity by distilling commentary on the document into conclusions.

Documenting conclusions reduces ambiguity by rendering them in an objective form all stakeholders can agree on.

A few symptoms that indicate when this pattern might be useful:

  1. Endless back-and-forth in chat, bug comments, etc, which can give the impression of progress, but never resolve the issue
  2. Multiple and/or cross-functional stakeholders, which can obscure priorities
  3. Multiple people opining on a solution and/or answering questions, which can obscure ownership
  4. A problem that drags on, which can indicate it’s important, but inappropriately owned

This ties into larger discussions around project planning (e.g., managing planned vs unplanned work), and meeting efficiency (e.g., inviting stakeholders, assigning pre-work and clarifying outcomes), but the point here is just to succinctly identify an organizational pattern and when it can be helpful.

Written by Erik

August 2, 2020 at 2:52 pm

Posted in org, pattern


JShell

TLDR: like other REPLs, JShell provides an easy way to test Java one-liners and, like the Rails console, a handy ad hoc CLI.

I appreciate a REPL for quickly checking the validity of small snippets. For example, I can improve the quality of my code reviews by verifying an idea works in a REPL before recommending it in a review.

My first exposure to a Java-esque REPL was the Scala REPL, which can also interpret Java. This was handy, but only easily available when Scala was installed.

When Scala wasn’t installed, I used an online REPL for Java, but those are public sites, so I needed to be mindful not to paste anything confidential, and they can take some time to load.

Recently, I learned about JShell, which is included in the JDK as of version 9.

As an aside, in case you’re on a Chromebook, Google’s Cloud Shell is a great ad hoc terminal.

Per the JShell docs, I can start/stop the shell:

$ jshell
|  Welcome to JShell -- Version 11.0.6
|  For an introduction type: /help intro

jshell> /exit
|  Goodbye

Hello world:

jshell> System.out.println("hi")

Note implicit semicolons for simple code. Return statements appear to need explicit semicolons, though, eg:

jshell> String get(){
   ...> return "s"
   ...> }
|  Error:
|  ';' expected
|  return "s"
|            ^

From the docs, I see there are “scratch variables”, which reminds me of the Scala REPL’s res variables:

jshell> 2+2
$3 ==> 4

jshell> $3
$3 ==> 4

jshell> $3 + 2
$5 ==> 6

The feedback is comparable to javac, eg:

$ cat
import java.util.HashMap;
import java.util.Map;

class MyMap {
        static Map<String, String> m = new HashMap<>();
        public void put(String k, String v){
                m.put(k, v);
        }
        public void get(String k){
                return m.get(k);
        }
}

$ javac
error: incompatible types: unexpected return value
                return m.get(k);
1 error
$ jshell
|  Welcome to JShell -- Version 11.0.6
|  For an introduction type: /help intro

jshell> /open
|  Error:
|  incompatible types: unexpected return value
|                  return m.get(k);
|                         ^------^

We can use tab completion:

jshell> List
List                 ListIterator         ListResourceBundle


<press tab again to see documentation>

jshell> List.
class     copyOf(   of(

jshell> List.of("1", "2")
$3 ==> [1, 2]

jshell> $

The up arrow scrolls back through history. We can also print it:

jshell> /list

   1 : List.of("1", "2")
   2 : $3.forEach(n -> System.out.println(n))

We can also search history, eg Ctrl + R for reverse search:

(reverse-i-search)`hi': System.out.println("hi")

Editing multi-line code is cumbersome, eg:

jshell> class Foo {
   ...> String get(){}
   ...> }
|  created class Foo

// Up arrow to edit method definition

jshell> String get(){
   ...> return "s";
   ...> }
|  created method get()

// Creates new function instead of editing Foo.get

jshell> get()
$10 ==> "s"

jshell> Foo f = new Foo();
f ==> Foo@1e88b3c

jshell> f.get();


I’d recommend using an external editor for anything non-trivial.

JShell has an /edit command to launch an external editor, but it doesn’t appear to save the edits.

jshell> /set editor vim
|  Editor set to: vim

jshell> class Foo {}
|  created class Foo

jshell> /edit Foo // add bar method to Foo
|  replaced class Foo

jshell> Foo f = new Foo()
f ==> Foo@56ac3a89

jshell> f.bar()
|  Error:
|  cannot find symbol
|    symbol:   method bar()
|  f.bar()
|  ^---^

jshell> /edit Foo // Observe bar method is undefined

I’d recommend just having an editor open in a separate terminal, and using JShell’s /open command to load the file after changes.

For folks using Google Cloud Shell, it appears to have an implicit tmux session, which makes it easy to edit in one pane and use JShell in another.

In practice, I’m guessing there’s little use for JShell when editing complex code, but it does provide a handy CLI for exploring complex code. We could have a build target, like pants repl, or a CLI for our app, like rails console.

For example, given a naive build-and-launch script:

javac -d bin src/main/com/example/* \
        && jar cf bin/MyMap.jar -C bin com \
        && jshell --class-path bin/MyMap.jar

We could:

$ ./
|  Welcome to JShell -- Version 11.0.6
|  For an introduction type: /help intro

jshell> import com.example.MyMap;

jshell> MyMap m = new MyMap();
m ==> com.example.MyMap@6a41eaa2

jshell> m.put("k", "v")

jshell> m.get("k")
$4 ==> "v"

Written by Erik

May 16, 2020 at 11:27 pm

Posted in tool


Architecture for Light

The Illuminating Engineering Society has a couple interviews with Kim and Paul Mercier, authors of a book called Architecture for Light. The content is fascinating, touching on anthropology, physics, engineering, design and their professional experience, but a few things stood out to me in particular as overlapping with software engineering:

  1. The role of people
  2. Efficiency
  3. Multi-disciplinary planning


The role of people

Perhaps the connection to people is more immediate in the field of lighting design than in software engineering, but the two have a couple things in common that can negatively affect usability: creative purity and cost efficiency.

In software and architecture, we can design something that’s aesthetically pure, but difficult to incorporate or maintain.

One of the best tools I’ve found in project management is a focus on the customer. This can help us prioritize and avoid ethical pitfalls.

Related, healthy organizations recognize software is built by people, and people thrive in a nurturing environment. For example, Google’s research indicated psychological safety correlated strongly with high-performing teams.


Efficiency

The Merciers describe an approach to lighting that minimizes electrical usage and maximizes sales by focusing light on products. (A common alternative is to uniformly light a space.)

They mentioned the most cost-effective solution is to turn the lights off. This reminds me of a balance in software engineering: we have to maintain everything we build, so not building minimizes maintenance cost. A similar balance comes up in SRE: we can maximize stability by minimizing changes. Obviously, doing nothing has other costs, but having all options on the table opens up opportunities, such as investing light only where a customer’s attention produces value.

The Merciers propose a method that maximizes both efficiency and usability. One observation amazed me: not too long ago, daylight was the most efficient light source; it’s still very efficient, but we need to rediscover the skills to use it.

Multi-disciplinary planning

The Merciers recommend including lighting designers from the “imagination” phase of a project rather than pulling them in later. Lighting designers can then advise on structural changes to maximize lighting efficiency. They describe the opposite as an anti-pattern: bringing specialists in late to fix problems.

This reminds me of a tension when organizing software development teams: a “waterfall” model where each specialty contributes in sequence, versus an “agile” model where teams are composed of cross-functional members.

They describe several examples in their interview, but one reminded me of complex workarounds that can arise in software: a building design that let in too much daylight. They observed a small overhang would have been ideal, but because the design was already finalized, the building had to be outfitted with a mechanical shade.

Written by Erik

May 10, 2020 at 3:31 pm

Posted in perspectives


Customer feedback

Customer feedback can come through a variety of channels. Here’s a list of the ones I’ve found valuable, and practices for aggregating feedback through these channels.


Inline survey

This is an in-context prompt for feedback, eg
“Like the service? 😀 | 😐 | 😭”


Pros:

  • Relatively easy for feedback provider, resulting in broad participation
  • Confidential


Cons:

  • Limited signal
  • Hard to filter for quality participation
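Tallying these responses is mechanical. Here’s a minimal sketch in Python (the emoji scale and the happy-minus-sad scoring rule are my own illustrative assumptions, not a standard):

```python
from collections import Counter

def tally(responses):
    """Count each emoji survey response and compute a rough net-sentiment score."""
    counts = Counter(responses)
    # Assumed scoring: happy minus sad; neutral responses don't move the score.
    score = counts["😀"] - counts["😭"]
    return counts, score
```

For example, `tally(["😀", "😀", "😐", "😭"])` counts two happy responses and yields a net score of 1.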

User research

This is when a company makes an open call for feedback.


Pros:

  • Long-form discussion
  • More control over participation than inline surveys
  • Confidential


Cons:

  • Quality of participants can vary

Partner interviews

This is when a company solicits feedback from valuable customers. I’ve found it useful to do this periodically, eg yearly.


Pros:

  • Long-form discussion
  • High-quality participants
  • Confidential


Cons:

  • Risk of focusing too closely on feedback from a small group

GitHub issues

GitHub provides a way for users of a repository to report issues.


Pros:

  • Public, so customers can pile on and follow progress
  • Intuitive for GitHub users
  • Reactions help capture sentiment
  • Easy to discover, eg via Web search
  • Enables customers to crowd-source support


Cons:

  • Specific to a repository


Slack

This is when a company maintains a public Slack channel.


Pros:

  • Good for quick Q/A
  • Enables customers to crowd-source support


Cons:

  • Can be noisy/interruptive. A support rotation can help distribute the cost across the team.

Stack Overflow

Stack Overflow enables people to ask questions about anything, but companies can use its features to support customers and collect feedback.


Pros:

  • Easy to discover, eg via Web search
  • Enables customers to crowd-source support


One challenge of having numerous feedback channels is distilling themes, identifying tasks and communicating progress. I’ve seen a few patterns to address this.

Product management

Collecting feedback from customers and using that feedback to guide planning is basic product management. Engineers may end up doing aspects of this, but they should get appropriate credit.

Dedicated support

This is when a company staffs a team to monitor all feedback channels. This team can also aggregate reports and/or work with product management to identify themes.

Support rotation

Distribute the cost of monitoring feedback sources across the team using a weekly rotation. The person on duty will be heavily interrupted and may surface urgent issues, so this pairs well with an on-call rotation.
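A rotation like this can be computed rather than maintained by hand. A minimal sketch in Python (the roster and the ISO-week round-robin scheme are illustrative assumptions):

```python
from datetime import date

def on_duty(team, day=None):
    """Return the support person for the ISO week containing `day`, round-robin."""
    day = day or date.today()
    week = day.isocalendar()[1]  # ISO week number, 1-53
    return team[week % len(team)]
```

Everyone can then compute who’s on duty for any given week without consulting a hand-maintained schedule.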

Centralized tracking

This can be a spreadsheet or a full-featured bug tracking tool like Jira, but it’s helpful to have a single place to prioritize all issues and feature requests.

Delegated communication

If there’s a team that, say, owns a given GitHub repository, they can own the task of keeping GitHub issues for that repository synchronized with centralized tracking state.

Crowd-sourcing themes

One pattern I’ve seen work well is to split a team into groups, assign each group a subset of feedback, and ask each group to identify themes in that feedback, for example as part of quarterly planning. This works well with large bodies of inline survey results.
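Dealing the feedback out to groups can also be mechanical. A minimal sketch in Python (round-robin assignment is one simple choice; the group names are illustrative):

```python
def assign_feedback(items, groups):
    """Deal feedback items across review groups round-robin, returning group -> items."""
    buckets = {g: [] for g in groups}
    for i, item in enumerate(items):
        buckets[groups[i % len(groups)]].append(item)
    return buckets
```

Each group then reads only its bucket and reports back the themes it finds.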

Written by Erik

April 26, 2020 at 5:33 pm

Posted in product, support
