Backbone mistakes we made, and how we solved them with Marionette

Yes, that is a 3500 line module

3500 lines of debugging hell. Yes, if you look closely above at the list of JavaScript files that make up Tint, you will see that I have indeed helped write a 3500 line Backbone module. In terms of code, a verbose behemoth. In terms of maintenance, a costly reminder of what happens when you keep tacking on features without thinking about the “big picture”. Yeah, I’ll say it: architecture.

“The secret to building large apps is never build large apps. Break your applications into small pieces. Then, assemble those testable, bite-sized pieces into your big application” - Justin Meyer, author of JavaScriptMVC

I’ll be the first to admit my failures. But I hope that also means I’ll be the first to learn from them. Over the past year, I helped write a Backbone app that grew from a free widget into an app powering a display in Times Square. The rapid growth in customers and increasing demand for key new features accelerated an already growing pile of technical debt. As we tacked on new feature after new feature, things got complicated. Eventually, we reached a point where we all agreed that a refactor was in order.

We chose to refactor using Marionette because we were already familiar with Backbone’s patterns and figured the learning curve would be gentler. Sure enough, after 3 weeks of using Marionette to refactor Tint Analytics, we’ve gotten up to speed and have identified some key Backbone mistakes we made that Marionette helps us handle. Here’s a list of Backbone pitfalls and how Marionette works to help us avoid them.

Views containing too much logic

Out of the box, Backbone really only gives you two kinds of building blocks for structuring your UI: Models, which connect to the API and maintain state, and Views, which do everything else. As you can imagine, this leads to Views that start out small but quickly grow. And since the library doesn’t establish any pattern for composition, it’s easy for Views to reach an unmanageable size. Marionette helps by splitting view logic between Controllers and Views, and by encouraging a highly composed architecture built from Layouts and Regions. The Marionette Controller is in charge of view initialization and communication between subviews, acting as a Mediator.
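To illustrate the Mediator idea without any framework code, here is a stripped-down sketch in plain JavaScript (the view and event names are made up for the example). Marionette’s Controller plays the same role: it wires subviews together so they never have to reference each other directly.

```javascript
// Tiny event emitter standing in for Backbone.Events.
function Emitter() {
  this.handlers = {};
}
Emitter.prototype.on = function (event, fn) {
  (this.handlers[event] = this.handlers[event] || []).push(fn);
};
Emitter.prototype.trigger = function (event, data) {
  (this.handlers[event] || []).forEach(function (fn) { fn(data); });
};

// Two subviews that know nothing about each other.
function SearchBarView() { Emitter.call(this); }
SearchBarView.prototype = Object.create(Emitter.prototype);
SearchBarView.prototype.submit = function (query) { this.trigger('search', query); };

function ResultsView() { this.lastQuery = null; }
ResultsView.prototype.showResultsFor = function (query) { this.lastQuery = query; };

// The controller (mediator) owns initialization and routes messages between them.
function SearchController() {
  this.searchBar = new SearchBarView();
  this.results = new ResultsView();
  this.searchBar.on('search', this.results.showResultsFor.bind(this.results));
}

var controller = new SearchController();
controller.searchBar.submit('backbone');
console.log(controller.results.lastQuery); // 'backbone'
```

Because the only link between the two views lives in the controller, swapping out either view later means touching one place, not hunting through view code.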

Not having enough Modularization

This point ties in with views containing too much logic. Because composite views take a lot of boilerplate in plain Backbone, it’s too easy to put off breaking a large view into many small views composed into the larger whole. Marionette helps by extending Backbone Views into ItemViews, CompositeViews, and LayoutViews. Marionette automatically takes care of accepting a collection and iterating through its models to create ItemViews, reducing the cost of composition and increasing the modularity of view code.
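Here is a rough, framework-free sketch of what Marionette’s collection-rendering automates for you. The real thing also handles DOM insertion, event binding, and re-rendering on collection changes; this only shows the shape of the composition:

```javascript
// One small view per model: trivial to test and reason about in isolation.
function ItemView(model) { this.model = model; }
ItemView.prototype.render = function () {
  return '<li>' + this.model.name + '</li>';
};

// The collection view's only job is iterating models into item views.
function CollectionView(models, ItemViewClass) {
  this.models = models;
  this.ItemViewClass = ItemViewClass;
}
CollectionView.prototype.render = function () {
  var ItemViewClass = this.ItemViewClass;
  var items = this.models.map(function (model) {
    return new ItemViewClass(model).render();
  });
  return '<ul>' + items.join('') + '</ul>';
};

var list = new CollectionView([{ name: 'a' }, { name: 'b' }], ItemView);
console.log(list.render()); // '<ul><li>a</li><li>b</li></ul>'
```

In plain Backbone you end up hand-writing that iteration loop in every list view, which is exactly the boilerplate that discourages composition.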

Forgetting to unbind events causing unexpected behavior and memory leaks

Backbone has no pattern or tool to help you get rid of zombie views. Instead, it relies on developers to come up with their own solutions for unbinding events. I relied on the BaseView technique to make sure events were being unbound, but it always seemed a little goofy that the library didn’t handle this automatically. Luckily, Marionette does: its Controllers, Views, and Modules have built-in functionality to automatically unbind events. Yay!
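A zombie view is just a view whose event bindings outlive it. This plain-JavaScript sketch (names invented for the example) shows the listenTo/stopListening-style bookkeeping that Marionette runs for you when a view is closed:

```javascript
// Minimal observable model.
function Model() { this.handlers = []; }
Model.prototype.on = function (fn) { this.handlers.push(fn); };
Model.prototype.off = function (fn) {
  this.handlers = this.handlers.filter(function (h) { return h !== fn; });
};
Model.prototype.change = function () {
  this.handlers.forEach(function (fn) { fn(); });
};

// The view records every binding it makes so it can undo them all later.
function View(model) {
  this.renderCount = 0;
  this._listeners = [];
  var self = this;
  this.onChange = function () { self.renderCount += 1; };
  model.on(this.onChange);
  this._listeners.push({ target: model, fn: this.onChange });
}
// close() unbinds everything; a zombie view is one where this never runs.
View.prototype.close = function () {
  this._listeners.forEach(function (l) { l.target.off(l.fn); });
  this._listeners = [];
};

var model = new Model();
var view = new View(model);
model.change();                // bound: renderCount becomes 1
view.close();
model.change();                // unbound: no zombie re-render
console.log(view.renderCount); // 1
```

Without the `close()` call, the discarded view keeps re-rendering on every model change, which is exactly the memory leak and unexpected behavior described above.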

Interdependency through global variables

One conundrum I encountered while building our large Backbone app was how to handle multiple views that share a model, or a view that needs to reference another view’s model. I eventually ended up with a couple of global state variables. The problem was that there was no way to tell which parts of the code were reading or mutating a global, and it was easy for the Backbone model and the global variable to drift out of sync. Oy!

Entities to the rescue. Marionette Entities are an additional abstraction that gives your models clearly defined entry and exit points, making them globally accessible yet well defined and easily debuggable. Entities also let you easily implement functionality like making a model getter a singleton or adding custom model initialization. Best of all, the View treats the Entity as a black box and communicates with it via messaging, reducing unnecessary coupling.
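Here is a framework-free sketch of the request/reply messaging behind the Entities idea (the channel object and event names are made up for illustration). The caller never knows, or cares, whether the handler memoizes a singleton or builds a fresh model:

```javascript
// A tiny request/reply channel standing in for Marionette's messaging.
var channel = {
  handlers: {},
  reply: function (name, fn) { this.handlers[name] = fn; },
  request: function (name) { return this.handlers[name](); }
};

// The entity module owns construction: singleton behavior and custom
// initialization live behind this one entry point.
var settingsInstance = null;
channel.reply('settings:entity', function () {
  if (!settingsInstance) {
    settingsInstance = { theme: 'dark' }; // custom initialization goes here
  }
  return settingsInstance;
});

// Any view can now ask for the model without touching a global directly.
var a = channel.request('settings:entity');
var b = channel.request('settings:entity');
console.log(a === b); // true -- both callers share the same instance
```

Unlike a bare global, every read goes through `channel.request`, so there is exactly one place to set a breakpoint when you need to know who is touching the model.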

A giant router with all module initialization

Almost every sample Backbone app I’ve looked at has a single router file. For simple applications this works fine, but for larger-scale applications the router can grow to be unmanageable. A large router is hard to read and maintain because it often ends up responsible for initializing unrelated Views and Models. Marionette helps solve this by distributing routing and initialization among Marionette controllers, so Model and View instantiation is defined where you’d expect to find it later.
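As a sketch of the idea, each controller can own its route table and the app just aggregates them (plain JavaScript, no URL handling; in Marionette, AppRouters and controllers do the real work):

```javascript
// Merge per-module route tables into one dispatch map.
function makeRouter(routeTables) {
  var routes = {};
  routeTables.forEach(function (table) {
    Object.keys(table).forEach(function (path) { routes[path] = table[path]; });
  });
  return { navigate: function (path) { return routes[path](); } };
}

// Each module defines its own routes next to the views/models it initializes.
var analyticsController = {
  routes: { 'analytics': function () { return 'analytics shown'; } }
};
var settingsController = {
  routes: { 'settings': function () { return 'settings shown'; } }
};

var router = makeRouter([analyticsController.routes, settingsController.routes]);
console.log(router.navigate('settings')); // 'settings shown'
```

The payoff is locality: when the settings page breaks, its routes and its initialization code are in the settings module, not buried in a thousand-line app-wide router.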


Overall, the codebase is looking easier to digest, although we still have much to learn. By avoiding the mistakes above, you too will be able to escape code hell and instead see something like this:

yep, we gotta break filterbar down


An Open Letter to the Tech Workers of San Francisco

Dear fellow tech workers,

“We interviewed a senior being evicted from their home in the Mission who said, ‘Google is Hitler’. What would you say to that?”

An interviewer from TechCrunch asked me this question a month ago. The question didn’t surprise me, even though it should have. It seems like that’s all that’s been in the news lately.

The same week, I went to a Youth Speaks poetry slam with Monica. This was the first poetry slam that I’ve ever been to, and I was excited to hear youth speaking out about the issues they hold dearest. The event was fantastic and it inspired me to see youth cultivating their creativity.

But it wasn’t long until a slam came up accusing “toxic” tech workers of ruining the city:

Link to video

On valencia now that's all you see. It's spreading. Like an airborne toxicity. And that's exactly what I mean, it's a toxic city. So they force us out. Both young and old. Raised up the cost of living, no rent control. So if we can't afford to live our only option is to die or move out to Tracy or Antioch like a couple of my guys. While I'm in my city, they're out in the burbs. Not to mention that Twitter and Google are too strung up for words. They're speechless. Denying the fact that the only ones who can afford to live here now are the ones that are Google bussed in. Like they're employees from the mystical wonderland called the valley of silicon. It's really damn sickening, and I'm a 19 year old mother f**cking San Franciscan, damn. - Jerome Robles-Reyes "In My City"

protestors against tech shuttles

It seems, from all fronts, that the city hates tech workers. Even SF Streetsblog, a blog I hold near and dear as a daily cyclist, declares the tech community a monoculture that “blames those less wealthy for their own problems”.

Monocultures serve no one, including those whose culture takes over. - Fran Taylor, SF Streetsblog

From these articles, I should be ashamed. I should move back to where I came from. I guess that would be Indiana.

But I’m staying in San Francisco. The solution to evictions is building more housing. But building more housing isn’t going to conquer the root problem, which is the animosity many native San Franciscans hold toward people who work in software.

Instead of leaving, I’m going to see all the hate as a challenge to become a better member of the local San Francisco community. I think as tech workers we can make a big difference in public perception with consistent, everyday steps that any techie is capable of doing. You don’t need to be a community organizer to make things happen. A community is just a bunch of ordinary folks having relationships with each other.

I did some research, and apparently there are 20 ways to not be a gentrifier as described by local paper Oakland Local. It inspired me to make a list of my own:

  • Go get a haircut at a local barbershop or hairdresser (price must be < $15 (guys) or < $30 (gals)). Talk to your hairdresser. Talk about the car accident that happened down the block last weekend. Talk about the traffic issues from Outside Lands. And listen. Learn what’s on the mind of folks in the community.

  • Read and talk about local news. Be aware of the pulse of the city and about what’s affecting everyone, not just the software industry.

  • Get involved in local volunteerism. - This summer I helped Doug, a local SFUSD high school teacher, in an externship hosted at Tint. He learned technical skills with us that he can bring to the classroom in the upcoming school year. This fall, I hope to mentor local high school students so they too can learn how to write code. There are lots of resources out there; you just have to look! For starters, check out SF Citi or Mission Bit.

  • Participate in local art. It could be as simple as going to a poetry slam or an art walk, or go even further! My friend and colleague Brandon is a great example for this. He’s working with a local organization called Clittorati on the Vulvatron. What could be more SF than a visually iconic mobile art piece, empowering women, goddesses, and the feminine identity?

  • Don’t talk down to people less fortunate than you - I once met a fellow tech worker who condescendingly referred to the 38 as the ‘dirty eight’. As someone who rides the 38 every day, it made my blood boil to hear that comment. I finally knew how it felt to hate techie outsiders. Don’t reinforce negative stereotypes.

These are just a small subset of the many things that can be done to cultivate a community and dismantle the image of the evil techie outsider. But the biggest change anyone can make is to treat everyone from all walks of life with respect. Even with the fairest of intentions, it’s easy to come across as condescending to outsiders, so it’s our responsibility to make an active effort to participate in the community.

How To Recruit Engineers In San Francisco

small teams looking for more people


6 months ago, Nikhil and I were the only developers at our 4 person startup. With business growing steadily, we were so spread thin that there was no hope of improving our product if we didn’t bring more help onto the team. So, Nik and I put on our recruiting hats and began our journey to find talented engineers to join us. Fast forward 6 months to the present: our engineering team is about to grow to 7 (including 2 interns!), and I can safely say that I’ve learned a whole lot in the process:

Never consider recruitment work “a waste of time”

Time spent finding the right people for your team lays the foundation for everything else at a startup. A great product starts with a great team. So no matter how disheartening it feels to comb through resume after resume and still not find the right fit, always remember that recruitment work is as important as building a new feature or optimizing a process. So do yourself a favor and put quality time into doing the following:

Work your network

Our second engineering hire, Brett, came from Nikhil’s extended network and not from any job board or recruiting company. You never know who’s looking and with social media, it’s easier than ever to let all of your friends know that you’re looking to hire. It’s also easier to bring someone onto the team if they’re vetted by a friend than if they are a stranger. Not only do you feel like you can trust their competency, they can also better trust your competency!

Post a Quality Job Post

Know what people are looking for in their next job.

Hint: it’s probably Mastery, Autonomy, and Purpose

And know why your job is what people are looking for.

Your job post should highlight your strengths.

For example, our strength is our company culture. Our mission is to build a company culture that champions transparency, fairness, happiness, and sustainability. And we make sure to highlight that in our job posting:

  • Profit Sharing - We split 20% of all revenue made over 100k and distribute it evenly among the team.
  • Team Transparency - We calculate compensation based on a formula that we all agree on. Cap table is made available to all employees. Business financials are known by all teammates.
  • Personal Autonomy / Consensus Driven Culture - We foster consensus-driven rather than top-down decision making when it comes to important business decisions. From what features to build next to what furniture to buy for the office, we believe it’s the fairest way of making decisions.
  • Customer Driven Culture - We’re very in-tune with our customers and they love us. For example, we decide what features to build based on surveys we send directly to customers. Check out this one that we sent out last year to decide what we would build this past quarter.
  • Personal Development Stipend - A monthly stipend designed for self-improvement. Whether it’s books, yoga classes, or a fitness tracker, we want our teammates to improve themselves.

Send Quality Emails

Quality recruiting emails are emails that recognize and understand the candidate. Here are some tips for adding empathy to your correspondence, embedded in a sample Tint recruitment email:

Thanks for scheduling a time with me! To prep for our interview, I’d recommend reading up on our company, getting familiar with what we do, and coming up with a few questions to ask us.

Here are some helpful links to peruse:

Key Observations:

  • Give the candidate a small assignment to assess their interest in the listing.
  • Arm your candidate with the basic knowledge you expect them to know so you can have a productive discussion.
  • Give the candidate the motivation they need to get excited about the opportunity.

Protip: Use a scheduling tool for your interviews. Ours is a Gmail extension that allows you to easily give candidates a way to instantly book a meeting with you and have it show up in your calendar!

Protip 2: Use Yesware to create templates for your common recruiting emails, saving you further time.

Use a CRM

Handling resumes manually through email is incredibly time consuming. Use one of the thousands of Applicant Tracking Systems (ATS, in recruiter lingo) such as Resumator, Jobvite, or JobScore to simplify your life.

Find creative places to post to

A job listing link can travel far! But it’s your job to take it there. Consider the following places we posted our link to:

tint craigslist

  • Craigslist - We posted our listing in 10 major metro tech centers advertising paid relocation and had some success attracting good candidates. At $25 a posting, it was an affordable way to reach attractive candidates in markets with much less competition than San Francisco.
  • Hacker News - We found some quality candidates (including one of our interns) from posting our listing as a comment within the monthly “Who’s Hiring” thread. It gets posted on the 1st of every month, so don’t miss out!
  • Indeed/Careers/Monster - Surprisingly, these mainstream job boards are frequented by talented people too! Most ATS systems will post to these major boards automatically, so be sure to configure your system to do that.
  • Github Jobs - We found some decent leads from this paid posting; fewer applicants, but the average quality was higher.
  • StackOverflow Careers - We paid to run a campaign on StackOverflow but found that all of the submissions were overseas Java developers at big corporations looking for visa sponsorships. Maybe we were doing something wrong, but we ended up asking for a refund.
  • Reddit - Plenty of subreddits to explore if you’re looking for a community of people who you think would be a good fit. Think /r/bigdatajobs or /r/sysadminjobs.


If you’re looking to expand your team, you have to recruit like a pro. It’s better to do things thoroughly from the get-go than to lukewarmly recruit for a longer period of time. Follow the tips above, and finding an engineer in San Francisco shouldn’t be as impossible as everyone says.

CI Using Sauce Labs and Travis CI

I’ve been meaning to set up a build / integration server for the past year but hadn’t gotten around to it for a myriad of reasons. Last week, I finally had enough of:

  • Features breaking every time a new one was released (regressions)
  • Manually smoke testing URLs
  • Having no structure for developing/testing new features

So, I decided to set up a Continuous Integration system for Tint! Here are some notes on what I found as I navigated the confusing waters of setting up a build server.

Initial research

Continuous integration: the practice, in software engineering, of merging all developer working copies with a shared mainline several times a day.

Outline what your needs are with the build server. For example, my needs were:

  • Run selenium tests, preferably in parallel
  • Be triggered by Github pull requests and git pushes
  • Have an easy to use UI to see breaking builds
  • Have easy integration to email, HipChat, and Github


The tools we settled on:

  • Travis CI - We chose them over CircleCI because Travis seemed to have more industry adoption, along with a better UI and documentation.
  • Sauce Labs - The leader in Selenium Grid SaaS; they also do a lot of active development on open source Selenium projects such as Selenium Builder, which is cool.
  • Ruby/RSpec/Rake - We wanted a language with strong automation tooling that was low on verbosity yet still easily readable, so we went with Ruby and company.

Of course, there are many, many alternatives to Travis and Sauce (I actually started this project using CircleCI and BrowserStack), but I chose Travis and Sauce in the end because they had more documentation and were easier to use.


The setup, step by step:

  • (Optional) Use The Selenium Guidebook to set up a framework for your Selenium tests and to learn how to write maintainable tests. Make sure all the tests run on the cloud Selenium service you plan on using (Sauce or BrowserStack) before going forward.
  • Make an account with Travis CI and turn on the repo you want to set up a build server for.
  • Travis CI automatically hooks into GitHub events so that it triggers a build on pull requests and pushes. It uses a .travis.yml file in the root of your repository to figure out how to create your build server.
  • Configure your .travis.yml so it builds your server properly. For example, in our travis.yml file we clone our Puppet manifest and then use Puppet to create our webapp server, handling package dependencies, virtualhost files, random config files, and starting the services.
  • Once the server is built properly, you can use Sauce Labs’ Connect feature to run Selenium WebDriver tests on Sauce against your build server. Unfortunately, how this all works is not adequately covered in the Sauce Labs documentation, so bear with the magic (this is literally what Sauce Labs gives you for documentation).
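To make the steps above concrete, here is a minimal sketch of the kind of .travis.yml involved. The repo URL, manifest path, and rake task are hypothetical stand-ins for our real setup, and the Sauce credentials should be encrypted with the travis CLI rather than committed in the clear:

```yaml
language: ruby
rvm:
  - 2.0.0
before_install:
  # hypothetical: fetch our Puppet manifests and build the webapp server
  - git clone https://github.com/example/puppet-manifests.git
  - sudo puppet apply puppet-manifests/webapp.pp
addons:
  sauce_connect: true   # opens the tunnel so Sauce browsers can reach this box
env:
  global:
    - SAUCE_USERNAME=example     # use `travis encrypt` for real credentials
    - SAUCE_ACCESS_KEY=example
script:
  - bundle exec rake spec        # run the Selenium/RSpec suite through Sauce
```

The `sauce_connect` addon is what lets the Sauce-hosted browsers talk to the web server running inside the Travis VM.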

Debugging Your TravisCI Build Server

If you’re having trouble creating a working build server, you can email Travis support nicely and they will set up a debug build server for you to log into and test things out. However, according to their support team, a Vagrant VM running the default precise32 Ubuntu image is very close to their current setup, so consider that as an option as well.

I highly recommend using Puppet to simplify your build process. Puppet is also useful outside of setting up CI as it allows you to easily configure many servers quickly (for example, adding a new virtualhost file to 10 servers in one shot), and does it in a way that is maintainable and version controlled.

Even with all of these useful tools, it took me a good couple of days to get the build up and running, so don’t be discouraged by how tedious it may seem. The only way to really debug a build server at the moment seems to be to make changes, rebuild the server, see if the build goes farther, and then repeat until everything is working as expected.

Weird things I ran into

When Sauce Labs tells you to insert their `curl … | bash` snippet into your travis.yml, double-check the URL you’re curling; with the one from their docs, we got a strange “Connection refused” error.

Trying to load a second private repo in your Travis build server will result in a “Repository not found” or “Authentication failed” error which can only be fixed using this obscure support article.

Tuning Nginx and PHP-FPM

I was running into an issue where the CPU on our nginx webapp servers was not being fully utilized: we saw timeouts whenever CPU went above about 10%, while memory was hardly being used. I had tried changing the nginx configuration in the past with no success, so things were getting out of hand. When our traffic spiked yesterday morning due to the Google Cloud Developers Conference (where Tint was being used), we went down, and I had to increase our server count to twenty 8GB, 8-vCPU servers.

Twenty servers to handle 20 RPS seemed ridiculous to me, since nginx can handle thousands of RPS on a tuned machine. So I spent a couple of hours yesterday formulating a guess-and-check process for measuring the effects of server configuration changes, in order to find out what was causing the issue.

The Process

  1. Isolate a single production server by removing it from all load balancers.
  2. Set up a Blitz account and validate the server from step 1 using the various methods outlined within Blitz.
  3. Load test the server to get a performance baseline.
  4. Shell into the server and change the configuration. I experimented with /etc/nginx/nginx.conf and /etc/php5/fpm/pool.d/www.conf (don’t forget to restart the services).
  5. Load test the server again while running `top` and see if performance changed.

Those 5 steps allowed me to finally figure out a combination of settings that allowed nginx and PHP to better utilize the CPU.

Test Results


Image of pm.max_children = 5 test results


Image of pm.max_children = 375 test results

Server Configuration Changes


what I changed in nginx.conf


I changed pm.max_children = 5 to pm.max_children = 375

See the links below for more details on what these settings mean.
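For reference, the PHP-FPM change lives in the pool config, and the nginx values below are illustrative of the kind of knobs worth experimenting with via the process above, not a record of our exact final config:

```nginx
# /etc/php5/fpm/pool.d/www.conf -- the key change:
#   pm.max_children = 5  ->  pm.max_children = 375
# (5 PHP workers could never keep an 8-vCPU box busy)

# /etc/nginx/nginx.conf -- illustrative settings, not our exact values:
worker_processes  8;           # one worker per vCPU
events {
    worker_connections  4096;  # per-worker connection cap
}
```

The symptom to watch for is the one described above: timeouts while `top` shows the CPU mostly idle usually mean a worker or connection limit is the bottleneck, not the hardware.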

Additional Findings

average day

  • All of our traffic (~1600 concurrent users on the Google Analytics realtime overview) can be handled by a single server with these new configurations. CPU on that single server sat at ~40%.
  • With 6 of these servers behind a load balancer, we could handle an average of 53 RPS while keeping response time under 1s. Our usual load is around 5-15 RPS.

How To Create an Apple Developer Account Anonymously

Masked Credit Card

Yesterday, we ran into an issue while trying to release our white-label app in the Apple App Store. We wanted to submit it under a pseudonym or anonymously; however, the App Store requires you to use a credit card with a name attached to it, and uses that name when you publish in the App Store. This is obviously unacceptable for a white-label app, since searching any of our teammates’ names would quickly lead back to our company. It put us in a bind, because creating a credit card under a pseudonym is a pain in the ass.

So, after some sleuthing around, we found a credit card masking service called Do Not Track Me run by a company called Abine. Some further sleuthing revealed that this is in fact a legitimate company as cited by Forbes. We signed up for the premium monthly service, created a masked card, and voila! We were able to sign up to an Apple Developer Account under a pseudonym.

The Evolution of Your Deployment System

Stage 1

You wake up one day and decide to build an app. You spin up a micro instance and start developing. Deployment is simple: just git pull and you’re done. When you expect more people to start loading the site, you spin up a couple more micros and put them behind a load balancer. This is the first time you’ve ever used these fancy load balancers, and everything seems to be humming along. It does get annoying, though, that you have to shell into each machine to update your code.

Stage 2

A couple weeks in, it gets annoying to shell into each and every server to tweak anything. You do some sleuthing online and read about Capistrano. So you set it up, and after half a day stuck on an SSH key issue and another half day tweaking the Ruby code (while learning Ruby at the same time), you can deploy with a single command. Yay.
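At this stage the Capistrano setup can be as small as a single deploy.rb. Everything below (app name, repo, hostnames) is a hypothetical sketch in the v2-era DSL:

```ruby
# config/deploy.rb -- minimal Capistrano (v2-style) config; all names hypothetical
set :application, "myapp"
set :repository,  "git@github.com:example/myapp.git"
set :scm,         :git
set :deploy_to,   "/var/www/myapp"
set :user,        "deploy"
set :use_sudo,    false

# every server that should receive code on `cap deploy`
role :web, "web1.example.com", "web2.example.com"
role :app, "web1.example.com", "web2.example.com"
```

Once the roles list every server, `cap deploy` replaces all those individual SSH sessions with one command.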

Stage 3

Except, the only hitch is that the site is down for 30 seconds to a minute while the deployment happens. Not a big deal when your site has a handful of concurrent users, but as you grow, more and more users notice the downtime and you cannot ignore the issue any longer. You can’t find an explanation anywhere on the web for why Capistrano causes the downtime. Sigh. So you decide to write a script that talks to AWS to stagger Capistrano deploys, removing each server from the load balancer before deploying to it. After learning the AWS CLI, you cobble together a Python script that gives you zero-downtime deployments.

Stage 4

Except, the script takes about 1 minute to run. Then you have to double the number of servers since more and more people are using your service, which doubles deployment time to 2 minutes. It isn’t unbearable, but waiting 2 minutes for your code to deploy begins to irritate you. Then a coworker mentions a new deployment tool, and you install it in 30 minutes. Suddenly you have a scalable deployment method that deploys in less than 10 seconds and even has an intuitive web interface. Lovely. Why didn’t you figure this out earlier?