Every startup plans for growth in some shape or form. Growth happens along multiple axes, with revenue as the dominating factor, but it can also be measured by the number of customers, the company's headcount, and the amount of data or traffic its systems can handle.
Whether you’re building a sales organization or a back-end API, you need to consider the promise of growth and prepare for the future. The main question is: how do you approach this promise of near-exponential growth?
Do you look one, two or three years into the future and build a system, an organization or a process that can handle the expected future?
In this blog post, I’ll give an example of how we did this at Unacast. The example is fairly technical, related specifically to our data pipeline, but I expect people across multiple functions can benefit from it.
So let’s start with the most basic question.
How do you plan for the hockey stick?
The graph below shows the approximate data points ingested by Unacast’s Visitation pipeline per day over the last two and a half years.
When we built the first prototype of our visitations product, some two and a half years ago, we knew that we would see exponential growth, but we ignored it.
That might seem like a stupid thing to do, and yes, at times it gave us growing pains. However, in hindsight, we would never have gotten to where we are now if we had initially built that prototype to handle today's volumes.
So, let’s take a look at the stages we went through to get to where we are today.
The first iteration of the visitation pipeline
Our first visitation product was very basic. We ingested data into BigQuery, via S3 and Cloud Storage. Every day we recalculated our entire universe of data and did a full export to our clients.
This was great for prototyping and low volumes, and at this point all of our logic was encapsulated in SQL queries.
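The pattern above can be sketched in a few lines. This is a hypothetical illustration of the shape of that first iteration, not our actual SQL; all names are made up:

```python
# Minimal sketch of the first iteration's daily pattern (hypothetical names):
# recompute the entire universe of data every day and export all of it.

def compute_visits(all_events):
    # Stand-in for the SQL that derived visits from raw location events.
    return [e for e in all_events if e["at_poi"]]

def daily_run(all_events_to_date):
    # No incremental state: every run reprocesses everything ingested so far,
    # so daily work grows with total history, not with the daily delta.
    return compute_visits(all_events_to_date)

events = [
    {"device": "a", "at_poi": True},
    {"device": "b", "at_poi": False},
    {"device": "a", "at_poi": True},
]
print(len(daily_run(events)))  # → 2
```

The key property is in `daily_run`: input size is all history to date, which is exactly what stopped scaling once a few months of data had accumulated.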
This happiness didn’t last long.
After a few months of data, we understood that we couldn’t recalculate everything every single day, as the queries in BigQuery were getting very slow.
We regrouped and re-architected our entire pipeline.
The second iteration of the visitation pipeline
We split the pipeline into distinct tasks: normalization and transformation, Point of Interest (POI) assignment, and clustering. This allowed for a more scalable pipeline.
We also moved to batch-wise processing, which put less pressure on the system. Since we do device-level calculations, we see that the clustering step in Dataflow scales worse than linearly with volume. At the volumes we have at this moment, this is acceptable.
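The second iteration's shape can be sketched roughly as below. This is a simplified stand-in with hypothetical names and logic, not our production code:

```python
# Rough sketch of the second iteration: separate stages
# (normalize → assign POI → cluster), run per batch rather than over the
# entire universe at once. All names and logic are hypothetical.

def normalize(events):
    # Normalization/transformation stage: canonicalize coordinates.
    return [{**e, "lat": round(e["lat"], 5), "lng": round(e["lng"], 5)}
            for e in events]

def assign_poi(events, poi_index):
    # POI assignment stage: look up which Point of Interest an event falls in.
    return [{**e, "poi": poi_index.get((e["lat"], e["lng"]))} for e in events]

def cluster_by_device(events):
    # Device-level grouping that the clustering then operates on.
    by_device = {}
    for e in events:
        by_device.setdefault(e["device"], []).append(e)
    return by_device

def run_batch(batch, poi_index):
    # One batch flows through all three stages independently of other batches.
    return cluster_by_device(assign_poi(normalize(batch), poi_index))
```

Processing batch by batch bounds each run's input size, which is what took the pressure off the system; splitting the stages also lets each one be scaled and fixed on its own.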
Some time passes, and we see that both the performance and the cost of our new pipeline are suboptimal.
Our POI database doesn’t scale the way we want it to, and our clustering takes far too long and costs far too much money.
We identify one particular step in the clustering (the GroupByKey, for my fellow geeks) as the culprit behind the superlinear performance, and we start developing the next iteration of our pipeline.
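To see why that step hurts, consider the cost model rather than the Dataflow code itself. This is a hypothetical illustration: grouping collects all of a device's points in one place, and pairwise clustering over a group of n points does on the order of n² comparisons:

```python
# Why a GroupByKey can blow up (illustrative cost model, not Dataflow code):
# after grouping, all of a device's points sit in one group, and pairwise
# clustering over a group of n points costs n*(n-1)/2 comparisons.

def pairwise_comparisons(points_per_device):
    total = 0
    for pts in points_per_device.values():
        n = len(pts)
        total += n * (n - 1) // 2  # comparisons inside one grouped key
    return total

# Doubling one hot device's points roughly quadruples its clustering work:
small = {"dev1": list(range(100))}
large = {"dev1": list(range(200))}
print(pairwise_comparisons(small), pairwise_comparisons(large))  # 4950 19900
```

Total data volume can grow linearly while the work inside the hottest keys grows quadratically, which is exactly the kind of pain that only shows up at scale.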
The third iteration of the visitation pipeline
In the last iteration I’ll describe here, we made some big changes. We moved one of the costliest steps of the clustering from Dataflow to BigQuery (the GroupByKey, remember?), which makes our processing a lot faster and 95% cheaper. Big wins!
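The idea can be sketched as a query like the one below. The table and column names are hypothetical, but the shape is the point: BigQuery does the heavy per-device grouping as a set-based GROUP BY, so Dataflow receives pre-grouped rows instead of shuffling every raw point through a GroupByKey:

```python
# Sketch of the third iteration's idea (hypothetical table/column names):
# push the per-device grouping into BigQuery, then feed the pre-aggregated
# rows to the rest of the pipeline.

PRE_CLUSTER_QUERY = """
SELECT
  device_id,
  ARRAY_AGG(STRUCT(lat, lng, ts) ORDER BY ts) AS points
FROM `project.dataset.normalized_events`
GROUP BY device_id
"""
```

Downstream, the clustering step then consumes one row per device, which removes the expensive shuffle from Dataflow entirely.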
Why does this matter?
You might scratch your head and wonder why this matters for you and your business. You might think, “We’re not building data pipelines, so why should I care?”.
If you are building data pipelines you might think “Well, that was a quick run-through, where are all of the gory details?”
My point isn’t about data pipelines; it’s about doing sensible planning for the future.
If we had set out to architect the perfect data pipeline two years ago, one that would scale even further into the future than where we are now, we’d still be working on it.
Our firm belief is that you should do the appropriate amount of planning. There are a few reasons why we have managed to build a best-in-class visitation product in a short time. This comes from following a few guidelines, which have been more or less unchanged since the inception of the company.
This is our mantra:
- Spike, test, delete
We try things, both stupid and smart. We test them out, and if they don’t work, we are quick to delete them. This curiosity has led us to some of our best innovations, both technical and organizational.
- Invest in what survives
The counterpart of point 1. When you find something that truly works, invest in it. Make it better, more refined, and treasure it.
- Not invented here for a reason
We stand on the shoulders of giants. We learn from others and use the tools and techniques others have invented; the combination in which we use them is what makes us unique. For the technical part of the company, this means using fully managed services to the fullest extent possible: someone else has already done the hard work of setting them up, and you can focus on the functionality and get to the value faster.
- Architecture is a hygiene factor
Architecture is a broad theme. For a tech company this, of course, means the technical architecture, but it can also apply at an organizational level. If your process, technical or not, is working, the architecture of your system or organization will follow naturally.
So, what’s the main learning from all of this? It’s actually pretty simple: take one step at a time. At the same time, be mindful of any growing pains, whether in your technical or organizational architecture, and make swift decisions to relieve them. Aim to relieve one pain at a time, not to fix everything at once.
It worked for us; do you think it will work for you? Let me know! Reach out to me on Twitter @heim.