- Anthony Corletti
After working as a professional developer for the past ten years, I've watched a few waves of interest sweep through the tech ecosystem, but one point will hold true for the next few decades no matter the wave: developers will always have to run their code somewhere that's not their local machine.
Whether it's a website, backend API, cron job, or machine learning model, the code has to live somewhere beyond the developer's laptop if they want to share their work with the world.
Now more than ever, there are countless ways to do that. Google Cloud (GCP), Amazon Web Services (AWS), and Microsoft Azure are the big three clouds, each with so many ways to deploy code that it would take you years to learn them all, and even longer to master them, if that's even possible, because they're always changing.
There are also other cloud service providers worth mentioning, like DigitalOcean, Heroku, Hetzner, Railway, Render, Vercel, PlanetScale, Supabase, Neon, and so many more.
So how do you choose?
For example, you might choose AWS because you're already using AWS elsewhere in your business, GCP because you're already on Workspace for email and documents, or Azure because you're a Microsoft shop. Maybe you only need a static website, so you choose Vercel because it's free and easy to use.
Generally, I'd recommend using whichever cloud already supports your identity management system, so you don't have to maintain yet another way of doing account management. This is especially true if you're a small team without a dedicated DevOps person. GCP and Azure shine here.
To get more technical, you might also consider building on services that are portable across clouds.
For example, as long as I run my workloads on Kubernetes on one cloud, I can shift to any other cloud that also offers Kubernetes. That works in theory; in practice it works only so long as you're willing to invest real time and energy into migration cycles. It's not a simple copy and paste, but it's possible.
Another way to think about translatable services is at the container level. Let's say you've placed your application in a Docker container and now you have to run it somewhere. You can do that in so many different ways that all have variable costs and different user experiences.
I think the best choice to make here is to not over-engineer with the mindset that you must be able to run on every cloud.
Focus on simplicity, security, and speed by answering the following questions when testing and evaluating a new platform:
- Do my developers and I love this platform's user experience?
- Is this platform secure by default? Are they compliant with all the necessary regulations for my business and my customers?
- Are the feedback loops fast enough for my developers to be productive? Can we run code in minutes, and understand what's going on with live services quickly?
If you can answer yes to all three of those questions, then you've found a good platform to run your code on.
I'd also like to highlight the fact that you get what you pay for, and sometimes it won't make sense to use the services that everyone else is using.
To illustrate this, let's do a simple cost analysis of the cloud.
Let's imagine a scenario where you need to run your code in the cloud on at least 8 vCPUs and 16 GiB of memory. That's more than enough to run a UI, an API, and a database in containers – a full-stack application, if you will.
Approximately what would that cost us on a few of these services?
| Cloud Provider | Cost / Month |
| --- | --- |
| DigitalOcean (Premium AMD Droplet) | $112 |
| Azure VM (Linux, A8 v2, East US) | $146 |
| AWS EC2 (Linux, a1.2xlarge, us-east-1) | $150 |
| GCP GCE (Free OS tier, e2-standard-8, us-central1) | $195 |
| Railway (by usage for 8 vCPUs and 16 GiB of memory) | $320 |
| Heroku (Performance-L) | $500 |
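To put the table in perspective, here's a quick back-of-the-envelope comparison in Python. The figures are the approximate snapshot prices from the table above, not live quotes, and all are for roughly the same 8 vCPU / 16 GiB footprint:

```python
# Approximate monthly prices from the table above (~8 vCPU / 16 GiB each).
# These are snapshot figures, not live quotes.
PRICES = {
    "DigitalOcean (Premium AMD Droplet)": 112,
    "Azure VM (A8 v2)": 146,
    "AWS EC2 (a1.2xlarge)": 150,
    "GCP GCE (e2-standard-8)": 195,
    "Railway (usage-based)": 320,
    "Heroku (Performance-L)": 500,
}

def cost_per_vcpu(monthly: float, vcpus: int = 8) -> float:
    """Monthly cost spread evenly across the instance's vCPUs."""
    return monthly / vcpus

# Print cheapest to most expensive, with a per-vCPU breakdown.
for name, price in sorted(PRICES.items(), key=lambda kv: kv[1]):
    print(f"{name:38s} ${price:>4}/mo  ${cost_per_vcpu(price):6.2f}/vCPU")

cheapest, priciest = min(PRICES.values()), max(PRICES.values())
print(f"Spread: {priciest / cheapest:.1f}x between cheapest and priciest")
```

For the same nominal amount of compute, the spread between the cheapest and most expensive option here is roughly 4.5x, which is exactly the gap the rest of this section tries to explain.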
So what's going on here? And why is a dedicated machine from Hetzner so much cheaper than everything in this table?
Well, Hetzner is a medium-sized company that owns its own data centers, which drives down the cost of its services. But you get a fairly bare machine: you'd have to set up your own virtualization layer (QEMU/KVM) to deploy and run containers, for example, and that costs more in time and energy. It's worth noting that the Hetzner CCX32 is a dedicated server, so you're not sharing resources with anyone else.
Heroku and Railway on the other hand are pricing in their developer experience and the cost of maintenance. They're trying to make it as easy as possible to deploy your code, and they're charging you for that ease of use. These kinds of services might be worth it if you're a small team and you don't have dedicated DevOps expertise.
So what's the point of all this?
The point is that you should be aware of the tradeoffs you're making when you choose a cloud provider. You should also be aware of the fact that you can always move your code to another cloud provider if you need to.
A general recommendation I'd make is to start simple and fast, observe how your business and information systems grow, and then make decisions based on that growth.
A great way to start is with the cloud and managed services; once workloads become predictable and profitable, move them somewhere more cost-effective and secure that doesn't sacrifice speed or developer experience.
When you reach a scale and level of growth like 37signals', you might consider moving to your own data centers, but that's a whole other blog post.