12. 4. 2020
11 min read
Setting up an infrastructure for a monorepo on Render
I think that DevOps is something that has not yet been solved and all the existing providers (AWS, GCP, Azure) leave much to be desired. That is why every time we are starting a new project, I always do extensive research on existing solutions and new players on the market. At the beginning of 2020, we were, once again, in a situation where we had to research and pick a provider for a new project we started working on. During that research, we stumbled upon Render.
We were looking for a quick setup and wanted to spend less than a full day on infrastructure. That is why we were looking for something simple, fast, and reliable. We decided to give Render a try, knowing that there was always the option to fall back to AWS. Render is a fairly new cloud provider you might have never heard of. They launched in 2019 and won TechCrunch Disrupt that year. They have a lot going for them.
I'd like to first talk about the infrastructure of our app, so you can see what moving parts we wanted to have. We will also take a look at all that Render has to offer and see how we managed to utilize it in our project. Lastly, I'd like to talk about some shortcomings and take a look at what Render can not help you with (yet, since they are adding new features all the time).
Our infrastructure and Render
We are building a travel platform for companies. The app consists of a GraphQL backend combined with a Postgres database for data persistence. On the frontend, we have a user-facing React application that uses Apollo to communicate with our backend. We will also have a standalone Node application that does ELT for us; this background application has its own Postgres database. Finally, we will host a static landing page built with Gatsby.
Render has our back on most of the things we will need. But first, let's set up our environment for local development. For this particular project, we will use a monorepo, which means all of our services will live in one directory and under one repository. Fortunately, it's easy to work with monorepos on Render even though it requires some workarounds. But more on that later.
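To make the rest of the article easier to follow, here is roughly what the repository layout will look like (the directory names come from the sections below; the comments are my summary, not a prescribed structure):

```
.
├── api/            # GraphQL backend, Docker setup for local dev
├── data-service/   # long-running ELT Node worker
├── landing-page/   # Gatsby static site
└── web-client/     # CRA React app
```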
On the backend, we will be running our development environment in Docker, since we want to make sure everybody on the team has the same environment. Another advantage is that most machines do not come with Postgres pre-installed, and even if they did, it might not be the same version. We can avoid a lot of headaches this way. In production on Render, we will not be using Docker (although we could, because it is supported). Let's spin up two services in Docker with this `docker-compose.yml`:
```yaml
version: "3"
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
      POSTGRES_DB: "nextretreat_dev"
    volumes:
      - db-volume:/var/lib/postgresql/data
  app:
    build: .
    depends_on:
      - db
    ports:
      - "3001:3001"
    env_file: .env
    volumes:
      - .:/home/app/
      - /home/app/node_modules
volumes:
  db-volume:
```
Our Dockerfile will look like this:
```dockerfile
FROM node:12.14.1-alpine
EXPOSE 3001
WORKDIR /home/app
COPY package.json /home/app/
COPY package-lock.json /home/app/
RUN npm install
COPY . /home/app
CMD ["npm", "run", "start:dev"]
```
This is a pretty standard configuration if you're used to working with Docker, but there is one thing to keep in mind that is specific to Render, and that is the way of specifying the Node version. Locally, it's always a good idea to have an `.nvmrc` file that helps the developers on the team use the same Node version. To let Render know what version it should use, we need to add a `.node-version` file in the root of the repository. For this project we are using `12.14.1` (the same version as in the Dockerfile), so the only thing you need to do is paste that into the `.node-version` file. Every time a service is built or re-built on Render's servers, it reads the file and uses that Node version.
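In practice, pinning the version for both local development and Render comes down to two one-line files in the repository root (assuming version `12.14.1`, to match the Dockerfile above):

```shell
# Pin the Node version for nvm users on the team
echo "12.14.1" > .nvmrc
# Pin the Node version Render uses when building the service
echo "12.14.1" > .node-version
```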
Next up is our ELT Node service. It will live in our monorepo, in a directory called `data-service`. It is a simple app with an `index.js` file in the root of the directory that manages the extraction and loading of the data into our warehouse. As mentioned before, it has its own database to store information in, so we will once again use Docker for that. We will re-use our Docker configuration from the `api` service. There are only a couple of things to change here:
- We have to use a different host port (the first one in the `docker-compose.yml` port mapping), because keeping 5432 would prevent us from running both the `api` and the `data-service` databases at the same time. Let's change it.
- We can remove the `EXPOSE 3001` line from our `Dockerfile`, since this app does not expose a port.
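The changed port mapping in the `data-service` compose file might look like this (5433 is just an arbitrary free host port, pick whatever suits your machine):

```yaml
services:
  db:
    image: postgres
    ports:
      # Host port 5433 avoids clashing with the api database on 5432;
      # inside the container, Postgres still listens on 5432.
      - "5433:5432"
```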
For these kinds of long-running services, you would most likely want to leverage some sort of restart-on-error tool, like forever or pm2. But we will leave all of this up to Render and its Background Worker service. The start command for this service is a simple `npm start`.
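As a sketch of what such a long-running worker can look like, here is a minimal loop (the interval and the `runEtlCycle` body are placeholders, not the actual `data-service` code):

```javascript
// Minimal long-running worker loop. Render's Background Worker keeps the
// process alive and restarts it on a crash, so no forever/pm2 is needed.
const INTERVAL_MS = 60 * 1000; // hypothetical cadence between ELT cycles

async function runEtlCycle() {
  // Placeholder: extract from your sources and load into Postgres here.
  return { rowsLoaded: 0 };
}

async function main() {
  for (;;) {
    try {
      const { rowsLoaded } = await runEtlCycle();
      console.log(`ELT cycle finished, rows loaded: ${rowsLoaded}`);
    } catch (err) {
      // Log and keep looping; a hard crash would be restarted by Render anyway.
      console.error("ELT cycle failed:", err);
    }
    await new Promise((resolve) => setTimeout(resolve, INTERVAL_MS));
  }
}

// Started from index.js via `npm start` -> `node index.js`:
// main();
module.exports = { runEtlCycle, main };
```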
For our static site we will be using Gatsby, which is an awesome tool for building static applications. We will use a simple template, add our code for the landing page in the `landing-page` directory, and specify the build command as `gatsby build`. Not much else to do here.
The last app in our repository is the web client. Following the super simple documentation, we will create and move into our `web-client` directory in the monorepo and scaffold our application using `npx create-react-app my-app`. We will specify two main scripts for this app: `react-scripts start` for local development, and `react-scripts build` as the build command. Super easy. Now we can start developing our app as a regular CRA app.

The only thing left to do for now is to `git push` this project to GitHub (or another git provider).
Setting up Render
We now have our monorepo ready with all the services we will be using locally. Let's recreate our services on Render for the production environment and hook it all up. After creating an account and linking it to our GitHub repository, we are presented with a plethora of services to choose from. For this project, we will not be using Docker in production, and it is still going to be extremely easy to set up. We are going to start with databases.
As of writing this article, Render only supports one type of database, and that is Postgres. Fortunately, that is exactly what we want to use anyway. Setting up a database is very easy; we just head to the Databases section and create a new database. If your database grows in later stages, no worries, you can easily scale it up with a click of a button. Scaling down, however, is not supported yet, so make sure you pick the right size now so that you won't pay for something you will not end up using. We will create two databases for now, one for the `api` and one for our `data-service`.
When you click on a detail of an existing database you can see a pair of connection strings. It is pretty self-explanatory, but if you are connecting to a database from within a Render-hosted service, you use the internal connection string. If you want to connect to the database from outside (GUI app, or a service that is running elsewhere), you use the external connection string.
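A connection string packs everything into one URL, so on the Node side you can hand it to your Postgres client as-is, or pull it apart with the standard `URL` class. The values below are made up for illustration (never commit a real connection string):

```javascript
// Example shape of a Postgres connection string (made-up credentials/host).
const connectionString =
  "postgres://app_user:s3cret@dpg-abc123-internal:5432/nextretreat_prod";

// Node's built-in WHATWG URL class can pick out the individual parts.
const url = new URL(connectionString);
console.log(url.username);          // "app_user"
console.log(url.hostname);          // "dpg-abc123-internal"
console.log(url.port);              // "5432"
console.log(url.pathname.slice(1)); // database name: "nextretreat_prod"
```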
Let's navigate to the Services tab and create our `api` service. We will be using a Web Service and choose Node as our environment. Next up, we specify the `master` branch for the production version of our app (we follow the Git-flow workflow).

Since this project lives inside a monorepo, we have to remember that when it's cloned, we start in the root of the repository. We therefore need to prepend all our commands with `cd <service-directory>` in order to be in the correct folder. The build command is there to make sure any prep work, like installing dependencies or setting up external configuration, is done before the service starts. In our case, it will be `cd api && npm install`. The start command is `cd api && npm start`.
After the service is created, there is only one thing left to do, and that is to create our list of production environment variables. For the database connection string, we can just copy the internal connection string and, voilà, our `api` service is hooked up to our database. What's more, if you have more services that share the same environment variables, you can also create environment groups and assign them to different services here.
For our long-running service, we will return to the Services tab and create a new Background Worker. As a side note, a background worker is ideal for processes like this one: they need to run indefinitely and do not expose a port, whereas web services are meant to be accessed by the outside world. We will once again opt for the Node environment and specify the `master` branch for building. Our build command is `cd data-service && npm install`, and our start command is `cd data-service && npm start`. You can also scale any of your services horizontally by running multiple instances. This might be useful for a data acquisition service like this one, but let's leave it at 1 for now and see what we will need in the future.
Static site and user-facing client
Setting up static sites is extremely straightforward, and if you want to try Render out, your first static site is even free of charge.
For our landing page, we will select Static Site as our environment when creating this service and make `cd landing-page && npm install && npx gatsby build` our build command. We specify our publish directory as `./landing-page/public`, since Gatsby outputs the built site into `public`.
In order to deploy our `web-client`, we create another Static Site and make our build command `cd web-client && npm install && npm run build`. The publish directory will be `./web-client/build`, since this is a CRA app.
After all of this, we can kick back and relax, because we are now running all our services in the cloud on Render. Everything is hooked up, and we can turn on auto deploys if we choose to. This will make sure that every time something is pushed to the specified `master` branch of our repository, our services get re-built with the newest additions.
Shortcomings of Render
At the time of writing this article, Render doesn't support CDNs or object storage, whereas something like AWS, with S3, has these capabilities. For now, this is a disadvantage, because we have to use another service for our CDN and object storage.
It also doesn't have real support for monorepos. If you do it the way I described, every time you push something affecting any service in your repository, the hook triggers a rebuild of every service, not just the one you changed. This might not be ideal, but it works and is not that bad.
We usually use AWS on larger projects, and my main beef with Amazon is that it is very complicated to set up. Render, on the other hand (kind of like Heroku or DigitalOcean), has preexisting services that are super easy to set up and that scale well at the same time. Although there have been inquiries from the community, they do not plan to open-source the configurations of their services, because they believe it would defeat the purpose. It would go against the one-click setup functionality, and I have to say I am with them on this one. This is what makes Render special and so easy to use. For us, it met our requirements: we were able to set up this entire infrastructure in less than a day. That was what we initially wanted to achieve, and Render delivered.
Even though it is missing some services, the team is very active in delivering new features. While the documentation is pretty complete, some edge cases are not covered, and since Render is still pretty new, the internet might be short on answers. But the community is super helpful and the Slack channel is bustling. Whenever I had an issue, a team member always reached out and helped me with whatever I was dealing with.