Modern apps place high demands on front-end developers. Web apps require complex functionality, and the lion’s share of that work is falling to front-end devs:
- building modern, accessible user interfaces
- creating interactive elements and complex animations
- managing complex application state
- meta-programming: build scripts, transpilers, bundlers, linters, etc.
- reading from REST, GraphQL, and other APIs
- middle-tier programming: proxies, redirects, routing, middleware, auth, etc.
This list is daunting on its own, but it gets really rough if your tech stack doesn’t optimize for simplicity. A complex infrastructure brings hidden responsibilities that create risk, slowdowns, and frustration.
Depending on the infrastructure we choose, we may also inadvertently add server configuration, release management, and other DevOps duties to a front-end developer’s plate.
Software architecture has a direct impact on team productivity. Choose tools that avoid hidden complexity to help your teams accomplish more and feel less overloaded.
## The sneaky middle tier: where front-end tasks can balloon in complexity
Let’s look at a task I’ve seen assigned to multiple front-end teams: create a simple REST API to combine data from a few services into a single request for the frontend. If you just yelled at your computer, “But that’s not a frontend task!” — I agree! But who am I to let facts hinder the backlog?
An API that’s only needed by the frontend falls into middle-tier programming. For example, if the front end combines the data from several backend services and derives a few additional fields, a common approach is to add a proxy API so the frontend isn’t making multiple API calls and doing a bunch of business logic on the client side.
There’s no clear answer to which back-end team should own an API like this. Getting it onto another team’s backlog (and getting updates made in the future) can be a bureaucratic nightmare, so the front-end team ends up with the responsibility.
This is a story that ends differently depending on the architectural choices we make. Let’s look at two common approaches to handling this task:
- Build an Express app on Node to create the REST API
- Use serverless functions to create the REST API
Express + Node comes with a surprising amount of hidden complexity and overhead. Serverless lets front-end developers deploy and scale the API quickly so they can get back to their other front-end tasks.
## Solution 1: Build and deploy the API using Node and Express (and Docker and Kubernetes)
Earlier in my career, the standard operating procedure was to use Node and Express to stand up a REST API. On the surface, this seems relatively straightforward. We can create the whole REST API in a file called `server.js`:
```js
const express = require('express');

const PORT = 8080;
const HOST = '0.0.0.0';

const app = express();

app.use(express.static('site'));

// simple REST API to load movies by slug
const movies = require('./data.json');

app.get('/api/movies/:slug', (req, res) => {
  const { slug } = req.params;
  const movie = movies.find((m) => m.slug === slug);

  res.json(movie);
});

app.listen(PORT, HOST, () => {
  console.log(`app running on http://${HOST}:${PORT}`);
});
```
This code isn’t too far removed from front-end JavaScript. There’s a decent amount of boilerplate in here that will trip up a front-end dev if they’ve never seen it before, but it’s manageable.
If we run `node server.js`, we can visit `http://localhost:8080/api/movies/some-movie` and see a JSON object with details for the movie with the slug `some-movie` (assuming you’ve defined that in `data.json`).
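For reference, a minimal `data.json` might look like this (the specific movie fields are illustrative, not required by the API):

```json
[
  {
    "slug": "some-movie",
    "title": "Some Movie",
    "year": 2021
  }
]
```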
### Deployment introduces a ton of extra overhead
Building the API is only the beginning, however. We need to get this API deployed in a way that can handle a decent amount of traffic without falling down. Suddenly, things get a lot more complicated.
We need several more tools:
- somewhere to deploy this (e.g. DigitalOcean, Google Cloud Platform, AWS)
- a container to keep local dev and production consistent (i.e. Docker)
- a way to make sure the deployment stays live and can handle traffic spikes (i.e. Kubernetes)
At this point, we’re way outside front-end territory. I’ve done this kind of work before, but my solution was to copy-paste from a tutorial or Stack Overflow answer.
The Docker config is somewhat comprehensible, but I have no idea if it’s secure or optimized:
```dockerfile
FROM node:14

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

EXPOSE 8080
CMD [ "node", "server.js" ]
```
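To use this Dockerfile, the local workflow looks roughly like this (the image name is arbitrary):

```sh
# Build the image from the Dockerfile above
docker build -t movie-api .

# Run it, mapping the exposed port to the host
docker run -p 8080:8080 movie-api
```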
Next, we need to figure out how to deploy the Docker container into Kubernetes. Why? I’m not really sure, but that’s what the back-end teams at the company use, so we should follow best practices.
This requires more configuration (all copy-and-pasted). We entrust our fate to Google and come up with Docker’s instructions for deploying a container to Kubernetes.
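For a sense of scale, here’s a sketch of the kind of Kubernetes config that tends to get copy-pasted at this point. The names and image reference are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movie-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: movie-api
  template:
    metadata:
      labels:
        app: movie-api
    spec:
      containers:
        - name: movie-api
          image: movie-api:latest
          ports:
            - containerPort: 8080
```

And even this minimal manifest only runs the pods; routing traffic to them requires additional Service and Ingress configuration on top.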
Our initial task of “stand up a quick Node API” has ballooned into a suite of tasks that don’t line up with our core skill set. The first time I got handed a task like this, I lost several days getting things configured and waiting on feedback from the backend teams to make sure I wasn’t causing more problems than I was solving.
Some companies have a DevOps team to check this work and make sure it doesn’t do anything terrible. Others end up trusting the hivemind of Stack Overflow and hoping for the best.
With this approach, things start out manageable with some Node code, but quickly spiral out into multiple layers of config spanning areas of expertise that are well beyond what we should expect a frontend developer to know.
## Solution 2: Build the same REST API using serverless functions
If we choose serverless functions, the story can be dramatically different. Serverless is a great companion to Jamstack web apps: it gives front-end developers the ability to handle middle-tier programming without the unnecessary complexity of figuring out how to deploy and scale a server.
There are multiple frameworks and platforms that make deploying serverless functions painless. My preferred solution is to use Netlify since it enables automated continuous delivery of both the front end and serverless functions. For this example, we’ll use Netlify Functions to manage our serverless API.
Using Functions as a Service (a fancy way of describing platforms that handle the infrastructure and scaling for serverless functions) means that we can focus only on the business logic and know that our middle tier service can handle huge amounts of traffic without falling down. We don’t need to deal with Docker containers or Kubernetes or even the boilerplate of a Node server — it Just Works™ so we can ship a solution and move on to our next task.
First, we can define our REST API in a serverless function at `netlify/functions/movie-by-slug.js`:
```js
const movies = require('./data.json');

exports.handler = async (event) => {
  const slug = event.path.replace('/api/movies/', '');
  const movie = movies.find((m) => m.slug === slug);

  return {
    statusCode: 200,
    body: JSON.stringify(movie),
  };
};
```
To add the proper routing, we can create a `netlify.toml` at the root of the project:

```toml
[[redirects]]
  from = "/api/movies/*"
  to = "/.netlify/functions/movie-by-slug"
  status = 200
```
This is significantly less configuration than we’d need for the Node/Express approach. What I prefer about this approach is that the config here is stripped down to only what we care about: the specific paths our API should handle. The rest — build commands, ports, and so on — is handled for us with good defaults.
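Because the function is plain JavaScript, it’s also easy to exercise in isolation. Here’s a minimal sketch that calls the handler with a mock event — the inline movie data and the simplified event shape are assumptions for illustration, not Netlify’s full event spec:

```javascript
// Inline stand-in for require('./data.json') so the sketch is self-contained
const movies = [{ slug: 'booper', title: 'Booper', year: 2021 }];

// Same logic as the serverless function above
const handler = async (event) => {
  const slug = event.path.replace('/api/movies/', '');
  const movie = movies.find((m) => m.slug === slug);

  return {
    statusCode: 200,
    body: JSON.stringify(movie),
  };
};

// Simulate the request after the netlify.toml redirect rewrites the path
handler({ path: '/api/movies/booper' }).then((res) => {
  console.log(res.statusCode, JSON.parse(res.body).title); // prints: 200 Booper
});
```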
If we have the Netlify CLI installed, we can run this locally right away with the command `ntl dev`, which knows to look for serverless functions in the `netlify/functions` directory.

Visiting `http://localhost:8888/api/movies/booper` will show a JSON object containing details about the “booper” movie.
So far, this doesn’t feel too different from the Node and Express setup. However, when we go to deploy, the difference is huge. Here’s what it takes to deploy this site to production:
- Commit the serverless function and `netlify.toml` to the repo and push it up on GitHub, Bitbucket, or GitLab
- Use the Netlify CLI to create a new site connected to your git repo: `ntl init`
That’s it! The API is now deployed and capable of scaling on demand to millions of hits. Changes will be automatically deployed whenever they’re pushed to the main repo branch.
You can see this in action at https://serverless-rest-api.netlify.app and check out the source code on GitHub.
## Serverless unlocks a huge amount of potential for front-end developers
Serverless functions are not a replacement for all back-ends, but they’re an extremely powerful option for handling middle-tier development. Serverless avoids the unintentional complexity that can cause organizational bottlenecks and severe efficiency problems.
Using serverless functions allows front-end developers to complete middle-tier programming tasks without taking on the additional boilerplate and DevOps overhead that creates risk and decreases productivity.
If our goal is to empower frontend teams to quickly and confidently ship software, choosing serverless functions bakes productivity into the infrastructure. Since adopting this approach as my default Jamstack starter, I’ve been able to ship faster than ever, whether I’m working alone, with other front-end devs, or cross-functionally with teams across a company.
The post Serverless Functions: The Secret to Ultra-Productive Front-End Teams appeared first on CSS-Tricks.