What are microservices in Node.js and why are they needed? In short, they are small services written in Node.js, each with a single responsibility. A microservices system can mix programming languages across services, but this is entirely optional. In this article, we will focus on how to build microservices with Node.js.
Microservices are small services, each responsible for one functionality or domain, for example, a user service, an authentication service, etc. Microservices are loosely coupled and independently deployable. Despite these advantages, they have a few downsides: as the number of services grows, each one has to be monitored, and debugging becomes harder when a defect spans multiple services.
A monolith is a single service that may contain many modules. It differs from microservices mainly in deployment: a monolith deploys all modules together, and as the modules grow, deployment takes more time. For example, if we release a new feature in the User Module, we still need to package all of the modules to deploy the service.
Here’s a figure to understand the monolithic deployment process better.
Let’s compare it with the microservices deployment.
A major difference between the two deployment processes is that with microservices, we can update the User Service on its own, as long as the change doesn't affect other services. To achieve this, we need to avoid breaking changes, for example, removing an endpoint or removing attributes from a response.
Here’s a comparison between monolithic and microservices in Node.js.
Let’s explore why we need microservices in Node.js, and look at the benefits.
1. Flexible scalability: We can scale each service independently, adding resources only to the critical ones.
2. Frequent deployment: We can deploy daily, weekly, or even more often without hassle.
3. Promotes agility: Small teams can each own a single service and release frequently.
4. Better reliability: We can deploy one service without worrying about breaking the entire system.
There are several popular and easy-to-learn tools for building microservices in Node.js, such as Express, which we will use later in this article.
The following architecture patterns are not specific to Node.js; they can be used for other systems and applications.
1. Database per service: Each service owns its own database, matching the separation of responsibilities; a service should only access its own data store.
2. Shared database: Multiple services share one database. This is commonly used when first migrating from a monolith to microservices.
3. Service per team: Used by large organizations, this pattern gives each team responsibility for its own services.
4. Messaging: Services communicate with each other through asynchronous messages.
We can combine these patterns based on needs and requirements. One can also choose to hire a Node.js developer to handle the task.
There are many patterns for microservices, but these are some of the most well-known and widely used. Others can be explored here.
In this section, we will explore how to build microservices in Node.js in detail.
The following are the tools needed to build microservices in Node.js:
1. Node.js
2. npm, which is bundled with Node.js.
3. Docker for deploying microservices locally.
4. RabbitMQ, which will be used as a messaging server. Note: we suggest running RabbitMQ using Docker. The same can be set up here.
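For reference, here is one way to run RabbitMQ with Docker (the container name and image tag below are our choice; the management image also exposes a web UI on port 15672):

docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management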
Here’s a hands-on guide for practice. We will follow the messaging pattern.
1. Initialize the project using the command npm init. Name the project producer.
a. Install some dependencies or libraries to communicate with RabbitMQ. In this case, we will use amqplib. Run the command npm install amqplib.
b. Install the “uuid” library to give a unique ID to each message. Run the command npm install uuid.
2. Initialize another project in a different directory and name it consumer. Install the library with the name amqplib by using the command npm install amqplib.
3. We're now ready to code. This GitHub repository can be checked for the final result of the code.
Next, we follow the steps below for the implementation.
1. Create a file named index.js in each project.
2. For a line-by-line explanation of the code, check this for the producer and this for the consumer.
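With both projects initialized, the directory layout will look roughly like this:

producer/
  index.js
  package.json
consumer/
  index.js
  package.json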
1. In the producer, we can set up the connection to RabbitMQ with the following code:
const amqp = require('amqplib'); // from the amqplib package installed earlier
const connection = await amqp.connect(process.env.RABBITMQ_HOST || 'amqp://localhost');
2. We will need to create the channel before sending any messages. The code is as follows:
const channel = await connection.createChannel();
3. We will use a queue, and the consumer will read incoming messages from that same queue. The code below picks a queue name and creates the queue:
const queueName = 'tasks'; // illustrative name; producer and consumer must agree on it
await channel.assertQueue(queueName, { durable: true });
4. Once the preparation is complete, we can send the messages using sendToQueue. Here's the code:
const { v4: uuidv4 } = require('uuid'); // from the uuid package installed earlier

// send 10 messages, sharing one correlation id and generating a message id for each
const correlationId = uuidv4();
for (let i = 1; i <= 10; i++) {
  const buff = Buffer.from(JSON.stringify({ test: `Hello World ${i}!!` }), 'utf-8');
  const result = channel.sendToQueue(queueName, buff, {
    persistent: true,
    messageId: uuidv4(),
    correlationId: correlationId,
  });
  console.log(result);
}
5. We need to close the channel and connection after sending the messages.
await channel.close();
await connection.close();
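Putting the producer steps together, a minimal index.js could look like the sketch below. The queue name tasks is an arbitrary choice for illustration; any name works as long as the consumer uses the same one.

const amqp = require('amqplib');
const { v4: uuidv4 } = require('uuid');

// arbitrary queue name for illustration; the consumer must use the same name
const queueName = 'tasks';

async function main() {
  // connect and open a channel
  const connection = await amqp.connect(process.env.RABBITMQ_HOST || 'amqp://localhost');
  const channel = await connection.createChannel();
  await channel.assertQueue(queueName, { durable: true });

  // send 10 messages that share one correlation id
  const correlationId = uuidv4();
  for (let i = 1; i <= 10; i++) {
    const buff = Buffer.from(JSON.stringify({ test: `Hello World ${i}!!` }), 'utf-8');
    channel.sendToQueue(queueName, buff, {
      persistent: true,
      messageId: uuidv4(),
      correlationId: correlationId,
    });
  }

  await channel.close();
  await connection.close();
}

main().catch(console.error);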
1. In the consumer, similar to the producer, we need to set up the connection, channel, and queue (if it has not been created yet). Here's the code:
const connection = await amqp.connect(process.env.RABBITMQ_HOST || 'amqp://localhost');
const channel = await connection.createChannel();
await channel.assertQueue(queueName, { durable: true });
2. We can then use consume with our queue name. Note that the consumer must use the same queue name as the producer:
channel.consume(queueName, function (message) {
  console.log("[%s] Received with id (%s) message: %s",
    message.properties.correlationId,
    message.properties.messageId,
    message.content.toString());
  channel.ack(message);
}, { noAck: false });
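Likewise, a minimal consumer index.js could look like this sketch, assuming the same tasks queue name as the producer sketch above:

const amqp = require('amqplib');

const queueName = 'tasks'; // must match the producer's queue name

async function main() {
  const connection = await amqp.connect(process.env.RABBITMQ_HOST || 'amqp://localhost');
  const channel = await connection.createChannel();
  await channel.assertQueue(queueName, { durable: true });

  // read messages and acknowledge each one after processing
  channel.consume(queueName, function (message) {
    console.log("[%s] Received with id (%s) message: %s",
      message.properties.correlationId,
      message.properties.messageId,
      message.content.toString());
    channel.ack(message);
  }, { noAck: false });
}

main().catch(console.error);

Running node index.js in the producer directory and then in the consumer directory should show the consumer printing the ten messages.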
We will now learn how multiple services communicate with each other through an HTTP API. The diagram below illustrates the implementation.
The common use case for this diagram is when the User Service doesn't have the data itself and needs to ask Hello Service to provide it. The User Service then combines the data and returns it to the user.
For now, we won't involve databases; instead, we will learn how the services communicate with each other. HTTP API communication is common when implementing microservices in Node.js.
We will generate two projects: Hello Service and User Service. We will use the Express Generator in this case.
1. Generate the project using the command npx express-generator hello.
2. Move to the directory with the name “hello”. Run “npm i” to install the dependencies.
3. Open the file routes/users.js and update the code as follows:
/* GET users listing. */
router.get('/', function(req, res, next) {
  let name = req.query["name"];
  res.send({ message: `Hello ${name}!` });
});
4. Run the project using the command npm run start. It helps to use Postman (or a similar tool) to try the API.
5. We can check the response of the API using Postman.
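Alternatively, we can check it from the command line (assuming the default port 3000):

curl "http://localhost:3000/users?name=Turing"

The response should be {"message":"Hello Turing!"}.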
For the final result of Hello Service, we can take a look at this code.
1. Generate the project using the command npx express-generator users.
2. Move to the directory with the name “users”. Run “npm i” to install the dependencies.
3. We will add a new dependency named axios, which can be installed with npm install axios. We will use it to communicate with Hello Service.
4. Set the PORT environment variable to 3001. We need to change the port because 3000 is already used by Hello Service. On Windows, we can use set PORT=3001; on Linux, export PORT=3001.
5. Update routes/users.js with this code:
var express = require('express');
var router = express.Router();
var axios = require('axios');

/* GET users listing. */
router.get('/', function(req, res, next) {
  let name = req.query["name"];
  // request the message from the hello service
  axios.get(`http://localhost:3000/users?name=${name}`)
    .then(result => {
      // return a custom response that includes the message
      res.send(`You've greeted with this message: <b>${result.data["message"]}</b>`);
    })
    .catch(err => {
      // resend the error
      res.send(err);
    });
});

module.exports = router;
6. We can see that the response is now different: the User Service wraps the message it receives from Hello Service, as shown in the example after this list.
7. For the final result of User Service, we can look at this code.
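For example, with Hello Service running on port 3000 and User Service on port 3001, calling the User Service looks like this:

curl "http://localhost:3001/users?name=Turing"

The response should be: You've greeted with this message: <b>Hello Turing!</b>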
Now, we move on to learning how to connect to the database. We will use the existing User Service and connect it to PostgreSQL, using pg as our main library.
Before implementation, we need to install PostgreSQL. Note: it’s recommended to install it using Docker. This documentation will help.
1. Move to the User Service directory.
2. Install the pg library using the command npm install pg.
3. Now, set up the migration by installing node-pg-migrate with this command npm install node-pg-migrate.
4. Make sure that the DATABASE_URL environment variable is set, for example: export DATABASE_URL=postgres://postgres:postgres@localhost:5432/test
5. Add a script named migrate to package.json. The scripts section will look like this:
"scripts": { "start": "node ./bin/www", "migrate": "node-pg-migrate" },
6. Prepare the migration file in the migrations directory. We will use this name: 1_first_migration.
exports.up = (pgm) => {
  pgm.createTable('users', {
    id: 'id',
    name: { type: 'varchar(1000)', notNull: true },
    createdAt: {
      type: 'timestamp',
      notNull: true,
      default: pgm.func('current_timestamp'),
    },
  })
}
7. This migration creates the users table with the columns id, name, and createdAt. Run npm run migrate and make sure the migration completes.
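For reference, the migration above is roughly equivalent to the following SQL (a sketch; id: 'id' is node-pg-migrate shorthand for a serial primary key):

CREATE TABLE users (
  id serial PRIMARY KEY,
  name varchar(1000) NOT NULL,
  "createdAt" timestamp NOT NULL DEFAULT current_timestamp
);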
8. Navigate to the users file again. We will now write the connection code and make sure a user record is created when a name is given.
9. To establish the connection, we first need to define the connection pool:
var { Pool } = require('pg');
var pool = new Pool();
10. The environment variables must be set up. This documentation has more information. We need to set PGUSER, PGPASSWORD, and PGDATABASE (and PGHOST or PGPORT if they differ from the defaults).
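If setting the PG* variables is inconvenient, the same values can be passed to the Pool constructor directly. A sketch, where the fallback values are illustrative defaults rather than requirements:

var { Pool } = require('pg');

// explicit configuration instead of PG* environment variables
var pool = new Pool({
  user: process.env.PGUSER || 'postgres',
  password: process.env.PGPASSWORD || 'postgres',
  host: process.env.PGHOST || 'localhost',
  database: process.env.PGDATABASE || 'test',
  port: process.env.PGPORT || 5432,
});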
11. We will update the route handler to an async function so the code is easier to read:
router.get('/', async function (req, res, next) {
  // get the name from the query
  let name = req.query["name"];
  try {
    // check existing data
    let existing = await pool.query("SELECT COUNT(*) FROM users WHERE name=$1", [name]);
    if (parseInt(existing.rows[0].count) === 0) {
      // insert user data
      let result = await pool.query("INSERT INTO users(name) VALUES($1) RETURNING id", [name]);
      console.log(`Saved with id: ${result.rows[0].id}`);
    }
    // request the message from the hello service
    let response = await axios.get(`http://localhost:3000/users?name=${name}`);
    // return a custom response that includes the message
    res.send(`You've greeted with this message: <b>${response.data["message"]}</b>`);
  } catch (err) {
    console.error(err);
    res.send(err);
  }
});
12. The user's name is inserted the first time they call the API. We can check the result directly in the database.
13. We can also create an API to retrieve data from the database using this code:
// GET by name
router.get('/byName', async function (req, res, next) {
  // get the name from the query
  let name = req.query["name"];
  try {
    // fetch existing data
    let existing = await pool.query("SELECT * FROM users WHERE name=$1", [name]);
    // return the matching rows
    res.send(existing.rows);
  } catch (err) {
    console.error(err);
    res.send(err);
  }
});
14. Let's try the API in Postman.
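For example, assuming a user named Turing was created by an earlier call, the new endpoint can be tried from the command line:

curl "http://localhost:3001/users/byName?name=Turing"

This returns the matching rows, something like [{"id":1,"name":"Turing","createdAt":"..."}] (output shape illustrative).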
And there you have it: a detailed guide to building microservices with Node.js. You've learned how services communicate with each other, both through a message broker like RabbitMQ and over an HTTP API, how to build a simple API with Express, and how to connect to a database. To deepen your understanding, consider exploring NoSQL databases like MongoDB, DynamoDB, and Cassandra; they offer a different experience from relational databases like PostgreSQL.
It's also worth learning more about databases like MySQL and MSSQL (Microsoft SQL Server), and about ORMs like Sequelize, TypeORM, Mongoose, and Objection. We also recommend learning these concepts: API Gateway, Load Balancer, API Key, and Rate Limiting. In brief, an API gateway manages multiple APIs, a load balancer distributes traffic across resources such as multiple virtual machines, an API key secures an API, and rate limiting keeps an API from being exhausted and helps mitigate DoS and DDoS attacks.
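As a small taste of rate limiting, here is a minimal sketch using the express-rate-limit package (our choice for illustration; it is not used elsewhere in this article):

var rateLimit = require('express-rate-limit');

// allow each client at most 100 requests per minute
app.use(rateLimit({ windowMs: 60 * 1000, max: 100 }));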
Bervianto Leo Pratama is a software engineer and technical writer who loves to learn every day. He actively writes technical blogs about microservices, DevOps, and developer tools.