How to create & manage a Postgres database in NodeJS from scratch

Seif Ghezala
Updated 2023-11-13 · 10 min
Notice: Before you jump in and start reading, it's important to understand that this is not a tutorial you'd read on public transportation or on your toilet seat. You might want to find a nice place to sit for an hour and follow along.

We have 1 goal: set up a production-ready NodeJS backend for a blog.

Just by targeting that goal, we will learn:

  • What a database schema is and how to design it. How to quickly create a Postgres database based on that schema with 1 command.
  • How to visualize & interact with the database with pgAdmin.
  • What an ORM is and what makes it better than using direct SQL queries. How to use the Sequelize ORM to perform migrations, seeds, and queries on the database.

Let's do this!

🎁 Gift: Here's a nice Spotify playlist to listen to while doing this.

Step 1: designing the database schema

The database schema is the blueprint of how the database will be organized. It is usually designed prior to building the database in order to best fit the application's requirements. Therefore, the first step is to clearly define our blog's requirements.

What are the requirements of our blog?

Our blog requirements can be summarized in the following points:

  • We can have multiple users. Each user has a name, email, password (hash) and a profile picture URL. It's also important to keep track of when a user account is created or modified.
  • The blog can have multiple articles. Each article has a title, body, and minutes of reading. It's also important to keep track of when an article is created or modified.
  • A user can write one or many articles. An article can also have one or many authors.
  • Articles can be organized into categories. Each article can belong to one or many categories.

What entities can we extract from these requirements?

One way of looking at these requirements is to group them based on the following entities/atoms:

  • Users: they have a name, email, password (hash) and a profile picture URL.
  • Articles: they have a title, body, and minutes of reading.
  • Categories: they have a name.

Each entity represents a database table.

What are the relationships between these entities?

Now that we extracted the different database entities, we can extract the different relationships between them:

  • ArticleAuthors: this is the relationship between articles and users. Each user can have multiple articles and each article can have multiple users (authors).
  • CategoryArticles: this is the relationship between categories and articles. Each category can have multiple articles and each article can have multiple categories.

Each relationship represents a database table.

Drawing the database schema

That's it! Believe it or not, we already designed our database. The only thing left is to draw it with a database design tool such as dbdiagram.

Final database schema
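
If you'd like to reproduce the diagram yourself, here is a rough sketch of the schema in DBML, dbdiagram's schema definition language (the column types mirror the SQL export we'll generate later):

Table Users {
  id int [pk]
  name varchar
  email varchar [unique]
  hash varchar
  picture varchar
  createdAt timestamp
  updatedAt timestamp
}

Table Articles {
  id int [pk]
  title varchar
  body varchar
  minutesRead int
  createdAt timestamp
  updatedAt timestamp
}

Table Categories {
  name varchar [pk]
}

Table ArticleAuthors {
  authorId int [pk]
  articleId int [pk]
}

Table ArticleCategories {
  articleId int [pk]
  categoryName varchar [pk]
}

Ref: ArticleAuthors.authorId > Users.id
Ref: ArticleAuthors.articleId > Articles.id
Ref: ArticleCategories.articleId > Articles.id
Ref: ArticleCategories.categoryName > Categories.name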

Step 2: creating a Postgres database with 1 command

Instead of installing a million tools to be able to run our database, we will create and run it with 1 command using Docker.

If you don't have Docker installed already, you can install it here. To bootstrap a database, we can run the following command:

docker run -d -p 5432:5432 --name my-postgres -e POSTGRES_PASSWORD=postgres postgres

This runs a Postgres Docker container which, by default, contains a Postgres database called postgres. The database is then available on port 5432.
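
To quickly check that everything is up, we can open a psql shell inside the container (my-postgres is the name we gave it above):

docker exec -it my-postgres psql -U postgres

Type \q to exit the shell.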

Step 3: visualizing & interacting with the database with pgAdmin

One way of interacting with a Postgres database is via a UI tool such as pgAdmin.

After installing pgAdmin, let's create a server connection to our database. To do so, we right-click on Servers in the left tab and select Create then Server.

Connecting to our database

Let's call our connection blog in the General tab:

General tab when creating a server connection to our database

We can then enter the necessary information to connect to the database in the Connection tab:

  • Host: localhost
  • Port: 5432
  • Maintenance database: postgres
  • Username: postgres
  • Password: postgres

Connection tab when creating a server connection to our database

Step 4: using the Sequelize ORM to perform migrations, seeds, and queries on the database

What's an ORM?

An ORM (Object-Relational Mapper) is a tool that facilitates managing and interacting with a database without having to manually write SQL. An ORM keeps our backend from having long SQL queries scattered all over the place, and offers other features to reliably manage our database.
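
For a taste of the difference, here's the same lookup written both ways; the second line uses a Sequelize model like the ones we'll define later in this tutorial:

// Raw SQL we'd otherwise write and maintain by hand:
//   SELECT * FROM "Users" WHERE "email" = 'elvira@demo.com' LIMIT 1;
// The equivalent call through the ORM:
const user = await User.findOne({ where: { email: "elvira@demo.com" } });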

In the next few steps, we will install and use Sequelize, a NodeJS ORM.

Setting up the project

Let's first create an empty project folder blog and initialize it with npm:

mkdir blog
cd blog
npm init -y

Let's also create an empty index.js file that will have the code for our server:

touch index.js

Installing Sequelize

Now let's go ahead and install Sequelize and its command-line tool:

npm install sequelize sequelize-cli
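
Since we're talking to Postgres, Sequelize also needs the Postgres driver, which isn't installed automatically:

npm install pg pg-hstore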

Bootstrapping an initial project structure

sequelize-cli allows us to quickly bootstrap a useful initial boilerplate that saves us some time. Let's do so by running the following command:

./node_modules/.bin/sequelize init

The result should be something like:

Created "config/config.json"
Successfully created models folder at ".../blog/models".
Successfully created migrations folder at ".../blog/migrations".
Successfully created seeders folder at ".../blog/seeders".

This command created the following:

  • A config/config.json file that will contain the necessary configuration to connect to our database in development, staging, and production environments (we'll adjust it for Postgres right after this list).
  • A models/ directory which will hold our models. Models are simply blueprint functions that map directly to our database tables. We will have a model for every table in our schema.
  • A migrations/ directory. Migrations are scripts that allow us to reliably transform our database schema over time and keep it consistent across environments. In other words, if we ever change our mind about the database schema we designed, migrations are the best way to change it without sweating over losing our data.
  • A seeders/ directory. Seeders are scripts that inject data into our database. We will use them to populate our database tables with test data.
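
Before running anything against the database, let's point config/config.json at our Postgres container; by default, the generated file targets MySQL. Based on the container we started earlier, the development entry should look something like this:

{
  "development": {
    "username": "postgres",
    "password": "postgres",
    "database": "postgres",
    "host": "127.0.0.1",
    "port": 5432,
    "dialect": "postgres"
  }
}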

Creating a migration for the tables

Guess what? dbdiagram also allows us to export the SQL statements necessary to create our database. Let's go ahead and export the Postgres-compatible SQL queries:

Exporting SQL queries to create our database tables

Let's save these queries under blog/queries/create-tables.sql. These queries will be used in our first migration to create the database tables and should contain the following:

CREATE TABLE "Users" (
"id" int PRIMARY KEY,
"name" varchar,
"email" varchar UNIQUE,
"hash" varchar,
"picture" varchar,
"createdAt" timestamp,
"updatedAt" timestamp
);
CREATE TABLE "Articles" (
"id" int PRIMARY KEY,
"title" varchar,
"body" varchar,
"minutesRead" varchar,
"createdAt" timestamp,
"updatedAt" timestamp
);
CREATE TABLE "Categories" (
"name" varchar PRIMARY KEY
);
CREATE TABLE "ArticleAuthors" (
"authorId" int PRIMARY KEY,
"articleId" int PRIMARY KEY
);
CREATE TABLE "ArticleCategories" (
"articleId" int PRIMARY KEY,
"categoryName" varchar PRIMARY KEY
);
ALTER TABLE "ArticleAuthors" ADD FOREIGN KEY ("authorId") REFERENCES "Users" ("id");
ALTER TABLE "ArticleAuthors" ADD FOREIGN KEY ("articleId") REFERENCES "Articles" ("id");
ALTER TABLE "ArticleCategories" ADD FOREIGN KEY ("articleId") REFERENCES "Articles" ("id");
ALTER TABLE "ArticleCategories" ADD FOREIGN KEY ("categoryName") REFERENCES "Categories" ("name");
Since Postgres doesn't support defining multiple Primary Keys with this syntax, let's modify the queries for creating ArticleAuthors and ArticleCategories:
CREATE TABLE "ArticleAuthors" (
"authorId" int NOT NULL,
"articleId" int NOT NULL,
"createdAt" timestamp NOT NULL,
"updatedAt" timestamp NOT NULL,
CONSTRAINT pk1 PRIMARY KEY ("authorId","articleId")
);
CREATE TABLE "ArticleCategories" (
"articleId" int NOT NULL,
"categoryName" varchar NOT NULL,
CONSTRAINT pk2 PRIMARY KEY ("articleId","categoryName")
);

Let's also create drop-tables.sql in the same folder. It contains the queries necessary to drop the tables, in case we want to roll back our create-tables migration:

DROP TABLE "ArticleCategories";
DROP TABLE "ArticleAuthors";
DROP TABLE "Categories";
DROP TABLE "Articles";
DROP TABLE "Users";

Now, we can create our first migration, create-tables:

./node_modules/.bin/sequelize migration:generate --name create-tables

This creates a timestamped migration file ending in -create-tables.js, containing the following:

"use strict";
module.exports = {
up: (queryInterface, Sequelize) => {
/*
Add altering commands here.
Return a promise to correctly handle asynchronicity.
Example:
return queryInterface.createTable('users', { id: Sequelize.INTEGER });
*/
},
down: (queryInterface, Sequelize) => {
/*
Add reverting commands here.
Return a promise to correctly handle asynchronicity.
Example:
return queryInterface.dropTable('users');
*/
},
};

Our migration file is pretty straightforward and contains 2 functions:

  • up: executed to do the migration work. In our case, it will contain the script to create our tables.
  • down: executed to rollback (undo) the migration.

Notice that both functions receive queryInterface as an argument. In the up function, we can use it to create our tables by running the queries in queries/create-tables.sql. In the down function, we can use queryInterface to drop the tables by running the queries in queries/drop-tables.sql:

const fs = require("fs");
const path = require("path");
const readFile = require("util").promisify(fs.readFile);
module.exports = {
up: async (queryInterface) => {
try {
const queryPath = path.resolve(
__dirname,
"../queries/create- tables.sql"
);
const query = await readFile(queryPath, "utf8");
return await queryInterface.sequelize.query(query);
} catch (err) {
console.error("Unable to create tables: ", err);
}
},
down: async (queryInterface) => {
try {
const queryPath = path.resolve(__dirname, "../queries/drop-tables.sql");
const query = await readFile(queryPath, "utf8");
return await queryInterface.sequelize.query(query);
} catch (err) {
console.error("Unable to drop tables: ", err);
}
},
};

Now let's run our migration:

./node_modules/.bin/sequelize db:migrate

Once this is done, let's right-click on the postgres database in pgAdmin and click Refresh. After that, our tables should be available 🙌

Our tables in pgAdmin
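
Under the hood, sequelize-cli records the migrations it has executed in a SequelizeMeta table. If we ever need to undo the latest migration (which runs its down function), the CLI provides a dedicated command:

./node_modules/.bin/sequelize db:migrate:undo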

Adding models & associations

Now let's add the necessary models for our tables. The relationships between tables translate into associations: once we define those, Sequelize is able to automatically perform join queries when needed.

Let's start with the User model:

// models/User.js
const Sequelize = require("sequelize");

module.exports = function createUserModel(sequelize) {
  const User = sequelize.define(
    "User",
    {
      // These fields match the columns of the Users table from our migration.
      name: { type: Sequelize.STRING, allowNull: false },
      email: {
        type: Sequelize.STRING,
        allowNull: false,
        validate: { isEmail: true },
      },
      picture: {
        type: Sequelize.STRING,
      },
      hash: { type: Sequelize.STRING, allowNull: false },
    },
    {}
  );
  User.associate = ({ Article, ArticleAuthors }) =>
    User.belongsToMany(Article, {
      as: "articles",
      through: ArticleAuthors,
      foreignKey: "authorId",
    });
  return User;
};

Notice: Sequelize automatically maps the createdAt and updatedAt fields.

The Article model looks as follows:

// models/Article.js
const Sequelize = require("sequelize");

module.exports = function createArticleModel(sequelize) {
  const Article = sequelize.define(
    "Article",
    {
      title: { type: Sequelize.STRING, allowNull: false },
      body: { type: Sequelize.STRING, allowNull: false },
      minutesRead: { type: Sequelize.INTEGER, allowNull: false },
    },
    {}
  );
  Article.associate = ({
    User,
    ArticleAuthors,
    Category,
    ArticleCategories,
  }) => {
    Article.belongsToMany(User, {
      as: "authors",
      through: ArticleAuthors,
      foreignKey: "articleId",
    });
    Article.belongsToMany(Category, {
      as: "categories",
      through: ArticleCategories,
      foreignKey: "articleId",
    });
  };
  return Article;
};

The ArticleAuthors model is pretty short and looks as follows:

// models/ArticleAuthors.js
module.exports = function createArticleAuthorsModel(sequelize) {
  return sequelize.define("ArticleAuthors", {}, {});
};

Now, the Category model contains the following:

// models/Category.js
const Sequelize = require("sequelize");
module.exports = function createUserModel(sequelize) {
const Category = sequelize.define(
"Category",
{
name: { type: Sequelize.STRING, allowNull: false }
},
{}
);
Category.associate = ({ Article, ArticleCategories }) =>
Category.belongsToMany(Article, {
as: "articles",
through: ArticleCategories,
foreignKey: "categoryName"
});
return Category;
};

Just like ArticleAuthors, ArticleCategories is pretty short:

// models/ArticleCategories.js
module.exports = function createArticleCategoriesModel(sequelize) {
  // The ArticleCategories table has no createdAt/updatedAt columns either.
  return sequelize.define("ArticleCategories", {}, { timestamps: false });
};

Inserting test data in the tables

Now let's create a seeder to insert some test data:

./node_modules/.bin/sequelize seed:generate --name test-data

The seeder script has a syntax similar to migrations and uses queryInterface to seed data:

const { hash } = require("../utils"); // assumed password-hashing helper (sketch below)

module.exports = {
  up: async (queryInterface) => {
    try {
      const authorId = 1;
      const articleId = 2;
      const categoryNames = ["React", "Node"];
      await queryInterface.bulkInsert(
        "Users",
        [
          {
            id: authorId,
            name: "Elvira Chenglou",
            email: "elvira@demo.com",
            hash: hash("WallayBillayItsaPassword"),
            createdAt: new Date(),
            updatedAt: new Date(),
          },
        ],
        {}
      );
      await queryInterface.bulkInsert(
        "Articles",
        [
          {
            id: articleId,
            title: "Bilal writes all the titles",
            body:
              "Les pyramides comme tu sais, on est là. Fais attention, on est chaud, il y a des scientifiques dans la place.",
            minutesRead: 1,
            createdAt: new Date(),
            updatedAt: new Date(),
          },
        ],
        {}
      );
      await queryInterface.bulkInsert(
        "Categories",
        [{ name: categoryNames[0] }, { name: categoryNames[1] }],
        {}
      );
      await queryInterface.bulkInsert(
        "ArticleAuthors",
        [
          {
            authorId,
            articleId,
            createdAt: new Date(),
            updatedAt: new Date(),
          },
        ],
        {}
      );
      await queryInterface.bulkInsert(
        "ArticleCategories",
        [
          { articleId, categoryName: categoryNames[0] },
          { articleId, categoryName: categoryNames[1] },
        ],
        {}
      );
    } catch (err) {
      console.error("Error in seeding: ", err);
      throw err; // let the CLI report the failure
    }
  },
  down: async (queryInterface) => {
    // Delete the join tables first to respect the foreign-key constraints.
    await queryInterface.bulkDelete("ArticleCategories", null, {});
    await queryInterface.bulkDelete("ArticleAuthors", null, {});
    await queryInterface.bulkDelete("Users", null, {});
    await queryInterface.bulkDelete("Articles", null, {});
    await queryInterface.bulkDelete("Categories", null, {});
  },
};
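
The hash helper isn't part of the Sequelize boilerplate: the seeder simply assumes a utils.js module at the project root that hashes passwords. Here's a minimal sketch using Node's built-in crypto module (a real application should prefer a dedicated password-hashing library such as bcrypt):

// utils.js (minimal sketch -- prefer bcrypt or similar in production)
const crypto = require("crypto");

function hash(password) {
  // Derive a key with scrypt and a random salt; store them together.
  const salt = crypto.randomBytes(16).toString("hex");
  const derived = crypto.scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${derived}`;
}

module.exports = { hash };

Finally, let's run the seeder to populate the tables:

./node_modules/.bin/sequelize db:seed:all

After refreshing pgAdmin one more time, our test data should show up in the tables.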


