FastAPI is a modern, open-source framework for building APIs in Python.
PostgreSQL is an open-source object-relational database management system.
In this tutorial, we will build a sample RESTful API using FastAPI and persist its data with PostgreSQL. We will then containerize the API and the database using a Dockerfile and a Docker Compose file. A Dockerfile is a text file containing a sequence of instructions that Docker executes to build an image. Docker Compose is a tool for defining and running multi-container Docker applications. Our application will consist of two containers: the FastAPI container and the PostgreSQL container.
Prerequisites and tools
- Docker – You will need a basic understanding of how Docker works. To get up to speed, you can head over to my previous Getting started with Docker post, where you will learn how to install Docker, how it works, and its commands.
- Python – You will need Python installed on your machine, preferably Python 3.10.
Getting started with FastAPI
We shall build a sample product listing application in Python where our users will be able to perform CRUD operations through the API. We will use PostgreSQL for persisting product data. First, we need to understand what our project directory structure will look like. Here is a snapshot of the project directory structure in FastAPI:
```
.
└── FastAPI_APP/
    ├── app/
    │   ├── api/
    │   │   ├── v1/
    │   │   │   ├── endpoints/
    │   │   │   │   ├── __init__.py
    │   │   │   │   └── products.py
    │   │   │   ├── __init__.py
    │   │   │   └── api.py
    │   │   ├── __init__.py
    │   │   └── deps.py
    │   ├── core/
    │   │   ├── __init__.py
    │   │   └── settings.py
    │   ├── crud/
    │   │   ├── __init__.py
    │   │   ├── base.py
    │   │   └── product.py
    │   ├── db/
    │   │   ├── __init__.py
    │   │   └── session.py
    │   ├── models/
    │   │   ├── __init__.py
    │   │   ├── base.py
    │   │   └── product.py
    │   ├── schemas/
    │   │   ├── __init__.py
    │   │   └── product.py
    │   └── utils/
    │       ├── __init__.py
    │       └── idgen.py
    └── main.py
```
In a rough overview:
- FastAPI_APP – This is the root directory of our application.
- app – Holds the services of our API.
- main.py – This is the API entry point.
- api – Contains the API endpoints.
- core – Contains core functionality such as settings and logging.
- crud – Contains the CRUD (Create, Read, Update, Delete) operations.
- db – Contains the database-related code.
- models – Contains the database models.
- schemas – Contains the Pydantic schemas used for request and response validation.
- utils – Contains the utility functions and classes.
To start building the API, you will need VS Code (or your preferred IDE) installed. Then create a new project with the directory structure illustrated above.
Setting up our Docker environment
We will kick off by creating a Docker Compose file and a Dockerfile for our application.
Head over to VS Code, open the file named Dockerfile, and paste the instructions below.
```docker
# Use python official image from docker hub
FROM python:3.10.13-bullseye

# Prevents Python from writing .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1

# Ensures that python output is logged in the container's terminal
ENV PYTHONUNBUFFERED 1

RUN apt-get update \
  # dependencies for building Python packages
  && apt-get install -y build-essential \
  # psycopg2 dependencies
  && apt-get install -y libpq-dev \
  # Translations dependencies
  && apt-get install -y gettext \
  # cleaning up unused files
  && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
  && rm -rf /var/lib/apt/lists/*

# Copy the file 'requirements.txt' from the local build context to the container's file system.
COPY ./requirements.txt /requirements.txt

# Install python dependencies
RUN pip install -r /requirements.txt --no-cache-dir

# Set the working directory
WORKDIR /app

# Run Uvicorn to start your Python web application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```
Next, we will create our Docker Compose file. Inside VS Code, open the docker-compose.yml file and paste the following instructions.
```yaml
# specify the compose version
version: '3.7'

# Specify the services for our docker compose setup
services:
  api:
    build:
      # path to the directory containing the Dockerfile
      context: .
    # Specify the image name
    image: products_api
    # this volume maps the files and folders on the host to the container,
    # so if we change code on the host, code in the docker container also changes
    volumes:
      - .:/app
    # Map port 8000 on the host to port 8000 on the container
    ports:
      - 8000:8000
    # Specify the .env file path
    env_file:
      - ./.env
    # Define a dependency on the "products_db" service, so it starts first
    depends_on:
      - products_db
  products_db:
    # specify the image name of our database.
    # If the image is not found in the local repository,
    # it will be pulled from the Docker registry, that is, Docker Hub
    image: postgres:16rc1-alpine3.18
    # Mount a volume to persist PostgreSQL data
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      # Use environment variables for db configuration
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DATABASE=${POSTGRES_DATABASE}

# Define a volume for persisting PostgreSQL data
volumes:
  postgres_data:
```
Environment variables with FastAPI
Next, we will create a .env file and define our environment variables. Inside VS Code, open the .env file and include the following variables:
```
# PostgreSQL database host
POSTGRES_HOST=products_db
# PostgreSQL database user
POSTGRES_USER=username
# PostgreSQL database password
POSTGRES_PASSWORD=password
# PostgreSQL database name
POSTGRES_DATABASE=database
# PostgreSQL database port
POSTGRES_PORT=5432
# Asynchronous database URI for connecting to PostgreSQL
ASYNC_DATABASE_URI=postgresql+asyncpg://username:password@products_db:5432/database
# Name of the project or application
PROJECT_NAME=Product Listings
```
The .env file contains sensitive variables. Keeping such variables in a .env file rather than hard-coding them is always good practice; just remember to keep the file out of version control.
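As a quick sanity check, here is a minimal sketch (assuming the python-dotenv package is installed, which settings.py below also relies on) that loads the file and prints one of the values:

```python
# Minimal sketch: verify the .env file is readable (assumes python-dotenv)
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
print(os.getenv("POSTGRES_HOST"))  # expected output: products_db
```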
Generating unique IDs
Within our FastAPI application, we will define a utility function for generating unique IDs. The function uses Python's built-in uuid module. Head over to the utils folder, open the idgen.py file, and paste the code snippet below.
```python
import uuid


def idgen() -> str:
    # Generate a random UUID and return its 32-character hex string
    return uuid.uuid4().hex
```
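Calling the function a couple of times shows the kind of IDs it produces; the exact values will differ on every run:

```python
# Example usage of idgen (output values are illustrative only)
from app.utils.idgen import idgen

print(idgen())  # e.g. '2f3a9c0d1b4e45f8a6c7d8e9f0a1b2c3'
print(idgen())  # a different random 32-character hex string on each call
```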
Settings configuration in FastAPI
Next, we will create a configuration class. The class will inherit from the Pydantic BaseSettings class and will be responsible for loading the environment variables into the application context and defining other application settings. Open the file named settings.py in VS Code and insert the following code snippet.
```python
# import packages
import os
import secrets

from dotenv import load_dotenv
from pydantic_settings import BaseSettings

load_dotenv()


class Settings(BaseSettings):
    """
    Application settings and configuration parameters.

    This class defines app settings using pydantic, a data validation library.
    """

    PROJECT_NAME: str = os.getenv("PROJECT_NAME")
    API_V1_STR: str = "/api/v1"
    ASYNC_DATABASE_URI: str = os.getenv("ASYNC_DATABASE_URI")
    SECRET_KEY: str = secrets.token_urlsafe(32)


settings = Settings()
```
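Once defined, the shared settings object can be imported anywhere in the app. A minimal usage sketch (the printed values depend on your .env file):

```python
# Minimal usage sketch for the settings object
from app.core.settings import settings

print(settings.PROJECT_NAME)  # e.g. 'Product Listings' (from .env)
print(settings.API_V1_STR)    # '/api/v1'
```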
Creating our models in FastAPI
We will start off by creating a base class containing attributes common to all models. This will help keep our code DRY. Open the file named base.py in the models folder and paste the following code snippet.
```python
from datetime import datetime

from sqlalchemy import DateTime, func
from sqlalchemy.orm import DeclarativeBase, Mapped, declared_attr, mapped_column

from app.utils.idgen import idgen


class Base_(DeclarativeBase):
    """
    Base class for SQLAlchemy models with common attributes to stay DRY
    (Don't Repeat Yourself).

    This class is intended to serve as a base class for SQLAlchemy models.
    It defines common attributes such as the table name, creation timestamp,
    and update timestamp that can be inherited by other models.

    Attributes:
        __tablename__ (str): The table name, derived from the class name in lowercase.
        id (str): The unique ID of each record.
        created_on (datetime): The timestamp of when the record was created.
        updated_on (datetime, optional): The timestamp of when the record was
            last updated. Defaults to None until an update occurs.

    Example:
        To create a SQLAlchemy model using this base class:

            class YourModel(Base_):
                # Define additional attributes for your model here.
                ...
    """

    @declared_attr
    def __tablename__(cls):
        # The table name is derived from the class name in lowercase
        return cls.__name__.lower()

    # The unique UUID ID for each record
    id: Mapped[str] = mapped_column(primary_key=True, default=idgen, index=True)
    # The timestamp for record creation
    created_on: Mapped[datetime] = mapped_column(
        DateTime(timezone=True), server_default=func.now()
    )
    # The timestamp for record update, initially None until an update occurs
    updated_on: Mapped[datetime] = mapped_column(
        DateTime(timezone=True), onupdate=func.now(), nullable=True
    )
```
The base class contains the id attribute (the unique UUID for each record), the created_on attribute (the record-creation timestamp), and the updated_on attribute (the record-update timestamp).
Next, we will define our product model, which inherits from the Base_ class. Open the file named product.py in the models folder and paste the following code snippet.
```python
from sqlalchemy import String
from sqlalchemy.orm import Mapped, mapped_column

from .base import Base_


class Product(Base_):
    """
    SQLAlchemy class defining the product model.
    It inherits all the attributes and methods of the Base_ class.

    Attributes:
        name (str): The product name
        price (str): The product price
        image (str): The product image URL
        weight (str): The product weight

    'nullable=False' means these columns cannot have NULL values in the database.
    """

    name: Mapped[str] = mapped_column(String(30), index=True, nullable=False)
    price: Mapped[str] = mapped_column(String(30), nullable=False)
    image: Mapped[str] = mapped_column(String, nullable=False)
    weight: Mapped[str] = mapped_column(String, nullable=False)
```
Mapped[str] is a Python type hint stating that the attribute will hold values of type string. mapped_column replaces the older SQLAlchemy Column construct.
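To see the base class at work, here is a small sketch (run from the project root) that instantiates the model in memory, without touching the database; the field values are made up:

```python
# Minimal sketch: the model inherits its table name and id default from Base_
from app.models.product import Product

laptop = Product(
    name="Laptop",
    price="999",
    image="https://example.com/laptop.png",
    weight="2kg",
)
print(laptop.name)            # 'Laptop'
print(Product.__tablename__)  # 'product', derived from the class name
```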
Creating schemas in FastAPI
We will now define our Pydantic schemas. These schemas act as data classes that define what data an API endpoint expects to receive for a request to be considered valid. They can also be used in FastAPI to define the response model, i.e., the response returned by an endpoint. Open the file named product.py in the schemas folder and paste the following code snippet.
```python
from typing import Optional

from pydantic import BaseModel


class ProductBase(BaseModel):
    name: str    # Name of the product (required)
    price: str   # Price of the product (required)
    image: str   # URL or path to the product image (required)
    weight: str  # Weight of the product (required)


class ProductCreate(ProductBase):
    ...


class ProductUpdate(ProductBase):
    ...


class ProductPatch(ProductBase):
    # All fields are optional for patching, so they default to None
    name: Optional[str] = None
    price: Optional[str] = None
    image: Optional[str] = None
    weight: Optional[str] = None


class Product(ProductBase):
    id: str

    class Config:
        # Allow the model to be built from ORM objects
        # (in Pydantic v2 this option is named from_attributes)
        orm_mode = True
```
Optional is imported from the Python typing module; it indicates that a field is not required and may therefore be None.
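A short sketch (assuming Pydantic v2, which provides model_dump) shows why this matters for PATCH requests; the price value is just an example:

```python
# Minimal sketch: optional fields may be omitted, and exclude_unset drops
# everything the client did not send, exactly what a partial update needs
from app.schemas.product import ProductPatch

patch = ProductPatch(price="499")
print(patch.model_dump(exclude_unset=True))  # {'price': '499'}
```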
Creating CRUD operations in FastAPI
We will now define the Create, Read, Update, and Delete methods. To get started, we will create a base class for these operations. The base class helps maintain a DRY code design in Python, and the per-model CRUD classes will inherit from it to perform database operations. Therefore, open the file named base.py in the crud folder and paste the code snippet below.
```python
from typing import Any, Dict, Generic, Optional, Type, TypeVar

from fastapi.encoders import jsonable_encoder
from pydantic import BaseModel
from sqlalchemy import func, update
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.future import select
from sqlalchemy.orm import DeclarativeMeta

ModelType = TypeVar("ModelType", bound=DeclarativeMeta)
CreateSchemaType = TypeVar("CreateSchemaType", bound=BaseModel)
UpdateSchemaType = TypeVar("UpdateSchemaType", bound=BaseModel)


class CRUDBase(Generic[ModelType, CreateSchemaType, UpdateSchemaType]):
    """
    Generic CRUD (Create, Read, Update, Delete) operations for SQLAlchemy models.

    This class provides a set of generic CRUD operations that can be used with
    SQLAlchemy models. It includes methods for creating, retrieving, updating,
    and deleting records in the database.

    Args:
        model (Type[ModelType]): The SQLAlchemy model class to perform CRUD
            operations on.

    Example:
        To create a CRUD instance for a specific model (e.g., the Product model):

            crud_product = CRUDBase[Product, ProductCreateSchema, ProductUpdateSchema](Product)
    """

    def __init__(self, model: Type[ModelType]):
        self.model = model

    # get a single entity by ID
    async def get(self, db: AsyncSession, obj_id: str) -> Optional[ModelType]:
        query = await db.execute(select(self.model).where(self.model.id == obj_id))
        return query.scalar_one_or_none()

    # get multiple entities
    async def get_multi(self, db: AsyncSession, *, skip: int = 0, limit: int = 100) -> list[ModelType]:
        query = await db.execute(select(self.model).offset(skip).limit(limit))
        return query.scalars().all()

    # search for a specific entity matching the given parameters
    async def get_by_params(self, db: AsyncSession, **params: Any) -> Optional[ModelType]:
        query = select(self.model)
        for key, value in params.items():
            if isinstance(value, str):
                # compare strings case-insensitively
                query = query.where(func.lower(getattr(self.model, key)) == func.lower(value))
            else:
                query = query.where(getattr(self.model, key) == value)
        result = await db.execute(query)
        return result.scalar_one_or_none()

    # return an existing entity, or create it if it does not exist
    async def get_or_create(self, db: AsyncSession, defaults: Optional[Dict[str, Any]], **kwargs: Any):
        instance = await self.get_by_params(db, **kwargs)
        if instance:
            return instance, False
        params = defaults or {}
        params.update(kwargs)
        instance = self.model(**params)
        db.add(instance)
        await db.commit()
        await db.refresh(instance)
        return instance, True

    # Partially update an entity
    async def patch(self, db: AsyncSession, *, obj_id: str, obj_in: UpdateSchemaType | Dict[str, Any]) -> Optional[ModelType]:
        db_obj = await self.get(db=db, obj_id=obj_id)
        if not db_obj:
            return None
        update_data = obj_in if isinstance(obj_in, dict) else obj_in.model_dump(exclude_unset=True)
        query = (
            update(self.model)
            .where(self.model.id == obj_id)
            .values(**update_data)
        )
        await db.execute(query)
        # commit so the partial update is persisted
        await db.commit()
        return await self.get(db, obj_id)

    # Fully update an entity
    async def update(
        self,
        db: AsyncSession,
        *,
        obj_current: ModelType,
        obj_new: UpdateSchemaType | Dict[str, Any] | ModelType,
    ):
        obj_data = jsonable_encoder(obj_current)
        if isinstance(obj_new, dict):
            update_data = obj_new
        else:
            update_data = obj_new.model_dump(exclude_unset=True)
        for field in obj_data:
            if field in update_data:
                setattr(obj_current, field, update_data[field])
        db.add(obj_current)
        await db.commit()
        await db.refresh(obj_current)
        return obj_current

    # fully delete an entity from the db
    async def remove(self, db: AsyncSession, *, obj_id: str) -> Optional[ModelType]:
        db_obj = await self.get(db, obj_id)
        if not db_obj:
            return None
        await db.delete(db_obj)
        await db.commit()
        return db_obj
```
We have defined various methods. The get method fetches the single record matching the given object ID. The get_multi method fetches multiple records, using skip and limit for pagination. The get_by_params method searches for a record matching the given parameters. The get_or_create method first checks whether the entity exists and creates it in the DB if it does not. The patch method partially updates a record's fields, while the update method fully updates them. The remove method deletes a record from the DB.
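As a minimal usage sketch (the crud/product.py file below does this properly), the generic class is parameterized with a model and its schemas and then instantiated with the model:

```python
# Hypothetical sketch: wiring the generic CRUD class to a concrete model
from app.crud.base import CRUDBase
from app.models.product import Product
from app.schemas.product import ProductCreate, ProductUpdate

product_crud = CRUDBase[Product, ProductCreate, ProductUpdate](Product)

# Inside an async context with an AsyncSession `db`, you could then call:
#   obj, created = await product_crud.get_or_create(db, defaults=None, name="Laptop")
```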
Having defined our base class for CRUD operations, we will now define the product CRUD operations, which inherit from the CRUDBase class. Open the file named product.py in the crud folder and paste the code snippet below.
```python
from typing import Any, Dict

from fastapi_pagination import Page
from sqlalchemy.ext.asyncio import AsyncSession

from .base import CRUDBase
from app.models.product import Product
from app.schemas.product import ProductCreate, ProductUpdate


class CRUDProduct(CRUDBase[Product, ProductCreate, ProductUpdate]):
    async def get(self, db: AsyncSession, obj_id: str) -> Product:
        return await super().get(db, obj_id)

    async def get_or_create(self, db: AsyncSession, defaults: Dict[str, Any] | None, **kwargs: Any) -> Product:
        return await super().get_or_create(db, defaults, **kwargs)

    async def get_multi(self, db: AsyncSession, *, skip: int = 0, limit: int = 20) -> Page[Product]:
        return await super().get_multi(db, skip=skip, limit=limit)

    async def update(self, db: AsyncSession, *, obj_current: Product, obj_new: ProductUpdate | Dict[str, Any] | Product):
        return await super().update(db, obj_current=obj_current, obj_new=obj_new)

    async def remove(self, db: AsyncSession, *, obj_id: str) -> Product | None:
        return await super().remove(db, obj_id=obj_id)


product = CRUDProduct(Product)
```
Creating the database session
Here we will define an asynchronous database engine for performing async operations against the database, then bind the engine to a sessionmaker that will interact with the database asynchronously. Open the file named session.py in the db folder and paste the code snippet below.
```python
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import sessionmaker

from app.core.settings import settings

# Create an asynchronous SQLAlchemy engine using the ASYNC_DATABASE_URI
# from the application settings.
engine = create_async_engine(
    settings.ASYNC_DATABASE_URI,
)

# Create an AsyncSession factory using sessionmaker, bound to the engine.
# This session class will be used to interact with the database asynchronously.
SessionLocal = sessionmaker(
    engine,
    expire_on_commit=False,
    class_=AsyncSession
)
```
- create_async_engine – Creates an asynchronous SQLAlchemy engine using the ASYNC_DATABASE_URI from the application settings.
- sessionmaker – Creates an AsyncSession factory bound to the SQLAlchemy engine; this session class will be used to interact with the database asynchronously.
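A minimal sketch (assuming the database container is up and reachable) of using the session factory directly to verify connectivity:

```python
# Minimal sketch: open a session and run a trivial query
import asyncio

from sqlalchemy import text

from app.db.session import SessionLocal


async def check_connection() -> None:
    async with SessionLocal() as session:
        result = await session.execute(text("SELECT 1"))
        print(result.scalar_one())  # prints 1 if the database is reachable


asyncio.run(check_connection())
```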
Creating FastAPI dependencies
Here we will define all the dependencies used in our application, such as the database session. Open the file named deps.py in the api folder and paste the code snippet below.
```python
from typing import AsyncGenerator

from sqlalchemy.ext.asyncio import AsyncSession

from app.db.session import SessionLocal


async def get_db() -> AsyncGenerator[AsyncSession, None]:
    # Open a session for the duration of a request, then close it
    async with SessionLocal() as db:
        yield db
```
The get_db function is an asynchronous generator function that yields a database session.
Creating the product listings endpoints
Here we will define the POST, GET, PUT, PATCH, and DELETE methods.
- POST will create a new product.
- GET will retrieve a product or products.
- PUT will update a product fully.
- PATCH will update fields specified for a product.
- DELETE will remove a product from the db.
Head over to your code editor and open the file named products.py in the endpoints folder. Inside the file, paste the code snippet below.
```python
# Import necessary modules and components
from typing import Annotated

from fastapi import APIRouter, Depends, HTTPException, status
from fastapi_pagination import Page, paginate
from sqlalchemy.ext.asyncio import AsyncSession

from app import crud
from app.api.deps import get_db
from app.schemas.product import Product, ProductCreate, ProductPatch, ProductUpdate

# Create an APIRouter instance
router = APIRouter()


# Define a route to create a new product
@router.post("/", response_model=Product, status_code=status.HTTP_201_CREATED)
async def create_product(
    db: Annotated[AsyncSession, Depends(get_db)],
    product_in: ProductCreate
):
    # Use the CRUD (Create, Read, Update, Delete) operations from the 'crud'
    # module to create a new product, or return the existing one if it exists.
    # The fields are passed as lookup parameters so that only an identical
    # product counts as a duplicate.
    product, created = await crud.product.get_or_create(
        db=db, defaults=None, **product_in.model_dump()
    )
    # If the product already exists, raise an HTTPException with a 400 status code
    if not created:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST, detail="Product exists"
        )
    # Return the created product
    return product


# Define a route to retrieve a product by its ID
@router.get("/{productId}", response_model=Product, status_code=status.HTTP_200_OK)
async def get_product(
    db: Annotated[AsyncSession, Depends(get_db)],
    productId: str
):
    # Use the CRUD operation to retrieve a product by its ID
    product = await crud.product.get(db=db, obj_id=productId)
    # If the product does not exist, raise an HTTPException with a 404 status code
    if not product:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="Product not found"
        )
    # Return the retrieved product
    return product


# Define a route to retrieve a paginated list of products
@router.get("/", response_model=Page[Product], status_code=status.HTTP_200_OK)
async def get_products(
    db: Annotated[AsyncSession, Depends(get_db)],
    skip: int = 0,
    limit: int = 20
):
    # Use the CRUD operation to retrieve multiple products with pagination
    products = await crud.product.get_multi(db=db, skip=skip, limit=limit)
    # If no products are found, raise an HTTPException with a 404 status code
    if not products:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="Products not found"
        )
    # Return a paginated list of products
    return paginate(products)


# Define a route to partially update a product
@router.patch("/{productId}", status_code=status.HTTP_200_OK)
async def patch_product(
    db: Annotated[AsyncSession, Depends(get_db)],
    productId: str,
    product_in: ProductPatch
):
    # Use the CRUD operation to retrieve a product by its ID
    product = await crud.product.get(db=db, obj_id=productId)
    # If the product does not exist, raise an HTTPException with a 404 status code
    if not product:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="Product not found"
        )
    # Use the CRUD operation to patch (partially update) the product,
    # sending only the fields the client actually provided
    product_patched = await crud.product.patch(
        db=db, obj_id=productId, obj_in=product_in.model_dump(exclude_unset=True)
    )
    # Return the patched product
    return product_patched


# Define a route to fully update a product
@router.put("/{productId}", response_model=Product, status_code=status.HTTP_200_OK)
async def update_product(
    db: Annotated[AsyncSession, Depends(get_db)],
    productId: str,
    product_in: ProductUpdate
):
    # Use the CRUD operation to retrieve a product by its ID
    product = await crud.product.get(db=db, obj_id=productId)
    # If the product does not exist, raise an HTTPException with a 404 status code
    if not product:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="Product not found"
        )
    # Use the CRUD operation to fully update the product
    product_updated = await crud.product.update(
        db=db, obj_current=product, obj_new=product_in
    )
    # Return the updated product
    return product_updated


# Define a route to delete a product
@router.delete("/{productId}", status_code=status.HTTP_204_NO_CONTENT)
async def delete_product(
    db: Annotated[AsyncSession, Depends(get_db)],
    productId: str
):
    # Use the CRUD operation to retrieve a product by its ID
    product = await crud.product.get(db=db, obj_id=productId)
    # If the product does not exist, raise an HTTPException with a 404 status code
    if not product:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="Product not found"
        )
    # Use the CRUD operation to remove (delete) the product
    await crud.product.remove(db=db, obj_id=productId)
    # Return a 204 No Content response indicating successful deletion
    return
```
Each endpoint is commented to explain what is happening at every step.
Now we need to expose the endpoints to the API entry point. For that, we will edit two files. First, open the api.py file inside the folder named v1 and paste the code snippet below.
```python
# Import the APIRouter class from FastAPI
from fastapi import APIRouter

# Import the 'products' router from the 'app.api.v1.endpoints' module
from app.api.v1.endpoints import products

# Create an instance of the APIRouter
router = APIRouter()

# Include the 'products' router as a sub-router under the '/products' prefix
# and assign the tag "Products" to group related API endpoints
router.include_router(products.router, prefix="/products", tags=["Products"])
```
Then open the main.py file and paste the code snippet below.
```python
# Import the FastAPI class from the FastAPI framework
from fastapi import FastAPI

# Import the add_pagination helper
from fastapi_pagination import add_pagination

# Import the 'router' from the 'app.api.v1.api' module
from app.api.v1.api import router

# Import the 'settings' object from the 'app.core.settings' module
from app.core.settings import settings

# Create an instance of the FastAPI application
# - 'title' is set to the project name from 'settings'
# - 'openapi_url' specifies the URL for the OpenAPI documentation
app = FastAPI(
    title=settings.PROJECT_NAME,
    openapi_url=f"{settings.API_V1_STR}/openapi.json"
)

# Add the necessary pagination parameters to all routes that use paginate
add_pagination(app)

# Include the 'router' (which contains the API routes) in the FastAPI application
app.include_router(router)
```
Up to this point, we can try to spin up our server. For that, we need to build our Docker image and start the containers.
Running the product listings API
Here we will try to run our API. Assuming you have Docker installed on your local machine, open the VS Code terminal.
To toggle the terminal on:
- Windows: use the shortcut Ctrl + `.
- macOS: use the shortcut ⌘ + `.
- Linux: use the shortcut Ctrl + Shift + `.

In the terminal, write the following command:
```bash
docker-compose -f docker-compose.yml up -d
```
- docker-compose – This command is used for managing Docker containers with Docker Compose.
- -f – This is used to specify the path of the compose file.
- docker-compose.yml – This is the path to the compose file where the services are defined; in our case it is docker-compose.yml.
- up – This initializes and starts the services named in the compose file; in our case, the products_db and api services.
- -d – This starts the containers in detached mode, i.e., as background services.
After successfully executing the command, you can verify that the containers are indeed up and running by executing the command below in the VS Code terminal:

```bash
docker ps
```

The output should list both the api and products_db containers.
To view the API documentation via Swagger, you can open your preferred browser and paste the URL below:
http://localhost:8000/docs
By default, we access our API via port 8000, since that is the port we mapped to the host in our Docker Compose file. In your browser, you will see the Swagger UI listing the product endpoints.
We have now successfully set up our API for product listings. However, if we try to perform a POST request in Swagger, we will get a 500 Internal Server Error.
To see what is causing the error, we will view the api container logs, either with Docker Desktop or from the terminal. For this, we will execute the command below in the VS Code terminal:
```bash
docker logs <CONTAINER ID>
```
The CONTAINER ID is the ID of the currently running api container. To get it, run:

```bash
docker ps
```
After running the docker logs command, the last line of the output clearly indicates that the database "database" does not exist. Previously, we had set POSTGRES_DATABASE=database in our .env file, yet no database named database was ever created: the official postgres image only creates a database automatically when the POSTGRES_DB variable is set (otherwise it defaults to one named after POSTGRES_USER), and our compose file passes POSTGRES_DATABASE instead. This means we have to create the database ourselves.
To create the database, we will use the products_db container.
In your VS Code terminal:
- Run the command below:
```bash
docker exec -it <CONTAINER ID> /bin/bash
```
The above command launches a Bash shell within a container. Run docker ps to get the products_db container ID and replace <CONTAINER ID> with it.
- Next, we need to create the database. For this, we will run the following series of commands in the container's Bash shell:
```bash
psql -U username
```
The above command launches psql, a terminal-based front-end to PostgreSQL that allows us to type in queries interactively.
```sql
CREATE DATABASE database;
```
The above command creates a database named database in PostgreSQL.
```sql
ALTER ROLE username WITH PASSWORD 'password';
```
The above command alters the role username and sets its password to password.
```sql
GRANT ALL PRIVILEGES ON DATABASE database TO username;
```
The above command grants all privileges on the database to the user named username.
Having done that, we now need to perform database migrations.
FastAPI database migrations
Database migrations or schema migrations are controlled sets of changes developed to modify the structure of the objects within a relational database.
To perform migrations in our API, we will create an alembic.ini file and an alembic folder in the project root directory. Inside the alembic folder, create another folder named versions and two files named env.py and script.py.mako.
Now the project directory structure looks like this:

```
.
└── FastAPI_APP/
    ├── alembic.ini
    ├── alembic/
    │   ├── versions/
    │   ├── env.py
    │   └── script.py.mako
    ├── app/
    │   ├── api/
    │   │   ├── v1/
    │   │   │   ├── endpoints/
    │   │   │   │   ├── __init__.py
    │   │   │   │   └── products.py
    │   │   │   ├── __init__.py
    │   │   │   └── api.py
    │   │   ├── __init__.py
    │   │   └── deps.py
    │   ├── core/
    │   │   ├── __init__.py
    │   │   └── settings.py
    │   ├── crud/
    │   │   ├── __init__.py
    │   │   ├── base.py
    │   │   └── product.py
    │   ├── db/
    │   │   ├── __init__.py
    │   │   └── session.py
    │   ├── models/
    │   │   ├── __init__.py
    │   │   ├── base.py
    │   │   └── product.py
    │   ├── schemas/
    │   │   ├── __init__.py
    │   │   └── product.py
    │   └── utils/
    │       ├── __init__.py
    │       └── idgen.py
    └── main.py
```
We will now edit the files we have added.
Open the alembic.ini file and paste the script below:
```ini
# A generic, single database configuration.

[alembic]
# path to migration scripts
script_location = alembic

# template used to generate migration file names;
# the default value is %%(rev)s_%%(slug)s
file_template = %%(year)d-%%(month).2d-%%(day).2d-%%(hour).2d-%%(minute).2d_%%(rev)s

# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .

# Use os.pathsep. Default configuration used for new projects.
version_path_separator = os

sqlalchemy.url =

[post_write_hooks]

# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
```
The alembic.ini file is the configuration file used by Alembic. It provides the settings and options for managing database schema changes over time.
Open the env.py file in the alembic folder and paste the code snippet below:
```python
# Import necessary modules
import asyncio
import pathlib
import sys

from alembic import context
from sqlalchemy.ext.asyncio import create_async_engine

# Append the project root (the parent directory of this file's folder) to
# sys.path so the 'app' package can be imported
sys.path.append(str(pathlib.Path(__file__).resolve().parents[1]))

# Import the application settings and database models.
# Product is imported so the model registers itself on Base_.metadata.
from app.core.settings import settings
from app.models.base import Base_
from app.models.product import Product  # noqa: F401

# Define the target metadata for migrations
target_metadata = Base_.metadata


# Define a function to run migrations
def do_run_migrations(connection):
    context.configure(
        compare_type=True,
        dialect_opts={"paramstyle": "named"},
        connection=connection,
        target_metadata=target_metadata,
        include_schemas=True,
        version_table_schema=target_metadata.schema,
    )
    with context.begin_transaction():
        context.run_migrations()


# Define an asynchronous function to run migrations online
async def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario, we create an Engine and associate a connection
    with the context.
    """
    # Create an asynchronous database engine using the URI from settings
    connectable = create_async_engine(settings.ASYNC_DATABASE_URI, future=True)

    # Connect to the database and run migrations within a transaction
    async with connectable.connect() as connection:
        await connection.run_sync(do_run_migrations)


# Run the migrations online using asyncio
asyncio.run(run_migrations_online())
```
The script above runs database migrations using Alembic against our asynchronous database engine.
Open the file named script.py.mako in the alembic folder and paste the script below:
""" Revision ID: ${up_revision} Revises: ${down_revision | comma,n} Create Date: ${create_date} """ # Import necessary modules from Alembic and SQLAlchemy from alembic import op import sqlalchemy as sa # Import any additional necessary modules (if specified) ${imports if imports else ""} # Define revision identifiers used by Alembic revision = ${repr(up_revision)} # The unique identifier for this revision down_revision = ${repr(down_revision)} # The revision to which this one applies (if any) branch_labels = ${repr(branch_labels)} # Labels associated with this revision (if any) depends_on = ${repr(depends_on)} # Dependencies for this revision (if any) def upgrade(): ${upgrades if upgrades else "pass"} """ This function is called when upgrading the database schema. You can specify SQL operations to apply schema changes. If no operations are specified, 'pass' can be used. """ def downgrade(): ${downgrades if downgrades else "pass"} """ This function is called when downgrading the database schema. You can specify SQL operations to reverse schema changes. If no operations are specified, 'pass' can be used. """
Having defined the scripts for handling the migrations, we now have to execute them in the api container. For this, run the commands below:
```bash
docker exec -it <CONTAINER ID> /bin/bash
```
The above command launches a Bash shell within a container. Replace <CONTAINER ID> with the actual ID of the api container, which you can get by running the docker ps command.
```bash
alembic revision --autogenerate -m "Migrate products table"
```
The above command generates a new migration script. The new migration script contains the differences between the current database schema and the model definitions in the code.
```bash
alembic upgrade head
```
The above command applies all pending migrations.
Testing our API
Since we have already applied the migrations to our database schema, we can now confidently test our API via the Swagger docs.
To access the Swagger docs, type the following URL in your browser:
http://localhost:8000/docs
We can start off by performing a POST request.
POST request in FastAPI
In Swagger, expand the POST request collapsible and click the Try it out button. In the Request body section, change the values of the JSON keys to your preference.
For the image key, you can put an image URL. Then click the Execute button. After a successful POST, you will see a 201 Created status code along with a response body containing the new product, including its generated id. Your response body will differ based on the values you assigned in the request.
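If you prefer the command line to Swagger, here is a minimal sketch using only the Python standard library (the payload values are just examples; as wired up in main.py, the products router is mounted at the application root, so the collection lives under /products/):

```python
# Hypothetical smoke test against the running API
import json
import urllib.request

payload = {
    "name": "Laptop",
    "price": "999",
    "image": "https://example.com/laptop.png",
    "weight": "2kg",
}
req = urllib.request.Request(
    "http://localhost:8000/products/",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, json.loads(resp.read()))  # 201 and the created product
```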
GET Request (paginated data)
In the GET request, we would like to fetch multiple items. For this, we can specify skip and limit.
Skip, like SQL's OFFSET, is the number of rows of the result set to skip before any rows are retrieved.
Limit, like SQL's LIMIT, specifies the maximum number of rows to fetch from the result set.
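The semantics are easy to picture on a plain Python list (the numbers stand in for product rows):

```python
# Illustration of skip/limit semantics
rows = list(range(1, 101))       # pretend these are 100 product rows
skip, limit = 0, 20
page = rows[skip:skip + limit]   # like OFFSET 0 LIMIT 20 in SQL
print(page)                      # rows 1..20
```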
Expand the GET request; for the skip param we can go with the default of 0, and for the limit with the default of 20. Click the Execute button, and you will see a response body containing the paginated product data.
Bonus points
As a bonus, you can explore the remaining endpoints and share your thoughts on the response bodies in the comments section.
You can access the project in my GitHub repository at the following URL:
https://github.com/mbuthi/product_listing_API
Clone the project to your local machine and proceed to run it.
Conclusion
In conclusion, this article has walked you through containerizing a FastAPI application and a PostgreSQL database using Docker. By bundling our API and database into separate containers, we have achieved portability and ease of deployment.
We began by setting up the Dockerfile and Docker Compose files for our Docker environment, then set up models, schemas, CRUD operations, and endpoints.
Along the way, we covered how to persist data using Docker volumes, touched on Docker best practices, and emphasized a DRY programming design.
I hope this article has given you insights into containerizing your FastAPI applications and PostgreSQL databases with Docker, empowering you to take your web applications to the next level. As you continue your containerization journey, explore more advanced topics in FastAPI and Docker.