Building & Dockerizing a Task Manager App — Three-Part Journey

Introduction

In the modern software landscape, containerization has become a cornerstone of efficient development and deployment. Technologies like Docker have transformed how developers build, run, and scale applications — allowing entire environments to be packaged into lightweight, portable containers that work consistently across machines and platforms.


When I began developing my Task Manager Web Application, my main goal wasn’t just to build a functional web tool — it was to learn how Docker could streamline the development workflow, improve scalability, and simplify the deployment process of a multi-component system. The project became a perfect case study for understanding the real-world impact of Docker on modern full-stack development.


At its core, the Task Manager helps users create, organize, and monitor their daily tasks through an intuitive web interface. Beyond traditional task management, it also features an AI assistant that summarizes the user’s tasks to help them stay organized and focused.


The architecture of the project is built with HTML, CSS, JavaScript, TailwindCSS, and Bootstrap for the frontend; Node.js and Express for the backend; Firebase for authentication and data storage; and FastAPI with Hugging Face for the AI assistant module. All these components are dockerized and orchestrated using Docker Compose, ensuring isolated, modular, and reproducible deployments.


Through this three-part Digital Assignment journey, I explored how Docker can:

  • Simplify complex multi-service projects
  • Enable parallel development and modular testing
  • Facilitate smooth scaling and deployment
  • Improve project portability and consistency

By the end of this project, I had a fully functional, containerized, AI-enhanced Task Manager — and more importantly, a solid understanding of Docker’s power in real-world web application development.


Objectives of Part 1: Learning Docker and Its Relevance

  • Learn Docker fundamentals: images, containers, networks, volumes, and Docker Compose basics.

  • Understand the benefits of containerization for portability, reproducibility, and modular design.

  • Explore how Docker would fit the Task Manager architecture (frontend, backend, AI) and plan the containerization strategy.

  • Prepare a minimal containerized environment (experiment with nginx:alpine) to serve static content and verify Docker setup.

Objectives of Part 2: Building and Dockerizing the Frontend

  • Build the Task Manager frontend using HTML, CSS, JavaScript, TailwindCSS, and a customized Bootstrap template.

  • Integrate Firebase Authentication and Firestore directly in the frontend (no separate backend yet).

  • Containerize the frontend with a simple Dockerfile using nginx:alpine and deploy it on Docker Desktop to demonstrate environment portability.

  • Validate the client-only approach for quick prototyping and understand the limitations that motivate moving to a backend in Part 3.

Objectives of Part 3: Multi-Container Deployment, AI Assistant, and Documentation

  • Transition from a frontend-only prototype to a multi-container architecture: separate the backend (Node.js + Express) and add an AI assistant (FastAPI) as an independent microservice.

  • Create Dockerfiles for frontend (nginx:alpine), backend (node:18-alpine), and AI assistant (python:3.11-slim).

  • Orchestrate all services with Docker Compose, publish images to Docker Hub, and ensure reproducible deployment.

  • Produce comprehensive documentation, prepare a Docker Showdown live demo, record a 5–10 minute YouTube walkthrough, and publish a technical blog summarizing the work.


Names of the Containers Involved (Docker Hub links)


Container | Docker Hub | Base Image Used
taskmanager-frontend | https://hub.docker.com/r/pranav6788/taskmanager-frontend | nginx:alpine
taskmanager-backend | https://hub.docker.com/r/pranav6788/taskmanager-backend | node:18-alpine
taskmanager-assistant | https://hub.docker.com/r/pranav6788/taskmanager-assistant | python:3.11-slim



Primary functionality of the containers:

  • taskmanager-frontend - Hosts the frontend web interface built with HTML, CSS, JavaScript, and Tailwind CSS. Serves static files and communicates with the backend APIs.
  • taskmanager-backend - Implements the REST API using Node.js, Express.js, and CORS. Handles authentication, database operations, and API logic using Firebase Admin SDK service account credentials.
  • taskmanager-assistant - Runs the AI summarization module built with FastAPI and HTTPX. Connects to the Hugging Face API to generate intelligent summaries of user tasks.

Software Tools and Technologies Used

Technology | Version used (approx.) | Purpose
HTML / CSS / JavaScript | HTML5 / CSS3 / ES6 | Frontend markup and behavior
Tailwind CSS | v3.4 | Utility-first styling
Bootstrap | v5.3 | Base admin template (customized)
Node.js | v18.17.1 | Backend runtime
Express.js | v4.18.2 | REST API framework
CORS (npm) | v2.8.5 | Cross-origin support
Nginx (nginx:alpine) | 1.25-alpine | Static frontend serving
Firebase (Auth + Firestore) | Firebase SDK ~v10.4.0 | Authentication & database
FastAPI | v0.110.0 | AI assistant microservice
HTTPX | v0.27.0 | Async HTTP client for AI calls
Hugging Face API | Latest | NLP model inference for summaries
Docker | v26.0.2 | Containerization
Docker Desktop | v4.31.0 | Local dev & orchestration
Docker Compose | v2.27.0 | Multi-container orchestration
Docker Hub | Cloud service | Image registry
VS Code | v1.94 | IDE
Git / GitHub | v2.44 | Version control
Postman | v11.0 | API testing



Part 1 – Architecture: Learning Docker and Its Relevance

In Part 1, the focus was primarily on understanding Docker’s core principles — images, containers, layers, and orchestration — and analyzing how containerization would benefit the Task Manager project. There was no separate backend or AI module at this stage; instead, the emphasis was on building the conceptual and technical foundation. The prototype consisted of a static frontend (HTML/CSS/JS) hosted locally and the initial experiments with Docker Desktop, pulling base images, creating containers, and observing their isolation behavior.

Data Flow

  1. Input: Developer runs docker build and docker run commands from the terminal.

  2. Processing: Docker Engine pulls the base image (nginx:alpine) from Docker Hub, builds the image, and runs it as a container.

  3. Output: A lightweight running web server hosts the static content (test HTML/JS files) accessible via localhost.

  4. Feedback Loop: Developer interacts with Docker CLI and Docker Desktop to view logs, monitor container health, and understand image layers.
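Concretely, this feedback loop boils down to a handful of CLI commands. The container name, port, and file name below are illustrative, and a running Docker daemon is assumed:

```
# Pull the lightweight base image from Docker Hub
docker pull nginx:alpine

# Run it, mounting a local test page into the web root
docker run -d --name nginx-test -p 8080:80 \
  -v "$(pwd)/index.html:/usr/share/nginx/html/index.html:ro" \
  nginx:alpine

# Observe logs and inspect the image layers
docker logs nginx-test
docker image history nginx:alpine
```

Browsing to http://localhost:8080 then confirms the containerized web server is serving the mounted page.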




Part 2 – Architecture: Building and Dockerizing the Frontend

Part 2 focused on building the actual frontend of the Task Manager web application and connecting it directly to Firebase (Authentication and Cloud Firestore). At this stage, no dedicated backend existed; all Firebase interactions (authentication, CRUD operations) were managed by JavaScript embedded in the frontend.

Once the interface was complete, it was containerized using Docker. The Dockerfile utilized nginx:alpine as a lightweight base image to serve static files efficiently. This setup demonstrated the real-world application of Docker learned in Part 1 — enabling isolated, reproducible environments where the frontend could be easily built, tested, and deployed on any system using Docker Desktop.
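A frontend Dockerfile of this kind is typically just a few lines. This is a sketch, assuming the built static files live in a ./dist folder:

```
FROM nginx:alpine

# Replace the default Nginx site with the built static frontend
COPY ./dist /usr/share/nginx/html

EXPOSE 80
# nginx:alpine's default CMD already starts Nginx in the foreground
```

Because nginx:alpine ships with a working default configuration and entrypoint, nothing else is strictly required to serve static content.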

This phase was the first step toward application-level deployment. It represented the transition from basic Docker experimentation to an actual running web-based system capable of interacting with a live cloud database.


Data Flow

  1. Input: User accesses the application via a browser and performs actions like login, add task, or delete task.

  2. Frontend Processing: JavaScript captures user input and directly communicates with Firebase through the Firebase SDK.

  3. Cloud Processing: Firebase authenticates the user, performs database operations, and returns results.

  4. Output: Task data is displayed dynamically in the user interface, hosted from within the Dockerized Nginx container.

Part 3 – Architecture: Multi-Container Deployment, AI Assistant, and Documentation

In Part 3, the architecture evolved into a multi-container application connected through Docker Compose. The project introduced a dedicated Node.js backend container (using the node:18-alpine base image), a Python-based AI Assistant container (using python:3.11-slim), and the existing frontend container.
Each component was isolated in its own Docker environment and connected through a shared Docker network. The backend handled API routing, authentication, and Firebase interaction. The AI Assistant, built using FastAPI and the Hugging Face API, generated intelligent task summaries for users.
Beyond technical setup, Part 3 also included documentation, Docker Showdown demonstration, YouTube video creation, and blog publication—showcasing the full lifecycle from concept to presentation.
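A docker-compose.yml for this layout might look like the following sketch. The service and image names follow the report; the environment variable name is illustrative:

```
services:
  frontend:
    image: pranav6788/taskmanager-frontend:v1
    ports:
      - "80:80"
    depends_on:
      - backend

  backend:
    image: pranav6788/taskmanager-backend:v1
    ports:
      - "5000:5000"
    environment:
      # Illustrative: lets the backend reach the assistant by service name
      - AI_SERVICE_URL=http://ai-service:8000

  ai-service:
    image: pranav6788/taskmanager-assistant:v1
    ports:
      - "8000:8000"
```

Compose places all three services on a shared bridge network by default, so containers can address each other by service name (for example, backend → http://ai-service:8000).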

Data Flow

  1. Input: User interacts with the frontend UI (task creation, viewing, summarization).

  2. Frontend: Sends API requests to the backend via Docker network.

  3. Backend:

    • Handles Firebase CRUD/authentication.

    • Communicates with AI Assistant for summarization requests.

  4. AI Assistant: Sends the summarized result back to the backend.

  5. Output: Results displayed on the frontend; logs, documentation, and visuals generated for report/blog submission.
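The backend-to-assistant exchange in step 4 can be pictured as a simple JSON request/response over HTTP. The exact payload shape below is illustrative, not taken from the actual code:

```
POST /summarize HTTP/1.1
Content-Type: application/json

{"text": "Buy groceries. Finish the project report by Friday. Call the dentist."}

HTTP/1.1 200 OK
Content-Type: application/json

{"summary": "Three pending tasks: shopping, a report due Friday, and a dentist appointment."}
```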

Overall Architecture

System architecture overview:

The final system follows a microservice-inspired, containerized architecture where each major responsibility is isolated into its own Docker container. The frontend (HTML/CSS/JS + Tailwind, served by nginx:alpine) handles all UI and user interactions; the backend (node:18-alpine, Express) provides REST APIs, authentication, and Firestore access through the Firebase Admin SDK; and the AI assistant (python:3.11-slim, FastAPI) acts as a dedicated microservice that queries the Hugging Face inference API to produce natural-language summaries. These containers are orchestrated via Docker Compose and communicate over an internal bridge network, offering isolation, easy development workflows, and consistent deployments across environments.

Documentation & demo ecosystem:

Beyond the runtime system, Part 3 includes a documentation and dissemination layer: a formal report (procedures, diagrams, screenshots), a YouTube demo video (5–10 mins) capturing live container orchestration and feature flows, a blog post for public sharing, and a Docker Showdown live demonstration. These artifacts draw on container logs, screenshots, and outputs, providing reproducible walkthroughs and evidence for each claimed behavior. This combined approach ensures that the technical implementation is not only functional but also demonstrable, reproducible, and understandable by external audiences.

Data Flow

  1. User interacts with the frontend (browser).

  2. Frontend communicates with backend APIs over Docker network.

  3. Backend handles Firebase database operations and sends data to AI Assistant when required.

  4. AI Assistant returns processed summaries to backend → frontend.

  5. Data and artifacts are stored on Firebase, GitHub, and Docker Hub.

  6. Final documentation, video, and blog outputs represent dissemination of the project.




Procedure — Part 1 (Learning Docker & planning)

1. Install and Configure Docker Desktop:

  • Installed Docker Desktop on the host machine (Windows/macOS/Linux).

  • Verified installation using commands docker --version and docker run hello-world.

Fig 1:- Docker Desktop view

Fig 2:- Basic Docker Initializations

2. Understand Core Docker Concepts:

  • Experimented with images, containers, and volumes using simple test containers.

  • Explored how containerization ensures portability and reproducibility across systems.

3. Create a Minimal Nginx Test Container:

  • Pulled the lightweight image nginx:alpine using docker pull nginx:alpine.

    Fig 3:- Pulling base nginx:alpine image

  • Created a small HTML page (index.html) for testing static content serving.

  • Mounted the page inside the container using a volume:


   Fig 4:- Volume mounted to container

  • Accessed the page in the browser to confirm the containerized web service.

Fig 5:- HTML test page

4. Experiment with Dockerfile and Compose:

  • Created a simple Dockerfile for the nginx container.

Fig 6:- Simple frontend Dockerfile

  • Created a simple docker-compose.yml file to understand how multiple containers interact with each other.


Procedure — Part 2 (Frontend build & frontend-only containerization)

  • Frontend Setup:

    • Built the UI using HTML, CSS, and JavaScript integrated with the NiceAdmin Bootstrap template.
    • Added TailwindCSS for utility-first styling and responsive design.
Fig 7:- Login Page

Fig 8:- User Dashboard

Fig 9.1:- Add new task

Fig 9.2:- Add new task

Fig 10:- Viewing All tasks

Fig 11:- Calendar

Fig 12:- User Profile

Fig 13:- View task details
    • Configured Firebase SDK for Authentication and Firestore access directly from the frontend.
  • Firebase Integration:

    • Created a Firebase project and initialized web credentials.

Fig 14:- Firebase web console
    • Enabled Email/Password and Google Sign-In authentication methods.

Fig 15:- Setting up Firebase Authentication
    • Verified Firestore read/write operations through the frontend app.
  • Containerization with Nginx:

    • Created a Dockerfile for the frontend

    
Fig 16:- Frontend Dockerfile
    • Built and ran the image using:

Fig 17:- Building Frontend Image
    • Ran the container with: docker run -p 8080:80 taskmanager-frontend

    • Verified the frontend loads correctly and functions as expected.

  • Testing Deployment Portability:

    • Tested the container on multiple systems (Windows/macOS) to confirm consistent performance.

    • Validated that Firebase-based auth and Firestore worked identically across environments.
  • Document Findings:

    • Recorded insights on how frontend-only containerization simplifies setup but limits backend extensibility, setting the stage for Part 3.

Procedure — Part 3 (Full dockerization, backend separation, AI assistant & documentation)

  • Backend Development and Containerization:

    • Developed the backend using Node.js and Express.js to handle task CRUD operations and user authentication.
    • Integrated Firebase Admin SDK for secure data access.
    • Created a Dockerfile for backend.
    
Fig 18:- Backend Dockerfile
    • Built and tested the container locally using:
    docker build -t taskmanager-backend .
    docker run -p 5000:5000 taskmanager-backend
    docker-compose up --build
  • AI Summarization Microservice:

    • Implemented a lightweight FastAPI app that summarizes task content using a Hugging Face transformer model.

    • Created a Dockerfile for the AI service using python:3.11-slim, exposing port 8000.
Fig 19:- AI module Dockerfile
    • Tested the endpoint with sample text before integration.
  • Compose-based Orchestration:

    • Created docker-compose.yml to define and manage the three containers:

      • frontend → Nginx (port 80)

      • backend → Node.js + Express (port 5000)
      • ai-service → FastAPI summarizer (port 8000)
Fig 20.1:- Docker-compose.yml file

Fig 20.2:- Docker-compose.yml file
    • Configured inter-service networking via Docker internal bridge.

    • Launched all containers together using:
      • docker-compose up --build
Fig 21.1:- Launching the docker-compose.yml file

Fig 21.2:- Building containers

Fig 21.3:- Application running
  • Integration Testing:

    • Verified smooth API communication:

      • Frontend → Backend (task CRUD)

      • Backend → AI Service (summary generation).
Fig 22.1:- Checking DB connectivity

Fig 22.2:- Task added to DB

Fig 22.3:- Verified added task
    • Conducted real-time testing of task creation and AI summarization workflow.
  • Docker Hub:

    • Pushed all images (frontend, backend, AI service) to Docker Hub after creating a Docker Hub account.

    • docker tag taskmanager-frontend pranav6788/taskmanager-frontend:v1
      docker tag taskmanager-backend pranav6788/taskmanager-backend:v1
      docker tag taskmanager-assistant pranav6788/taskmanager-assistant:v1

    • docker push pranav6788/taskmanager-frontend:v1
      docker push pranav6788/taskmanager-backend:v1
      docker push pranav6788/taskmanager-assistant:v1
    • docker pull pranav6788/taskmanager-frontend:v1

      docker run -p 80:80 pranav6788/taskmanager-frontend:v1
Fig 23:- Docker Hub repositories
    • Uploaded final Compose file and code to GitHub repository.

    • Documented setup, architecture diagrams, and workflow in the technical blog.

  • Docker Showdown & YouTube Demo Preparation:

    • Created demo scripts and recorded a 5–10 minute walkthrough.

    • Highlighted Docker benefits — portability, scalability, reproducibility — across all phases.

Modifications Done in the Downloaded Containers

The Task Manager project did not rely on any pre-built application containers. Instead, official base images were pulled and then modified to suit the requirements of the application.

Frontend (nginx:alpine)

  1. Pulled the minimal Nginx Alpine image (nginx:alpine) as the base layer.

  2. Created production build of static frontend.

  3. Replaced default Nginx HTML with custom app files (COPY ./dist /usr/share/nginx/html).

  4. Configured nginx.conf to route SPA paths to loginPage.html.

  5. Tested that Firebase SDK calls work within the container (browser → container → Firebase).
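The SPA routing in step 4 can be captured with a small nginx.conf sketch. The fallback page name comes from this report; everything else is illustrative:

```
server {
    listen 80;
    root   /usr/share/nginx/html;
    index  loginPage.html;

    location / {
        # Serve the requested file if it exists,
        # otherwise fall back to the SPA entry page
        try_files $uri $uri/ /loginPage.html;
    }
}
```

The try_files directive is what lets client-side routes resolve to the single-page entry point instead of returning 404s.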

Backend (node:18-alpine)

  1. Started from the official Node.js LTS Alpine image — node:18-alpine, chosen for its small footprint and security.

  2. Added index.js and package.json into container filesystem.

  3. Installed dependencies (npm install) inside image during build.

  4. Integrated Firebase Admin SDK and included service account credentials (prefer secrets or environment variables for security).

  5. Implemented REST endpoints for tasks and authentication; tested via Postman and frontend.

  6. Copied only the essential metadata files first (package*.json) to leverage Docker’s layer caching.

  7. Installed only production dependencies to minimize image size and speed up container startup.

  8. Copied the remaining backend source files (index.js, routes, configs, and the Firebase key loader).
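Steps 6–8 correspond to a layered Dockerfile roughly like the following sketch; the working directory and start command are assumptions based on the report:

```
FROM node:18-alpine
WORKDIR /app

# Copy only package metadata first so the npm install layer stays
# cached until the dependency list actually changes
COPY package*.json ./
RUN npm install --production

# Copy the remaining backend source (index.js, routes, configs)
COPY . .

EXPOSE 5000
CMD ["node", "index.js"]
```

Ordering the COPY instructions this way means routine source edits rebuild only the final layers, not the dependency installation.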

AI Assistant (python:3.11-slim)

  1. Used Python 3.11 Slim as the base image (python:3.11-slim) for a minimal, efficient runtime.

  2. Created a working directory /assistant.

  3. Added FastAPI app code and requirements.txt.

  4. Installed packages in image (pip install -r requirements.txt).

  5. Configured environment variables for Hugging Face API token.

  6. Implemented POST endpoint /summarize that accepts task text and returns a summary.

  7. Exposed port 8000 and used Uvicorn as the application server.
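Putting steps 1–7 together, the AI assistant Dockerfile looks roughly like this. The module path main:app and the environment variable name are assumptions:

```
FROM python:3.11-slim
WORKDIR /assistant

# Install dependencies first to cache the pip layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Hugging Face token is injected at run time, never baked into the image
ENV HF_API_TOKEN=""

EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```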


GitHub Link / Docker Hub Link of the Modified Containers

GitHub Repository

Outcomes of the DA

  • Working Task Manager:
    Functional web app with user authentication, task creation, editing, deletion, and real-time updates through Firebase.

  • AI Summarization:
    Integrated AI assistant using Hugging Face API to generate concise summaries of user tasks.

  • Containerized Microservices:
    Three fully isolated containers — Frontend (Nginx), Backend (Node.js + Express), and AI Assistant (FastAPI) — orchestrated using Docker Compose.

  • Cloud Integration:
    Secure data handling and authentication via Firebase Cloud Firestore and Firebase Auth.

  • Reproducibility & Portability:
    With publicly available Docker Hub images and a Docker Compose file, the entire stack can be replicated or deployed on any system with a single command.

  • Practical Docker Experience:
    Hands-on exposure to Dockerfiles, image building, multi-container networking, and Docker Hub integration, reinforcing real-world DevOps and containerization concepts.

  • Documentation & Outreach:
    Complete documentation, YouTube demonstration, and this blog post for project explanation and community sharing.

  • Docker Showdown Experience:
    Live presentation improved deployment, debugging, and presentation skills through peer and mentor feedback.

  • Presentation-Ready:
    Final slides and demo script prepared for Docker Showdown showcase.

Conclusion

This three-phase journey transformed a concept into a modular, containerized web application that’s reproducible, demonstrable, and extensible. Starting from Docker fundamentals, moving through a frontend-only prototype, and culminating in a multi-container architecture with an AI microservice, the DA demonstrates both hands-on skills and an understanding of modern deployment practices.

Participating in the Docker Showdown workshop offered invaluable insights into real-world DevOps workflows, optimization of multi-container systems, and best practices in image management and orchestration. It also provided an opportunity to present the project to peers and mentors, refining both the technical and communication aspects of deployment. Meanwhile, the creation of the YouTube demonstration video reinforced the importance of clear visualization, step-by-step explanation, and audience-oriented presentation — enhancing the overall documentation quality. Together, these experiences strengthened the understanding of containerized development, collaborative evaluation, and reproducible research practices.

References

  • Docker documentation and official images (nginx, node, python).

  • Firebase docs (Auth & Firestore).

  • FastAPI & HTTPX docs.

  • Hugging Face API docs.

  • YouTube for tutorials.

Acknowledgements

This project acknowledges the original authors and contributors of the base Docker images and template repositories used, including the developers of nginx:alpine, node:18-alpine, and python:3.11-slim. I extend my gratitude to VIT Chennai, the School of Computer Science and Engineering (SCOPE), and Dr. Subbulakshmi T for their constant guidance and insights throughout the Fall Semester 2025–26 as part of the course BCSE408L – Cloud Computing. I would also like to acknowledge the IIT Bombay Spoken Tutorial for Docker for providing valuable learning resources, and express heartfelt thanks to my friends and family for their continuous feedback and support during the development of this project.


-Sarvepalli Krishna Pranav

VIT CHENNAI





































