Building & Dockerizing a Task Manager App — Three-Part Journey
Introduction
In the modern software landscape, containerization has become a cornerstone of efficient development and deployment. Technologies like Docker have transformed how developers build, run, and scale applications — allowing entire environments to be packaged into lightweight, portable containers that work consistently across machines and platforms.
When I began developing my Task Manager Web Application, my main goal wasn’t just to build a functional web tool — it was to learn how Docker could streamline the development workflow, improve scalability, and simplify the deployment process of a multi-component system. The project became a perfect case study for understanding the real-world impact of Docker on modern full-stack development.
At its core, the Task Manager helps users create, organize, and monitor their daily tasks through an intuitive web interface. Beyond traditional task management, it also features an AI assistant that summarizes the user’s tasks to help them stay organized and focused.
The architecture of the project is built with HTML, CSS, JavaScript, TailwindCSS, and Bootstrap for the frontend; Node.js and Express for the backend; Firebase for authentication and data storage; and FastAPI with Hugging Face for the AI assistant module. All these components are dockerized and orchestrated using Docker Compose, ensuring isolated, modular, and reproducible deployments.
Through this three-part Digital Assignment journey, I explored how Docker can:
- Simplify complex multi-service projects
- Enable parallel development and modular testing
- Facilitate smooth scaling and deployment
- Improve project portability and consistency
Objectives of Part 1: Learning Docker and Its Relevance
- Learn Docker fundamentals: images, containers, networks, volumes, and Docker Compose basics.
- Understand the benefits of containerization for portability, reproducibility, and modular design.
- Explore how Docker would fit the Task Manager architecture (frontend, backend, AI) and plan the containerization strategy.
- Prepare a minimal containerized environment (experimenting with nginx:alpine) to serve static content and verify the Docker setup.
Objectives of Part 2: Building and Dockerizing the Frontend
- Build the Task Manager frontend using HTML, CSS, JavaScript, TailwindCSS, and a customized Bootstrap template.
- Integrate Firebase Authentication and Firestore directly in the frontend (no separate backend yet).
- Containerize the frontend with a simple Dockerfile using nginx:alpine and deploy it on Docker Desktop to demonstrate environment portability.
- Validate the client-only approach for quick prototyping and understand the limitations that motivate moving to a backend in Part 3.
Objectives of Part 3: Multi-Container Deployment, AI Assistant, and Documentation
- Transition from a frontend-only prototype to a multi-container architecture: separate the backend (Node.js + Express) and add an AI assistant (FastAPI) as an independent microservice.
- Create Dockerfiles for the frontend (nginx:alpine), backend (node:18-alpine), and AI assistant (python:3.11-slim).
- Orchestrate all services with Docker Compose, publish images to Docker Hub, and ensure reproducible deployment.
- Produce comprehensive documentation, prepare a Docker Showdown live demo, record a 5–10 minute YouTube walkthrough, and publish a technical blog summarizing the work.
Names of the Containers Involved (Docker Hub links)
Primary functionality of the containers:
- taskmanager-frontend - Hosts the frontend web interface built with HTML, CSS, JavaScript, and Tailwind CSS. Serves static files and communicates with the backend APIs.
- taskmanager-backend - Implements the REST API using Node.js, Express.js, and CORS. Handles authentication, database operations, and API logic using Firebase Admin SDK service account credentials.
- taskmanager-assistant - Runs the AI summarization module built with FastAPI and HTTPX. Connects to the Hugging Face API to generate intelligent summaries of user tasks.
Software Tools and Technologies Used
Part 1 – Architecture: Learning Docker and Its Relevance
Data Flow
- Input: Developer runs docker build and docker run commands from the terminal.
- Processing: Docker Engine pulls the base image (nginx:alpine) from Docker Hub, builds the image, and runs it as a container.
- Output: A lightweight running web server hosts the static content (test HTML/JS files) accessible via localhost.
- Feedback Loop: Developer interacts with the Docker CLI and Docker Desktop to view logs, monitor container health, and understand image layers.
Part 2 – Architecture: Building and Dockerizing the Frontend
Part 2 focused on building the actual frontend of the Task Manager web application and connecting it directly to Firebase (Authentication and Cloud Firestore). At this stage, no dedicated backend existed — all Firebase interactions (authentication, CRUD operations) were managed using JavaScript code embedded in the frontend.
Once the interface was complete, it was containerized using Docker. The Dockerfile utilized nginx:alpine as a lightweight base image to serve static files efficiently. This setup demonstrated the real-world application of Docker learned in Part 1 — enabling isolated, reproducible environments where the frontend could be easily built, tested, and deployed on any system using Docker Desktop.
This phase was the first step toward application-level deployment. It represented the transition from basic Docker experimentation to an actual running web-based system capable of interacting with a live cloud database.
Data Flow
- Input: User accesses the application via a browser and performs actions like login, add task, or delete task.
- Frontend Processing: JavaScript captures user input and directly communicates with Firebase through the Firebase SDK.
- Cloud Processing: Firebase authenticates the user, performs database operations, and returns results.
Part 3 – Architecture: Multi-Container Deployment, AI Assistant, and Documentation
In Part 3, the architecture evolved into a multi-container application connected through Docker Compose. The project introduced a dedicated Node.js backend container (using the node:18-alpine base image), a Python-based AI Assistant container (using python:3.11-slim), and the existing frontend container.
Each component was isolated in its own Docker environment and connected through a shared Docker network. The backend handled API routing, authentication, and Firebase interaction. The AI Assistant, built using FastAPI and the Hugging Face API, generated intelligent task summaries for users.
Beyond technical setup, Part 3 also included documentation, Docker Showdown demonstration, YouTube video creation, and blog publication—showcasing the full lifecycle from concept to presentation.
Data Flow
- Input: User interacts with the frontend UI (task creation, viewing, summarization).
- Frontend: Sends API requests to the backend via the Docker network.
- Backend:
  - Handles Firebase CRUD/authentication.
  - Communicates with the AI Assistant for summarization requests.
- AI Assistant: Sends the summarized result back to the backend.
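The report does not show the exact request body the backend sends to the AI assistant, so the sketch below illustrates one plausible contract for that hop of the data flow: task records are flattened into plain text before being posted to the summarizer. The function name and payload shape are assumptions, not taken from the project source.

```python
# Illustrative only: payload shape for the backend -> AI-assistant hop is assumed.
import json

def build_summarize_request(tasks):
    """Flatten a list of task dicts into a JSON body for a POST /summarize call."""
    text = "\n".join(f"- {t['title']}: {t.get('notes', '')}" for t in tasks)
    return json.dumps({"text": text})

body = build_summarize_request([
    {"title": "Buy groceries", "notes": "milk, eggs"},
    {"title": "Finish report"},
])
```

On the wire, the backend would POST a body like this to the assistant container (reachable by its Compose service name, e.g. http://ai-service:8000/summarize) and relay the returned summary to the frontend.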
Overall Architecture
System architecture overview:
The final system follows a microservice-inspired, containerized architecture where each major responsibility is isolated into its own Docker container. The frontend (HTML/CSS/JS + Tailwind, served by nginx:alpine) handles all UI and user interactions; the backend (node:18-alpine, Express) provides REST APIs, authentication, and Firestore access through the Firebase Admin SDK; and the AI assistant (python:3.11-slim, FastAPI) acts as a dedicated microservice that queries the Hugging Face inference API to produce natural-language summaries. These containers are orchestrated via Docker Compose and communicate over an internal bridge network, offering isolation, easy development workflows, and consistent deployments across environments.

Documentation & demo ecosystem:
Beyond the runtime system, Part 3 includes a documentation and dissemination layer: a formal report (procedures, diagrams, screenshots), a YouTube demo video (5–10 minutes) capturing live container orchestration and feature flows, a blog post for public sharing, and a Docker Showdown live demonstration. These artifacts draw on container logs, screenshots, and outputs, providing reproducible walkthroughs and evidence for each claimed behavior. This combined approach ensures that the technical implementation is not only functional but also demonstrable, reproducible, and understandable by external audiences.

Data Flow
- User interacts with the frontend (browser).
- Frontend communicates with backend APIs over the Docker network.
- Backend handles Firebase database operations and sends data to the AI Assistant when required.
- AI Assistant returns processed summaries to the backend → frontend.
- Data and artifacts are stored on Firebase, GitHub, and Docker Hub.
- Final documentation, video, and blog outputs represent dissemination of the project.
Procedure — Part 1 (Learning Docker & planning)
1. Install and Configure Docker Desktop:
Installed Docker Desktop on the host machine (Windows/macOS/Linux).
Verified installation using commands docker --version and docker run hello-world.
2. Understand Core Docker Concepts:
Experimented with images, containers, and volumes using simple test containers.
Explored how containerization ensures portability and reproducibility across systems.
3. Create a Minimal Nginx Test Container:
Pulled the lightweight image nginx:alpine using docker pull nginx:alpine.
[Fig 3: Pulling the base nginx:alpine image]
Created a small HTML page (index.html) for testing static content serving.
Mounted the page inside the container using a volume.
Accessed the page in the browser to confirm the containerized web service.
4. Experiment with Dockerfile and Compose:
Created a simple Dockerfile for the nginx container.
Created a simple docker-compose.yml file to understand how multiple containers interact with each other.
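As an illustration of the kind of minimal Part 1 setup described above, a docker-compose.yml for the nginx test could look like the following (file name, host port, and mount path are assumptions for the sketch, not the project's actual file):

```yaml
# Minimal sketch: serve a local index.html with nginx:alpine via a volume mount
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./index.html:/usr/share/nginx/html/index.html:ro
```

Running docker-compose up with this file reproduces the volume-mounted static page described in step 3, accessible at localhost:8080.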
Procedure — Part 2 (Frontend build & frontend-only containerization)
Frontend Setup:
- Built the UI using HTML, CSS, and JavaScript integrated with the NiceAdmin Bootstrap template.
- Added TailwindCSS for utility-first styling and responsive design.
- Configured Firebase SDK for Authentication and Firestore access directly from the frontend.
Firebase Integration:
- Created a Firebase project and initialized web credentials.
- Enabled Email/Password and Google Sign-In authentication methods.
- Verified Firestore read/write operations through the frontend app.
Containerization with Nginx:
- Created a Dockerfile for the frontend.
- Built and ran the image using:
  docker build -t task-manager-frontend .
  docker run -p 8080:80 task-manager-frontend
- Verified the frontend loads correctly and functions as expected.
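The frontend Dockerfile itself is not reproduced in this section; based on the description here and on the "Modifications" section later in the post (which mentions COPY ./dist /usr/share/nginx/html), a sketch might be:

```dockerfile
# Illustrative frontend Dockerfile (directory layout assumed)
FROM nginx:alpine
# Replace the default Nginx content with the built static app
COPY ./dist /usr/share/nginx/html
EXPOSE 80
```

Because nginx:alpine already starts Nginx as its default command, no CMD line is needed for a purely static frontend.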
Testing Deployment Portability:
- Tested the container on multiple systems (Windows/macOS) to confirm consistent performance.
- Validated that Firebase-based auth and Firestore worked identically across environments.

Document Findings:
- Recorded insights on how frontend-only containerization simplifies setup but limits backend extensibility, setting the stage for Part 3.
Procedure — Part 3 (Full dockerization, backend separation, AI assistant & documentation)
Backend Development and Containerization:
- Developed the backend using Node.js and Express.js to handle task CRUD operations and user authentication.
- Integrated Firebase Admin SDK for secure data access.
- Created a Dockerfile for the backend.
- Built and tested the container locally using:
  docker build -t taskmanager-backend .
  docker run -p 5000:5000 taskmanager-backend

AI Summarization Microservice:
- Implemented a lightweight FastAPI app that summarizes task content using a Hugging Face transformer model.
- Created a Dockerfile for the AI service using python:3.11-slim, exposing port 8000.
- Tested the endpoint with sample text before integration.
Compose-based Orchestration:
- Created docker-compose.yml to define and manage the three containers:
  frontend → Nginx (port 80)
  backend → Node.js + Express (port 5000)
  ai-service → FastAPI summarizer (port 8000)
- Configured inter-service networking via Docker's internal bridge.
- Launched all containers together using:
  docker-compose up --build
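The service names and published ports listed above suggest a Compose file along these lines (build-context paths and the depends_on ordering are assumptions, not taken from the repository):

```yaml
# Sketch only: service names and ports from the report; paths assumed
services:
  frontend:
    build: ./frontend
    ports:
      - "80:80"
    depends_on:
      - backend
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    depends_on:
      - ai-service
  ai-service:
    build: ./assistant
    ports:
      - "8000:8000"
```

Because all three services share the default Compose network, the backend can reach the summarizer at http://ai-service:8000 without any extra network configuration.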
Integration Testing:
- Verified smooth API communication:
  Frontend → Backend (task CRUD)
  Backend → AI Service (summary generation)
- Conducted real-time testing of the task creation and AI summarization workflow.
Docker Hub:
- Created a Docker Hub account and pushed all images (frontend, backend, AI service):
  docker tag taskmanager-frontend pranav6788/taskmanager-frontend:v1
  docker tag taskmanager-backend pranav6788/taskmanager-backend:v1
  docker tag taskmanager-assistant pranav6788/taskmanager-assistant:v1
  docker push pranav6788/taskmanager-frontend:v1
  docker push pranav6788/taskmanager-backend:v1
  docker push pranav6788/taskmanager-assistant:v1
- Verified that a published image can be pulled and run on any machine:
  docker pull pranav6788/taskmanager-frontend:v1
  docker run -p 80:80 pranav6788/taskmanager-frontend:v1
- Uploaded the final Compose file and code to the GitHub repository.
- Documented setup, architecture diagrams, and workflow in the technical blog.

Docker Showdown & YouTube Demo Preparation:
- Created demo scripts and recorded a 5–10 minute walkthrough.
- Highlighted Docker benefits — portability, scalability, reproducibility — across all phases.
Modifications Done in Downloaded Containers
The Task Manager project did not rely on any pre-built application containers. Instead, official base images were pulled and then modified to suit the specifications of the application.
Frontend (nginx:alpine)
- Pulled the minimal Nginx Alpine image (nginx:alpine) as the base layer.
- Created a production build of the static frontend.
- Replaced the default Nginx HTML with custom app files (COPY ./dist /usr/share/nginx/html).
- Configured nginx.conf to route SPA paths to loginPage.html.
- Tested that Firebase SDK calls work within the container (browser → container → Firebase).
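The report states that nginx.conf routes SPA paths to loginPage.html; a minimal server block implementing that behavior could look like the following (the actual configuration in the repository may differ):

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index loginPage.html;

    # Serve the file if it exists; otherwise fall back to the SPA entry page
    location / {
        try_files $uri $uri/ /loginPage.html;
    }
}
```

The try_files fallback is what lets client-side routes resolve correctly when the browser requests a path that has no matching static file.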
Backend (node:18-alpine)
- Started from the official Node.js LTS Alpine image — node:18-alpine, chosen for its small footprint and security.
- Added index.js and package.json into the container filesystem.
- Installed dependencies (npm install) inside the image during the build.
- Integrated Firebase Admin SDK and included service account credentials (prefer secrets or environment variables for security).
- Implemented REST endpoints for tasks and authentication; tested via Postman and the frontend.
- Copied only the essential metadata files first (package*.json) to leverage Docker's layer caching.
- Installed only production dependencies to minimize image size and speed up container startup.
- Copied the remaining backend source files (index.js, routes, configs, and the Firebase key loader).
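The three copy/install steps above map naturally onto a layered Dockerfile. A sketch consistent with that description follows (the exact file names, port, and start command are assumptions):

```dockerfile
FROM node:18-alpine
WORKDIR /app

# 1. Copy metadata first so the dependency layer is cached across rebuilds
COPY package*.json ./

# 2. Install only production dependencies to keep the image small
RUN npm install --omit=dev

# 3. Copy the remaining backend source last, so code changes
#    don't invalidate the dependency layer
COPY . .

EXPOSE 5000
CMD ["node", "index.js"]
```

Ordering the COPY instructions this way means npm install reruns only when package*.json changes, not on every source edit.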
AI Assistant (python:3.11-slim)
- Used Python 3.11 Slim as the base image (python:3.11-slim) for a minimal, efficient runtime.
- Created a working directory /assitant.
- Added the FastAPI app code and requirements.txt.
- Installed packages in the image (pip install -r requirements.txt).
- Configured environment variables for the Hugging Face API token.
- Implemented a POST endpoint /summarize that accepts task text and returns a summary.
- Exposed port 8000 and used Uvicorn as the application server.
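Putting the steps above together, the AI-assistant Dockerfile might look like this (the FastAPI module name main:app and the token variable name are assumptions; the working-directory name follows the report):

```dockerfile
FROM python:3.11-slim
WORKDIR /assitant

# Install Python dependencies first for layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The Hugging Face token is supplied at runtime,
# e.g. docker run -e HF_API_TOKEN=... (variable name illustrative)
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Binding Uvicorn to 0.0.0.0 (rather than the default 127.0.0.1) is what makes port 8000 reachable from the other containers and from the host.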
GitHub Link / Docker Hub Link of Modified Containers
GitHub Repository
Docker Hub Images
Outcomes of the DA
- Working Task Manager: Functional web app with user authentication, task creation, editing, deletion, and real-time updates through Firebase.
- AI Summarization: Integrated AI assistant using the Hugging Face API to generate concise summaries of user tasks.
- Containerized Microservices: Three fully isolated containers — Frontend (Nginx), Backend (Node.js + Express), and AI Assistant (FastAPI) — orchestrated using Docker Compose.
- Cloud Integration: Secure data handling and authentication via Firebase Cloud Firestore and Firebase Auth.
- Reproducibility & Portability: With publicly available Docker Hub images and a Docker Compose file, the entire stack can be replicated or deployed on any system with a single command.
- Practical Docker Experience: Hands-on exposure to Dockerfiles, image building, multi-container networking, and Docker Hub integration, reinforcing real-world DevOps and containerization concepts.
- Documentation & Outreach: Complete documentation, YouTube demonstration, and this blog post for project explanation and community sharing.
- Docker Showdown Experience: Live presentation improved deployment, debugging, and presentation skills through peer and mentor feedback.
- Presentation-Ready: Final slides and demo script prepared for the Docker Showdown showcase.
Conclusion
This three-phase journey transformed a concept into a modular, containerized web application that’s reproducible, demonstrable, and extensible. Starting from Docker fundamentals, moving through a frontend-only prototype, and culminating in a multi-container architecture with an AI microservice, the DA demonstrates both hands-on skills and an understanding of modern deployment practices.
Participating in the Docker Showdown workshop offered invaluable insights into real-world DevOps workflows, optimization of multi-container systems, and best practices in image management and orchestration. It also provided an opportunity to present the project to peers and mentors, refining both the technical and communication aspects of deployment. Meanwhile, the creation of the YouTube demonstration video reinforced the importance of clear visualization, step-by-step explanation, and audience-oriented presentation — enhancing the overall documentation quality. Together, these experiences strengthened the understanding of containerized development, collaborative evaluation, and reproducible research practices.
References
- Docker documentation and official images (nginx, node, python).
- Firebase documentation (Auth & Firestore).
- FastAPI & HTTPX documentation.
- Hugging Face API documentation.
- YouTube tutorials.
Acknowledgements
This project acknowledges the original authors and contributors of the base Docker images and template repositories used, including the developers of nginx:alpine, node:18-alpine, and python:3.11-slim. I extend my gratitude to VIT Chennai, the School of Computer Science and Engineering (SCOPE), and Dr. Subbulakshmi T for their constant guidance and insights throughout the Fall Semester 2025–26 as part of the course BCSE408L – Cloud Computing. I would also like to acknowledge the IIT Bombay Spoken Tutorial for Docker for providing valuable learning resources, and express heartfelt thanks to my friends and family for their continuous feedback and support during the development of this project.
-Sarvepalli Krishna Pranav
VIT CHENNAI