We are still working with the great course (https://www.udemy.com/course/microservices-clean-architecture-ddd-saga-outbox-kafka-kubernetes/) by Ali Gelenler, and now our focus is on local environment configuration. Every developer needs to set up a local environment before promoting code to the non-prod / prod environments, and with the course's code we work with:
- Zookeeper
- Kafka
- Confluent Schema Registry
- Kafka Manager (CMAK)
- PostgreSQL
We could start virtual machines on a laptop, but virtualization carries significant resource overhead, so containerization is the better option. We will use Docker and Docker Compose to create containers for the needed services. Also, we are staying with Zookeeper, even though Kafka KRaft is its designated replacement.
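For orientation, here is a minimal sketch of how the services section of such a docker-compose.yml can be laid out; the image names and tags are illustrative placeholders, not necessarily the exact ones pinned in the course repository:
version: '3.7'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest       # illustrative tag
  kafka-broker-1:
    image: confluentinc/cp-kafka:latest           # one of the Kafka brokers
  schema-registry:
    image: confluentinc/cp-schema-registry:latest
  kafka-manager:
    image: hlebalbau/kafka-manager:stable         # one possible CMAK image
  postgres:
    image: postgres:latest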
All configuration files can be found here.
We need to start Zookeeper first, followed by Kafka, and then Kafka Manager. To achieve this, we use the "depends_on"
configuration parameter. PostgreSQL is the only component that does not depend on any others.
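A sketch of how this ordering can be expressed, using the service names from the skeleton above; keep in mind that depends_on only controls start-up order and does not wait for a service to actually be ready:
services:
  kafka-broker-1:
    depends_on:
      - zookeeper                 # broker starts only after Zookeeper
  kafka-manager:
    depends_on:
      - zookeeper
      - kafka-broker-1            # Kafka Manager starts after the broker(s)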
Additionally, we will use a single Docker network to connect these services.
A single Docker network
networks:
  food-ordering-system:
    driver: bridge
allows the containers to interact with each other. The ports directive, like
ports:
  - "39092:39092"
exposes a container's port to the host network.
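Each container has to join that network explicitly; a sketch combining both directives for one broker:
services:
  kafka-broker-1:
    networks:
      - food-ordering-system      # attach to the shared bridge network
    ports:
      - "39092:39092"             # reachable from the host as localhost:39092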
Another aspect is persisting data by mapping local directories to directories inside the containers, such as for Zookeeper:
volumes:
  - "./volumes/zookeeper/data:/var/lib/zookeeper/data"
  - "./volumes/zookeeper/transactions:/var/lib/zookeeper/log"
for Kafka broker-3:
volumes:
  - "./volumes/kafka/broker-3:/var/lib/kafka/data"
and for PostgreSQL:
volumes:
  - postgres_data:/var/lib/postgresql/data
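Note that, unlike the Zookeeper and Kafka bind mounts above, postgres_data is a named volume managed by Docker, so it must also be declared at the top level of the compose file:
volumes:
  postgres_data:                  # Docker-managed named volume for PostgreSQL data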
This allows restarting Docker containers without losing data stored in Kafka and PostgreSQL.
To start or stop the Docker Compose stack, just use the run.sh script.
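The actual script is in the linked repository; as a rough, hypothetical sketch, such a wrapper usually boils down to something like this:
#!/bin/bash
# Hypothetical sketch of run.sh; the real script is in the linked configuration repo.
case "$1" in
  start) docker-compose up -d ;;        # create and start all containers in the background
  stop)  docker-compose down ;;         # stop and remove containers; named volumes are kept
  *)     echo "usage: $0 start|stop" ;;
esac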
When you start debugging your services, all of these containers should be up and running:
