Installation Guide
Welcome to the Installation Guide! This page will help you set up and configure the system using Docker, Podman, or Kubernetes. Use the sections below to work through the setup method that fits your environment.
Installation Methods:
- Docker Setup: Quick and easy containerized installation.
- Podman Setup: Docker-compatible, rootless containers.
- Kubernetes Setup: For scalable, production-grade deployments.
To install using Docker:

- Download the provided docker-compose.yml file below.
- Pull the container image from the registry:
  docker pull registry.frafos.net/mon:<tag>
- Update your docker-compose.yml to use the registry image:
  image: registry.frafos.net/mon:<tag>
- Run:
  docker-compose up -d
- Access the dashboard at http://localhost:3000 (see the verification commands after this list).

Container images are available at: Frafos Container Registry
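Once docker-compose up -d returns, a quick sanity check confirms the stack is healthy. The commands below are a minimal sketch and assume you run them from the directory containing docker-compose.yml; the service name mon matches the example file shown below.

# List the Compose services and their current state
docker-compose ps
# Follow the MON service logs while it initializes
docker-compose logs -f mon
# Confirm the dashboard answers on port 3000
curl -I http://localhost:3000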
Example docker-compose.yml:
# Example Docker Compose file for Frafos monitoring stack
# Each service below represents a containerized application.
services:
  ccm: # Call Control Manager (CCM) service
    image: gitlab.frafos.net:5050/sbc/sbc/ccm:5.5
    container_name: ccm
    ports:
      - "443-444:443-444" # Expose ports 443 and 444
    networks:
      - monitoring # Connect to monitoring network
      - signaling # Connect to signaling network
    restart: always # Always restart on failure
    volumes:
      - ccm-data:/data # Persist data in named volume
    cap_add:
      - AUDIT_CONTROL # Add audit control capability
      - AUDIT_WRITE # Add audit write capability
  elastic: # Elasticsearch service for log and metric storage
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.28
    container_name: elastic
    ports:
      - "9200:9200" # HTTP API
      - "9300:9300" # Transport protocol
    environment:
      - discovery.type=single-node # Run as single node
      - network.host=_local_,_site_ # Bind to local and site interfaces
      - path.repo=/usr/share/elasticsearch/snapshots # Path for snapshots
      #- thread_pool.search.queue_size=10000 # (optional) Increase search queue size
      - xpack.ml.enabled=false # Disable ML features
      - xpack.security.enabled=false # Disable security
      - xpack.security.http.ssl.enabled=false # Disable HTTP SSL
      #- cluster.max_shards_per_node=166 # (optional) Increase max shards
      #- indices.lifecycle.history_index_enabled=false # (optional) Disable ILM history
    networks:
      - monitoring
    restart: always
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
      memlock:
        soft: -1
        hard: -1
    mem_limit: 4g
    volumes:
      - es-data:/usr/share/elasticsearch/data # Data volume
      - es-snapshots:/usr/share/elasticsearch/snapshots # Snapshots volume
      #- ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro # (optional) Custom config
  chrome:
    # Headless Chrome for PDF generation or browser automation
    image: zenika/alpine-chrome:latest
    #image: registry.frafos.net/contrib/alpine-chrome:latest # (alternative image)
    container_name: chrome
    networks:
      - monitoring
    expose:
      - "9222" # Expose remote debugging port
    command:
      - "--no-sandbox"
      - "--remote-debugging-address=0.0.0.0"
      - "--remote-debugging-port=9222"
  mon: # Monitoring service (MON)
    image: registry.frafos.net/abc/mon:10.2
    container_name: mon
    ports:
      - "5000:5000" # SERVER_PORT
      - "5044:5044" # LOGSTASH_BEATS_PORT
      - "3042:3042" # UPLOAD_API_PORT
      - "1873:1873" # UPLOAD_API_RSYNC_PORT
      - "3000:3000" # UI_PORT
    environment:
      #- CCM=ccm # (optional) CCM service name
      #- ES=http://elastic:9200 # (optional) ES endpoint
      #- REPORT_URL=http://127.0.0.1:5000/report # (optional) Report URL
      #- ES_USER=monitor # (optional) ES user
      #- ES_PASSWORD=password # (optional) ES password
    volumes:
      - mon-data:/data # Persist MON data
    networks:
      - monitoring
    tty: true # Enable TTY
    stdin_open: true # Keep STDIN open
volumes:
  ccm-data:
  es-data:
  es-snapshots:
  mon-data:
networks:
  monitoring:
    driver: bridge
  signaling:
    driver: bridge
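If the registry asks for credentials when you pull (this depends on how your account is provisioned), log in first. The commands below are standard Docker; only the registry host comes from this guide:

# Authenticate against the Frafos registry (credentials are supplied by Frafos)
docker login registry.frafos.net
# Then pull the tag you were given
docker pull registry.frafos.net/mon:<tag>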
Note: Docker is the recommended way for quick setup and easy updates.
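"Easy updates" in practice means re-pulling and recreating containers. A typical update cycle, assuming you have pointed the mon service at the new tag in docker-compose.yml, looks like this:

# Fetch the updated images referenced in docker-compose.yml
docker-compose pull
# Recreate only the containers whose images changed
docker-compose up -d
# Verify everything came back up
docker-compose ps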
To install using Podman:

- Download the provided docker-compose.yml file below.
- Pull the container image from the registry:
  podman pull registry.frafos.net/mon:<tag>
- Run the container:
  podman run -d -p 3000:3000 registry.frafos.net/mon:<tag>
- Or use Compose:
  podman-compose up -d
- Access the dashboard at http://localhost:3000 (see the verification commands after this list).

Container images are available at: Frafos Container Registry
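As with Docker, a short check confirms the containers started. This sketch assumes the Compose setup, where the MON container is named mon; if you used the plain podman run command above instead, substitute the container ID or the name you assigned with --name.

# List running containers
podman ps
# Follow the MON container logs
podman logs -f mon
# Confirm the dashboard answers on port 3000
curl -I http://localhost:3000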
The docker-compose.yml used with podman-compose is identical to the example shown in the Docker section above.
Note: Podman is Docker-compatible and supports rootless containers. You can use podman-compose for multi-container setups.
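If you want the container to survive reboots, Podman can emit a systemd unit for it. This is a sketch for a rootless setup, assuming the container is named mon and that your Podman version still provides podman generate systemd (newer releases recommend Quadlet instead):

# Generate a unit file (container-mon.service) for the existing container
podman generate systemd --new --files --name mon
# Install and enable it for the current user
mkdir -p ~/.config/systemd/user
mv container-mon.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-mon.service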
To install using Kubernetes:

- Download the provided manifest file below.
- Make sure your manifest uses the registry image:
  image: registry.frafos.net/mon:<tag>
- Apply the Kubernetes manifests:
  kubectl apply -f manifest.example.yaml
- Monitor pods and services:
  kubectl get pods
  kubectl get svc
- Access the dashboard via the exposed service (see your cluster's configuration and the port-forward sketch after this list).

Container images are available at: Frafos Container Registry
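Until an Ingress or LoadBalancer is configured for your cluster, port-forwarding is the quickest way to reach the UI. The deployment name mon and UI port 3000 below match the example manifest that follows; adjust them if your manifest differs.

# Watch the pods until they report Running/Ready
kubectl get pods -w
# Forward local port 3000 to the MON UI port inside the pod
kubectl port-forward deployment/mon 3000:3000
# Then open http://localhost:3000 in your browser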
Example manifest.yaml:
---
# CCM Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ccm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ccm
  template:
    metadata:
      labels:
        app: ccm
    spec:
      containers:
        - name: ccm
          image: gitlab.frafos.net:5050/sbc/sbc/ccm:5.5
          ports:
            - containerPort: 443
            - containerPort: 444
          volumeMounts:
            - name: ccm-data
              mountPath: /data
          securityContext:
            capabilities:
              add: ["AUDIT_CONTROL", "AUDIT_WRITE"]
      volumes:
        - name: ccm-data
          persistentVolumeClaim:
            claimName: ccm-data-pvc
---
# Elastic Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elastic
  template:
    metadata:
      labels:
        app: elastic
    spec:
      containers:
        - name: elastic
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.28
          ports:
            - containerPort: 9200
            - containerPort: 9300
          env:
            - name: discovery.type
              value: "single-node"
            - name: network.host
              value: "_local_,_site_"
            - name: path.repo
              value: "/usr/share/elasticsearch/snapshots"
            - name: xpack.ml.enabled
              value: "false"
            - name: xpack.security.enabled
              value: "false"
            - name: xpack.security.http.ssl.enabled
              value: "false"
          resources:
            limits:
              memory: "4Gi"
          volumeMounts:
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
            - name: es-snapshots
              mountPath: /usr/share/elasticsearch/snapshots
      volumes:
        - name: es-data
          persistentVolumeClaim:
            claimName: es-data-pvc
        - name: es-snapshots
          persistentVolumeClaim:
            claimName: es-snapshots-pvc
---
# Chrome Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chrome
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chrome
  template:
    metadata:
      labels:
        app: chrome
    spec:
      containers:
        - name: chrome
          image: zenika/alpine-chrome:latest
          ports:
            - containerPort: 9222
          args:
            - "--no-sandbox"
            - "--remote-debugging-address=0.0.0.0"
            - "--remote-debugging-port=9222"
---
# Kibana Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.17.28
          ports:
            - containerPort: 5601
          env:
            - name: ELASTICSEARCH_HOSTS
              value: "http://elastic:9200"
---
# Mon Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mon
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mon
  template:
    metadata:
      labels:
        app: mon
    spec:
      containers:
        - name: mon
          image: gitlab.frafos.net:5050/sbc/sbc/mon:10.1.2
          ports:
            - containerPort: 5000 # SERVER_PORT
            - containerPort: 5044 # LOGSTASH_BEATS_PORT
            - containerPort: 3042 # UPLOAD_API_PORT
            - containerPort: 1873 # UPLOAD_API_RSYNC_PORT
            - containerPort: 3000 # UI_PORT
          volumeMounts:
            - name: mon-data
              mountPath: /data
          tty: true
          stdin: true
      volumes:
        - name: mon-data
          persistentVolumeClaim:
            claimName: mon-data-pvc
---
# Example PersistentVolumeClaims
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ccm-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-snapshots-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mon-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
# Example Services
apiVersion: v1
kind: Service
metadata:
  name: ccm-service
spec:
  selector:
    app: ccm
  ports:
    - name: ccm-443
      protocol: TCP
      port: 443
      targetPort: 443
    - name: ccm-444
      protocol: TCP
      port: 444
      targetPort: 444
---
apiVersion: v1
kind: Service
metadata:
  name: elastic-service
spec:
  selector:
    app: elastic
  ports:
    - name: es-http
      protocol: TCP
      port: 9200
      targetPort: 9200
    - name: es-transport
      protocol: TCP
      port: 9300
      targetPort: 9300
---
apiVersion: v1
kind: Service
metadata:
  name: chrome-service
spec:
  selector:
    app: chrome
  ports:
    - protocol: TCP
      port: 9222
      targetPort: 9222
---
apiVersion: v1
kind: Service
metadata:
  name: kibana-service
spec:
  selector:
    app: kibana
  ports:
    - protocol: TCP
      port: 5601
      targetPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: mon-service
spec:
  selector:
    app: mon
  ports:
    - name: mon-445
      protocol: TCP
      port: 445
      targetPort: 445
    - name: mon-5044
      protocol: TCP
      port: 5044
      targetPort: 5044
    - name: mon-5045
      protocol: TCP
      port: 5045
      targetPort: 5045
    - name: mon-80
      protocol: TCP
      port: 80
      targetPort: 80
    - name: mon-873
      protocol: TCP
      port: 873
      targetPort: 873
Info: Kubernetes setup is ideal for scalable, resilient, and production-grade deployments.
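For a production-grade rollout it helps to keep the whole stack in its own namespace so it can be upgraded or removed as a unit. A minimal sketch; the namespace name frafos-mon is only an example:

# Create a dedicated namespace (name is illustrative)
kubectl create namespace frafos-mon
# Apply the manifests into that namespace
kubectl apply -n frafos-mon -f manifest.example.yaml
# Inspect everything that was created
kubectl get all -n frafos-mon
# Later, remove the whole stack by deleting the namespace
kubectl delete namespace frafos-mon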