# Elasticsearch Snapshots
This guide explains how to create and restore Elasticsearch snapshots for instances running inside Docker (single-node or small clusters). It covers filesystem-based repositories (host volumes) and S3 repositories. Commands include docker exec and curl examples; adapt hostnames, ports, credentials, and paths for your environment.
Important: snapshots are point-in-time backups of Elasticsearch indices and cluster metadata. They do not replace a full disaster recovery plan, but are appropriate for index-level restore and migrations.
## Estimated restore time
As a very rough rule of thumb, restoring or migrating about 5GB of index data takes on the order of 10 minutes on typical developer hardware (SSD + moderate CPU). Actual time depends heavily on:
- disk speed (HDD vs SSD/NVMe)
- CPU and available threads
- cluster load and network (for remote repositories)
- number of shards and index settings (refresh_interval, replicas, etc.)
Use this simple scaling estimate to plan: estimated minutes ≈ (data size / 5 GB) × 10. Treat it as a guideline only; measure on representative hardware when possible, for example with the quick helper below.
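The scaling rule is easy to script as a sanity check. A sketch of the arithmetic only (the function name is ours, not a standard tool):

```bash
# Back-of-the-envelope restore time: (size_gb / 5) * 10 minutes.
estimate_restore_minutes() {
  awk -v gb="$1" 'BEGIN { printf "%.0f\n", gb / 5 * 10 }'
}

estimate_restore_minutes 12   # prints 24, i.e. ~24 minutes for 12 GB
```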
## Overview

- Repository types: `fs` (filesystem) and `s3` (AWS S3). Use `fs` for local/host backups, `s3` for off-host durable storage.
- High-level flow (create):
  1. Ensure the repository path or the S3 plugin is available.
  2. Register the repository with Elasticsearch.
  3. Create the snapshot.
  4. Verify, and copy the snapshot files if needed.
- High-level flow (restore):
  1. Register the repository (if needed).
  2. List snapshots.
  3. Optionally close indices or stop writes.
  4. Restore the snapshot.
  5. Verify cluster and indices.
## Assumptions / variables

Replace these with your environment values where appropriate:

- `ES_HOST`: hostname or container name where the Elasticsearch HTTP API is reachable (default `localhost` if port 9200 is published).
- `ES_PORT`: HTTP port (default `9200`).
- `ES_USER` / `ES_PASS`: credentials if security is enabled (the `elastic` user, for example).
- `REPO_NAME`: snapshot repository name (e.g., `moki_backup`).
- `SNAPSHOT_NAME`: snapshot name (e.g., `snapshot_2025_12_04`).

Example quick set:

```bash
ES_HOST=localhost
ES_PORT=9200
ES_USER=elastic   # only if security is enabled
ES_PASS=changeme  # only if security is enabled
REPO_NAME=moki_backup
SNAPSHOT_NAME="snapshot_$(date +%F)"
```
## A: Filesystem (fs) repository (recommended for local Docker setups)

This approach uses a directory accessible by the Elasticsearch container(s) (via a Docker volume or bind mount) as the snapshot repository.

- Prepare a host directory and mount it into the Elasticsearch container

  On the Docker host, create a directory for snapshots (the example uses `/var/lib/es-snapshots`):

  ```bash
  sudo mkdir -p /var/lib/es-snapshots
  sudo chown -R 1000:1000 /var/lib/es-snapshots
  ```

  Note: Elasticsearch container images usually run as UID 1000. Adjust the owner or permissions if your image uses a different UID.

  Docker Compose snippet (mount into the container at `/usr/share/elasticsearch/backups`):

  ```yaml
  services:
    elasticsearch:
      image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0   # pin a concrete version
      volumes:
        - /var/lib/es-snapshots:/usr/share/elasticsearch/backups
      environment:
        - discovery.type=single-node
        - path.repo=/usr/share/elasticsearch/backups
        # other env vars
      ports:
        - "9200:9200"
  ```
- Register the `fs` repository with Elasticsearch

  If your cluster is unsecured, plain HTTP works; if security is enabled, pass the user and password with `-u`.

  ```bash
  curl -sS -X PUT "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}" \
    -H 'Content-Type: application/json' \
    -d '{"type":"fs","settings":{"location":"/usr/share/elasticsearch/backups","compress":true}}'
  ```

  If security is enabled:

  ```bash
  curl -sS -k -u "${ES_USER}:${ES_PASS}" -X PUT "https://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}" \
    -H 'Content-Type: application/json' \
    -d '{"type":"fs","settings":{"location":"/usr/share/elasticsearch/backups","compress":true}}'
  ```

  Note: `-k` skips TLS certificate verification; drop it if your certificate is trusted.
- Create a snapshot

  ```bash
  curl -sS -X PUT "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/${SNAPSHOT_NAME}?wait_for_completion=true" \
    -H 'Content-Type: application/json' \
    -d '{"indices":"*","ignore_unavailable":true,"include_global_state":true}'
  ```

  Add `-u ${ES_USER}:${ES_PASS}` and `https://` if security/TLS is enabled. `wait_for_completion=true` blocks the request until the snapshot finishes; without it you can poll the snapshot status, as sketched below.
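  A minimal polling loop for the asynchronous case (it greps the compact JSON for the snapshot state rather than requiring `jq`; the state ends as SUCCESS, PARTIAL, or FAILED):

  ```bash
  # Start the snapshot without wait_for_completion, then poll until a terminal state:
  while true; do
    state=$(curl -sS "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/${SNAPSHOT_NAME}" \
      | grep -o '"state":"[A-Z_]*"' | head -n1)
    echo "$(date +%T) ${state}"
    case "${state}" in
      *SUCCESS*|*PARTIAL*|*FAILED*) break ;;
    esac
    sleep 10
  done
  ```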
- Verify the snapshot

  ```bash
  curl -sS "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/${SNAPSHOT_NAME}?pretty"
  curl -sS "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/_all?pretty"
  ```
- Copy or offload snapshot files (optional)

  The snapshot files are stored under the host directory you mounted (`/var/lib/es-snapshots`). You can copy them, tar them, or rsync them to another host:

  ```bash
  sudo tar -czf /tmp/es-snapshots-$(date +%F).tgz -C /var/lib es-snapshots
  scp /tmp/es-snapshots-$(date +%F).tgz user@backup-host:/path/
  ```

  Or, to extract the repository contents from the running container:

  ```bash
  docker cp "$(docker ps -qf 'name=elasticsearch')":/usr/share/elasticsearch/backups /tmp/es-snapbacks
  ```
## B: S3 repository (recommended for cloud/off-host durable storage)

- Install the `repository-s3` plugin in your Elasticsearch image

  On Elasticsearch 8.x the S3 repository type is bundled as a module and needs no install. On 7.x, extend the official image with a Dockerfile:

  ```dockerfile
  FROM docker.elastic.co/elasticsearch/elasticsearch:7.17.0
  RUN bin/elasticsearch-plugin install --batch repository-s3
  ```
- Provide AWS credentials (best: use an IAM role for the instance; otherwise use an access key/secret)

  It is tempting to put access keys in `elasticsearch.yml` or environment variables for quick testing, but `s3.client.default.access_key` and `s3.client.default.secret_key` are secure settings: Elasticsearch refuses to read them from the config file or environment and requires them in the keystore (see the sketch below). An IAM role (instance profile or task role) avoids storing static keys at all and is preferable in production.
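  One way to populate the keystore in a running container without typing secrets interactively (a sketch; assumes the container is named `elasticsearch` and the keys are in the usual AWS environment variables on the host):

  ```bash
  # --stdin reads the secret value from standard input:
  echo "$AWS_ACCESS_KEY_ID" | docker exec -i elasticsearch \
    bin/elasticsearch-keystore add --stdin s3.client.default.access_key
  echo "$AWS_SECRET_ACCESS_KEY" | docker exec -i elasticsearch \
    bin/elasticsearch-keystore add --stdin s3.client.default.secret_key

  # S3 client secure settings are reloadable; otherwise restart the node:
  curl -sS -X POST "http://${ES_HOST}:${ES_PORT}/_nodes/reload_secure_settings"
  ```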
- Register the S3 repository

  ```bash
  curl -sS -X PUT "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}" \
    -H 'Content-Type: application/json' \
    -d '{"type":"s3","settings":{"bucket":"my-es-backups","client":"default","compress":true}}'
  ```

  Recent versions do not accept a `region` repository setting; configure the region or endpoint on the client via `s3.client.*` settings instead.
- Create a snapshot (same as the fs example)

  ```bash
  curl -sS -X PUT "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/${SNAPSHOT_NAME}?wait_for_completion=true" \
    -H 'Content-Type: application/json' \
    -d '{"indices":"*","ignore_unavailable":true,"include_global_state":true}'
  ```
## Restore snapshots (single node)

General restore strategy:

- Register the same repository (or make the repository files available to the Elasticsearch nodes).
- Confirm the snapshot exists and list the contained indices.
- Option A: restore into the same index names (careful: this fails if an index already exists and is open; delete or close it first).
- Option B: restore into new index names using `rename_pattern` / `rename_replacement`.
- List available snapshots

  ```bash
  curl -sS "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/_all?pretty"
  ```

- Inspect snapshot contents (indices)

  ```bash
  curl -sS "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/${SNAPSHOT_NAME}?pretty"
  ```
- Prepare the cluster for restore (recommended)

  Stop index writes or put the application into read-only mode. If restoring into the same index names, delete the existing indices first:

  ```bash
  curl -sS -X DELETE "http://${ES_HOST}:${ES_PORT}/my-index-to-overwrite"   # deletes the existing index
  ```

  Or close the index (a restore can replace an index that is closed or removed):

  ```bash
  curl -sS -X POST "http://${ES_HOST}:${ES_PORT}/my-index/_close"
  ```
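  If application-level read-only mode is not available, writes can be blocked per index instead. A sketch using the standard `index.blocks.write` setting:

  ```bash
  # Reject writes while still allowing reads and metadata changes:
  curl -sS -X PUT "http://${ES_HOST}:${ES_PORT}/my-index/_settings" \
    -H 'Content-Type: application/json' \
    -d '{"index.blocks.write": true}'
  ```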
- Restore the snapshot

  Restore all indices from the snapshot. If restoring to a single node, set the number of replicas to 0 during the restore to avoid unassigned shards; if restoring to a cluster, adjust as needed.

  ```bash
  curl -sS -X POST "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/${SNAPSHOT_NAME}/_restore?wait_for_completion=true" \
    -H 'Content-Type: application/json' \
    -d '{"indices":"*","include_global_state":true,"index_settings":{"index.number_of_replicas":0}}'
  ```

  Or restore with renamed indices (to avoid clobbering existing ones). Example renaming `log-*` to `restored-log-*`:

  ```bash
  curl -sS -X POST "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/${SNAPSHOT_NAME}/_restore?wait_for_completion=true" \
    -H 'Content-Type: application/json' \
    -d '{"indices":"log-*","rename_pattern":"log-(.+)","rename_replacement":"restored-log-$1","include_global_state":false}'
  ```

  Add `-u ${ES_USER}:${ES_PASS}` and `https://` where required.
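  If you restored with replicas forced to 0 and later have more than one node, raise the replica count again once the indices are green (the value 1 below is just an example):

  ```bash
  curl -sS -X PUT "http://${ES_HOST}:${ES_PORT}/restored-log-*/_settings" \
    -H 'Content-Type: application/json' \
    -d '{"index.number_of_replicas": 1}'
  ```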
- Verify restored indices and health

  ```bash
  curl -sS "http://${ES_HOST}:${ES_PORT}/_cat/indices?v"
  curl -sS "http://${ES_HOST}:${ES_PORT}/_cluster/health?pretty"
  ```

- Re-enable writes / restore application traffic
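  If you set `index.blocks.write` earlier, clear it to accept writes again (setting it to `null` resets it to the default):

  ```bash
  curl -sS -X PUT "http://${ES_HOST}:${ES_PORT}/my-index/_settings" \
    -H 'Content-Type: application/json' \
    -d '{"index.blocks.write": null}'
  ```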
## Common notes, tips and troubleshooting

- Permissions: the Elasticsearch process must be able to read/write the repository location. For an `fs` repo, ensure the container sees the same path you configured as `location`.
- `wait_for_completion=true` is convenient for scripting; polling the snapshot status (`GET _snapshot/...`) is useful for async flows.
- If a restore fails with `index_already_exists_exception`, either delete/close the existing index(es) or restore with `rename_pattern` / `rename_replacement`.
- If using S3, ensure the `repository-s3` plugin is installed on every node that will access the repository (bundled by default on 8.x).
- Global state: when `include_global_state` is `true`, cluster-level metadata (templates, persistent settings) is restored; use with care in production.
- Snapshots are incremental: after the first full snapshot, subsequent snapshots only store changed segments.
## Quick verification commands (summary)

```bash
# Register repository (fs)
curl -X PUT "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}" -H 'Content-Type: application/json' -d '{"type":"fs","settings":{"location":"/usr/share/elasticsearch/backups","compress":true}}'

# Create snapshot
curl -X PUT "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/${SNAPSHOT_NAME}?wait_for_completion=true" -H 'Content-Type: application/json' -d '{"indices":"*","include_global_state":true}'

# List snapshots
curl "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/_all?pretty"

# Restore snapshot
curl -X POST "http://${ES_HOST}:${ES_PORT}/_snapshot/${REPO_NAME}/${SNAPSHOT_NAME}/_restore?wait_for_completion=true" -H 'Content-Type: application/json' -d '{"indices":"*","include_global_state":true}'
```
## References

- Elasticsearch Snapshot and Restore API
- `repository-s3` plugin
End of guide.