Docker
Docker greatly simplifies the process of configuring and managing your Lucenia clusters. You can pull official images from Docker Hub and quickly deploy a cluster using Docker Compose and any of the sample Docker Compose files included in this guide. Experienced Lucenia users can further customize their deployment by creating a custom Docker Compose file.
Docker containers are portable and run on any compatible host that supports Docker (such as Linux, macOS, or Windows). This portability offers flexibility over other installation methods, such as the tarball installation, which requires additional configuration after downloading and unpacking.
This guide assumes that you are comfortable working from the Linux command line interface (CLI). You should understand how to input commands, navigate between directories, and edit text files. For help with Docker or Docker Compose, refer to the official documentation on their websites.
Install Docker and Docker Compose
Visit Get Docker for guidance on installing and configuring Docker for your environment. If you are installing Docker Engine using the CLI, then Docker, by default, will not have any constraints on available host resources. Depending on your environment, you may wish to configure resource limits in Docker. See Runtime options with Memory, CPUs, and GPUs for information.
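As a hedged sketch of what per-container limits can look like at run time, the following command applies memory and CPU caps to a single-node Lucenia container. The limit values are illustrative only, not recommendations:
# Illustrative only: cap the container at 4 GB of RAM and 2 CPUs
docker run -d --memory=4g --cpus=2 \
  -p 9200:9200 -p 9600:9600 \
  -e "discovery.type=single-node" \
  -e "LUCENIA_INITIAL_ADMIN_PASSWORD=<custom-admin-password>" \
  lucenia/lucenia:0.1.0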
Docker Desktop users should set host memory utilization to a minimum of 4 GB by opening Docker Desktop and selecting Settings → Resources.
Docker Compose is a utility that allows users to launch multiple containers with a single command. You pass a file to Docker Compose when you invoke it. Docker Compose reads those settings and starts the requested containers. Docker Compose is installed automatically with Docker Desktop, but users operating in a command line environment must install Docker Compose manually. You can find information about installing Docker Compose on the official Docker Compose GitHub page.
If you need to install Docker Compose manually and your host supports Python, you can use pip to install the Docker Compose package automatically.
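For example, assuming pip is available on your host, a typical installation looks like the following (depending on your environment, you may need pip3 or a virtual environment):
pip install docker-compose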
Configure important host settings
Before installing Lucenia using Docker, configure the following settings. These are the most important settings that can affect the performance of your services, but for additional information, see important system settings.
Linux settings
For a Linux environment, run the following commands:
- Disable memory paging and swapping on the host to improve performance.
sudo swapoff -a
- Increase the number of memory maps available to Lucenia.
# Edit the sysctl config file
sudo vi /etc/sysctl.conf

# Add a line to define the desired value
# or change the value if the key exists,
# and then save your changes.
vm.max_map_count=262144

# Reload the kernel parameters using sysctl
sudo sysctl -p

# Verify that the change was applied by checking the value
cat /proc/sys/vm/max_map_count
Windows settings
For Windows workloads using WSL through Docker Desktop, run the following commands in a terminal to set the vm.max_map_count:
wsl -d docker-desktop
sysctl -w vm.max_map_count=262144
Run Lucenia in a Docker container
Official Lucenia images are hosted on Docker Hub. If you want to inspect the images, you can pull them individually using docker pull, as shown in the following examples.
docker pull lucenia/lucenia:0.1.0
docker pull opensearchproject/opensearch-dashboards:2
To download a specific version of Lucenia, modify the image tag where it is referenced (either on the command line or in a Docker Compose file). For example, lucenia/lucenia:0.1.0 will pull Lucenia version 0.1.0. Refer to the official image repositories for available versions.
Before continuing, you should verify that Docker is working correctly by deploying Lucenia in a single container.
- Run the following command. The -e "LUCENIA_INITIAL_ADMIN_PASSWORD=<custom-admin-password>" option sets a custom admin password before installation:
docker run -d -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" -e "LUCENIA_INITIAL_ADMIN_PASSWORD=<custom-admin-password>" lucenia/lucenia:0.1.0
- Send a request to port 9200. The default username is admin, and the password is the custom admin password that you set:
curl https://localhost:9200 -ku 'admin:<custom-admin-password>'
- You should get a response that looks like this:
{
  "name" : "a937e018cee5",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "GLAjAG6bTeWErFUy_d-CLw",
  "version" : {
    "distribution" : "lucenia",
    "number" : <version>,
    "build_type" : <build-type>,
    "build_hash" : <build-hash>,
    "build_date" : <build-date>,
    "build_snapshot" : false,
    "lucene_version" : <lucene-version>,
    "minimum_wire_compatibility_version" : "2.15.0",
    "minimum_index_compatibility_version" : "2.0.0"
  },
  "tagline" : "[SEARCH]...on your terms."
}
- Before stopping the running container, display a list of all running containers and copy the container ID for the Lucenia node you are testing. In the following example, the container ID is a937e018cee5:
$ docker container ls
CONTAINER ID   IMAGE                   COMMAND                  CREATED          STATUS          PORTS                                                                NAMES
a937e018cee5   lucenia/lucenia:0.1.0   "./lucenia-docker…"      19 minutes ago   Up 19 minutes   0.0.0.0:9200->9200/tcp, 9300/tcp, 0.0.0.0:9600->9600/tcp, 9650/tcp   wonderful_boyd
- Stop the running container by passing the container ID to docker stop:
docker stop <containerId>
Remember that docker container ls does not list stopped containers. If you would like to review stopped containers, use docker container ls -a. You can remove unneeded containers manually with docker container rm <containerId_1> <containerId_2> <containerId_3> [...] (pass all container IDs you wish to remove, separated by spaces), or if you want to remove all stopped containers, you can use the shorter command docker container prune.
Deploy a Lucenia cluster using Docker Compose
Although it is technically possible to build a Lucenia cluster by creating containers one command at a time, it is far easier to define your environment in a YAML file and let Docker Compose manage the cluster. The following section contains example YAML files that you can use to launch a predefined cluster with Lucenia. These examples are useful for testing and development, but are not suitable for a production environment. If you don’t have prior experience using Docker Compose, you may wish to review the Docker Compose specification for guidance on syntax and formatting before making any changes to the dictionary structures in the examples.
The YAML file that defines the environment is referred to as a Docker Compose file. By default, docker compose commands first check your current directory for a file that matches any of the following names:
docker-compose.yml
docker-compose.yaml
compose.yml
compose.yaml
If none of those files exist in your current directory, the docker compose command fails.
You can specify a custom file location and name when invoking docker compose with the -f flag:
# Use a relative or absolute path to the file.
docker compose -f /path/to/your-file.yml up
If this is your first time launching a Lucenia cluster using Docker Compose, use the following example docker-compose.yml file. Save it in the home directory of your host and name it docker-compose.yml. This file creates a single-node cluster running the Lucenia service. The container joins a bridge network called lucenia-net and uses a named volume to persist its data. Because this file does not explicitly disable the demo security configuration, self-signed TLS certificates are installed and internal users with default names and passwords are created.
Configuring your license
To obtain a trial license, follow the instructions in the Lucenia Trial License Activation guide. After you receive your trial license, save it as trial.crt in the same directory as your docker-compose.yml file.
Setting a custom admin password
A custom admin password is required to set up a demo security configuration. Do one of the following:
- Before running docker compose up, set a new custom admin password using the following command:
export LUCENIA_INITIAL_ADMIN_PASSWORD=<custom-admin-password>
- Create an .env file in the same folder as your docker-compose.yml file with the LUCENIA_INITIAL_ADMIN_PASSWORD variable set to a strong password value (see the example that follows this list).
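For example, a minimal .env file contains only the variable assignment shown below. Note that Docker Compose uses .env values for variable substitution, so this approach assumes your compose file references the variable (for example, LUCENIA_INITIAL_ADMIN_PASSWORD=${LUCENIA_INITIAL_ADMIN_PASSWORD} in the environment section) rather than hard-coding a password:
# .env (same directory as docker-compose.yml)
LUCENIA_INITIAL_ADMIN_PASSWORD=<custom-admin-password>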
Sample docker-compose.yml
version: '3'
services:
  lucenia-node1:
    image: lucenia/lucenia:0.1.0
    container_name: lucenia-node1
    environment:
      - cluster.name=lucenia-cluster
      - node.name=lucenia-node1
      - discovery.type=single-node
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - network.host=0.0.0.0
      - plugins.license.certificate_filepath=config/trial.crt
      - "LUCENIA_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - LUCENIA_INITIAL_ADMIN_PASSWORD=MyStrongPassword123!
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the Lucenia user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - lucenia-data:/usr/share/lucenia/data
      - ./trial.crt:/usr/share/lucenia/config/trial.crt
    ports:
      - 9200:9200
    networks:
      - lucenia-net
volumes:
  lucenia-data:
  lucenia-config:
networks:
  lucenia-net:
If you override opensearch_dashboards.yml settings using environment variables in your compose file, use all uppercase letters and replace periods with underscores (for example, for lucenia.hosts, use LUCENIA_HOSTS). This behavior is inconsistent with overriding lucenia.yml settings, where the conversion is just a change to the assignment operator (for example, discovery.type: single-node in lucenia.yml is defined as discovery.type=single-node in docker-compose.yml).
From the home directory of your host (containing docker-compose.yml), create and start the containers in detached mode:
docker compose up -d
Verify that the service containers started correctly:
docker compose ps
If a container failed to start, you can review the service logs:
# If you don't pass a service name, docker compose will show you logs from all of the nodes
docker compose logs <serviceName>
If your deployment includes OpenSearch Dashboards (see the Sample Docker Compose file for development later in this guide), verify access to it by connecting to http://localhost:5601 from a browser. The default username is admin, and the password is the custom admin password that you set. We do not recommend using this configuration on hosts that are accessible from the public internet until you have customized the security configuration of your deployment.
Remember that localhost cannot be accessed remotely. If you are deploying these containers to a remote host, then you will need to establish a network connection and replace localhost with the IP or DNS record corresponding to the host.
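For example, assuming the demo security configuration is still enabled, a remote health check might look like the following (replace <remote-host> with the IP address or DNS name of your host):
curl https://<remote-host>:9200 -ku 'admin:<custom-admin-password>'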
Stop the running containers in your cluster:
docker compose down
docker compose down will stop the running containers, but it will not remove the Docker volumes that exist on the host. If you don’t care about the contents of these volumes, use the -v option to delete all volumes, for example, docker compose down -v.
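If you prefer to inspect or remove leftover volumes individually, you can use the docker volume commands, for example:
# List the volumes that remain on the host
docker volume ls
# Remove a specific volume by name
docker volume rm <volumeName>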
Configure Lucenia
Unlike the RPM distribution of Lucenia, which requires a large amount of post-installation configuration, running Lucenia clusters with Docker allows you to define the environment before the containers are even created. This is possible whether you use Docker or Docker Compose.
For example, take a look at the following command:
docker run \
-p 9200:9200 -p 9600:9600 \
-e "discovery.type=single-node" \
-v /path/to/custom-lucenia.yml:/usr/share/lucenia/config/lucenia.yml \
-v /path/to/license.crt:/usr/share/lucenia/config/license.crt \
lucenia/lucenia:0.1.0
By reviewing each part of the command, you can see that it:
- Maps ports 9200 and 9600 (HOST_PORT:CONTAINER_PORT).
- Sets discovery.type to single-node so that bootstrap checks don’t fail for this single-node deployment.
- Uses the -v flag to pass a local file called custom-lucenia.yml to the container, replacing the lucenia.yml file included with the image.
- Uses the -v flag to pass a local license file called license.crt to the container.
- Requests the lucenia/lucenia:0.1.0 image from Docker Hub.
- Runs the container.
If you compare this command to the Sample docker-compose.yml file, you might notice some common settings, such as the port mappings and the image reference. The command, however, deploys only a single container running Lucenia and will not create a container for OpenSearch Dashboards. Furthermore, if you want to use custom TLS certificates, users, or roles, or define additional volumes and networks, then this “one-line” command rapidly grows to an impractical size. That is where Docker Compose becomes useful.
When you build your Lucenia cluster with Docker Compose, you might find it easier to pass custom configuration files from your host to the container, as opposed to enumerating every individual setting in docker-compose.yml. Similar to how the example docker run command mounted a volume from the host to the container using the -v flag, compose files can specify volumes to mount as a sub-option to the corresponding service. The following truncated YAML file demonstrates how to mount a file or directory to the container. Refer to the official Docker documentation on volumes for comprehensive information about volume usage and syntax.
services:
  lucenia-node1:
    volumes:
      - lucenia-data1:/usr/share/lucenia/data
      - ./custom-lucenia.yml:/usr/share/lucenia/config/lucenia.yml
      - /path/to/license.crt:/usr/share/lucenia/config/license.crt
  lucenia-node2:
    volumes:
      - lucenia-data2:/usr/share/lucenia/data
      - ./custom-lucenia.yml:/usr/share/lucenia/config/lucenia.yml
      - /path/to/license.crt:/usr/share/lucenia/config/license.crt
  opensearch-dashboards:
    volumes:
      - ./custom-opensearch_dashboards.yml:/usr/share/opensearch-dashboards/config/opensearch_dashboards.yml
Sample Docker Compose file for development
If you want to build your own compose file from an example, review the following sample docker-compose.yml file. This sample file creates two Lucenia nodes and one OpenSearch Dashboards node with the Security implementation disabled. You can use this sample file as a starting point while reviewing Configuring basic security settings.
version: '3'
services:
  lucenia-node1:
    image: lucenia/lucenia:0.1.0
    container_name: lucenia-node1
    environment:
      - cluster.name=lucenia-cluster # Name the cluster
      - node.name=lucenia-node1 # Name the node that will run in this container
      - discovery.seed_hosts=lucenia-node1,lucenia-node2 # Nodes to look for when discovering the cluster
      - cluster.initial_cluster_manager_nodes=lucenia-node1,lucenia-node2 # Nodes eligible to serve as cluster manager
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - "LUCENIA_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
      - "DISABLE_INSTALL_DEMO_CONFIG=true" # Prevents execution of bundled demo script which installs demo certificates and security configurations to Lucenia
      - "DISABLE_SECURITY_PLUGIN=true" # Disables Security implementation
      - network.host=0.0.0.0
      - plugins.license.certificate_filepath=config/trial.crt
      - LUCENIA_INITIAL_ADMIN_PASSWORD=MyStrongPassword123!
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the lucenia user - set to at least 65536
        hard: 65536
    volumes:
      - lucenia-data1:/usr/share/lucenia/data # Creates volume called lucenia-data1 and mounts it to the container
      - ./trial.crt:/usr/share/lucenia/config/trial.crt
    ports:
      - 9200:9200 # REST API
    networks:
      - lucenia-net # All of the containers will join the same Docker bridge network
  lucenia-node2:
    image: lucenia/lucenia:0.1.0
    container_name: lucenia-node2
    environment:
      - cluster.name=lucenia-cluster # Name the cluster
      - node.name=lucenia-node2 # Name the node that will run in this container
      - discovery.seed_hosts=lucenia-node1,lucenia-node2 # Nodes to look for when discovering the cluster
      - cluster.initial_cluster_manager_nodes=lucenia-node1,lucenia-node2 # Nodes eligible to serve as cluster manager
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - "LUCENIA_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
      - "DISABLE_INSTALL_DEMO_CONFIG=true" # Prevents execution of bundled demo script which installs demo certificates and security configurations to Lucenia
      - "DISABLE_SECURITY_PLUGIN=true" # Disables Security implementation
      - network.host=0.0.0.0
      - plugins.license.certificate_filepath=config/trial.crt
      - LUCENIA_INITIAL_ADMIN_PASSWORD=MyStrongPassword123!
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the lucenia user - set to at least 65536
        hard: 65536
    volumes:
      - lucenia-data2:/usr/share/lucenia/data # Creates volume called lucenia-data2 and mounts it to the container
      - ./trial.crt:/usr/share/lucenia/config/trial.crt
    networks:
      - lucenia-net # All of the containers will join the same Docker bridge network
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:latest
    container_name: opensearch-dashboards
    ports:
      - 5601:5601 # Map host port 5601 to container port 5601
    expose:
      - "5601" # Expose port 5601 for web access to OpenSearch Dashboards
    environment:
      - 'LUCENIA_HOSTS=["http://lucenia-node1:9200","http://lucenia-node2:9200"]'
      - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true" # Disables the Security Dashboards plugin in OpenSearch Dashboards
    networks:
      - lucenia-net
volumes:
  lucenia-data1:
  lucenia-data2:
networks:
  lucenia-net:
Configuring basic security settings
Before making your Lucenia cluster available to external hosts, it’s a good idea to review the deployment’s security configuration. You may recall from the first Sample docker-compose.yml file that, unless disabled by setting DISABLE_INSTALL_DEMO_CONFIG=true, a bundled script will apply a default demo security configuration to the nodes in the cluster. Because this configuration is used for demo purposes, the default usernames and passwords are known. For that reason, we recommend that you create your own security configuration files and use volumes to pass these files to the containers. For specific guidance on Lucenia security settings, see Security configuration.
To use your own certificates in your configuration, add all of the necessary certificates to the volumes section of the compose file:
volumes:
  - ./root-ca.pem:/usr/share/lucenia/config/root-ca.pem
  - ./admin.pem:/usr/share/lucenia/config/admin.pem
  - ./admin-key.pem:/usr/share/lucenia/config/admin-key.pem
  - ./node1.pem:/usr/share/lucenia/config/node1.pem
  - ./node1-key.pem:/usr/share/lucenia/config/node1-key.pem
When you add TLS certificates to your Lucenia nodes with Docker Compose volumes, you should also include a custom lucenia.yml file that defines those certificates. For example:
volumes:
  - ./root-ca.pem:/usr/share/lucenia/config/root-ca.pem
  - ./admin.pem:/usr/share/lucenia/config/admin.pem
  - ./admin-key.pem:/usr/share/lucenia/config/admin-key.pem
  - ./node1.pem:/usr/share/lucenia/config/node1.pem
  - ./node1-key.pem:/usr/share/lucenia/config/node1-key.pem
  - ./custom-lucenia.yml:/usr/share/lucenia/config/lucenia.yml
Remember that the certificates you specify in your compose file must be the same as the certificates defined in your custom lucenia.yml file. You should replace the root, admin, and node certificates with your own. For more information, see Configure TLS certificates. For example, the TLS settings in custom-lucenia.yml should reference the mounted file names:
plugins.security.ssl.transport.pemcert_filepath: node1.pem
plugins.security.ssl.transport.pemkey_filepath: node1-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.http.pemcert_filepath: node1.pem
plugins.security.ssl.http.pemkey_filepath: node1-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.authcz.admin_dn:
- CN=admin,OU=SSL,O=Test,L=Test,C=DE
After configuring security settings, your custom lucenia.yml file might look something like the following example, which adds TLS certificates and the distinguished name (DN) of the admin certificate, defines a few permissions, and enables verbose audit logging:
plugins.security.ssl.transport.pemcert_filepath: node1.pem
plugins.security.ssl.transport.pemkey_filepath: node1-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: node1.pem
plugins.security.ssl.http.pemkey_filepath: node1-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
- CN=A,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA
plugins.security.nodes_dn:
- 'CN=N,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
plugins.security.audit.type: internal_lucenia
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
opendistro_security.audit.config.disabled_rest_categories: NONE
opendistro_security.audit.config.disabled_transport_categories: NONE
For a full list of settings, see Security.
Use the same process to specify a backend configuration in /usr/share/lucenia/config/lucenia-security/config.yml as well as new internal users, roles, mappings, action groups, and tenants in their respective YAML files.
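For example, a minimal sketch of the corresponding volume mounts might look like the following. The host file names are illustrative, and this assumes the security configuration files follow the same directory layout as the config.yml path shown above:
volumes:
  - ./config.yml:/usr/share/lucenia/config/lucenia-security/config.yml
  - ./internal_users.yml:/usr/share/lucenia/config/lucenia-security/internal_users.yml
  - ./roles.yml:/usr/share/lucenia/config/lucenia-security/roles.yml
  - ./roles_mapping.yml:/usr/share/lucenia/config/lucenia-security/roles_mapping.yml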
After replacing the certificates and creating your own internal users, roles, mappings, action groups, and tenants, use Docker Compose to start the cluster:
docker compose up -d
Working with plugins
To use the Lucenia image with a custom plugin, you must first create a Dockerfile. Review the official Docker documentation for information about creating a Dockerfile.
FROM lucenia/lucenia:0.1.0
RUN /usr/share/lucenia/bin/lucenia-plugin install --batch <pluginId>
Then run the following commands:
# Build an image from a Dockerfile
docker build --tag=lucenia-custom-plugin .
# Start the container from the custom image
docker run -p 9200:9200 -p 9600:9600 -v /usr/share/lucenia/data lucenia-custom-plugin
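To confirm that the plugin was installed, you can list the plugins inside the running container. This is a sketch that assumes lucenia-plugin supports a list command, as its OpenSearch counterpart does:
# List installed plugins inside the running container
docker exec <containerId> /usr/share/lucenia/bin/lucenia-plugin list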
Alternatively, you might want to remove a plugin from an image before deploying it. This example Dockerfile removes the Security plugin:
FROM lucenia/lucenia:0.1.0
RUN /usr/share/lucenia/bin/lucenia-plugin remove lucenia-security
You can also use a Dockerfile to pass your own certificates for use with the Security implementation:
FROM lucenia/lucenia:0.1.0
COPY --chown=lucenia:lucenia lucenia.yml /usr/share/lucenia/config/
COPY --chown=lucenia:lucenia my-key-file.pem /usr/share/lucenia/config/
COPY --chown=lucenia:lucenia my-certificate-chain.pem /usr/share/lucenia/config/
COPY --chown=lucenia:lucenia my-root-cas.pem /usr/share/lucenia/config/
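As with the plugin example, you would then build the image and start a container from it. The tag name below is only an example:
# Build an image from the Dockerfile in the current directory
docker build --tag=lucenia-custom-certs .
# Start a container from the custom image
docker run -p 9200:9200 -p 9600:9600 lucenia-custom-certs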