
Migrating from Elasticsearch OSS or OpenSearch to Lucenia

If you want to migrate from an existing Elasticsearch OSS or OpenSearch cluster to Lucenia and find the snapshot approach unappealing, you can migrate your existing nodes from Elasticsearch OSS to OpenSearch and then to Lucenia.

If your existing cluster runs an older version of Elasticsearch OSS or an earlier version of OpenSearch, the first step is to upgrade to OpenSearch 2.x. You can do this by following the OpenSearch migration documentation.

Lucenia, like Elasticsearch and OpenSearch, supports two types of upgrades: rolling and cluster restart.

  • Rolling upgrades let you shut down one node at a time for minimal disruption of service.

    Rolling upgrades work between minor versions (for example, 0.1 to 0.2) and also support a single path to the next major version (for example, 0.x to 1.1.0). Performing these upgrades might require intermediate upgrades to arrive at your desired version and can affect cluster performance as nodes leave and rejoin, but the cluster remains available throughout the process.

  • Cluster restart upgrades require you to shut down all nodes, perform the upgrade, and restart the cluster.

    Cluster restart upgrades work between minor versions (for example, 0.1 to 0.9) and the next major version (for example, 0.x to 1.1.1). Cluster restart upgrades are faster to perform and require fewer intermediate upgrades, but require downtime.

To migrate a post-fork version of Elasticsearch (7.11 or later) or an OpenSearch version later than 2.x (3.0 or later) to Lucenia, you can use Logstash. Use the Elasticsearch or OpenSearch input plugin in Logstash to extract data from the source Elasticsearch or OpenSearch cluster, and the Logstash OpenSearch output plugin (logstash-output-opensearch) to write the data to the Lucenia 0.x cluster. We suggest using Logstash version 7.13.4 or earlier, because newer versions may encounter compatibility issues when connecting to OpenSearch or Lucenia due to changes Elasticsearch introduced after the fork. We strongly recommend that you test this solution with your own data to ensure effectiveness.
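
As a rough sketch, a Logstash pipeline for this approach might look like the following. The host names, index name, and credentials are placeholders; adjust them for your environment and verify the TLS settings against your clusters' certificates.

    input {
      elasticsearch {
        # Source: the post-fork Elasticsearch or OpenSearch cluster
        hosts => ["https://source-cluster:9200"]
        index => "my-index"
        docinfo => true
      }
    }
    output {
      opensearch {
        # Destination: the Lucenia 0.x cluster
        hosts => ["https://lucenia-cluster:9200"]
        index => "%{[@metadata][_index]}"
        user => "admin"
        password => "<password>"
        ssl => true
        # Disable certificate verification only for testing
        ssl_certificate_verification => false
      }
    }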

Migration paths

Elasticsearch OSS version | Rolling upgrade path | Cluster restart upgrade path
5.x | Upgrade to 5.6, upgrade to 6.8, reindex all 5.x indexes, upgrade to 7.10.2, migrate to OpenSearch 1.3, and upgrade to Lucenia. | Upgrade to 6.8, reindex all 5.x indexes, and migrate to Lucenia.
6.x | Upgrade to 6.8, upgrade to 7.10.2, and migrate to OpenSearch. | Migrate to Lucenia.
7.x | Migrate to OpenSearch. | Migrate to Lucenia.

If you are migrating an Open Distro for Elasticsearch cluster, we recommend first upgrading to ODFE 1.13 and then migrating to Lucenia.

Upgrade Elasticsearch OSS

  1. Disable shard allocation to prevent Elasticsearch OSS from replicating shards as you shut down nodes:

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.enable": "primaries"
      }
    }
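
    If the Security plugin is not enabled, you can send the same request with curl; adjust the host for your environment:

    curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d'
    {
      "persistent": {
        "cluster.routing.allocation.enable": "primaries"
      }
    }'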
    
  2. Stop Elasticsearch OSS on one node (rolling upgrade) or all nodes (cluster restart upgrade).

    On Linux distributions that use systemd, use this command:

    sudo systemctl stop elasticsearch.service
    

    For tarball installations, find the process ID (ps aux) and kill it (kill <pid>).

  3. Upgrade the node (rolling) or all nodes (cluster restart).

    The exact command varies by package manager, but likely looks something like this:

    sudo yum install elasticsearch-oss-7.10.2 --enablerepo=elasticsearch
    

    For tarball installations, extract to a new directory to ensure you do not overwrite your config, data, and logs directories. Ideally, these directories should have their own independent paths and not be colocated with the Elasticsearch application directory. Then set the ES_PATH_CONF environment variable to the directory that contains elasticsearch.yml (for example, /etc/elasticsearch/). In elasticsearch.yml, set path.data and path.logs to your data and logs directories (for example, /var/lib/elasticsearch and /var/log/elasticsearch).
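
    For example, assuming the placeholder paths above, the environment variable might be set as follows:

    export ES_PATH_CONF=/etc/elasticsearch

    And in elasticsearch.yml:

    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch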

  4. Restart Elasticsearch OSS on the node (rolling) or all nodes (cluster restart).

    On Linux distributions that use systemd, use this command:

    sudo systemctl start elasticsearch.service
    

    For tarball installations, run ./bin/elasticsearch -d.

  5. Wait for the node to rejoin the cluster (rolling) or for the cluster to start (cluster restart). Check the _nodes summary to verify that all nodes are available and running the expected version:

    # Elasticsearch OSS
    curl -XGET 'localhost:9200/_nodes/_all?pretty=true'
    # Open Distro for Elasticsearch with Security plugin enabled
    curl -XGET 'https://localhost:9200/_nodes/_all?pretty=true' -u 'admin:<custom-admin-password>' -k
    

    Specifically, check the nodes.<node-id>.version portion of the response. Also check _cat/indices?v for a green status on all indexes.
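
    For example, to list index health with curl (add credentials and -k if the Security plugin is enabled):

    curl -XGET 'localhost:9200/_cat/indices?v'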

  6. (Rolling) Repeat steps 2–5 until all nodes are using the new version.

  7. After all nodes are using the new version, re-enable shard allocation:

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.enable": "all"
      }
    }
    
  8. If you upgraded from 5.x to 6.x, reindex all indexes.
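
    A minimal reindex request using the Reindex API might look like the following; the index names are placeholders, and the destination index is created automatically if automatic index creation is enabled:

    POST _reindex
    {
      "source": { "index": "my-index-5x" },
      "dest": { "index": "my-index-6x" }
    }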

  9. Repeat all steps as necessary until you arrive at your desired Elasticsearch OSS version.

Migrate to Lucenia

  1. Disable shard allocation to prevent OpenSearch from replicating shards as you shut down nodes:

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.enable": "primaries"
      }
    }
    
  2. Stop OpenSearch on one node (rolling upgrade) or all nodes (cluster restart upgrade).

    On Linux distributions that use systemd, use this command:

    sudo systemctl stop opensearch.service
    

    For tarball installations, find the process ID (ps aux) and kill it (kill <pid>).

  3. Upgrade the node (rolling) or all nodes (cluster restart).

    1. Extract the Lucenia tarball to a new directory to ensure you do not overwrite your OpenSearch config, data, and logs directories.

    2. (Optional) Copy or move your OpenSearch data and logs directories to new paths. For example, you might move /var/lib/opensearch to /var/lib/lucenia.

    3. Set the LUCENIA_PATH_CONF environment variable to the directory that contains lucenia.yml (for example, /etc/lucenia).

    4. In lucenia.yml, set path.data and path.logs. You might also want to disable the Security plugin for now. lucenia.yml might look something like this:

      path.data: /var/lib/lucenia
      path.logs: /var/log/lucenia
      plugins.security.disabled: true
      
    5. Port your settings from opensearch.yml to lucenia.yml. Most settings use the same names. At a minimum, specify cluster.name, node.name, discovery.seed_hosts, and cluster.initial_cluster_manager_nodes.
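
      For example, a minimal lucenia.yml for a three-node cluster might look like the following; the cluster name, node names, and host names are placeholders:

      cluster.name: lucenia-cluster
      node.name: lucenia-node1
      discovery.seed_hosts: ["lucenia-node1.example.com", "lucenia-node2.example.com", "lucenia-node3.example.com"]
      cluster.initial_cluster_manager_nodes: ["lucenia-node1", "lucenia-node2", "lucenia-node3"]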

    6. (Optional) If you’re actively connecting to the cluster with legacy clients that check for a particular version number, such as Logstash OSS, add a compatibility setting to lucenia.yml:

      compatibility.override_main_response_version: true
      
    7. (Optional) Add your certificates to your config directory, add them to lucenia.yml, and initialize the Security plugin.
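
      As a sketch, assuming Lucenia uses the same Security plugin setting names as OpenSearch, a PEM-based TLS configuration in lucenia.yml might look like the following; the certificate file names are placeholders:

      plugins.security.disabled: false
      plugins.security.ssl.transport.pemcert_filepath: node.pem
      plugins.security.ssl.transport.pemkey_filepath: node-key.pem
      plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
      plugins.security.ssl.http.enabled: true
      plugins.security.ssl.http.pemcert_filepath: node.pem
      plugins.security.ssl.http.pemkey_filepath: node-key.pem
      plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem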

  4. Start Lucenia on the node (rolling) or all nodes (cluster restart).

    For the tarball, run ./bin/lucenia -d.

  5. Wait for the Lucenia node to rejoin the cluster (rolling) or for the cluster to start (cluster restart). Check the _nodes summary to verify that all nodes are available and running the expected version:

    # Security plugin disabled
    curl -XGET 'localhost:9200/_nodes/_all?pretty=true'
    # Security plugin enabled
    curl -XGET -k -u 'admin:<custom-admin-password>' 'https://localhost:9200/_nodes/_all?pretty=true'
    

    Specifically, check the nodes.<node-id>.version portion of the response. Also check _cat/indices?v for a green status on all indexes.

  6. (Rolling) Repeat steps 2–5 until all nodes are using Lucenia.

  7. After all nodes are using the new version, re-enable shard allocation:

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.enable": "all"
      }
    }
    

Upgrade tool

Preview

You are viewing pre-release documentation for the Upgrade Tool. This tool will be released in an upcoming minor version.

The lucenia-upgrade tool lets you automate some of the steps in Migrate to Lucenia, eliminating the need for error-prone manual operations.

The lucenia-upgrade tool performs the following functions:

  • Imports any existing configurations and applies them to the new installation of Lucenia.
  • Installs any existing core plugins.

Limitations

The lucenia-upgrade tool doesn’t perform an end-to-end upgrade:

  • You need to run the tool on each node of the cluster individually as part of the upgrade process.
  • The tool doesn’t provide a rollback option after you’ve upgraded a node, so make sure you follow best practices and take backups.
  • You must install all community plugins (if available) manually.
  • Keystore settings are validated only when the service starts, so you must manually remove any unsupported settings for the service to start.

Using the upgrade tool

To perform a rolling upgrade using the Lucenia tarball distribution:

Check Migration paths to make sure that the version you’re upgrading to is supported and whether you need to upgrade to a supported Elasticsearch OSS or OpenSearch version first.

  1. Disable shard allocation to prevent OpenSearch from replicating shards as you shut down nodes:

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.enable": "primaries"
      }
    }
    
  2. On any one of the nodes, download and extract the Lucenia tarball to a new directory.

  3. Make sure the following environment variables are set:

    • OPENSEARCH_HOME - Path to the existing OpenSearch installation home.

      export OPENSEARCH_HOME=/home/workspace/upgrade-demo/node1/opensearch
      
    • OPENSEARCH_PATH_CONF - Path to the existing OpenSearch config directory.

      export OPENSEARCH_PATH_CONF=/home/workspace/upgrade-demo/node1/opensearch-config
      
    • LUCENIA_HOME - Path to the Lucenia installation home.

      export LUCENIA_HOME=/home/workspace/upgrade-demo/node1/lucenia-0.1.0
      
    • LUCENIA_PATH_CONF - Path to the Lucenia config directory.

      export LUCENIA_PATH_CONF=/home/workspace/upgrade-demo/node1/lucenia-config
      
  4. The lucenia-upgrade tool is in the bin directory of the distribution. Run the following command from the distribution home:

    Make sure you run this tool as the same user running the current OpenSearch service.

    ./bin/lucenia-upgrade
    
  5. Stop OpenSearch on the node.

    On Linux distributions that use systemd, use this command:

    sudo systemctl stop opensearch.service
    

    For tarball installations, find the process ID (ps aux) and kill it (kill <pid>).

  6. Start Lucenia on the node:

    ./bin/lucenia -d
    
  7. Repeat steps 2–6 until all nodes are using the new version.

  8. After all nodes are using the new version, re-enable shard allocation:

    PUT _cluster/settings
    {
     "persistent": {
        "cluster.routing.allocation.enable": "all"
      }
    }
    

How it works

Behind the scenes, the lucenia-upgrade tool performs the following tasks in sequence:

  1. Looks for a valid OpenSearch installation on the current node. After it finds the installation, it reads the opensearch.yml file to get the endpoint details and connects to the locally running OpenSearch service. If the tool can’t find an OpenSearch installation, it tries to get the path from the OPENSEARCH_HOME location.
  2. Verifies if the existing version of OpenSearch is compatible with the Lucenia version. It prints a summary of the information gathered to the console and prompts you for a confirmation to proceed.
  3. Imports the settings from the opensearch.yml config file into the lucenia.yml config file.
  4. Copies across any custom JVM options from the $OPENSEARCH_PATH_CONF/jvm.options.d directory into the $LUCENIA_PATH_CONF/jvm.options.d directory. Similarly, it also imports the logging configurations from the $OPENSEARCH_PATH_CONF/log4j2.properties file into the $LUCENIA_PATH_CONF/log4j2.properties file.
  5. Installs the core plugins that you’ve currently installed in the $OPENSEARCH_HOME/plugins directory. You must install all other third-party community plugins manually.
  6. Imports the secure settings from the opensearch.keystore file (if any) into the lucenia.keystore file. If the keystore file is password protected, the lucenia-upgrade tool prompts you to enter the password.