
Migrating to remote-backed storage

This is an experimental feature and is not recommended for use in a production environment.

Introduced 0.1.0

Remote-backed storage offers a new way to protect against data loss by automatically creating backups of all index transactions and sending them to remote storage. To use this feature, segment replication must be enabled.
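
Segment replication is typically enabled per index when the index is created. The following is a minimal sketch; the index name is a placeholder, and it assumes your Lucenia version supports the index-level replication.type setting:

    PUT "/my-index-1?pretty"
    {
        "settings": {
            "index": {
                "replication.type": "SEGMENT"
            }
        }
    }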

You can migrate a document-replication-based cluster to remote-backed storage through the rolling upgrade mechanism.

Rolling upgrades, sometimes referred to as node replacement upgrades, can be performed on running clusters with virtually no downtime. Nodes are individually stopped and migrated in place. Alternatively, nodes can be stopped and replaced, one at a time, by remote-backed hosts. During this process you can continue to index and query data in your cluster.

Preparing to migrate

Review Upgrading Lucenia for recommendations about backing up your configuration files and creating a snapshot of the cluster state and indexes before you make any changes to your Lucenia cluster.

Lucenia nodes cannot be migrated back to document replication. If you need to revert the migration, then you will need to perform a fresh installation of Lucenia and restore the cluster from a snapshot. Take a snapshot and store it in a remote repository before beginning the upgrade procedure.
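
For example, you can register a snapshot repository and take a snapshot of all indexes with commands similar to the following. The repository name, snapshot name, bucket, and base path shown here are placeholders, and this assumes an S3-compatible repository type is available in your deployment:

    PUT "/_snapshot/migration-backup?pretty"
    {
        "type": "s3",
        "settings": {
            "bucket": "<Bucket Name>",
            "base_path": "<Bucket Base Path>",
            "region": "us-east-1"
        }
    }

    PUT "/_snapshot/migration-backup/pre-migration-snapshot?wait_for_completion=true&pretty"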

Performing the upgrade

  1. Before you begin, verify the health of your Lucenia cluster by using the Cluster Health API. Resolve any index or shard allocation issues prior to upgrading to ensure that your data is preserved. A status of green indicates that all primary and replica shards are allocated. You can query the _cluster/health API endpoint using a command similar to the following:

    GET "/_cluster/health?pretty"
    

    You should receive a response similar to the following:

    {
        "cluster_name":"lucenia-dev-cluster",
        "status":"green",
        "timed_out":false,
        "number_of_nodes":4,
        "number_of_data_nodes":4,
        "active_primary_shards":1,
        "active_shards":4,
        "relocating_shards":0,
        "initializing_shards":0,
        "unassigned_shards":0,
        "delayed_unassigned_shards":0,
        "number_of_pending_tasks":0,
        "number_of_in_flight_fetch":0,
        "task_max_waiting_in_queue_millis":0,
        "active_shards_percent_as_number":100.0
    }
    
  2. Disable shard replication to prevent shard replicas from being created while nodes are being taken offline. This stops the movement of Lucene index segments on nodes in your cluster. You can disable shard replication by querying the _cluster/settings API endpoint, as shown in the following example:

    PUT "/_cluster/settings?pretty"
    {
        "persistent": {
            "cluster.routing.allocation.enable": "primaries"
        }
    }
    

    You should receive a response similar to the following:

    {
      "acknowledged" : true,
      "persistent" : {
        "cluster" : {
          "routing" : {
            "allocation" : {
              "enable" : "primaries"
            }
          }
        }
      },
      "transient" : { }
    }
    
  3. Perform the following flush operation on the cluster to commit transaction log entries to the Lucene index:

    POST "/_flush?pretty"
    

    You should receive a response similar to the following:

    {
      "_shards" : {
        "total" : 4,
        "successful" : 4,
        "failed" : 0
      }
    }
    
  4. Set the remote_store.compatibility_mode setting to mixed to allow remote-backed storage nodes to join the cluster. Then set migration.direction to remote_store, which allocates new indexes to remote-backed data nodes. The following example updates these settings using the Cluster Settings API:

    PUT "/_cluster/settings?pretty"
    {
        "persistent": {
            "remote_store.compatibility_mode": "mixed",
             "migration.direction" :  "remote_store"
        }
    }
    

    You should receive a response similar to the following:

    {
      "acknowledged" : true,
      "persistent" : {
        "remote_store" : {
          "compatibility_mode" : "mixed"
        },
        "migration" : {
          "direction" : "remote_store"
        }
      },
      "transient" : { }
    }
    
  5. Review your cluster and identify the first node to be upgraded.
  6. Provide the remote store repository details as node attributes in lucenia.yml, as shown in the following example:

    # Repository names
    node.attr.remote_store.segment.repository: my-repo-1
    node.attr.remote_store.translog.repository: my-repo-2
    node.attr.remote_store.state.repository: my-repo-3
       
    # Segment repository settings
    node.attr.remote_store.repository.my-repo-1.type: s3
    node.attr.remote_store.repository.my-repo-1.settings.bucket: <Bucket Name 1>
    node.attr.remote_store.repository.my-repo-1.settings.base_path: <Bucket Base Path 1>
    node.attr.remote_store.repository.my-repo-1.settings.region: us-east-1
       
    # Translog repository settings
    node.attr.remote_store.repository.my-repo-2.type: s3
    node.attr.remote_store.repository.my-repo-2.settings.bucket: <Bucket Name 2>
    node.attr.remote_store.repository.my-repo-2.settings.base_path: <Bucket Base Path 2>
    node.attr.remote_store.repository.my-repo-2.settings.region: us-east-1
       
    # Enable Remote cluster state cluster setting
    cluster.remote_store.state.enabled: true
       
    # Remote cluster state repository settings
    node.attr.remote_store.repository.my-repo-3.type: s3
    node.attr.remote_store.repository.my-repo-3.settings.bucket: <Bucket Name 3>
    node.attr.remote_store.repository.my-repo-3.settings.base_path: <Bucket Base Path 3>
    node.attr.remote_store.repository.my-repo-3.settings.region: us-east-1
       
    
  7. Stop the node you are migrating. Do not delete the volume associated with the container when you delete the container. The new Lucenia container will use the existing volume. Deleting the volume will result in data loss.

  8. Deploy a new container running the same version of Lucenia and mapped to the same volume as the container you deleted.
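
    For example, if the node runs in a Docker container, you might start the replacement container with a command similar to the following. The image name, tag, container name, volume name, and configuration paths are placeholders and assumptions about your deployment; substitute the values you already use:

    # Reuse the existing data volume and node configuration (placeholder names)
    docker run -d \
      --name lucenia-node-1 \
      -v lucenia-data-1:/usr/share/lucenia/data \
      -v /path/to/lucenia.yml:/usr/share/lucenia/config/lucenia.yml \
      -p 9200:9200 -p 9300:9300 \
      lucenia/lucenia:<version>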

  9. Query the _cat/nodes endpoint after Lucenia is running on the new node to confirm that it has joined the cluster. Wait for the cluster to become green again.
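
    You can list the cluster's nodes by using a command similar to the following:

    GET "/_cat/nodes?v"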

  10. Repeat steps 5 through 9 for each remaining node in your cluster.

  11. Reenable shard replication, using a command similar to the following:

    PUT "/_cluster/settings?pretty"
    {
        "persistent": {
            "cluster.routing.allocation.enable": "all"
        }
    }
    

    You should receive a response similar to the following:

    {
      "acknowledged" : true,
      "persistent" : {
        "cluster" : {
          "routing" : {
            "allocation" : {
              "enable" : "all"
            }
          }
        }
      },
      "transient" : { }
    }
    
  12. Confirm that the cluster is healthy by using the Cluster Health API, as shown in the following command:

    GET "/_cluster/health?pretty"
    

    You should receive a response similar to the following:

    {
      "cluster_name" : "lucenia-dev-cluster",
      "status" : "green",
      "timed_out" : false,
      "number_of_nodes" : 4,
      "number_of_data_nodes" : 4,
      "discovered_master" : true,
      "active_primary_shards" : 1,
      "active_shards" : 4,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 100.0
    }
    
  13. Set remote_store.compatibility_mode back to strict and migration.direction to none by using the following command so that non-remote nodes are no longer allowed to join the cluster:

    PUT "/_cluster/settings?pretty"
    {
        "persistent": {
            "remote_store.compatibility_mode": strict,
             "migration.direction" :  none
        }
    }
    

    You should receive a response similar to the following:

    {
      "acknowledged" : true,
      "persistent" : {
        "remote_store" : {
          "compatibility_mode" : "strict"
        },
        "migration" : {
          "direction" : "none"
        }
      },
      "transient" : { }
    }
    

The migration to the remote store is now complete.
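
If you want to confirm the final values of these settings, you can query the Cluster Settings API using a command similar to the following; the response should show remote_store.compatibility_mode set to strict and migration.direction set to none:

    GET "/_cluster/settings?flat_settings=true&pretty"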

Use the following cluster settings to enable migration to a remote-backed cluster.

Field: remote_store.compatibility_mode
Data type: String
Description: When set to strict, only allows the creation of either non-remote or remote nodes, depending upon the initial cluster type. When set to mixed, allows both remote and non-remote nodes to join the cluster. Default is strict.

Field: migration.direction
Data type: String
Description: When set to remote_store, creates new shards only on remote-backed storage nodes. Default is none.