Introduction

There are scenarios where you may need to transfer an existing instance between Availability Zones (also called zones or AZs). Examples of these scenarios include:

  • Launching an instance in one zone and later realising that the resources you require are only available in another zone. Advanced networking and GPUs are examples of resources that are only available in specific zones.
  • The Node managing the hardware that your instance runs on may ask you to switch AZs as part of an internal reorganisation. (This is likely to be the case on some nodes during 2018 as NeCTAR merges with ANDS and RDS.)

Unfortunately, live migrations between zones are currently not possible. Instead, migration is accomplished by shutting down the current instance, creating a snapshot, and starting a new instance in the new zone from the snapshot.

A side-effect of this process is that there is usually no way to keep the old IP address. If you have an allocation for advanced networking and already have a floating IP associated with the instance you intend to migrate, you can keep the reserved IP address by re-associating it with the new post-migration instance. In all other situations you will need to accept a change of IP address when migrating between AZs.

Although the basic process is straightforward, some scenarios have complicating factors, such as different types of storage associated with the instance, or advanced networking requirements. These complicating factors do not usually prevent a migration, but they will need to be taken into account and planned for.

The rest of this document provides:

  1. Step-by-step instructions for the simplest scenario where you have an instance with no complicating factors.
  2. A flowchart to help you navigate your way through more complicated scenarios.
  3. Details for each of the non-trivial steps you can encounter when working through the flowchart.

The simplest scenario: no storage, no complicated networking, no database services

Even if you have a more complicated situation, the simplest scenario offers a useful case study that highlights the most important parts of a migration. If you have a little spare room in your project, you could spin up a small instance for testing, and then treat this first section as a tutorial or practice run.

Step One: Collect information

When you create your new instance, even though it will be based on a snapshot of the old instance, it won't automatically inherit all the old instance's settings.

In particular, you will need to choose the Security Groups and Flavor from scratch.

You should also make sure you know which SSH keys were installed into the original instance. If you are happy with the existing keys, there is no need to add them again to the new instance. However, you will have the option of adding new ones if you wish.

You can get most of the information you need from the detailed information page for the instance.

Select the required instance at https://dashboard.rc.nectar.org.au/project/instances/, or navigate the dashboard menu using Project → Compute → Instances. Once you are on the Instances page, click on the instance you intend to migrate. Here is a screenshot of the detailed information for an instance:

You can see that the Instance Overview includes useful information (highlighted) needed for the migration, such as the Flavor and Security Groups.

This screenshot also shows a potential complication to consider: this instance is running on a Flavor that offers ephemeral storage.

If you are not using the ephemeral storage then you have nothing to worry about. If you are using your ephemeral storage then you need to back it up separately, as ephemeral data is not captured at the snapshotting step. More about storage and backing up ephemeral data will be covered in Section 3.

For this scenario, it is important to confirm that you are not using any ephemeral storage before you proceed.
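
If you prefer the command line, most of this information can also be gathered with the OpenStack client. This is a minimal sketch, assuming the client is installed and configured for your project; my-instance is a placeholder name:

    openstack server show my-instance    # flavor, security groups, key name and AZ
    openstack flavor show m1.small       # the ephemeral field shows any ephemeral disk
    openstack keypair list               # SSH keys registered in your project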

Step Two: Shut Off the Instance

Although live snapshotting is possible, we advise you to shut off the instance before you snapshot. This will help ensure your migration goes as smoothly and quickly as possible.

  1. Again go to the Instances page: Project → Compute → Instances.
  2. Find the row associated with the instance that you are migrating.
  3. Open the Actions menu from the button at the right-hand end of the row.
  4. Click Shut Off Instance.

You need to wait for the instance to shut off. The Task column will say Powering Off. Eventually you will end up with a Status of Shutoff and a Power State of Shut Down:
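
If you have the OpenStack command-line client configured for your project, the same step can be performed from a terminal (my-instance is a placeholder name):

    openstack server stop my-instance
    openstack server show my-instance -c status    # wait until the status reads SHUTOFF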


Step Three: Create a Snapshot of the Instance

Once the instance is shut off, select Create Snapshot from the Actions menu:

This will open the Create Snapshot dialogue:

Start the snapshotting process by entering a descriptive name and clicking the Create Snapshot button.

This is the most time-consuming part of the migration procedure. It takes longer for instances with larger disks, but even a small instance requires some patience.
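
For reference, the snapshot can also be created from the command line. A minimal sketch, with placeholder names:

    openstack server image create --name my-instance-migration my-instance
    openstack image show my-instance-migration -c status    # wait for the status to become active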

Step Four: Launch a new Instance from the Snapshot

  1. Go to the Images page: Project → Compute → Images.
  2. Start typing the snapshot name in the filter area above the snapshot list.
  3. When you see the snapshot you just made, click the Launch button for that snapshot, at the end of the row.


Clicking Launch will bring up the dialogue for launching instances.

On the Details tab, enter an Instance Name and select the required Availability Zone. You may want to select an AZ that is only available to your institution, or one that is geographically close to your location.

On the Flavors tab, select a flavor with sufficient capacity for your instance. To select m1.small as used by the original instance, use the filter area, then click the selector button (up-arrow) for the m1.small flavor to allocate it to your instance.

On the Security Groups tab, be sure to allocate the same security groups you noted earlier (HTTPS, SSH, and default in this example).

On the Key Pair tab, select the required key, or add a new key if you wish.

Click the Launch Instance button to launch. It might take some time, but if you go back to the Instances page, you should eventually see the new instance running in its new home:

You should now be able to SSH into the new instance, using the new IP address displayed on the Instances page together with the required key and username.
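
The launch can also be performed with the command-line client. A minimal sketch, where the snapshot, key and instance names are placeholders:

    openstack server create \
        --image my-instance-migration \
        --flavor m1.small \
        --availability-zone melbourne-qh2-uom \
        --security-group HTTPS --security-group SSH --security-group default \
        --key-name my-key \
        my-new-instance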

Once you have tested that the new instance is working as expected, please delete the old instance as directed. If you were not directed to delete the instance after migrating, please feel free to delete it to avoid tying up resources you no longer need. You should not delete the old instance before testing that the new instance works, as you may need to modify the old instance and/or re-snapshot it to resolve any problems.

Flowchart for navigating non-trivial scenarios

In the flowchart, you can skip any circles that don't match your situation. Everyone will need to do the light blue, green and orange circles (respectively start/end, prepare/communicate/test, shutdown/snapshot/relaunch). Many people can skip purple, pink, dark blue and grey (database stop/start, ephemeral data backup/restore, disassociate/re-associate floating IP, detach/reattach volumes).


Detailed information for important steps

Collecting information

At minimum, repeat Step One from the introductory tutorial in Section One. Other information that you should collect, especially if you know that you have a more complicated situation, includes:

  • What volumes are attached to the instance? (Look in, for example, Project → Volumes → Volumes on the dashboard. If your instance is not listed in the Attached To column of the volumes list, you can skip all instructions concerning volume management.)
  • If volumes are attached, you should see the instance mount point (such as /dev/vdb) listed in the Attached To column of the volumes list. (You should also see the mount point on the instance itself, from the /etc/fstab file and the df -h command.)
  • Is the ephemeral storage actively used? Does a backup plan already exist for managing it? When was the last backup?
  • Who are the instance's stakeholders? These could include services, running either on other instances or externally, that depend on this instance. You should also review the login patterns of users. (If your instance is running Linux, the last command is useful for this.) Stakeholders should be notified about the expected outage before the migration, and about the new IP address afterwards.
  • Is there a domain name associated with the instance's IP address?
  • Does the instance access external services that have firewall/whitelisting rules in place which will block the new IP address upon migration?

Using the Volumes filter area to identify volumes attached to an instance and their mount points.
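
The same information can be gathered from a terminal using the OpenStack client and standard Linux tools. A minimal sketch:

    openstack volume list    # the "Attached to" column shows the instance and device
    # and on the instance itself:
    cat /etc/fstab
    df -h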

Once you have all this information, you should have a good understanding of what is needed to complete the migration, and how everything fits together.

One more thing to remember: if there were any resources, such as a volume quota, that you needed to apply for in your old zone, you will probably need to apply to the Node controlling your new zone for the equivalent quota. Some zones might not even offer the type of quota you are looking for, so you should request it as soon as possible in case there are additional complications.

Backing up and restoring ephemeral storage

NeCTAR has existing documentation that covers this topic:

Detaching and attaching volumes

So long as you are migrating your instance between zones within the same data centre (for example, from melbourne-qh2 to melbourne-qh2-uom), there is no need to migrate the volume at all. The volume will be visible from both.

Most AZs are in different data centres, which makes migrating volume storage more complicated. Although a snapshotting option exists for volumes as for instances, each volume snapshot is usually constrained to a single zone. This means the solution that works for instances will not work for volumes.

If you really need to transfer the contents of a volume from one zone to another, you need to make sure you have a volume storage allocation in both zones, and even then your options are limited. (In particular, it is not usually possible to do it using the NeCTAR dashboard.)

You can try using the Cinder API client; some of the existing documentation will point you in the right direction:

Some locations even have a Cinder Backup option available in the dashboard.

Even when Cinder options exist, it is usually best to do the following:

  1. Create a new volume in the new zone (after ensuring you have volume allocation in the new zone).
  2. Then use traditional syncing and backup tools like rsync or scp to transfer the data over, as sketched below. (This is covered in the above links to our Backup documentation.)
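
A minimal command-line sketch of this approach, assuming a 50 GB volume and placeholder names (adjust the size, zone, user and paths to suit your situation):

    openstack volume create --availability-zone melbourne-qh2-uom --size 50 new-vol
    # attach new-vol to an instance in the new zone and mount it, then from the
    # old instance (with the old volume mounted at the hypothetical path /mnt/old):
    rsync -avP /mnt/old/ ubuntu@NEW_INSTANCE_IP:/mnt/new/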

Volume reassignment (within the same data centre)

For the simple case of reassigning a volume to a new instance within the same data centre (despite being in a different AZ), a brief walk-through follows. Some of the steps are identical to the tutorial in Section One of this guide, and are only covered at a high level here.

We consider the scenario where you have an instance bifrost-mel with a volume test-vol attached. Due to a local re-organisation you have been asked to migrate it from melbourne-qh2 to melbourne-qh2-uom. These AZs are both run out of the same data centre, so it will be simple to reassign the volume after migrating the instance it is attached to.

Shut down the instance bifrost-mel in the usual way.

After bifrost-mel has finished shutting down, detach test-vol from bifrost-mel:

  1. Go to Project → Volumes → Volumes
  2. Identify the row containing test-vol and select Manage Attachments from its Actions menu on the right.
  3. In the dialogue that pops up click the Detach Volume button next to bifrost-mel.
  4. Confirm the detachment when prompted. You may need to refresh the browser window to confirm the volume is no longer attached.

Next, create a snapshot of bifrost-mel.

Then go to Project → Compute → Images and launch a new instance from the bifrost-mel snapshot. You may reuse the same name if you wish, but we are using bifrost-uom to avoid any confusion. When creating the new instance, remember to set the Security Groups, and select melbourne-qh2-uom in the Availability Zone list. If you use the same name here as the original instance, you should make a note of the new instance's ID. You will need this in the next step to tell the difference between the two instances.

Attach test-vol to the new instance.

  1. Go to Project → Volumes → Volumes
  2. Identify the row containing test-vol and select Manage Attachments from its Actions menu on the right.
  3. In the drop-down list of possible instances, select the new instance. (Now you can see why you needed the instance ID!)
  4. Click the Attach Volume button.
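
If you prefer the command line, the detach and attach steps can each be done with a single command, using the names from this walk-through:

    openstack server remove volume bifrost-mel test-vol
    openstack server add volume bifrost-uom test-vol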

Go to Project → Compute → Instances

  1. If the new instance isn't already started, you may start it now.
  2. Test that the new instance is working correctly and that you can use the attached volume storage in the same manner as you did for the old instance.
  3. Once you are satisfied that everything is okay, the old instance can be deleted.

Instances that boot from Volumes

If the instance you are migrating boots from a volume, the procedure will usually be the same as for a regular instance.

You should be aware that if you selected Delete Volume on Instance Delete when creating the old instance, then you will lose the volume once the old instance is deleted at the end of the migration. However, this is not prohibitive: you can still shut the instance down (without deleting it), snapshot it, and boot a new instance from the volume snapshot. Alternatively, create a new volume from the snapshot, and boot from the new volume.

Regardless of whether or not you selected to delete the volume when the instance is deleted, you won't be able to create a new instance directly from the old volume while the old instance is still running.

If you did not select Delete Volume on Instance Delete, then you can do the migration slightly more efficiently, so long as the volume is accessible from the new Availability Zone: terminate the old instance, and the volume, which retains the latest state of the instance, can then be used to boot a new instance. This is the case, for example, when migrating from melbourne-qh2 to melbourne-qh2-uom.
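
A minimal command-line sketch of this shortcut, with placeholder names. Double-check the Delete Volume on Instance Delete setting before deleting anything:

    openstack server stop bifrost-mel
    openstack server delete bifrost-mel    # safe only if Delete Volume on Instance Delete was NOT selected
    openstack server create --volume boot-vol --flavor m1.small \
        --availability-zone melbourne-qh2-uom bifrost-uom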

If the volume is not accessible from the new AZ, a lot more work is required. You need to create a copy of the existing volume, create a new volume in the new AZ, create an instance in each AZ, attach one volume to each new instance, and transfer the data over using the usual methods like rsync; if all that goes according to plan, you will then be able to boot from the volume in the new AZ. Step by step (a command-line sketch follows the list):

  1. Shut down and snapshot the old instance. (This will create a Volume Snapshot, which has slightly different behaviour to a usual instance snapshot.)
  2. Go to Project → Volumes → Snapshots.
  3. From the Actions menu associated with your volume snapshot, select Create Volume. This volume is now a duplicate of the original volume; let's call it COPY.
  4. Create a new instance in each AZ and ensure you can SSH into each one from the other. You won't need large flavors for these instances; they are just needed for the data transfer and can be deleted later.
  5. Create a new volume in the new AZ at least as large as the old volume. (Remember, you need a volume allocation in the new AZ to do this.) Let's call it NEW.
  6. Attach COPY to the new instance in the old AZ. Attach NEW to the instance in the new AZ.
  7. Use your preferred backup method (e.g. rsync) to transfer the contents of COPY to NEW, via the instances they are respectively attached to.
  8. Once the transfer is complete, detach NEW.
  9. Go to Project → Compute → Instances and launch a new instance, allocating the NEW volume as the boot source.
  10. After you test that the instance booted from NEW is behaving correctly, remember to clean up: delete the two new instances and the old instance, and delete COPY, the volume snapshot, and the original volume.
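
For reference, here is a minimal command-line sketch of the volume-related steps above. All names are placeholders, the 50 GB size is an assumption, and you may need --force on the snapshot step if the volume is still attached:

    openstack volume snapshot create --volume boot-vol boot-vol-snap
    openstack volume create --snapshot boot-vol-snap --size 50 COPY
    openstack volume create --availability-zone NEW_AZ --size 50 NEW
    openstack server add volume helper-old COPY    # helper instance in the old AZ
    openstack server add volume helper-new NEW     # helper instance in the new AZ
    # mount the volumes on the helpers and rsync the contents of COPY to NEW, then:
    openstack server remove volume helper-new NEW
    openstack server create --volume NEW --flavor m1.small \
        --availability-zone NEW_AZ my-new-instance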

The following flowchart illustrates the above instructions:

It would be fair to ask why this is so complicated. It has to do with the fact that the old volume remains permanently attached to the original instance that booted from it until that instance is deleted. Even when you shut the instance down, you can't reattach the volume to some other instance just to do the data transfer. The only alternatives would be one of the following:

  • Delete the original instance so that you are free to reattach the old volume as you please — but it would be easy to forget whether you clicked Delete Volume on Instance Delete, so it is unwise to rush into doing it this way.
  • Try doing a live data transfer from within the existing instance — but this might never complete, and it might not be an instance equipped with robust data transfer tools.

Therefore, while the prescribed method has more steps, it is the safest option and will work for everyone.

Networking, DNS, Firewall/Whitelisting rules

If you have an allocation for Private Networking and Floating IPs, the procedure is similar to migrating with volumes, but easier:

  1. After shutting down the old instance, disassociate any floating IP that was previously associated with it.
  2. Then after creating the new instance from the old instance's snapshot, associate it with the original floating IP address.
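
Both steps can also be done with the command-line client. A minimal sketch, where the instance names and the IP address are placeholders:

    openstack server remove floating ip old-instance 203.0.113.10
    openstack server add floating ip new-instance 203.0.113.10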

More information about advanced networking and floating IP addresses is available via the following links:

If your instance requires access to any external services that have firewall or whitelisting rules in place, you will need to get in touch with them after you migrate the instance to update them with the new IP address. This is not something NeCTAR can usually provide support for.

Likewise, if your old instance was associated with a domain name, you will probably need to log into the domain name management dashboard provided by the domain name provider and switch the IP address in all the required fields.

If your instance is running Windows or other proprietary (non-free licensed) software

Many proprietary software licenses depend on being able to keep the same MAC address, which will by default be lost when moving to a new instance.

If you need to move between Availability Zones and have such a proprietary license, many of the above steps will be similar, but you will need our help to copy over the MAC address for you. Therefore, if you are in this situation, even if you are an advanced user who feels comfortable with all aspects of the process, you will need to get in touch with the helpdesk (details below) and organise the MAC address transfer. Do this first before undertaking any other parts of the migration process.

If you run into problems or need to ask questions

As with any other occasion when you require support for managing your NeCTAR project, you may send an email to support@ehelp.edu.au.
You may also open a ticket or start a chat via https://support.ehelp.edu.au/support/tickets/new