There are scenarios in which you will seek to transfer an existing instance between Availability Zones (henceforth referred to as zones or AZs). A couple of examples:

  • You start the instance in one zone and later realise that the resources you require are only available in another zone. Complex networking and GPUs are examples of resources that are only available in specific zones.
  • The Node managing the hardware your instance runs on asks you to switch AZs as part of an internal reorganisation. (This is likely to be the case at some Nodes through 2018 as NeCTAR merges with ANDS and RDS.)

It is not, in fact, possible to move a running instance directly between zones. Instead, at heart, a transition is accomplished by shutting down the current instance, creating a snapshot, and launching a new instance in the new zone from that snapshot.

Unfortunately, a side-effect of this process is that there is usually no way to keep the old IP address. If you have an allocation for complex networking and already had a floating IP associated with the instance you intend to migrate, you can keep that IP address reserved and reassociate it with the new post-migration instance. In all other situations the old IP address cannot be kept, so you will need to accept a change of IP address when migrating between AZs.

Although the heart of the process is simple enough, many people will have complicating factors involved, such as different types of storage associated with the instance, or complex networking requirements. These complicating factors will not usually prevent a transition. They will, however, need to be taken into account and planned for.

This document will have the following structure:

  1. Step-by-step instructions for the simplest case where you have an instance with no complicating factors at all.
  2. A flowchart to help you navigate your way through more complicated scenarios.
  3. Details for each of the non-trivial steps you can encounter when working through the flowchart.

The simplest scenario: no storage, no complicated networking, no database services

Even if you have a more complicated situation, the simplest scenario offers a useful case study that highlights the most important parts of a transition. If you have a little spare room in your project, you could spin up a small instance for testing, and then treat this first section as a tutorial and practice run.

Step One: Collect information

When you create your new instance, even though it will be based on a snapshot of the old instance, it won't automatically inherit all the old instance's settings.

In particular, you will need to choose the Security Groups and Flavor from scratch.

You should also make sure you know which SSH keys were installed into the original instance. So long as you're happy with them, you won't need to add them again to the new instance. However, you will have the option to add new ones if you wish.

You can get most of the information you need from the instance's detailed information page. It ultimately lives at a URL ending in <instance-id>/, but you will usually reach it by following Project → Compute → Instances through the dashboard.

Once you are on the Instances tab, click on the instance you intend to transition. Here is a screenshot of the detailed information for one instance:

You can see that the Instance Overview includes useful information (highlighted) that you might want to take note of, such as the Flavor and Security Groups.

In this screenshot, you can see that there is also a potential complication to consider: this instance is running on a Flavor that offers ephemeral storage.

If you never use this ephemeral storage then you have nothing to worry about. If you do use your ephemeral storage then you will need to remember to back it up if you don't want to lose it. Ephemeral data won't be captured at the snapshotting step. More about storage and backing up ephemeral data will be covered in Section 3.

For this particular instance, let's say that I am not using the ephemeral storage, so it is safe to proceed.
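
Incidentally, if you prefer the command line, most of the same details can be retrieved with the OpenStack CLI. This is a minimal sketch, assuming you have the client installed and your project credentials sourced; the instance name is a placeholder:

    # Show the instance's flavour, key pair, security groups, availability zone and more
    openstack server show MY-INSTANCE

    # List all instances in the project, with extra columns including the availability zone
    openstack server list --long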

Step Two: Shut Off the Instance

Although live snapshotting is ostensibly possible, in practice it does not work: every small change to an instance delays a live snapshot, and in our testing live snapshots have run for months without ever completing. Therefore our advice is not to use live snapshotting at all. If you want the transition to go smoothly and quickly, you must shut off the instance.

  1. Again go to the Instances tab: Project → Compute → Instances.
  2. Find the row associated with the instance that you are transitioning.
  3. Open the dropdown menu from the button at the right-hand end of the row.
  4. Click Shut Off Instance.

You will need to give the instance a little time to shut off. In the Task column it will say Powering Off. Eventually you will end up with a Status of Shutoff and a Power State of Shut Down:
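
If you are working from the command line instead, a minimal sketch of the same step (the instance name is a placeholder):

    # Ask the instance to shut down cleanly
    openstack server stop MY-INSTANCE

    # Poll until the status reads SHUTOFF
    openstack server show MY-INSTANCE -c status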

Step Three: Create a Snapshot of the Instance

Once the instance is shut off, drop down the same Actions menu as earlier and click Create Snapshot:

Clicking that button will open a dialogue like so:

Progressing through the dialogue is straightforward: choose a descriptive name and click the blue Create Snapshot button.

Once you click that blue Create Snapshot button, the snapshotting process will begin.
This is perhaps the most time-consuming part of the entire transition procedure. It takes longer for instances with a large disk, but even a small instance requires some patience here.
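
Command-line users can perform and monitor the snapshot like so; again a sketch, with placeholder names:

    # Create an image (snapshot) from the shut-off instance
    openstack server image create --name my-instance-snapshot MY-INSTANCE

    # Check on its progress; the status will read "active" once the snapshot is complete
    openstack image show my-instance-snapshot -c status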

Step Four: Launch a new Instance based on the Snapshot

  1. Go to the Images tab: Project → Compute → Images.
  2. Find the snapshot you just made and click the Launch Instance button for that snapshot, at the end of the row.

Clicking Launch Instance will bring up the usual dialogue for launching instances:
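
Alternatively, the same launch can be sketched with the OpenStack CLI. The flavour, key pair, security group and zone names below are placeholders; substitute values that are valid for your project:

    # Launch a new instance from the snapshot, in the target availability zone
    openstack server create \
        --image my-instance-snapshot \
        --flavor m1.small \
        --key-name my-keypair \
        --security-group my-secgroup \
        --availability-zone melbourne-qh2-uom \
        my-new-instance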

It might take a little time to launch, but when you go back to the Instances tab, you will eventually see the new instance running in its new home:

If you followed all steps carefully, you should now be able to SSH into the new instance at the new IP address displayed on the Instances tab, using the same key and username as for the original instance.
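
For example (the IP address and key file below are placeholders; the username depends on the image, e.g. Ubuntu images use the ubuntu user):

    # Connect using the same key pair and username as the original instance
    ssh -i ~/.ssh/my-keypair.pem ubuntu@203.0.113.45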

Once you have tested that the new instance is working as expected, you are free to Terminate the old instance if you so choose (or if you are required to). You shouldn't terminate it before testing, because if there is a problem you may wish to try modifying the old instance and/or re-snapshotting it.

Flowchart for navigating non-trivial scenarios

In the flowchart, you can skip any circles that don't match your situation. Everyone will need to do the light blue, green and orange circles (respectively start/end, prepare/communicate/test, shutdown/snapshot/relaunch). Many people can skip purple, pink, dark blue and grey (database stop/start, ephemeral data backup/restore, disassociate/reassociate floating IP, detach/reattach volumes).

Detailed information for important steps

Collecting information

At minimum, repeat Step One from the introductory tutorial in Section One. Other information that you should collect, especially if you know that you have a more complicated situation:

  • What volumes are attached to the instance? (Look in, for example, Project → Compute → Volumes on the dashboard. If it's not there then you don't have a volume storage allocation, so you can skip all instructions concerning volume management.)
  • If volumes are attached, what are their mount points within the instance? (You can usually get this information from the file /etc/fstab and the command df -h.)
  • Is the ephemeral storage actively used? Does a backup plan already exist for managing it? When was the last backup?
  • Who are the instance's stakeholders? These could be services, running either on other instances or externally, that depend on this instance. You should also find out the login patterns of users. (If your instance is running Linux, the last command is useful for this; see the example commands after this list.) Stakeholders should be notified about the expected outage before the transition and told the new IP address afterwards.
  • Is there a domain name associated with the instance's IP address?
  • Does the instance access external services that have firewall/whitelisting rules in place which will block the new IP address upon migration?
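
A few standard shell commands, run inside the instance, can help answer the storage and login questions above (output details will vary by distribution):

    # Mounted filesystems and their current usage
    df -h

    # Persistent mount configuration; attached volumes often appear as devices like /dev/vdb
    cat /etc/fstab

    # All block devices visible to the instance
    lsblk

    # Recent user logins, to gauge usage patterns
    last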

Once you have all this information at hand, you will have much better intuition about what the transition process will look like; you might find that you don't even need the flowchart above because you will see how everything fits together.

One more thing to remember: if there were any resources, such as a volume quota, that you had to apply for in your old zone, you will probably need to apply to the Node controlling your new zone for the same quota.
Some zones might not even offer the type of quota you are looking for, so request the quota as soon as possible in case there are additional complications.

Backing up and restoring ephemeral storage

NeCTAR has existing documentation that covers this topic:

Detaching and attaching volumes

So long as you are migrating your instance between zones within the same data centre (for example, from melbourne-qh2 to melbourne-qh2-uom), there is no need to migrate the volume at all. The volume will be visible from both.
Most AZs, however, are not in the same data centre as each other, and this is where things get more complicated. Although a snapshotting option exists for volumes as for instances, each volume snapshot is usually constrained to a single zone. This means the solution that works for instances will not work for volumes.
If you really, really need to transfer the contents of a volume from one zone to another, you need to make sure you have a volume storage allocation in both zones, and even then your options are limited. (In particular, it is not usually possible to do it using the NeCTAR dashboard.)
You can try using the Cinder API client; some of the existing documentation will point you in the right direction:

Some locations even have a Cinder Backup option available in the dashboard.

Even when Cinder options exist, it is usually best to do the following:

  1. Create a new volume in the new zone (after ensuring you have volume allocation in the new zone).
  2. Then use traditional syncing and backing up tools like rsync or scp to transfer the data over. (This is covered in the above links to our Backup documentation.)
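
As a hedged example of step 2, assuming the old volume is mounted at /mnt/old-vol on an instance in the old zone and the new volume at /mnt/new-vol on a reachable instance in the new zone (all paths, usernames and the address are placeholders):

    # Run on the old instance: copy all data, preserving permissions and timestamps
    rsync -a --progress /mnt/old-vol/ ubuntu@203.0.113.45:/mnt/new-vol/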

Volume reassignment (within the same data centre)

For the simple case of reassigning a volume to a new instance within the same data centre (despite being in a different AZ), a brief walkthrough follows.
Some of the steps are identical to the tutorial in Section One of this guide. Their full description will be elided in what follows.
Consider the following scenario: I have an instance peregrine with a volume test-vol attached. Due to a local reorganisation, I have been asked to transition it from melbourne-qh2 to melbourne-qh2-uom. These AZs are both run out of the same data centre, so we predict it will be simple to reassign the volume after migrating the instance it is attached to.

  1. Shut down the instance peregrine in the usual way.
  2. After peregrine has finished shutting down, detach test-vol from peregrine:
    1. Go to Project → Compute → Volumes
    2. Identify the row containing test-vol and select Manage Attachments from its dropdown menu on the right.
    3. In the dialogue that pops up select Detach next to peregrine.
    4. You will be asked to confirm the detachment. Do so.
  3. Snapshot peregrine.
  4. Go to Project → Compute → Images and create a new instance from the peregrine snapshot. You may reuse the same name if you wish. When creating the new instance, remember to set the Security Groups, and specify melbourne-qh2-uom in the advanced Availability Zone options. If you use the same name here as the original instance, you should make a note of the new instance's ID. You will need this in the next step to tell the difference between the two instances.
  5. Attach test-vol to the new instance.
    1. Go to Project → Compute → Volumes
    2. Identify the row containing test-vol and select Manage Attachments from its dropdown menu on the right.
    3. In the dropdown list of possible instances, select the new instance. (Now you can see why you needed the instance ID!)
    4. Click the Attach Volume button.
  6. Go to Project → Compute → Instances
    1. If the new instance isn't already started, you may start it any time.
    2. Test that the new instance is working correctly and that you can use the attached volume storage in the same manner as you did for the old instance.
    3. Once you are satisfied that everything is okay, Terminate the old instance.
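
For reference, the detach and reattach steps above can also be done with the OpenStack CLI. A minimal sketch, using the names from the walkthrough (the new instance's ID is a placeholder):

    # Step 2: detach the volume from the old instance
    openstack server remove volume peregrine test-vol

    # Step 5: attach the volume to the new instance, identified by its ID
    openstack server add volume NEW-INSTANCE-ID test-vol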

Instances that boot from Volumes

If the instance you are migrating boots from a volume, the procedure will usually be the same as for a regular instance.

You should be aware that if you ticked "Delete on Terminate" when creating the old instance, then you will lose the volume once the migration is complete. However, this is not prohibitive: you can still shut the instance down (without terminating it), snapshot it, and boot a new instance from the volume snapshot. Alternatively, create a new volume from the snapshot and boot from the new volume.

Regardless of whether or not you ticked the volume to be deleted when the instance is terminated, you won't be able to create a new instance directly from the old volume while the old instance is still running.

If you did not tick "Delete on Terminate", and the volume is accessible from the new Availability Zone (as is the case, for example, when transitioning from `melbourne-qh2` to `melbourne-qh2-uom`), you can do the migration slightly more efficiently: terminate the old instance, and the volume, which retains the latest state of the instance, can be used to boot a new instance directly.
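
In that case, a sketch of the relaunch using the OpenStack CLI (the volume, flavour and instance names are placeholders):

    # Boot a new instance directly from the existing bootable volume, in the new zone
    openstack server create \
        --volume my-boot-volume \
        --flavor m1.small \
        --availability-zone melbourne-qh2-uom \
        my-new-instance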

If the volume is not accessible from the new AZ, a lot more work is required. You need to create a copy of the existing volume, create a new volume in the new AZ, create an instance in each AZ, attach each volume to its respective instance, transfer the data over using the usual methods like rsync, and only if all that goes according to plan will you finally be able to boot from the volume in the new AZ. Step by step:

  1. Shut down and snapshot the old instance. (Because the instance boots from a volume, this will create a Volume Snapshot, which behaves slightly differently from a usual instance snapshot.)
  2. Go to Project → Compute → Volumes and click the "Volume Snapshots" tab.
  3. From the dropdown menu associated with your volume snapshot, select Create Volume. This volume is now a duplicate of the original volume; let's call it COPY.
  4. Create a new instance in each AZ and ensure you can SSH into each one from the other. You won't need large flavors for these instances; they are only needed for the data transfer and can be deleted later.
  5. Create a new volume in the new AZ at least as large as the old volume. (Remember, you need a volume allocation in the new AZ to do this.) Let's call it NEW.
  6. Attach COPY to the new instance in the old AZ. Attach NEW to the instance in the new AZ.
  7. Use your preferred backup method (e.g. rsync) to transfer the contents of COPY to NEW, via the instances they are respectively attached to.
  8. Once the transfer is complete, detach NEW.
  9. Go to Project → Compute → Volumes and launch a new instance based on NEW.
  10. After you test that the instance booted from NEW is behaving correctly, remember to clean up: terminate the two new instances and the old instance, and delete COPY, the volume snapshot, and the original volume.
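
The volume-related steps (3, 5 and 6) look roughly like this on the command line; all names and the size are placeholders, and remember that you need volume quota in the new AZ:

    # Step 3: create COPY from the volume snapshot
    openstack volume create --snapshot my-volume-snapshot COPY

    # Step 5: create NEW in the new AZ, at least as large as the old volume
    openstack volume create --size 100 --availability-zone melbourne-qh2-uom NEW

    # Step 6: attach each volume to the helper instance in its zone
    openstack server add volume helper-in-old-az COPY
    openstack server add volume helper-in-new-az NEW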

The following flowchart illustrates the above instructions:

It would be fair to ask why this is so complicated. The reason is that the old volume remains attached to the original instance that booted from it until that instance is terminated. Even when you shut the instance down, you can't reattach the volume to some other instance just to do the data transfer. The only alternative methods might be one of the following:

  • Terminate the original instance so that you are free to reattach the old volume as you please — but it would be easy to forget whether you clicked "Delete on Terminate", so it is unwise to rush into doing it this way.
  • Try doing a live data transfer from within the existing instance — but this might never complete, and it might not be an instance equipped with robust data transfer tools.

Therefore, while the prescribed method has more steps, it is the safest option and will work for everyone.

Networking, DNS, Firewall/Whitelisting rules

If you have an allocation for Private Networking and Floating IPs, the procedure is similar to migrating with volumes, but easier:

  1. After shutting down the old instance, disassociate any floating IP that was previously associated with it.
  2. Then after creating the new instance from the old instance's snapshot, associate it with the original floating IP address.
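
Or, as a command-line sketch (the server names and address are placeholders):

    # After shutting down the old instance, release its floating IP
    openstack server remove floating ip my-old-instance 203.0.113.45

    # After launching the replacement, reuse the same address
    openstack server add floating ip my-new-instance 203.0.113.45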

More information about complex networking and floating IP addresses is available via the following links:

If your instance requires access to any external services that have firewall or whitelisting rules in place, you will need to get in touch with the operators of those services after you migrate, so they can update their rules with the new IP address. This is not something NeCTAR can usually provide support for.
Likewise, if your old instance was associated with a domain name, you will probably need to log into the domain name management dashboard provided by the domain name provider and switch the IP address in all the required fields.
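
After updating the records, you can check that the change has propagated, for example with dig (the domain below is a placeholder):

    # Confirm the domain name now resolves to the new IP address
    dig +short myproject.example.org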

If your instance is running Windows or other proprietary (non-free licensed) software

Many proprietary software licences depend on being able to keep the same MAC address, which will by default be lost when moving to a new instance.

If you need to move between Availability Zones and have such a proprietary licence, many of the above steps will be similar, but you will need our help to copy over the MAC address for you. Therefore, if you are in this situation, even if you are an advanced user who feels comfortable with all aspects of the process, you will need to get in touch with the helpdesk (details below) and organise the MAC address transfer. Do this first before undertaking any other parts of the migration process.

If you run into problems or need to ask questions

As with any other occasion when you require support for managing your NeCTAR project, you may send an email to
You may also open a ticket or start a chat via