Migrating a normal instance
While OpenStack provides a facility for live migrating instances between compute nodes within an availability zone (AZ), live migration of instances between AZs is not possible. Instead, an instance is migrated by creating a new instance in the target AZ that is a clone of the original. The simplest (and recommended) way to do this is to use the Nectar Dashboard to create an instance snapshot, and then launch a new instance from the snapshot.
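For those who prefer the command line to the Dashboard, the same snapshot-and-launch flow can be sketched with the openstack CLI. This is only an outline: it assumes the CLI is installed and your project's credentials have been sourced, and the instance, image, flavor and AZ names are all placeholders.

```shell
# Snapshot the existing instance (shut it down first for a clean image).
openstack server stop my-instance
openstack server image create --name my-instance-migration my-instance

# Once the snapshot image is active, launch a clone in the target AZ.
openstack server create --image my-instance-migration \
    --flavor m3.small --availability-zone melbourne-qh2 \
    --key-name my-key my-instance-new
```

The detailed procedure below adds the safety steps (backups, detaching volumes, verification) that this outline omits.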
When you migrate an instance, the fixed IP address (and the MAC address) of the instance will change. A Floating IP created in one Nectar Node can technically be attached to an instance in another Node. However, this causes operational issues, so you should avoid doing this except as a short-term stop-gap measure. (For example, it might be necessary if you have neglected to set up a DNS name for the floating IP.) For more information on how to deal with IP address changes, see Dealing with IP addresses, DNS, Firewalls and Whitelisting Rules.
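If you do need to carry a Floating IP across as a short-term stop-gap, it can be detached from the old instance and attached to the new one from the command line. The instance names and address below are placeholders.

```shell
# Detach the floating IP from the old instance...
openstack server remove floating ip my-instance 203.0.113.10

# ...and attach it to the replacement instance.
openstack server add floating ip my-instance-new 203.0.113.10
```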
Note that there are a couple of other approaches to migrating instances that we won’t cover here:
You could build a replacement for the instance by launching from a base image in the new AZ and then reinstalling your application software. Important data on the old instance would then need to be transferred to the new instance.
If the instance is a (stateless) worker in a managed cluster, you could kill the instance and use the cluster management tools to spawn a replacement.
The instructions on this page are based on some simplifying assumptions, as follows:
The instance being migrated is NOT a “boot from volume” instance; see “Migrating a ‘boot from volume’ instance”.
The instance does NOT have an m1 or m2 flavor; see “Migrating instances with legacy flavors”.
Your instance has no dependencies on location; e.g. access to license servers, NFS, HPC or other location-specific servers / services.
In addition, these instructions assume that storage and database migrations will be handled independently, as will dealing with any issues that arise from the IP and MAC address changes; see “Migrating a volume” and Dealing with IP addresses, DNS, Firewalls and Whitelisting Rules.
This procedure needs sufficient VCPU and instance quota to allow you to launch and run the old and new instances simultaneously. This is for safety. Please ask for a temporary quota increase if you need it.
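One way to check your headroom before starting is to compare current usage against the project's absolute limits:

```shell
# Show current usage against the project's absolute limits; compare
# totalCoresUsed against maxTotalCores, and totalInstancesUsed against
# maxTotalInstances, to see whether a second instance will fit.
openstack limits show --absolute
```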
Back up your files. You should already have a procedure for making regular backups of your valuable files held on your instances and attached storage. Before you start the migration, make sure that your backups are up to date.
Take your instance out of normal service. You are in the best position to know what this will entail.
Detach any volumes from the instance. These will need to be migrated separately; see “Migrating a volume”.
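Detaching can be done from the Dashboard, or sketched with the CLI as below. Unmount inside the instance first so the file system is clean; the mount point and names are placeholders.

```shell
# Inside the instance: unmount the volume's file system first.
sudo umount /mnt/data

# From your workstation: detach the volume from the instance.
openstack server remove volume my-instance my-volume
```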
Shut down the instance.
Snapshot the instance. You should verify that the snapshot has been successfully uploaded to the Image catalog.
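One way to confirm that the snapshot has reached the Image catalog is to query its status from the CLI (the image name is a placeholder):

```shell
# Poll the snapshot's status: "active" means the upload has completed,
# while "queued" or "saving" means it is still in progress.
openstack image show -f value -c status my-instance-migration
```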
Launch a new instance from the snapshot in the target AZ as per the linked document. If you run into the “No valid hosts found” issue, you could try using a smaller flavor for the new instance, either temporarily (and resize later) or permanently. Alternatively, raise a support ticket.
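The launch-small-then-resize workaround can be sketched as follows; the flavor names and AZ are examples only, and resizing briefly interrupts the instance.

```shell
# Launch from the snapshot with a smaller flavor to get scheduled.
openstack server create --image my-instance-migration \
    --flavor m3.small --availability-zone melbourne-qh2 my-instance-new

# Later, once capacity allows, resize to the intended flavor
# and confirm the resize.
openstack server resize --flavor m3.medium my-instance-new
openstack server resize confirm my-instance-new
```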
Attach any migrated volumes. After attaching the volumes, you should verify that the device names (“vdb”, “vdc”, etcetera) for the volumes are as expected and then mount the volumes’ file systems.
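In outline, attaching and checking a migrated volume might look like this (names, device and mount point are placeholders):

```shell
# Attach the migrated volume to the new instance.
openstack server add volume my-instance-new my-volume

# Check which device name the volume was given (e.g. /dev/vdb).
openstack volume show -f value -c attachments my-volume

# Inside the instance: mount the file system once the device appears.
sudo mount /dev/vdb /mnt/data
```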
Verify the correct operation of the new instance. You are in the best position to know what this will entail.
Restore the instance to service. You are in the best position to know what this will entail.
Wait. It is advisable to leave the old instance in “shutdown” state for a few days, just in case there is a need to “roll back” the migration; see below.
If you perform the migration by following the above steps in the order given, you should be able to “roll back” the migration.
Up to and including the step of verifying the new instance, you simply need to restart the old instance and reattach the old volumes to it.
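A rollback at that stage can be sketched as follows (instance and volume names are placeholders):

```shell
# Detach the volumes from the new instance and reattach them
# to the old one.
openstack server remove volume my-instance-new my-volume
openstack server add volume my-instance my-volume

# Restart the old instance and shut the new one down.
openstack server start my-instance
openstack server stop my-instance-new
```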
Once you have restored the new instance to service, you may also need to take steps to deal with data or database changes made by users since the changeover. In the worst case, these changes may have to be abandoned.
Deleting the old instance (and the old volumes) is the point of no return.