The Cloud

The Nectar Cloud supports thousands of virtual machines using more than 30,000 VCPUs across Australia.

This article provides an overview of the resources available on the Nectar Cloud. For details of Nectar resources and their allocation units, log in to your project trial and open the New Request form under the Allocations menu on the Nectar dashboard.

Allocations: Projects and Project Trials

The Nectar Cloud provides access via projects, each with an allocation of time and capacity (virtual CPUs, or VCPUs).

All researchers initially receive, without application, a project trial with 2 VCPUs allocated for 3 months. This means you can run two small or one medium virtual machine for three months, or one small VM for a total of six months.
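The trial can be thought of as a fixed budget of VCPU-time. The following sketch illustrates that arithmetic using the figures above (the constant names and the `months_available` helper are illustrative, not part of any Nectar API):

```python
# Illustration of the project-trial budget described above (hypothetical helper).
TRIAL_VCPUS = 2      # VCPUs allocated to every project trial
TRIAL_MONTHS = 3     # duration of the trial allocation

# The trial is effectively a budget of VCPU-months:
BUDGET = TRIAL_VCPUS * TRIAL_MONTHS   # 6 VCPU-months

def months_available(vcpus_in_use):
    """Months the trial budget lasts if this many VCPUs run continuously."""
    return BUDGET / vcpus_in_use

print(months_available(2))  # two small or one medium VM -> 3.0 months
print(months_available(1))  # a single small VM -> 6.0 months
```

This is also why the note below matters: a shut-off instance still holds its VCPUs, so it keeps drawing down the same budget.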

Note
Even when an instance is shut off, OpenStack still holds resources for it, and resource time is still counted towards your trial allocation. To avoid this, you can create a snapshot and terminate the instance, then launch a new instance from the snapshot when you need it again.


If you need more time or computing power, or you need one of the storage or other features of the Nectar Research Cloud, you can submit an allocation request. See our article on Managing an Allocation for further detail.

You can share your Nectar Project (but not your project trial) with users you invite.

Instances available by size

The resources available to an instance are determined by the 'flavor' selected when the instance is first launched.

The flavors define the number of VCPUs, the amount of RAM, the root disk size and the size of the ephemeral disk. The current flavors use a name prefixed with 'm2.'

  • m2.tiny: 1 VCPU, 768MB RAM, 5GB root disk, no ephemeral disk
  • m2.xsmall: 1 VCPU, 2GB RAM, 10GB root disk, no ephemeral disk
  • m2.small: 1 VCPU, 4GB RAM, 30GB root disk, no ephemeral disk
  • m2.medium: 2 VCPUs, 6GB RAM, 30GB root disk, no ephemeral disk
  • m2.large: 4 VCPUs, 12GB RAM, 30GB root disk, 80GB ephemeral disk
  • m2.xlarge: 12 VCPUs, 48GB RAM, 30GB root disk, 360GB ephemeral disk

Legacy flavors use a prefix of 'm1.' and have a 10GB root disk.

  • m1.small: 1 core, 4GB RAM, 10GB root disk, 30GB secondary disk
  • m1.medium: 2 cores, 8GB RAM, 10GB root disk, 60GB secondary disk
  • m1.large: 4 cores, 16GB RAM, 10GB root disk, 120GB secondary disk
  • m1.xlarge: 8 cores, 32GB RAM, 10GB root disk, 240GB secondary disk
  • m1.xxlarge: 16 cores, 64GB RAM, 10GB root disk, 480GB secondary disk

Larger instances require more resources than are available in the project trial; a new project with more resources can be requested via an allocation request.
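To see which flavors fit within a given VCPU quota, you can compare each flavor's VCPU count against the quota. The sketch below encodes the m2 flavors listed above; the `M2_FLAVORS` table and `flavors_within_quota` helper are illustrative, not part of any Nectar tooling:

```python
# m2 flavors as listed above: name -> (VCPUs, RAM in GB, root disk GB, ephemeral GB)
M2_FLAVORS = {
    "m2.tiny":   (1, 0.75, 5, 0),
    "m2.xsmall": (1, 2, 10, 0),
    "m2.small":  (1, 4, 30, 0),
    "m2.medium": (2, 6, 30, 0),
    "m2.large":  (4, 12, 30, 80),
    "m2.xlarge": (12, 48, 30, 360),
}

def flavors_within_quota(vcpu_quota):
    """Names of flavors whose VCPU count fits within a project's VCPU quota."""
    return [name for name, (vcpus, *_rest) in M2_FLAVORS.items()
            if vcpus <= vcpu_quota]

# With the 2-VCPU trial quota, only the smaller flavors are launchable:
print(flavors_within_quota(2))  # ['m2.tiny', 'm2.xsmall', 'm2.small', 'm2.medium']
```

With the trial's 2-VCPU quota, m2.large and m2.xlarge are out of reach, which is why a larger project must be requested through an allocation.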

Requesting an Allocation

When you request an allocation, you will be asked to supply information about the research use of the project. You will also be asked to specify the resources you require, with options such as:

  • Choose whether to convert your existing project trial to a project, or to start a new project from scratch.
  • Specify the project duration. The maximum is 1 year, but you will have the opportunity to extend the allocation before it ends.
  • Choose the maximum number of instances (virtual machines) that you will run simultaneously.
  • Select the maximum number of cores (CPUs) that you will require simultaneously, across all running instances.
  • Estimate the number of core hours required (the number of cores multiplied by the hours they will run). The default value is half the requested maximum cores multiplied by the estimated project duration.
  • Ephemeral storage and memory are assigned in proportion to the number of cores requested. However, other types of storage, such as volume and object storage, can be requested in addition.
  • Beyond compute and storage, the Nectar Research Cloud offers a number of supplementary resources that you may wish to use:
    • The Database Service is a Database-as-a-Service (DBaaS) system that provides a simple interface for database management tasks. MySQL and PostgreSQL datastore support is currently offered (MongoDB support is planned).
    • Advanced Networking allows projects to have more advanced networking configurations for their instances. These include private networks, floating IPs, and Load Balancer-as-a-Service (LBaaS).
    • The Shared File System Service provides a simple interface for provisioning and managing shared file systems that can be mounted on multiple virtual machines. Access can be via NFS, CIFS (SMB) or CephFS, depending on the site.
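The default core-hour figure described in the list above can be computed directly: half the requested maximum cores, multiplied by the project duration in hours. The sketch below assumes the duration is entered in weeks purely for illustration; `default_core_hours` is a hypothetical helper, not part of the allocation form:

```python
def default_core_hours(max_cores, duration_weeks):
    """Default core-hour estimate: half the requested maximum cores,
    multiplied by the project duration converted to hours."""
    duration_hours = duration_weeks * 7 * 24
    return (max_cores / 2) * duration_hours

# e.g. requesting 8 cores for a one-year (52-week) allocation:
print(default_core_hours(8, 52))  # (8 / 2) * 8736 = 34944.0 core hours
```

Requesting fewer core hours than the default is a reasonable way to signal that your instances will not run continuously for the whole allocation period.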

Internet Traffic Quotas (downloads to your instances)