Object storage is a distributed system, which means the nodes (servers) can be physically located anywhere in the world while still being part of the same cluster. Regions and zones are the way those nodes are organized.
Regions were designed to distinguish different geographical areas, partly so that copies of data can be stored physically as far apart from each other as possible. Within regions, nodes are further organized into zones. Zones were intended to group nodes by common points of failure. For example, if you have a group of nodes in a rack that all rely on the same top-of-rack switch, that rack could be treated as a zone. Similarly, if your installation relies on UPS battery backup, zones could be defined by which nodes are connected to a particular UPS or set of UPSes.
Your SwiftStack cluster will have one region with one zone created by default. The default names are "Region 1" and "Zone 1". These can be changed if you have a preferred naming scheme. Each new region and zone is created by name and is assigned a number. The number is always displayed with an "r" or "z" to indicate region or zone. For example, the default region and zone are displayed as Region 1 (r1) and Zone 1 (z1).
It's important to know that regions and zones are simply numbered labels that identify groups of nodes within a cluster. From the cluster's perspective there is no requirement that regions be geographically separated or that a zone correspond to a single point of failure. However, this is the best practice that we recommend and is how they are generally used.
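To make the grouping idea concrete, here is a small sketch of "as unique as possible" replica placement: given devices tagged with the region and zone numbers described above, each replica is greedily placed in an unused region first, then an unused zone. This is a simplified illustration of the concept, not SwiftStack's actual placement algorithm, and the device list is hypothetical.

```python
# Hypothetical devices, each tagged with a region (r) and zone (z) number.
devices = [
    {"id": 0, "region": 1, "zone": 1},  # r1z1
    {"id": 1, "region": 1, "zone": 2},  # r1z2
    {"id": 2, "region": 2, "zone": 1},  # r2z1
    {"id": 3, "region": 2, "zone": 2},  # r2z2
]

def place_replicas(devices, replicas=3):
    """Greedily pick devices, preferring a new region, then a new zone.

    Illustrative sketch only -- real placement also weighs capacity,
    device weights, and rebalancing history.
    """
    chosen = []
    for _ in range(replicas):
        used_regions = {d["region"] for d in chosen}
        used_zones = {(d["region"], d["zone"]) for d in chosen}
        # Rank candidates: an unseen region is best, then an unseen zone.
        best = min(
            (d for d in devices if d not in chosen),
            key=lambda d: (
                d["region"] in used_regions,
                (d["region"], d["zone"]) in used_zones,
                d["id"],
            ),
        )
        chosen.append(best)
    return chosen
```

With the four devices above, three replicas land in three distinct zones spanning both regions, which is why geographically separated regions and failure-aligned zones give the durability benefit described here.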
See our technical documentation on Regions and Zones for more information on the creation and management of regions and zones.