A private cloud is an isolated VMware stack (ESXi hosts, vCenter, vSAN, and NSX) environment managed by a vCenter Server in a management domain. Google Cloud VMware Engine deploys private clouds with the following VMware stack components:
- VMware ESXi: hypervisor on dedicated nodes
- VMware vCenter: centralized management of the private cloud vSphere environment
- VMware vSAN: hyper-converged, software-defined storage platform
- VMware NSX Data Center: network virtualization and security software
- VMware HCX: application migration and workload rebalancing across data centers and clouds
You can retrieve generated sign-in credentials for VMware stack components from the private cloud details page.
VMware component versions
A private cloud VMware stack has the following software versions:
Component | Version | Licensed version |
---|---|---|
ESXi | 7.0 Update 3o | vSphere Enterprise Plus |
vCenter | 7.0 Update 3p | vCenter Standard |
vSAN | 7.0 Update 3o | Advanced + select vSAN Enterprise features |
NSX Data Center | 3.2.3.1.hp | Select features available. See the NSX Data Center section for details. |
HCX | 4.6.2¹ | Enterprise |
¹ VMware Engine deploys the version of HCX that VMware makes available to Google Cloud. Update HCX after private cloud creation to get the latest version of HCX for your environment.
ESXi
When you create a private cloud, VMware ESXi is installed on provisioned Google Cloud VMware Engine nodes. ESXi provides the hypervisor for deploying workload virtual machines (VMs). Nodes provide hyper-converged infrastructure (compute and storage) and are part of the vSphere cluster on your private cloud.
Each node has four physical network interfaces connected to the underlying network. VMware Engine creates a vSphere distributed switch (VDS) on the vCenter using these physical network interfaces as uplinks. Network interfaces are configured in active mode for high availability.
vCenter Server Appliance
vCenter Server Appliance (VCSA) provides the authentication, management, and orchestration functions for VMware Engine. When you create and deploy your private cloud, VMware Engine deploys a VCSA with an embedded Platform Services Controller (PSC) on the vSphere cluster. Each private cloud has its own VCSA. Adding nodes to a private cloud adds nodes to the VCSA.
vCenter Single Sign-On
The embedded Platform Services Controller on the VCSA is associated with a vCenter Single Sign-On domain. The domain name is gve.local. To access vCenter, use the default user, [email protected], which is created for you. You can add your on-premises Active Directory identity sources to vCenter.
vSAN storage
Clusters in private clouds have fully configured all-flash vSAN storage. The all-flash storage is provided by local SSDs. At least three nodes of the same SKU are required to create a vSphere cluster with a vSAN datastore. Each node of the vSphere cluster has two disk groups. Each disk group contains one cache disk and three capacity disks.
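The disk-group layout above fixes the number of capacity disks per node, so raw datastore capacity scales linearly with cluster size. The following sketch illustrates the arithmetic; the per-disk capacity is a hypothetical placeholder, not a published VMware Engine node specification, so substitute the value for your node SKU.

```python
# Sketch: estimate raw vSAN capacity per cluster from the disk-group
# layout described above (2 disk groups per node, 3 capacity disks
# per group). CAPACITY_DISK_TB is a hypothetical placeholder value.

DISK_GROUPS_PER_NODE = 2
CAPACITY_DISKS_PER_GROUP = 3
CAPACITY_DISK_TB = 3.2  # hypothetical SSD size; check your node SKU

def raw_capacity_tb(nodes: int) -> float:
    """Raw (pre-RAID, pre-overhead) vSAN capacity for a cluster."""
    disks = nodes * DISK_GROUPS_PER_NODE * CAPACITY_DISKS_PER_GROUP
    return disks * CAPACITY_DISK_TB

print(raw_capacity_tb(3))  # minimum 3-node cluster
```

Usable capacity will be lower than this raw figure once the storage policy's RAID overhead and the required slack space are subtracted.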
You can enable deduplication and compression on the vSAN datastore in VMware Engine. This service enables vSAN deduplication and compression by default when a new cluster is created. Each cluster on your private cloud contains a vSAN datastore. If the stored virtual machine data isn't suitable for vSAN space efficiency through deduplication and compression, or through compression only, you can change the vSAN space-efficiency configuration on the individual vSAN datastore.
In addition to vSAN Advanced features, VMware Engine also provides access to vSAN Enterprise data encryption for data at rest and data in transit.
vSAN storage policies
A vSAN storage policy defines the Failures to tolerate (FTT) and the Failure tolerance method. You can create new storage policies and apply them to VMs. To maintain the SLA, you must keep at least 20% spare capacity on the vSAN datastore.
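The 20% spare-capacity requirement is easy to check programmatically. This is a minimal sketch, not a VMware Engine API; the capacity figures you feed it would come from your own monitoring of the vSAN datastore.

```python
# Sketch: check the 20% spare-capacity requirement for a vSAN datastore.
def slack_ok(capacity_gb: float, used_gb: float, spare_fraction: float = 0.20) -> bool:
    """True if at least `spare_fraction` of the datastore is still free."""
    free = capacity_gb - used_gb
    return free >= spare_fraction * capacity_gb

print(slack_ok(10_000, 7_500))  # 25% free -> True
print(slack_ok(10_000, 8_500))  # 15% free -> False
```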
On each vSphere cluster, there's a default vSAN storage policy that applies to the vSAN datastore. The storage policy determines how to provision and allocate VM storage objects within the datastore to ensure a level of service.
The following table shows the default vSAN storage policy parameters:
FTT | Failure tolerance method | Number of nodes in vSphere cluster |
---|---|---|
1 | RAID 1 (mirroring); creates 2 copies | 3 and 4 nodes |
2 | RAID 1 (mirroring); creates 3 copies | 5 to 32 nodes |
Supported vSAN storage policies
The following table shows the supported vSAN storage policies and the minimum number of nodes required to enable the policy:
FTT | Failure tolerance method | Minimum number of nodes required in vSphere cluster |
---|---|---|
1 | RAID 1 (mirroring) | 3 |
1 | RAID 5 (erasure coding) | 4 |
2 | RAID 1 (mirroring) | 5 |
2 | RAID 6 (erasure coding) | 6 |
3 | RAID 1 (mirroring) | 7 |
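The policy you choose also determines how much raw capacity each usable gigabyte consumes. The sketch below uses the standard vSAN overhead factors (RAID 1 stores FTT+1 full copies; RAID 5 uses roughly 1.33x raw per usable GB; RAID 6 uses 1.5x) to compare the policies in the table above:

```python
# Sketch: usable capacity implied by each supported vSAN policy.
# RAID 1 stores FTT+1 full copies of every object; RAID 5/6 erasure
# coding trades lower overhead for a larger minimum cluster size.

OVERHEAD = {
    ("RAID1", 1): 2.0,   # 2 copies
    ("RAID5", 1): 4 / 3, # 3 data + 1 parity
    ("RAID1", 2): 3.0,   # 3 copies
    ("RAID6", 2): 1.5,   # 4 data + 2 parity
    ("RAID1", 3): 4.0,   # 4 copies
}

def usable_gb(raw_gb: float, raid: str, ftt: int) -> float:
    """Raw datastore capacity divided by the policy's overhead factor."""
    return raw_gb / OVERHEAD[(raid, ftt)]

print(round(usable_gb(10_000, "RAID1", 1)))  # mirroring halves usable space
print(round(usable_gb(10_000, "RAID5", 1)))  # erasure coding is more efficient
```

This is why RAID 5 is attractive once a cluster reaches 4 nodes: the same FTT=1 protection costs a third less raw capacity than mirroring.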
NSX Data Center
NSX Data Center provides network virtualization, micro-segmentation, and network security capabilities on your private cloud. You can configure services supported by NSX Data Center on your private cloud by using NSX.
Available features
The following list describes NSX-T features supported by VMware Engine, organized by category:
- Switching, DNS, DHCP, and IPAM (DDI):
- Optimized ARP learning and broadcast suppression
- Unicast replication
- Head-end replication
- SpoofGuard
- IP address management
- IP blocks
- IP subnets
- IP pools
- IPv4 DHCP server
- IPv4 DHCP relay
- IPv4 DHCP static bindings/fixed addresses
- IPv4 DNS relay/DNS proxy
- Routing:
- Null routes
- Static routing
- Device routing
  - BGP route controls using route maps and prefix lists
- NAT:
- NAT on North/South and East/West logical routers
- Source NAT
- Destination NAT
- N:N NAT
- Firewall:
- Edge Firewall
- Distributed Firewall
- Common firewall user interface
- Firewall sections
- Firewall logging
- Stateful Layer 2 and Layer 3 firewall rules
- Tag-based rules
- Distributed firewall-based IPFIX
- Firewall policies, tags, and groups:
- Object tagging/security tags
- Network-centric grouping
- Workload-centric grouping
- IP-based grouping
- MAC-based grouping
- VPN:
- Layer 2 VPN
- Layer 3 VPN (IPv4)
- Integrations:
- Container networking and security using Tanzu Kubernetes Grid (TKG) only
- VMware Cloud Director service
- VMware Aria Automation
- VMware Aria Operations for Logs
- Authentication and authorization:
- Direct Active Directory integration using LDAP
- Authentication using OpenLDAP
- Role-based access control (RBAC)
- Automation:
- REST API
- Java SDK
- Python SDK
- Terraform provider
- Ansible modules
  - OpenAPI/Swagger specifications and auto-generated API documentation for the REST API
- Inspection:
- Port mirroring
- Traceflow
- Switch-based IPFIX
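The automation category above includes a REST API. As a minimal sketch of driving it from Python, the following builds an authenticated request against the NSX-T Policy API endpoint for listing segments. The manager hostname and credentials are placeholders; the request is constructed but not sent, since sending it requires network access to your NSX Manager.

```python
# Sketch: calling the NSX-T Policy REST API from Python.
# "nsx.example.internal" and the credentials are placeholders;
# /policy/api/v1/infra/segments is the Policy API path for segments.

import base64
import urllib.request

def list_segments_request(manager: str, user: str, password: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated GET for all NSX-T segments."""
    url = f"https://{manager}/policy/api/v1/infra/segments"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

req = list_segments_request("nsx.example.internal", "admin", "example-password")
print(req.full_url)
# To send it: urllib.request.urlopen(req) -- requires reachability to the
# NSX Manager and a trusted TLS certificate.
```

For production automation, the Terraform provider or the Python SDK listed above is usually preferable to hand-rolled HTTP calls.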
Feature limitations
Some NSX Data Center features have very specific networking and security use cases. Customers who created their Google Cloud account on or before August 30, 2022 can request access to features for those use cases by reaching out to Cloud Customer Care.
The following table describes those features, their corresponding use cases, and potential alternatives:
Feature | Use case | Recommended alternative | Google Cloud customers on or before August 30, 2022 | Google Cloud customers after August 30, 2022 |
---|---|---|---|---|
Layer 3 multicast | Multi-hop Layer 3 multicast routing | Layer 2 multicast is supported within an NSX-T subnet. This allows all multicast traffic to be delivered to workloads on the same NSX-T subnet. | Supported | Unsupported |
Quality of Service (QoS) | VoIP and latency-sensitive applications where network oversubscription occurs | None required, as VMware Engine delivers a non-oversubscribed network architecture. Further, any QoS tags exiting a private cloud are stripped when entering the VPC through a peering connection. | Supported | Unsupported |
Simple Network Management Protocol (SNMP) traps | Legacy alerting protocol for notifying users of events | Events and alarms can be configured within NSX-T using modern protocols. | Supported | Unsupported |
NAT features such as stateless NAT, NAT logging, and NAT64 | Used for carrier-grade NAT in large telecommunication deployments | NSX-T supports source/destination NAT and N:N NAT on North/South and East/West logical routers. | Supported | Unsupported |
Intent-based networking and security policies | Used in conjunction with VMware Aria to create business-based firewall policies within NSX-T | NSX-T Gateway and Distributed Firewall features can be used to create and enforce security policies. | Supported | Unsupported |
Identity-based groups using Active Directory | VDI deployments where the user logged into a specific VDI guest can be detected and receive a custom set of NSX-T firewall rules | Users can be assigned specific workstations using the dedicated-assignment pool. Use NSX-T tags to then apply specific firewall rules by pool. | Supported | Unsupported |
Layer 7 attribute (App ID) rules | Used in NSX-T firewall rules | Use NSX-T Service Groups to define a set of ports and services for reference when creating one or more firewall rules. | Supported | Unsupported |
Stateless Layer 2 and Layer 3 firewall rules | Used for carrier-grade high speed firewalls in large telecommunication deployments | NSX-T supports stateful high-performance Layer 2 and Layer 3 rules. | Supported | Unsupported |
NSX-T service insertion | Used to automate the North/South or East/West deployment of third-party network services by using NSX-T to secure and inspect traffic | For third-party security vendor deployments, VMware Engine recommends a routed model over service insertion to ensure that routine service upgrades don't impact network availability. | Contact Cloud Customer Care | Unsupported |
Updates and upgrades
This section describes update and upgrade considerations and lifecycle management responsibilities for software components.
HCX
VMware Engine handles initial installation, configuration, and monitoring of HCX in private clouds. After that, you are responsible for lifecycle management of HCX Cloud and service appliances such as HCX-IX Interconnect.
VMware provides updates for HCX Cloud through its HCX service. You can upgrade HCX Manager and deployed HCX service appliances from the HCX Cloud interface. To find the end-of-support date for a product release, refer to the VMware Product Lifecycle Matrix.
Other VMware software
Google is responsible for lifecycle management of VMware software (ESXi, vCenter, PSC, and NSX) in the private cloud.
Software updates include:
- Patches: security patches or bug fixes released by VMware
- Updates: minor version change of a VMware stack component
- Upgrades: major version change of a VMware stack component
Google tests a critical security patch as soon as it becomes available from VMware. Per the SLA, Google rolls out the security patch to private cloud environments within a week.
Google provides quarterly maintenance updates to VMware software components. For a new major version of VMware software, Google works with customers to coordinate a suitable maintenance window for the upgrade.
vSphere cluster
To ensure high availability of the private cloud, ESXi hosts are configured as a cluster. When you create a private cloud, VMware Engine deploys management components of vSphere on the first cluster. VMware Engine creates a resource pool for management components and deploys all management VMs in this resource pool.
The first cluster cannot be deleted to shrink the private cloud. The vSphere cluster uses vSphere HA to provide high availability for VMs. Failures to tolerate (FTT) is based on the number of available nodes in the cluster. The formula Number of nodes = 2N+1, where N is the FTT, describes the relationship between available nodes in a cluster and the FTT.
For production workloads, use a private cloud that contains at least 3 nodes.
Single node private clouds
For testing and proofs of concept with VMware Engine, you can create a private cloud that contains only a single node and cluster. VMware Engine deletes private clouds that contain only 1 node after 60 days, along with any associated workload VMs and data.
You can resize a single-node private cloud to contain 3 or more nodes. When you do so, VMware Engine initiates vSAN data replication and no longer attempts to delete the private cloud. A private cloud must contain at least 3 nodes and complete vSAN data replication to be eligible for coverage under the SLA.
Features or operations that require more than 1 node won't work with a single-node private cloud. For example, you won't be able to use vSphere Distributed Resource Scheduler (DRS) or High Availability (HA).
vSphere cluster limits
The following table describes vSphere cluster limits in private clouds that meet SLA requirements:
Resource | Limit |
---|---|
Minimum number of nodes to create a private cloud (first cluster) | 3 |
Minimum number of nodes to create a cluster | 3 |
Maximum number of nodes per cluster | 32 |
Maximum number of nodes per private cloud | 96 |
Maximum number of clusters per private cloud | 21 |
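A planned private-cloud layout can be checked against the limits in the table above before provisioning. This is an illustrative sketch, not a VMware Engine API; it simply encodes the table:

```python
# Sketch: validate a planned private-cloud layout against the vSphere
# cluster limits from the table above.

LIMITS = {
    "min_nodes_per_cluster": 3,
    "max_nodes_per_cluster": 32,
    "max_nodes_per_private_cloud": 96,
    "max_clusters_per_private_cloud": 21,
}

def validate(cluster_sizes: list[int]) -> list[str]:
    """Return a list of limit violations (empty list means the plan fits)."""
    problems = []
    if len(cluster_sizes) > LIMITS["max_clusters_per_private_cloud"]:
        problems.append("too many clusters")
    if sum(cluster_sizes) > LIMITS["max_nodes_per_private_cloud"]:
        problems.append("too many nodes in private cloud")
    for i, size in enumerate(cluster_sizes):
        if size < LIMITS["min_nodes_per_cluster"]:
            problems.append(f"cluster {i}: fewer than 3 nodes")
        if size > LIMITS["max_nodes_per_cluster"]:
            problems.append(f"cluster {i}: more than 32 nodes")
    return problems

print(validate([3, 16, 32]))  # fits within all limits
print(validate([2, 40]))      # two violations
```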
Guest operating system support
You can install a VM with any guest operating system supported by VMware for theESXi version in your private cloud. For a list of supported guest operatingsystems, see the VMware Compatibility Guide for Guest OS.
VMware infrastructure maintenance
Occasionally it's necessary to make changes to the configuration of the VMware infrastructure. These intervals can occur every 1-2 months, but the frequency is expected to decline over time. This type of maintenance can usually be done without interrupting normal usage of the services.
During a VMware maintenance interval, the following services continue to function without any effect:
- VMware management plane and applications
- vCenter access
- All networking and storage
- All cloud traffic
External storage
You can expand the storage capacity of a Google Cloud VMware Engine cluster by adding more nodes. Alternatively, you can use external storage if you only want to scale storage. Scaling storage increases the storage capacity without increasing the compute capacity of the cluster, letting you scale storage independently of compute.
Contact Google Support or your sales representative for more information about using external storage.
What's next
- Learn about private cloud maintenance and updates.