Describe and differentiate Nutanix data protection technologies such as NearSync, Cloud Connect, and Protection Domains

NearSync Building upon the traditional asynchronous (async) replication capabilities mentioned previously, Nutanix has introduced support for near-synchronous replication (NearSync). NearSync provides the best of both worlds: zero impact to primary I/O latency (like async replication) combined with a very low RPO (like synchronous replication (metro)). This allows users to have a very low RPO…
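To make the RPO comparison concrete, here is a minimal sketch (plain Python, not Nutanix code) of the worst-case data-loss window implied by a snapshot-based replication schedule; the interval values are illustrative assumptions, not product limits.

```python
from datetime import timedelta

def worst_case_data_loss(snapshot_interval: timedelta) -> timedelta:
    """With snapshot-based replication, the worst case is losing everything
    written since the last snapshot that reached the remote site."""
    return snapshot_interval

# Illustrative schedules (assumptions, not documented limits):
schedules = {
    "traditional async": timedelta(hours=1),   # e.g. hourly snapshots
    "NearSync-style":    timedelta(minutes=1), # much tighter schedule, hence low RPO
}

for name, interval in schedules.items():
    print(f"{name}: worst-case data loss ~ {worst_case_data_loss(interval)}")
```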

Read more...

Describe and differentiate component, service, and CVM failover processes such as Disk Failure, CVM Failure, and Node Failure

Disk Failure
- Monitored via SMART data; Hades is responsible for monitoring.
- VM impact: HA event: NO. Failed I/Os: NO. Latency: NO.
- In the event of a failure, a Curator scan occurs immediately.
- The scan reads metadata to find data previously hosted on the failed disk, then re-replicates (distributes) it to nodes throughout the cluster; all CVMs participate (see the sketch below).

CVM Failure
- Failure = I/Os redirected to other…
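A toy simulation (assumptions only, not the actual Curator implementation) of the rebuild idea described above: when a component holding replicas fails, the copies it hosted are re-created across the surviving nodes so every extent keeps its replication factor.

```python
import random

RF = 2  # replication factor: each extent keeps 2 copies on different nodes

nodes = ["node-A", "node-B", "node-C", "node-D"]
# placement: extent id -> set of nodes currently holding a copy
placement = {i: set(random.sample(nodes, RF)) for i in range(10)}

def handle_failure(failed: str) -> None:
    """Re-replicate every extent that lost a copy on the failed node.
    All surviving nodes participate as rebuild sources and targets."""
    for holders in placement.values():
        if failed in holders:
            holders.discard(failed)
            candidates = [n for n in nodes if n != failed and n not in holders]
            holders.add(random.choice(candidates))  # distribute the new copy

handle_failure("node-B")
assert all(len(h) == RF and "node-B" not in h for h in placement.values())
print("all extents restored to RF =", RF)
```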

Read more...

Identify Data Resiliency requirements and policies related to a Nutanix Cluster

Data Resiliency Levels The following table shows the level of data resiliency (simultaneous failure) provided for the following combinations of replication factor, minimum number of nodes, and minimum number of blocks.

Replication Factor | Minimum Number of Nodes | Minimum Number of Blocks | Data Resiliency
2 | 3 | 1 | 1 node or 1 disk failure
2 | 3 | 3 | …
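The rule behind the table is that a cluster keeping N copies of each piece of data can lose N-1 of them and stay available. A small sketch encoding that rule and the row fully visible in this excerpt (the remaining rows are truncated above, so they are left out rather than guessed):

```python
def failures_tolerated(replication_factor: int) -> int:
    """A cluster that keeps N copies of data can lose N - 1 copies and stay available."""
    return replication_factor - 1

# Only the first row of the table is fully visible in the excerpt above.
resiliency_table = {
    # (replication_factor, min_nodes, min_blocks): simultaneous failures tolerated
    (2, 3, 1): "1 node or 1 disk failure",
}

for (rf, min_nodes, min_blocks), tolerates in resiliency_table.items():
    print(f"RF{rf}, >= {min_nodes} nodes, >= {min_blocks} block(s): "
          f"tolerates {tolerates} ({failures_tolerated(rf)} lost copy)")
```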

Read more...

Describe the concept of the Redundancy Factor and related requirements

What is the difference between Redundancy Factor and Replication Factor? Redundancy Factor (aka FT, Fault Tolerance), in the simplest terms, is the number of simultaneous component failures that a Nutanix cluster can withstand at any time, plus 1. These components include disks, NICs, and nodes. For example, in a two-block environment with the default Redundancy Factor…
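A hedged sketch of that relationship: redundancy factor N means the cluster as a whole tolerates N - 1 simultaneous component failures, and a storage container's replication factor cannot exceed the cluster's redundancy factor. The minimum node counts below are assumptions to verify against the documentation.

```python
# Assumed minimum node counts per redundancy factor (verify against the docs).
MIN_NODES = {2: 3, 3: 5}

def component_failures_tolerated(redundancy_factor: int) -> int:
    """Redundancy factor N -> the cluster tolerates N - 1 simultaneous
    component failures (disks, NICs, nodes)."""
    return redundancy_factor - 1

def validate(redundancy_factor: int, container_rf: int, node_count: int) -> list:
    problems = []
    if node_count < MIN_NODES[redundancy_factor]:
        problems.append(f"need >= {MIN_NODES[redundancy_factor]} nodes for redundancy factor {redundancy_factor}")
    if container_rf > redundancy_factor:
        problems.append("container replication factor cannot exceed cluster redundancy factor")
    return problems

print(component_failures_tolerated(2))                                  # 1
print(validate(redundancy_factor=2, container_rf=3, node_count=4))      # flags the RF mismatch
```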

Read more...

Determine and implement storage services based on a given workload

Acropolis Block Services Acropolis Block Services can support use cases including but not limited to:
- iSCSI for Microsoft Exchange Server. ABS enables Microsoft Exchange Server environments to use iSCSI as the primary storage protocol.
- Shared storage for Windows Server Failover Clustering (WSFC). ABS supports SCSI-3 persistent reservations for shared storage-based Windows clusters, commonly used with…

Read more...

Configure Acropolis File Services (AFS)

Preparations
- Ensure each cluster has a minimum configuration of 4 vCPUs and 12 GiB of memory available on each host.
- Ensure you have configured or defined internal and external networks.
- An Active Directory, a Domain Name Server, and a Network Time Protocol server must be available.
- You need Active Directory administrator credentials, enterprise administrator credentials, and at least domain…
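A small pre-flight sketch (standard-library Python only; the domain and server names are placeholders) for the name-resolution side of the dependencies listed above. CPU/memory headroom and the internal/external network configuration still have to be verified in Prism.

```python
import socket

# Placeholders -- substitute your own environment.
CHECKS = {
    "Active Directory domain": "corp.example.com",
    "DNS server":              "dns01.corp.example.com",
    "NTP server":              "ntp01.corp.example.com",
}

for role, name in CHECKS.items():
    try:
        addr = socket.gethostbyname(name)
        print(f"{role}: {name} resolves to {addr}")
    except OSError as exc:
        print(f"{role}: {name} does NOT resolve ({exc})")
```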

Read more...

Configure Acropolis Block Services (ABS)

Requirements and Limitations
- Ensure that ports 3260 and 3205 are open on any clients accessing the cluster where Acropolis Block Services is enabled.
- You must configure an external data services IP address in Cluster Details, available from the Prism web console.
- Synchronous Replication and Metro Availability are not currently supported for volume groups.
- Linux guest VM clustering…
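A quick client-side sketch (the data services IP below is a placeholder) to confirm the two iSCSI ports called out above are reachable from an initiator before enabling Acropolis Block Services for it.

```python
import socket

DATA_SERVICES_IP = "10.0.0.50"   # placeholder: the cluster's external data services IP
ISCSI_PORTS = (3260, 3205)       # ports that must be open from clients to the cluster

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; True means the port is reachable from this client."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in ISCSI_PORTS:
    status = "reachable" if port_open(DATA_SERVICES_IP, port) else "NOT reachable"
    print(f"{DATA_SERVICES_IP}:{port} {status}")
```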

Read more...

Define and differentiate Acropolis Block Services (ABS) and Acropolis File Services (AFS)

Acropolis Block Services (ABS) Exposes the backend DSF to external consumers via iSCSI.
Use Cases: Oracle RAC, MSCS, containers, bare metal, Exchange on vSphere.
Constructs:
- Data Services IP: cluster-wide VIP for iSCSI logins
- Volume Group: iSCSI target / group of disk devices
- Disks: devices in the Volume Group
- Attachment: permissions for IQN access
Backend = a VG's disk is just…
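A hedged sketch of how an external Linux consumer would discover and log in to a volume group through the data services IP, using the standard open-iscsi client (the IP is a placeholder, and the iscsiadm invocations are ordinary open-iscsi usage, not Nutanix-specific commands).

```python
import subprocess

DATA_SERVICES_IP = "10.0.0.50"   # placeholder: cluster-wide VIP used for iSCSI logins

def run(cmd: list) -> str:
    print("$", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Discover targets exposed to this initiator through the data services IP.
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
           "-p", f"{DATA_SERVICES_IP}:3260"]))

# 2. Log in to the discovered targets on that portal; the disks in the
#    volume group then appear to the client as ordinary block devices.
run(["iscsiadm", "-m", "node", "-p", f"{DATA_SERVICES_IP}:3260", "--login"])
```

Note that the attachment construct above governs which targets a given initiator sees, so the client's IQN needs to be attached to the volume group on the cluster side before the discovery step returns anything for it.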

Read more...