Assumptions
- DNS-Based VIP Distribution is configured
- vSphere hosts have DNS servers configured
Implementation
1. Add a Dedicated VIP (Virtual IP) Pool
- Size the IP range with 2 IP addresses for every CNode in the cluster
NOTE: There are 4 CNodes in 1 CBox; i.e., if there are two CBoxes, the VIP Pool should use a range of 16 IP addresses.
- Define the Subnet CIDR
- Define the Domain name
- Define the Gateway IP address (if applicable)
- Define the VLAN (if applicable)
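The VIP Pool sizing rule above can be sanity-checked with quick shell arithmetic (the two-CBox cluster here is an example value, not a requirement):

```shell
# VIP Pool sizing: 2 VIPs per CNode, 4 CNodes per CBox (example: 2 CBoxes)
CBOXES=2
CNODES=$((CBOXES * 4))   # 4 CNodes per CBox
VIPS=$((CNODES * 2))     # 2 VIP addresses per CNode
echo "CBoxes=${CBOXES} CNodes=${CNODES} VIP pool size=${VIPS}"
```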
2. Add a Dedicated View Policy
- Set the Security Flavor to NFS
- Set the Group Membership Source to Clients
- Set the Path Length Limit to Native Protocol Limit
- Set the Allowed Characters to Native Protocol Limit
- OPTIONAL: Select only the VIP Pool created in the first step to limit data access
- In the Host-Based Access tab, define specific vSphere host IP addresses or vSphere host subnets in both the Read/Write and No Squash entries
3. Add a Dedicated View
- Set the Protocol to NFS
- Set the Policy to the View Policy created in the second step
- Enable Create Directory
- OPTIONAL: Define an NFS Alias as needed
4. Create a New Datastore on any vSphere Host
- In Type, Select NFS
- In NFS version, Select NFS 3
- In Name and configuration:
- Define the Name
- Define the Folder as the path or NFS alias of the View, starting with a forward slash "/"
- Define the Server as the Fully Qualified Domain Name of the VIP Pool
- In Ready to complete, Click Finish
- Right Click on the new Datastore and click Mount Datastore to Additional Hosts...
- Select one host at a time and repeat this process for every vSphere Host
- NOTE: Selecting more than one or all hosts during this process may inadvertently connect to a single CNode and not properly align resources for best performance
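As an alternative to the wizard, the per-host mounts can be scripted. A sketch that only prints the command to run against each host (the host names, FQDN, and View path are placeholders; substitute your own values):

```shell
# Hypothetical ESXi host list and View path -- substitute your own values.
HOSTS="esxi01.example.com esxi02.example.com esxi03.example.com"
SERVER="vastdata.example.com"   # FQDN served by the VIP Pool's DNS zone
SHARE="/vmware/datastore1"      # View path, starting with a forward slash
VOLUME="datastore1"             # Datastore name as it appears in vSphere

# Print one esxcli mount command per host. Run each in that host's ESXi
# shell (or pipe to ssh) so every host resolves the FQDN independently and
# lands on its own CNode via DNS-based VIP distribution.
for h in $HOSTS; do
  echo "ssh root@${h} esxcli storage nfs add --host=${SERVER} --share=${SHARE} --volume-name=${VOLUME}"
done
```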
Recommendations
General Purpose Recommendations
- Use a dedicated VIP Pool for the vSphere mounts
- This will assist with segmentation of control between workloads
- Restricting the View Policy to the dedicated VIP Pool limits View access and visibility from non-vSphere clients
- Add a Hard Quota on the vSphere datastore directory to limit the usable capacity
- This can be increased on demand at any time
- On large clusters, implement a hard quota of less than 2PB. See: https://kb.vmware.com/s/article/84218
- Be aware of the maximum file size of 128TB
- Use Capacity Estimations to determine the data reduction rate of the vSphere datastore
- Configure Storage I/O Control on the datastore
- Select Disable Storage I/O Control but enable statistics collection
- Check Include I/O statistics for SDRS
Performance Recommendations
Large MTU
Jumbo Frames (MTU 9000) are suggested to improve I/O efficiency between ESXi hosts and VAST Data NFSv3 datastores. Please consult with your SE or Co-Pilot before implementation.
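A minimal sketch of the ESXi-side change, assuming a standard vSwitch named vSwitch0 and an NFS VMkernel port vmk1 (substitute your own names; physical switches in the path must also carry MTU 9000):

```shell
# Raise the MTU on the vSwitch carrying NFS traffic, then on the VMkernel port.
esxcli network vswitch standard set -v vSwitch0 -m 9000
esxcli network ip interface set -i vmk1 -m 9000

# Verify end to end with a non-fragmenting ping: 9000 bytes minus 28 bytes of
# IP/ICMP headers leaves an 8972-byte payload. Target is a VIP Pool address.
vmkping -I vmk1 -d -s 8972 vastdata.example.com
```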
Storage DRS for Multiple Datastores
To achieve better aggregate performance on vSphere NFSv3 mounts to VAST Data clusters, consider Storage DRS (SDRS) with multiple datastores. For example, each datastore can achieve 2GB/s of sequential read performance, but multiple datastores in aggregate can achieve much more from an individual ESXi host. SDRS presents multiple datastores as a group across which vSphere automatically distributes and provisions Virtual Machines. Example:
| SDRS Name         | Datastore Name | View               |
| my_sdrs_datastore | datastore1     | /vmware/datastore1 |
|                   | datastore2     | /vmware/datastore2 |
|                   | datastore3     | /vmware/datastore3 |
|                   | datastore4     | /vmware/datastore4 |
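The four mounts in the example above could be generated with a small script run per ESXi host (the server FQDN and paths are the hypothetical values from the table). It prints the esxcli commands rather than running them:

```shell
# Hypothetical FQDN from the VIP Pool's DNS zone -- substitute your own.
SERVER="vastdata.example.com"

# Each datastore is a separate NFSv3 mount, so the SDRS cluster gets four
# independent CNode connections from every ESXi host.
for i in 1 2 3 4; do
  echo "esxcli storage nfs add --host=${SERVER} --share=/vmware/datastore${i} --volume-name=datastore${i}"
done
```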
vSphere 8.0 Update 1 - NFSv3 Datastore Connections
vSphere 8.0 Update 1 - vmknic Binding
With vSphere 8.0 Update 1, to isolate NFS traffic, you can bind an NFSv3 datastore to a specific VMkernel adapter on an ESXi host. VMkernel port binding is not supported in this release for vSphere Virtual Volumes datastores backed by NFSv3 configurations. See: Configure VMkernel Binding for NFS 3 Datastore on ESXi Host