TR 4517: ONTAP Select on VMware Product Architecture and Best Practices

Transcript

Technical Report

ONTAP Select on VMware: Product Architecture and Best Practices
Tudor Pascu, NetApp
February 2019 | TR-4517

TABLE OF CONTENTS

1 Introduction
  1.1 Software-Defined Infrastructure
  1.2 Running ONTAP as Software
  1.3 ONTAP Select Versus ONTAP Edge
  1.4 ONTAP Select Small Versus ONTAP Select Medium
  1.5 ONTAP Select Evaluation Software Versus Running ONTAP Select in Evaluation Mode
  1.6 ONTAP Select Platform and Feature Support
2 Architecture Overview
  2.1 VM Properties
  2.2 Hardware RAID Services for Local Attached Storage
  2.3 VSAN and External Array Configurations for ONTAP Select 9.4 and Later
  2.4 Software RAID Services for Local Attached Storage
  2.5 High Availability Architecture
3 Deployment and Management
  3.1 ONTAP Select Deploy
  3.2 Licensing ONTAP Select
  3.3 Capacity Tiers Licensing
  3.4 Capacity Pools Licensing
  3.5 Modifying ONTAP Select Cluster Properties
  3.6 ONTAP Management
4 Network Design Considerations
  4.1 Network Configuration: Multinode
  4.2 Network Configuration: Single Node
  4.3 Networking: Internal and External
  4.4 Supported Network Configurations
  4.5 VMware vSphere: vSwitch Configuration
  4.6 Physical Switch Configuration
  4.7 Data and Management Separation
5 Use Cases
  5.1 Remote and Branch Offices
  5.2 Private Cloud (Data Center)
  5.3 MetroCluster Software Defined Storage (Two-Node Stretched Cluster High Availability)
6 Upgrading ONTAP Select and ONTAP Deploy

7 Increasing the ONTAP Select Capacity Using ONTAP Deploy
  7.1 Increasing Capacity for ONTAP Select vNAS and DAS with Hardware RAID Controllers
  7.2 Increasing Capacity for ONTAP Select with Software RAID
  7.3 Single-Node to Multinode Upgrade and Cluster Expansions
8 ONTAP Select Performance
  8.1 ONTAP Select 9.0 Standard Four-Node with Direct-Attached Storage (SAS)
  8.2 ONTAP Select 9.1 Medium Instance (Premium License) Four-Node with Direct-Attached Storage (SSD)
  8.3 ONTAP Select 9.2 Small Instance Single-Node with VSAN AF Storage
  8.4 ONTAP Select Premium 9.4 HA Pair with Direct-Attached Storage (SSD)
  8.5 ONTAP Select Premium 9.5 HA Pair with Direct-Attached Storage (SSD)
Where to Find Additional Information
Version History

LIST OF TABLES
Table 1) ONTAP Select versus ONTAP Edge.
Table 2) ONTAP software RAID minimum number of drives.
Table 3) ONTAP Select storage efficiency configurations.
Table 4) ONTAP Select VM properties.
Table 5) ONTAP Select release comparison.
Table 6) Internal versus external network quick reference.
Table 7) Network configuration support matrix.
Table 8) Network minimum and recommended configurations.
Table 9) ONTAP Deploy versus ONTAP Select support matrix.
Table 10) Performance results for a single node (four-node Small instance) ONTAP Select cluster.
Table 11) Performance results for a single node (part of a four-node medium instance) ONTAP Select cluster with DAS (SSD).
Table 12) Performance results for a single-node ONTAP Select standard cluster on an AF VSAN datastore.
Table 13) Performance results for a single node (part of a four-node medium instance) ONTAP Select 9.4 cluster on DAS (SSD).
Table 14) Performance results for a single node (part of a four-node medium instance) ONTAP Select 9.5 cluster on DAS (SSD) with software RAID and hardware RAID.

LIST OF FIGURES
Figure 1) Server LUN configuration with only RAID-managed spindles.
Figure 2) Server LUN configuration on a mixed RAID/non-RAID system.
Figure 3) Virtual disk to physical disk mapping.
Figure 4) Incoming writes to ONTAP Select VM.
Figure 5) Initial deployment of multinode vNAS clusters.

Figure 6) ONTAP Select software RAID: use of virtualized disks and RDMs.
Figure 7) RDD disk partitioning for single-node clusters.
Figure 8) RDD disk partitioning for multinode clusters (HA pairs).
Figure 9) Two-node ONTAP Select cluster with remote mediator and using local-attached storage.
Figure 10) Four-node ONTAP Select cluster using local-attached storage.
Figure 11) ONTAP Select mirrored aggregate.
Figure 12) ONTAP Select write path workflow.
Figure 13) HA heartbeating in a four-node cluster: steady state.
Figure 14) ONTAP Select installation VM placement.
Figure 15) License Manager.
Figure 16) Overview of an ONTAP Select multinode cluster network configuration.
Figure 17) Network configuration of a single node that is part of a multinode ONTAP Select cluster.
Figure 18) Network configuration of a single-node ONTAP Select cluster.
Figure 19) Default external port group configuration using a standard vSwitch and four physical ports.
Figure 20) Default internal port group configuration using a standard vSwitch and four physical ports.
Figure 21) ONTAP Select vmnic to port group assignments (advanced configuration for multinode clusters using four ports and a standard vSwitch).
Figure 22) Part 1: ONTAP Select external port group configurations (advanced configuration for multinode clusters using four ports and a standard vSwitch).
Figure 23) Part 2: ONTAP Select external port group configurations (advanced configuration for multinode clusters using four ports and a standard vSwitch).
Figure 24) Part 1: ONTAP Select internal port group configurations (advanced configuration for multinode clusters using four ports and a standard vSwitch).
Figure 25) Part 2: ONTAP Select internal port group configurations (advanced configuration for multinode clusters using four ports and a standard vSwitch).
Figure 26) Standard vSwitch with two physical ports per node.
Figure 27) LAG properties when using LACP.
Figure 28) External port group configurations using a distributed vSwitch with LACP enabled.
Figure 29) Internal port group configurations using a distributed vSwitch with LACP enabled.
Figure 30) Network configuration using a shared physical switch.
Figure 31) Network configuration using multiple physical switches.
Figure 32) Data and management separation using VST.
Figure 33) Data and management separation using VGT.
Figure 34) Scheduled backup of remote office to corporate data center.
Figure 35) Private cloud built on DAS.
Figure 36) MetroCluster SDS.
Figure 37) Capacity distribution: allocation and free space after a single storage-add operation.
Figure 38) Capacity distribution: allocation and free space after two additional storage-add operations for node 1.

1 Introduction

NetApp ONTAP® Select is the NetApp solution for the software-defined storage (SDS) market. ONTAP Select brings enterprise-class storage management features to the software-defined data center and extends the Data Fabric solution to extreme edge use cases, including IoT and tactical servers.

This document describes the best practices to follow when building an ONTAP Select cluster, from hardware selection to deployment and configuration. It also aims to answer the following questions:
• How is ONTAP Select different from engineered FAS storage platforms?
• Why were certain design choices made when creating the ONTAP Select architecture?
• What are the performance implications of the various configuration options?

1.1 Software-Defined Infrastructure

The implementation and delivery of IT services through software enables administrators to rapidly provision resources with a speed and agility that was previously impossible. Modern data centers are moving toward software-defined infrastructures as a mechanism to provide IT services with greater agility and efficiency. Separating IT value from the underlying physical infrastructure allows IT services to react quickly to changing needs by dynamically shifting infrastructure resources to where they are needed most.

Software-defined infrastructures are built on these three tenets:
• Flexibility
• Scalability
• Programmability

Software-Defined Storage

The shift toward software-defined infrastructures might be having its greatest impact in an area that has traditionally been one of the least affected by the virtualization movement: storage. Software-only solutions that separate storage management services from the physical hardware are becoming more commonplace. This is especially evident within private cloud environments: enterprise-class, service-oriented architectures designed from the ground up to be software defined. Many of these environments are built on commodity hardware: white-box servers with locally attached storage, with software controlling the placement and management of user data.

This is also seen in the emergence of hyperconverged infrastructures (HCIs), a building-block style of IT design in which compute, storage, and networking services are bundled together. The rapid adoption of hyperconverged solutions over the past several years has revealed a desire for simplicity and flexibility. However, as companies replace enterprise-class storage arrays with a more customized, make-your-own model built on home-grown components, a new set of problems emerges. In a commodity world in which data is fragmented across silos of direct-attached storage (DAS), data mobility and data management become complex problems. This is where NetApp can help.

1.2 Running ONTAP as Software

There is a compelling value proposition in allowing customers to determine the physical characteristics of their underlying hardware while still consuming ONTAP and all of its storage management services. Decoupling ONTAP from the underlying hardware allows NetApp to provide enterprise-class file and replication services within an SDS environment.

Still, one question remains: why do we require a hypervisor?

Running ONTAP as software on top of another software application allows us to leverage much of the qualification work done by the hypervisor. This capability is critical for helping us rapidly expand our list of supported platforms. Also, positioning ONTAP as a virtual machine (VM) allows customers to plug into existing management and orchestration frameworks, which allows rapid provisioning and end-to-end automation, from deployment to sunsetting. This is the goal of ONTAP Select.

1.3 ONTAP Select Versus ONTAP Edge

This section describes the differences between ONTAP Select and ONTAP Edge. Although many of the differences are covered in detail in the section "Architecture Overview," Table 1 highlights some of the major differences between the two products.

Table 1) ONTAP Select versus ONTAP Edge.

| Description | ONTAP Select | ONTAP Edge |
|---|---|---|
| Node count | Single-node, two-node HA, four-node, six-node, and eight-node HA | Single node |
| VM CPU/memory | 4 vCPUs/16GB (small instance); 8 vCPUs/64GB (medium instance) | 2 vCPUs/8GB |
| Hypervisor | Check the NetApp Interoperability Matrix Tool (IMT) for the latest supported versions | vSphere 5.1, 5.5 |
| High availability (HA) | Yes | No |
| iSCSI/CIFS/NFS | Yes | Yes |
| NetApp SnapMirror® and NetApp SnapVault® technologies | Yes | Yes |
| Full suite of storage efficiency policies | Yes | No |
| NetApp Volume Encryption | Yes | No |
| Data retention and compliance (NetApp SnapLock® Enterprise) | Yes | No |
| Capacity limit | Up to 400TB per ONTAP Select node starting with 9.5 and ONTAP Deploy 2.10 | Up to 10TB, 25TB, or 50TB |
| Hardware platform support | Wider support for major vendor offerings that meet minimum criteria | Select families within qualified server vendors |

1.4 ONTAP Select Small Versus ONTAP Select Medium

ONTAP Select can be deployed in two sizes: a small VM and a medium VM. The Premium license can be used with either a small instance or a medium instance, while the Standard license can only be used with a small instance. The difference between the small VM and the medium VM is the amount of resources reserved for each instance of ONTAP Select. For example, the medium VM consumes eight CPU cores and 64GB of RAM, while the small VM consumes four cores and 16GB of RAM. More information is located in the section "VM Properties."

The number of cores and the amount of memory per ONTAP Select VM cannot be modified further. In addition, the Premium license is required when using solid-state drives (SSDs) for DAS configurations (hardware RAID controller or ONTAP software RAID) or for any NetApp MetroCluster™ SDS constructs.

In a four-node cluster, it is possible to have a two-node medium HA system and a two-node small HA system. Within an HA pair, however, the ONTAP Select VM type should be identical.

Note that it is not possible to convert from a Standard license to a Premium license.

1.5 ONTAP Select Evaluation Software Versus Running ONTAP Select in Evaluation Mode

The ONTAP Select version available on the web portal (Downloads/Software) is a full version of the product that can be run in evaluation mode. This means that the client can test the full solution, including ONTAP Deploy, which is the ONTAP Select setup product. ONTAP Deploy checks and enforces all minimum requirements for ONTAP Select, which is useful both for documenting the procedure and for vetting your environment for suitability. However, at times the test environment does not match the production environment or does not meet the minimum requirements enforced by ONTAP Deploy.

For a quick test of only ONTAP Select, we are providing an Open Virtualization Format (OVF) download of only ONTAP Select (Downloads/Product Evaluation). When using this OVF download, the ONTAP Deploy utility is not used. Instead, you directly install a single-node ONTAP Select cluster, which is capacity and time limited, just like the single-node cluster created using the Deploy tool in evaluation mode. The main benefit of the OVF setup is that it lowers the requirements for testing ONTAP Select.

Note that once an evaluation trial has expired, the evaluation software cannot be extended. Starting with ONTAP Select 9.4, the expired trial functionality is severely limited as follows:
• Single-node cluster. No new aggregates can be created, and, after the first reboot, the aggregates do not come online. Data is inaccessible.
• Nodes in an HA pair. No new aggregates can be created, and, after the first reboot, only the remote aggregates are available. Remote aggregates are aggregates that are not normally hosted by the node on which they are available.

1.6 ONTAP Select Platform and Feature Support

The abstraction layer provided by the hypervisor allows ONTAP Select to run on a wide variety of commodity platforms from virtually all the major server vendors, providing they meet minimum hardware criteria. These specifications are detailed in the following sections.

Hardware Requirements

The ONTAP Select Standard VM requires that the hosting physical server meet the following minimum requirements:
• Intel Xeon E5-26xx v3 (Haswell) CPU or greater
• Intel Skylake Server CPUs (see this link) are supported starting with ONTAP Select 9.3 and ONTAP Deploy 2.7.2
• 6 x vCPUs (4 x for ONTAP Select; 2 x for the hypervisor)
• 24GB RAM (16GB for ONTAP Select; 8GB for the OS)
• Starting with ONTAP Select 9.3, some configurations with a single 10Gb port are now qualified and supported. For prior ONTAP Select versions, the minimum requirements are still as follows:

− Minimum of 2 x 1Gb network interface card (NIC) ports for single-node clusters
− Minimum of 4 x 1Gb NIC ports for two-node clusters
− 2 x 10GbE NIC ports (four recommended) for four-node clusters

Note: The ONTAP Select medium VM reserves 8 x vCPUs and 64GB of RAM; therefore, the server minimum requirements should be adjusted accordingly.

For locally attached storage (DAS), the following requirements also apply.

The requirements for deploying on DAS with a hardware RAID controller are as follows:
• Hardware RAID controller with 512MB writeback (battery backed-up) cache and 12Gbps of throughput
• A total of up to 60 drives or 400TB per node can be supported starting with the minimum versions of ONTAP Deploy 2.7 and ONTAP Select 9.3. To support a large drive count, an external shelf or drive enclosure can be used. It is important to make sure that the hardware RAID controller can support that number of drives and total capacity.
• For prior versions of ONTAP Select, the limits on the number of drives are as follows:
  − 8 to 24 internal disks (SAS, NL-SAS, or SATA)
  − 4 to 24 SSDs (Premium license required)

The requirements for deploying on DAS when using ONTAP software RAID are as follows:
• ONTAP Select 9.5 and Deploy 2.10 or newer
• A total of up to 60 drives or 400TB per node can be supported. To support a large drive count, an external shelf or drive enclosure can be used.
• The minimum number of drives required for using ONTAP software RAID depends on the configuration. See Table 2.

Table 2) ONTAP software RAID minimum number of drives.

| Cluster Size | RAID Type | Minimum Drives Required | Layout (Disk Types) |
|---|---|---|---|
| Single node | RAID 4 | 4* | 1 service**, 1 parity***, 2 data |
| Single node | RAID-DP | 6* | 1 service, 2 parity, 3 data |
| Single node | NetApp RAID-TEC™ | 8* | 1 service, 3 parity, 4 data |
| Multinode (each node) | RAID 4 | 7* | 1 service, 2x1 parity, 2x2 data |
| Multinode (each node) | RAID-DP | 11* | 1 service, 2x2 parity, 2x3 data |
| Multinode (each node) | RAID-TEC | 15* | 1 service, 2x3 parity, 2x4 data |

* A spare disk is optional but recommended. To include a spare disk, add one drive to the minimums above. The spare disk does not count toward the license.
** A service (or system) disk does not count toward the license. This service (system) disk must be virtualized; in other words, a VMFS datastore must exist on this drive.
*** A parity disk is not counted toward the license.

For shared storage (virtual SAN [VSAN] and some HCI appliances or external arrays), using a RAID controller is no longer a requirement. However, the following restrictions and best practices should be considered when selecting the type of datastore used for hosting ONTAP Select:
• Support for VSAN and external arrays requires the following minimum versions: ONTAP Select 9.1 and Deploy 2.3.
• Support for VMware HA, vMotion, and Distributed Resource Scheduler (DRS) requires the following minimum versions: ONTAP Select 9.2 and Deploy 2.4.
• Multinode clusters on shared storage are supported starting with ONTAP Deploy 2.8 and ONTAP Select 9.4. For prior releases, only single-node clusters are supported with VSAN or external-array-type datastores.
• The VSAN configuration or the external array must be supported by VMware, as evidenced by the configuration being present on the VMware hardware compatibility list (HCL).

ONTAP Select Feature Support

ONTAP Select offers full support for most ONTAP functionality, except for features that have hardware-specific dependencies. Supported functionality includes the following:
• NFS, CIFS, and iSCSI
• SnapMirror and SnapVault
• NetApp FlexClone® technology
• NetApp SnapRestore® technology
• NetApp Volume Encryption
• SnapLock Enterprise (separate license)
• FabricPool (separate license)
• FlexCache (separate license)
• SyncMirror (separate license)
• NetApp Data Availability Services (separate license)
• MetroCluster SDS (formerly called an ONTAP Select two-node stretched cluster; requires an ONTAP Select Premium license)

In addition, support for the NetApp OnCommand® management suite is included. This suite includes most tooling used to manage NetApp FAS arrays, such as OnCommand Unified Manager, OnCommand Insight, OnCommand Workflow Automation, and NetApp SnapCenter®. Using SnapCenter, NetApp SnapManager®, or NetApp SnapDrive® with ONTAP Select requires server-based licenses. Consult the IMT for a complete list of supported management applications.

The following ONTAP features are not supported by ONTAP Select:
• Interface groups (ifgroups)
• Service Processor
• Hardware-centric features such as the traditional FAS/AFF MetroCluster architecture that requires dedicated hardware infrastructure between sites, Fibre Channel (FC/FCoE), and full disk encryption (FDE)
• NetApp Storage Encryption drives

ONTAP Select Storage Efficiency Support

ONTAP Select provides storage efficiency options that are similar to the storage efficiency options present on FAS and AFF arrays.

ONTAP Select virtual NAS (vNAS) deployments using all-flash VSAN or generic flash arrays should follow the best practices for ONTAP Select with non-SSD DAS storage.

In ONTAP Select 9.5, an AFF-like personality is automatically enabled on new installations as long as the following conditions are met: DAS storage with SSD drives and a Premium license.

The AFF-like personality makes sure that the following inline storage efficiency features are automatically enabled during installation:
• Inline zero-pattern detection
• Volume inline deduplication
• Volume background deduplication
• Adaptive inline compression
• Inline data compaction
• Aggregate inline deduplication
• Aggregate background deduplication

To verify that ONTAP Select has enabled all the default storage efficiency policies, run the following command on a newly created volume:

::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
twonode95IP15::*> sis config
Vserver:                                  SVM1
Volume:                                   _export1_NFS_volume
Schedule:                                 -
Policy:                                   auto
Compression:                              true
Inline Compression:                       true
Compression Type:                         adaptive
Application IO Size:                      8K
Compression Algorithm:                    lzopro
Inline Dedupe:                            true
Data Compaction:                          true
Cross Volume Inline Deduplication:        true
Cross Volume Background Deduplication:    true
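The cumulative effect of these policies can also be checked at the aggregate level. The following is a minimal sketch, not part of the original procedure; the aggregate name is hypothetical, and the command is available in recent ONTAP 9 releases:

::*> storage aggregate show-efficiency -aggregate aggr1

This view reports the logical versus physical space used for the aggregate, which is a convenient way to confirm that the AFF-like personality is actually yielding savings on a DAS SSD deployment.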

Note: For ONTAP Select upgrades from 9.4 to 9.5, ONTAP Select 9.4 must have been installed on DAS SSD storage with a Premium license. In addition, the Enable Storage Efficiencies check box must have been checked during the initial cluster installation with ONTAP Deploy. Enabling an AFF-like personality post-ONTAP-upgrade, when the prior conditions have not been met, requires the manual creation of a boot argument and a node reboot. Contact technical support for further details.

Table 3 summarizes the various storage efficiency options that are available, enabled by default, or not enabled by default but recommended, depending on the ONTAP Select version and media type.

Table 3) ONTAP Select storage efficiency configurations.

| ONTAP Select features | 9.5 Premium (DAS SSD) | 9.5 / 9.4¹ / 9.3² Premium (DAS SSD) | 9.5 / 9.4¹ / 9.3² Premium or Standard (DAS HDD) | 9.5 / 9.4¹ / 9.3² Premium or Standard (vNAS)³ |
|---|---|---|---|---|
| Inline zero detection | Yes (default) | Yes; enabled by user on a per-volume basis | Yes; enabled by user on a per-volume basis | Not supported |
| Volume inline deduplication | Yes (default) | Yes (recommended); enabled by user on a per-volume basis | Not available | Not supported |
| 32K inline compression (secondary compression) | Yes; enabled by user on a per-volume basis | Yes; enabled by user on a per-volume basis | Yes; enabled by user on a per-volume basis | Not supported |
| 8K inline compression (adaptive compression) | Yes (default) | Yes (recommended); enabled by user on a per-volume basis | Yes; enabled by user on a per-volume basis | Not supported |
| Background compression | Not supported | Not supported | Yes; enabled by user on a per-volume basis | Not supported |
| Compression scanner | Yes | Yes | Yes; enabled by user on a per-volume basis | Not supported |
| Inline data compaction | Yes (default) | Yes (recommended); enabled by user on a per-volume basis | Yes; enabled by user on a per-volume basis | Not supported |
| Compaction scanner | Yes | Yes | Yes; enabled by user on a per-volume basis | Not supported |
| Aggregate inline deduplication | Yes (default) | Yes (recommended); enabled by user on a per-volume basis (with space guarantee = none) | N/A | Not supported |
| Volume background deduplication | Yes (default) | Yes (recommended); enabled by user on a per-volume basis | Yes; enabled by user on a per-volume basis | Not supported |
| Aggregate background deduplication | Yes (default) | Yes (recommended); enabled by user on a per-volume basis (with space guarantee = none) | N/A | Not supported |

¹ ONTAP Select 9.4 on DAS SSDs (requires a Premium license) allows existing data in an aggregate to be deduplicated using aggregate-level background cross-volume scanners. This one-time operation is performed manually for volumes created before 9.4.
² ONTAP Select 9.3 on DAS SSDs (requires a Premium license) supports aggregate-level background deduplication; however, this feature must be enabled after creating the aggregate.
³ ONTAP Select vNAS by default does not support any storage efficiency policies.

Notes on Upgrade Behavior for DAS SSD Configurations

After upgrading to ONTAP Select 9.5, wait for the system node upgrade-revert show command to indicate that the upgrade has completed before verifying the storage efficiency values for existing volumes.

On a system upgraded to ONTAP Select 9.5, a new volume created on an existing aggregate or on a newly created aggregate has the same behavior as a volume created on a fresh deployment of ONTAP Select 9.5. Existing volumes that undergo the ONTAP Select code upgrade have most of the same storage efficiency policies as a newly created volume on ONTAP Select 9.5, with some variations:

Scenario 1. If no storage efficiency policies were enabled on a volume prior to the upgrade, then:
• Volumes with space guarantee = volume do not have inline data compaction, aggregate inline deduplication, and aggregate background deduplication enabled. These options can be enabled post-upgrade, as shown in the sketch after this list.
• Volumes with space guarantee = none do not have background compression enabled. This option can be enabled post-upgrade.
• The storage efficiency policy on the existing volumes is set to auto after the upgrade.

Scenario 2. If some storage efficiencies were already enabled on a volume prior to the upgrade, then:
• Volumes with space guarantee = volume do not see any difference after the upgrade.
• Volumes with space guarantee = none have aggregate background deduplication turned on.
• Volumes with an inline-only storage policy have their policy set to auto.
• Volumes with user-defined storage efficiency policies have no change in policy, with the exception of volumes with space guarantee = none. These volumes have aggregate background deduplication enabled.
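For the options called out in Scenario 1, a minimal post-upgrade ONTAP CLI sketch is shown below. The SVM and volume names are hypothetical, and the exact option names can vary slightly between ONTAP releases, so verify them with the CLI help for volume efficiency modify before use:

::> volume efficiency show -vserver SVM1 -volume vol_upgraded
::> volume efficiency modify -vserver SVM1 -volume vol_upgraded -inline-dedupe true -data-compaction true
::> volume efficiency modify -vserver SVM1 -volume vol_upgraded -cross-volume-inline-dedupe true -cross-volume-background-dedupe true

The first command shows the policies currently in effect on the upgraded volume; the subsequent commands enable the inline and cross-volume (aggregate-level) deduplication options that a fresh 9.5 deployment would have turned on by default.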

Notes on Upgrade Behavior for DAS HDD Configurations

Storage efficiency features enabled prior to the upgrade are retained after the upgrade to ONTAP Select 9.5. If no storage efficiencies were enabled prior to the upgrade, no storage efficiencies are enabled post-upgrade.

2 Architecture Overview

ONTAP Select is ONTAP deployed as a VM. It provides storage management services on a virtualized commodity server. ONTAP Select can be deployed in two ways:
• Non-HA (single node). The single-node version of ONTAP Select is well suited for storage infrastructures that provide their own storage resiliency. These include VSAN datastores or external arrays that offer data protection at the array layer and work along with VMware HA. The single-node Select cluster can also be used for remote and branch offices in which the data is protected by replication to a core location.
• HA (multinode). The multinode version of ONTAP Select uses two, four, six, or eight ONTAP Select nodes and adds support for HA and Data ONTAP nondisruptive operations, all within a shared-nothing environment.

When choosing a solution, resiliency requirements, environment restrictions, and cost factors should be considered. Although both versions run ONTAP and support many of the same core features, the multinode solution provides HA and supports nondisruptive operations, a core value proposition for ONTAP.

Note: The single-node and multinode implementations of ONTAP Select are deployment options, not separate products. Although the multinode solution requires the purchase of additional node licenses, both share the same product model, FDvM300.

This section provides a detailed analysis of the various aspects of the system architecture for both the single-node and multinode solutions while highlighting important differences between the two variants.

2.1 VM Properties

The ONTAP Select VM has a fixed set of properties, described in Table 4. Increasing or decreasing the amount of resources allocated to the VM is not supported. Additionally, the ONTAP Select instance hard reserves the CPU and memory resources, meaning that the physical resources backing the VM are unavailable to any other VMs hosted on the server.

Table 4) ONTAP Select VM properties.

| Description | Single Node | Multinode (per Node) |
|---|---|---|
| CPU/memory | 4 vCPUs/16GB RAM or 8 vCPUs/64GB RAM¹ | 4 vCPUs/16GB RAM or 8 vCPUs/64GB RAM¹ |
| Virtual network interfaces | 3 (2 for ONTAP Select versions before 9.3) | 7 (6 for ONTAP Select versions before 9.3) |
| SCSI controllers | 4 | 4 |
| System boot disk | 10GB | 10GB |
| System core dump disk | 120GB | 120GB |
| Mailbox disk | N/A | 556MB |
| Cluster root disk | 68GB² | 68GB x 2² |
| NVRAM partition | 4GB (ONTAP Select 9.5 on ESX 6.5 U2 and higher only) | 4GB (ONTAP Select 9.5 on ESX 6.5 U2 and higher only) |

¹ ONTAP Select Premium (version 9.1 and later).
² The root aggregate no longer counts against the capacity license starting with ONTAP Select 9.4.

Note: The core dump disk partition is separate from the system boot disk. Because the core file size is directly related to the amount of memory allocated to the ONTAP instance, this separation allows NetApp to support larger-memory instances in the future without requiring a redesign of the system boot disk.

Note: The NVRAM partition was separated onto its own disk starting with ONTAP Select 9.5 installed on ESX 6.5 U2 and later. ONTAP Select 9.5 installed on older versions of ESX, ONTAP Select 9.5 installations that were upgraded from older versions, and all prior versions of ONTAP Select collocate the NVRAM partition on the boot disk.

Starting with ONTAP Select 9.2, the ONTAP console is accessible through the VM video console tab in the vSphere client.

Note: The serial ports were removed from the ONTAP Select 9.2 VM, which allows ONTAP Select 9.2 to be supported and installed on any vSphere license. Before ONTAP Select 9.2, only the vSphere Enterprise/Enterprise+ licenses were supported.

Table 5 lists the differences between the ONTAP Select releases 9.3 through 9.5. There are no differences in the properties of ONTAP Select 9.3 and 9.4.

Table 5) ONTAP Select release comparison.

| Description | ONTAP Select 9.3 / 9.4 | ONTAP Select 9.5 |
|---|---|---|
| ONTAP Select license | Standard or Premium | Standard or Premium |
| CPU/memory | 4 vCPUs/16GB or 8 vCPUs/64GB | 4 vCPUs/16GB or 8 vCPUs/64GB |
| Disk type | SAS, NL-SAS, SATA, or SSD³ | SAS, NL-SAS, SATA, or SSD³ |
| Minimum number of disks (with hardware RAID controller) | 8 SAS, NL-SAS, or SATA, or 4 SSD³ | 8 SAS, NL-SAS, or SATA, or 4 SSD³ |
| Minimum number of disks (with ONTAP software RAID) | N/A | 4 SSD drives (single node with RAID 4 and no spare)³; 7 SSD drives (multinode with RAID 4 and no spare)³ |
| Maximum number of disks | 60 | 60 |
| Network serial ports | None | None |
| vSphere license requirements | All vSphere licenses are supported² | All vSphere licenses are supported² |
| VMware HA | vNAS only (requires ONTAP Deploy 2.4) | vNAS only (requires ONTAP Deploy 2.4) |
| VMware Storage vMotion | Yes¹ | Yes |
| Cluster size | Single-node, two-node, four-node, six-node, eight-node | Single-node, two-node, four-node, six-node, eight-node |
| Maximum capacity per node | 400TB | 400TB |

¹ Requires ONTAP Deploy 2.7 and ONTAP Select 9.3.
² The ESXi free license is not supported.
³ The Premium license is required for all SSDs.

When using locally attached storage (DAS), certain restrictions apply to the ONTAP Select VM, specifically:
• Only one ONTAP Select VM can reside on a single server.
• vSphere fault tolerance (FT) is not supported.

2.2 Hardware RAID Services for Local Attached Storage

Some software-defined solutions require the presence of an SSD to act as a higher-speed write-staging device. ONTAP Select, on the other hand, uses a hardware RAID controller to achieve both a write performance boost and the added benefit of protection against physical drive failures. It does this by moving RAID services to the hardware controller. As a result, RAID protection for all nodes within the ONTAP Select cluster is provided by the locally attached RAID controller and not through ONTAP software RAID.

Note: ONTAP Select data aggregates are configured to use RAID 0, because the physical RAID controller provides RAID striping to the underlying drives. No other RAID levels are supported.

RAID Controller Configuration for Local Attached Storage

All locally attached disks that provide ONTAP Select with backing storage must sit behind a RAID controller. Most commodity servers come with multiple RAID controller options across multiple price points, each with varying levels of functionality. The intent is to support as many of these options as possible, providing they meet certain minimum requirements placed on the controller.

The RAID controller that manages the ONTAP Select disks must meet the following requirements:
• The hardware RAID controller must have a battery backup unit (BBU) or flash-backed write cache (FBWC) and support 12Gbps of throughput.
• The RAID controller must support a mode that can withstand at least one or two disk failures (RAID 5 and RAID 6).
• The drive cache must be set to disabled.
• The write policy must be configured for writeback mode with a fallback to writethrough upon BBU or flash failure.
• The I/O policy for reads must be set to cached.

All locally attached disks that provide ONTAP Select with backing storage must be placed into RAID groups running RAID 5 or RAID 6. For SAS drives and SSDs, using RAID groups of up to 24 drives allows ONTAP to reap the benefits of spreading incoming read requests across a higher number of disks. Doing so provides a significant gain in performance. With SAS/SSD configurations, performance testing was performed against single-LUN versus multi-LUN configurations. No significant differences were found, so for simplicity's sake, NetApp recommends creating the fewest number of LUNs necessary to support your configuration needs.

NL-SAS and SATA drives require a different set of best practices. For performance reasons, the minimum number of disks is still eight, but the RAID group size should not be larger than 12 drives. NetApp also recommends using one spare per RAID group; however, global spares for all RAID groups can also be used. For example, you can use two spares for every three RAID groups, with each RAID group consisting of eight to 12 drives.

Note: The maximum extent and datastore size for ESX 5.5/6.x is 64TB, which can affect the number of LUNs necessary to support the total raw capacity provided by these large-capacity drives.

RAID Mode

Many RAID controllers support up to three modes of operation, each representing a significant difference in the data path taken by write requests. These three modes are as follows:
• Writethrough. All incoming I/O requests are written to the RAID controller cache and then immediately flushed to disk before acknowledging the request back to the host.
• Writearound. All incoming I/O requests are written directly to disk, circumventing the RAID controller cache.
• Writeback. All incoming I/O requests are written directly to the controller cache and immediately acknowledged back to the host. Data blocks are flushed to disk asynchronously by the controller.

Writeback mode offers the shortest data path, with I/O acknowledgment occurring immediately after the blocks enter cache. This mode provides the lowest latency and highest throughput for mixed read/write workloads. However, without the presence of a BBU or nonvolatile flash technology, users run the risk of losing data if the system incurs a power failure when operating in this mode.

ONTAP Select requires the presence of a battery backup or flash unit; therefore, we can be confident that cached blocks are flushed to disk in the event of this type of failure. For this reason, it is a requirement that the RAID controller be configured in writeback mode.

Best Practice
The server RAID controller should be configured to operate in writeback mode. If write workload performance issues are seen, check the controller settings and make sure that writethrough or writearound is not enabled.

Local Disks Shared Between ONTAP Select and OS

The most common server configuration is one in which all locally attached spindles sit behind a single RAID controller. You should provision a minimum of two LUNs: one for the hypervisor and one for the ONTAP Select VM.

For example, consider an HP DL380 g8 with six internal drives and a single Smart Array P420i RAID controller. All internal drives are managed by this RAID controller, and no other storage is present on the system.

Figure 1 shows this style of configuration. In this example, no other storage is present on the system; therefore, the hypervisor must share storage with the ONTAP Select node.

Figure 1) Server LUN configuration with only RAID-managed spindles.

Provisioning the OS LUNs from the same RAID group as ONTAP Select allows the hypervisor OS (and any client VM that is also provisioned from that storage) to benefit from RAID protection. This configuration prevents a single-drive failure from bringing down the entire system.

Best Practice
If the physical server contains a single RAID controller managing all locally attached disks, NetApp recommends creating a separate LUN for the server OS and one or more LUNs for ONTAP Select. In the event of boot disk corruption, this best practice allows the administrator to recreate the OS LUN without affecting ONTAP Select.

Local Disks Split Between ONTAP Select and OS

The other possible configuration provided by server vendors involves configuring the system with multiple RAID or disk controllers. In this configuration, a set of disks is managed by one disk controller, which might or might not offer RAID services. A second set of disks is managed by a hardware RAID controller that is able to offer RAID 5/6 services.

With this style of configuration, the set of spindles that sits behind the RAID controller that can provide RAID 5/6 services should be used exclusively by the ONTAP Select VM. Depending on the total storage capacity under management, you should configure the disk spindles into one or more RAID groups and one or more LUNs. These LUNs would then be used to create one or more datastores, with all datastores being protected by the RAID controller.

The first set of disks is reserved for the hypervisor OS and any client VM that is not using ONTAP storage, as shown in Figure 2.

Figure 2) Server LUN configuration on a mixed RAID/non-RAID system.

Multiple LUNs

There are two cases for which single-RAID group/single-LUN configurations must change. When using NL-SAS or SATA drives, the RAID group size must not exceed 12 drives. In addition, a single LUN can become larger than the underlying hypervisor storage limits, either the individual file system extent maximum size or the total storage pool maximum size. Then the underlying physical storage must be broken up into multiple LUNs to enable successful file system creation.

Best Practice
ONTAP Select receives no performance benefit from increasing the number of LUNs within a RAID group. Multiple LUNs should only be used to follow best practices for SATA/NL-SAS configurations or to bypass hypervisor file system limitations.

VMware vSphere Virtual Machine File System Limits

The maximum extent size on a VMware vSphere 5.5/6.x server is 64TB. A VMFS file system cannot use disks or LUNs that are larger than this size. The maximum size of an ESX 5.5/6.x hosted datastore is also 64TB. This datastore can consist of one large extent or multiple smaller extents.

If a server has more than 64TB of storage attached, multiple LUNs must be provisioned for the host, each smaller than 64TB. For example, 100TB of raw capacity behind the RAID controller requires at least two LUNs, such as two 50TB LUNs. Creating multiple RAID groups to improve the RAID rebuild time for SATA/NL-SAS drives also results in multiple LUNs being provisioned.

When multiple LUNs are required, a major point of consideration is making sure that these LUNs have similar and consistent performance. This is especially important if all the LUNs are to be used in a single ONTAP aggregate. Alternatively, if a subset of one or more LUNs has a distinctly different performance profile, we strongly recommend isolating these LUNs in a separate ONTAP aggregate.

Multiple file system extents can be used to create a single datastore up to the maximum size of the datastore. To restrict the amount of capacity that requires an ONTAP Select license, make sure to specify a capacity cap during the cluster installation. This functionality allows ONTAP Select to use (and therefore require a license for) only a subset of the space in a datastore.

Alternatively, one can start by creating a single datastore on a single LUN. When additional space requiring a larger ONTAP Select capacity license is needed, that space can be added to the same datastore as an extent, up to the maximum size of the datastore. After the maximum size is reached, new datastores can be created and added to ONTAP Select. Both types of capacity extension operations are supported and can be achieved by using the ONTAP Deploy storage-add functionality. Each ONTAP Select node can be configured to support up to 400TB of storage. This capacity cannot be addressed in a single datastore; therefore, it cannot be configured as part of the initial cluster creation workflow.

Note: Provisioning to any capacity point beyond the 64TB-per-datastore limit requires a two-step process. The initial cluster create can be used to create an ONTAP Select cluster with up to 64TB of storage per node. A second step is to perform one or more capacity addition operations using additional datastores until the desired total capacity is reached. This functionality is detailed in the section "Increasing the ONTAP Select Capacity Using ONTAP Deploy."

Note: VMFS overhead is nonzero (see VMware KB 1001618), and attempting to use the entire space reported as free by a datastore has resulted in spurious errors during cluster create operations. Starting with ONTAP Deploy 2.7, a 2% buffer is left unused in each datastore. This space does not require a capacity license because it is not used by ONTAP Select. ONTAP Deploy automatically calculates the exact number of gigabytes for the buffer, as long as a capacity cap is not specified. If a capacity cap is specified, that size is enforced first. If the capacity cap size falls within the buffer size, the cluster create fails with an error message specifying the correct maximum size parameter that can be used as a capacity cap:

"InvalidPoolCapacitySize: Invalid capacity specified for storage pool "ontap-select-storage-pool", Specified value: 34334204 GB. Available (after leaving 2% overhead space): 30948"

Starting with ONTAP Select 9.3 and ONTAP Deploy 2.7, VMFS 6 is supported, both for new installations and as the target of a Storage vMotion operation of an existing ONTAP Deploy or ONTAP Select VM.

VMware does not support in-place upgrades from VMFS 5 to VMFS 6. Therefore, Storage vMotion is the only mechanism that allows any VM to transition from a VMFS 5 datastore to a VMFS 6 datastore. However, support for Storage vMotion with ONTAP Select and ONTAP Deploy was expanded to cover other scenarios besides the specific purpose of transitioning from VMFS 5 to VMFS 6.

For ONTAP Select VMs, support for Storage vMotion includes both single-node and multinode clusters and includes both storage-only and compute-and-storage migrations.

At the end of a Storage vMotion operation, an ONTAP Deploy cluster refresh operation should be triggered. The purpose of this operation is to update the ONTAP Deploy database with the ONTAP Select node's new location.

Note: Although support for Storage vMotion provides a lot of flexibility, it is important that the new host can appropriately support the ONTAP Select node. If a RAID controller and DAS storage were used on the original host, a similar setup should exist on the new host. Severe performance issues can result if the ONTAP Select VM is rehosted in an unsuitable environment.

Best Practice
Available capacity on a new host is not the only factor when deciding whether to use VMware Storage vMotion with an ONTAP Select node. The underlying storage type, host configuration, and network capabilities should be able to sustain the same workload as the original host.

When using Storage vMotion, complete the following procedure:
1. Shut down the ONTAP Select VM. If this node is part of an HA pair, perform a storage failover first.
2. Clear the CD/DVD drive option.
   Note: This step does not apply if you installed ONTAP Select without using ONTAP Deploy.
3. After the Storage vMotion operation completes, power on the ONTAP Select VM.
   Note: If this node is part of an HA pair, you can perform a manual giveback.
4. Issue a cluster refresh operation with ONTAP Deploy and make sure that it is successful.
5. Back up the ONTAP Deploy database.
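The storage failover in step 1 and the manual giveback in step 3 can be driven from the ONTAP CLI of the cluster. The following is a minimal sketch rather than part of the official procedure; the node name is hypothetical:

::> storage failover takeover -ofnode select-node-01
::> storage failover show

(The HA partner now serves data while select-node-01 is shut down and its storage is migrated. After the Storage vMotion completes and the VM is powered back on:)

::> storage failover giveback -ofnode select-node-01
::> storage failover show

Confirm that the failover state returns to "connected" before issuing the cluster refresh in step 4.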

ONTAP Select Virtual Disks

At its core, ONTAP Select presents ONTAP with a set of virtual disks provisioned from one or more storage pools. ONTAP is presented with a set of virtual disks that it treats as physical, and the remaining portion of the storage stack is abstracted by the hypervisor. Figure 3 shows this relationship in more detail, highlighting the relationship between the physical RAID controller, the hypervisor, and the ONTAP Select VM.

Notes:
• RAID group and LUN configuration occur from within the server's RAID controller software. This configuration is not required when using VSAN or external arrays.
• Storage pool configuration occurs from within the hypervisor.
• Virtual disks are created and owned by individual VMs; in this example, by ONTAP Select.

Figure 3) Virtual disk to physical disk mapping.

Virtual Disk Provisioning

To provide a more streamlined user experience, the ONTAP Select management tool, ONTAP Deploy, automatically provisions virtual disks from the associated storage pool and attaches them to the ONTAP Select VM. This operation occurs automatically during both initial setup and storage-add operations. If the ONTAP Select node is part of an HA pair, the virtual disks are automatically assigned to a local and a mirror storage pool.

Because all virtual disks on the ONTAP Select VM are striped across the underlying physical disks, there is no performance gain in building configurations with a larger number of virtual disks. In addition, shifting the responsibility for virtual disk creation and assignment from the administrator to the management tool prevents the user from inadvertently assigning a virtual disk to an incorrect storage pool.

ONTAP Select breaks up the underlying attached storage into equal-sized virtual disks, each not exceeding 16TB. If the ONTAP Select node is part of an HA pair, a minimum of two virtual disks are created on each cluster node and assigned to the local and mirror plex to be used within a mirrored aggregate.

For example, an ONTAP Select VM can be assigned a datastore or LUN that is 31TB (the space remaining after the VM is deployed and the system and root disks are provisioned). Then four ~7.75TB virtual disks are created and assigned to the appropriate ONTAP local and mirror plex.

Note: Adding capacity to an ONTAP Select VM likely results in VMDKs of different sizes. For details, see the section "Increasing the ONTAP Select Capacity Using ONTAP Deploy." Unlike FAS systems, different-sized VMDKs can exist in the same aggregate. ONTAP Select uses a RAID 0 stripe across these VMDKs, which results in the ability to fully use all the space in each VMDK regardless of its size.
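Because the VMDKs appear to ONTAP as ordinary disks, the layout produced by ONTAP Deploy can be inspected from the ONTAP CLI after deployment. The following is a minimal sketch; the aggregate name is hypothetical:

::> storage disk show -fields container-type,usable-size
::> storage aggregate show-status -aggregate aggr1

The first command lists the virtual disks and their sizes as ONTAP sees them; the second shows which plex (local or mirror, in the case of an HA pair) each virtual disk belongs to within the mirrored aggregate.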

Best Practice
Similar to creating multiple LUNs, ONTAP Select does not receive performance benefits from increasing the number of virtual disks used by the system.

Virtualized NVRAM

NetApp FAS systems are traditionally fitted with a physical NVRAM PCI card, a high-performing card containing nonvolatile flash memory. This card provides a significant boost in write performance by granting ONTAP the ability to immediately acknowledge incoming writes back to the client. It can also schedule the movement of modified data blocks back to the slower storage media, in a process known as destaging.

Commodity systems are not typically fitted with this type of equipment. Therefore, the functionality of the NVRAM card has been virtualized and placed into a partition on the ONTAP Select system boot disk. This is why placement of the system virtual disk of the instance is extremely important. It is also why the product requires the presence of a physical RAID controller with a resilient cache for local attached storage configurations.

Starting with new installations of ONTAP Select 9.5 on ESXi 6.5 U2 and newer, NVRAM is placed on its own VMDK. Prior versions of ONTAP Select, instances that are upgraded to version 9.5, or instances of ONTAP Select 9.5 on older versions of ESXi use the 9.9GB boot disk for their NVRAM partitions. Splitting the NVRAM into its own VMDK allows the ONTAP Select VM to use the vNVMe driver to communicate with its NVRAM VMDK. It also requires that the ONTAP Select VM use hardware version 13, which is compatible with ESX 6.5 and newer.

Data Path Explained: NVRAM and RAID Controller

The interaction between the virtualized NVRAM system partition and the RAID controller can best be highlighted by walking through the data path taken by a write request as it enters the system.

Incoming write requests to the ONTAP Select VM are targeted at the VM's NVRAM partition. At the virtualization layer, this partition exists within an ONTAP Select system disk, a VMDK attached to the ONTAP Select VM. At the physical layer, these requests are cached in the local RAID controller, like all block changes targeted at the underlying spindles. From here, the write is acknowledged back to the host.

At this point, physically, the block resides in the RAID controller cache, waiting to be flushed to disk. Logically, the block resides in NVRAM, waiting for destaging to the appropriate user data disks. Because changed blocks are automatically stored within the RAID controller's local cache, incoming writes to the NVRAM partition are automatically cached and periodically flushed to physical storage media. This should not be confused with the periodic flushing of NVRAM contents back to ONTAP data disks. These two events are unrelated and occur at different times and frequencies.

Figure 4 shows the I/O path that an incoming write takes. It highlights the difference between the physical layer (represented by the RAID controller cache and disks) and the virtual layer (represented by the VM's NVRAM and data virtual disks).

Note: Although blocks changed on the NVRAM VMDK are cached in the local RAID controller cache, the cache is not aware of the VM construct or its virtual disks. It stores all changed blocks on the system, of which NVRAM is only a part. This includes write requests bound for the hypervisor, if it is provisioned from the same backing spindles.

Figure 4) Incoming writes to ONTAP Select VM.

Best Practice
The RAID controller cache is used to store all incoming block changes, not just those targeted toward the NVRAM partition. Therefore, when choosing a RAID controller, select one with the largest cache available. A larger cache allows less frequent disk flushing and an increase in performance for the ONTAP Select VM, the hypervisor, and any compute VMs collocated on the server.

Note that, starting with new installations of ONTAP Select 9.5 on ESX version 6.5 U2 or later, the NVRAM partition is separated onto its own VMDK. That VMDK is attached using the vNVMe driver available in ESX versions 6.5 or later. This change is most significant for ONTAP Select installations with software RAID, which do not benefit from the RAID controller cache.

2.3 VSAN and External Array Configurations for ONTAP Select 9.4 and Later

ONTAP Select clusters are supported on VSAN, some HCI products, NetApp HCI technology, and external array types of datastores. This deployment model is generally referred to as virtual NAS, or vNAS. In these configurations, datastore resiliency is assumed to be provided by the underlying infrastructure. The minimum requirement is that the underlying configuration is supported by VMware and, therefore, should be listed on the respective VMware HCLs.

vNAS Architectures

The vNAS nomenclature is used for all setups that do not use DAS. For multinode ONTAP Select clusters, this includes architectures in which the two ONTAP Select nodes in the same HA pair share a single datastore (including vSAN datastores). The nodes can also be installed on separate datastores from the same shared external array. This allows for array-side storage efficiencies to reduce the overall footprint of the entire ONTAP Select HA pair. The architecture of ONTAP Select vNAS solutions is very similar to that of ONTAP Select on DAS with a local RAID controller. That is to say that each ONTAP Select node continues to have a copy of its HA partner's data. ONTAP storage efficiency policies are node scoped. Therefore, array-side storage efficiencies are preferable because they can be applied across the data sets from both ONTAP Select nodes.

It is also possible that each ONTAP Select node in an HA pair uses a separate external array. This is a common choice when using ONTAP Select MetroCluster SDS with external storage.

Note: MetroCluster SDS support for vNAS requires ONTAP Select 9.5 and ONTAP Deploy 2.10.1.

When using separate external arrays for each ONTAP Select node, it is very important that the two arrays provide similar performance characteristics to the ONTAP Select VM.

vNAS Architectures Versus DAS with Local Hardware RAID Controllers

The vNAS architecture is logically most similar to the architecture of a server with DAS and a RAID controller. In both cases, ONTAP Select consumes datastore space. That datastore space is carved into VMDKs, and these VMDKs form the traditional ONTAP data aggregates. ONTAP Deploy makes sure that the VMDKs are properly sized and assigned to the correct plex (in the case of HA pairs) during cluster-create and storage-add operations.

There are two major differences between vNAS and DAS with a RAID controller. The most immediate difference is that vNAS does not require a RAID controller. vNAS assumes that the underlying external array provides the data persistence and resiliency that a DAS with a RAID controller setup would provide. The second and more subtle difference has to do with NVRAM performance.

vNAS NVRAM

The ONTAP Select NVRAM is a VMDK. In other words, ONTAP Select emulates a byte-addressable space (traditional NVRAM) on top of a block-addressable device (VMDK). However, the performance of the NVRAM is absolutely critical to the overall performance of the ONTAP Select node. For DAS setups with a hardware RAID controller, the hardware RAID controller cache acts as the de facto NVRAM cache, because all writes to the NVRAM VMDK are first hosted in the RAID controller cache.

For vNAS architectures, ONTAP Deploy automatically configures ONTAP Select nodes with a boot argument called Single Instance Data Logging (SIDL). When this boot argument is present, ONTAP Select bypasses the NVRAM and writes the data payload directly to the data aggregate. The NVRAM is only used to record the address of the blocks changed by the WRITE operation. The benefit of this feature is that it avoids a double write: one write to NVRAM and a second write when the NVRAM is destaged. This feature is only enabled for vNAS because local writes to the RAID controller cache have a negligible additional latency.

The SIDL feature is not compatible with ONTAP Select storage efficiency features. See Table 3 for an overview of all storage efficiency policies available with ONTAP Select. If array-side storage efficiencies are not sufficient and you would like to use ONTAP Select storage efficiencies, you can disable the SIDL feature at the aggregate level using the following command:

storage aggregate modify -aggregate aggr-name -single-instance-data-logging off

Note that write performance is affected if the SIDL feature is turned off. It is possible to re-enable the SIDL feature after all the storage efficiency policies on all the volumes in that aggregate are disabled:

volume efficiency stop -all true -vserver * -volume * (all volumes in the affected aggregate)
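If you need to confirm whether SIDL is currently enabled on an aggregate before changing it, the aggregate attributes can be queried. The following is a minimal sketch; the field name is assumed to mirror the modify parameter shown above, so verify it against the command reference for your ONTAP Select release:

storage aggregate show -aggregate aggr-name -fields single-instance-data-logging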
Collocating ONTAP Select Nodes When Using vNAS

ONTAP Select 9.4 and ONTAP Deploy 2.8 include support for multinode ONTAP Select clusters on shared storage. ONTAP Deploy 2.8 enables the configuration of multiple ONTAP Select nodes on the same ESX host as long as these nodes are not part of the same cluster. Note that this configuration is only valid for vNAS environments (shared datastores). Multiple ONTAP Select instances per host are not supported when using DAS storage because these instances compete for the same hardware RAID controller.

ONTAP Deploy 2.8 makes sure that the initial deployment of the multinode vNAS cluster does not place multiple ONTAP Select instances from the same cluster on the same host. Figure 5 shows an example of a correct deployment of two four-node clusters that intersect on two hosts.

Figure 5) Initial deployment of multinode vNAS clusters.

After deployment, the ONTAP Select nodes can be migrated between hosts. This could result in non-optimal and unsupported configurations for which two or more ONTAP Select nodes from the same cluster share the same underlying host. NetApp recommends the manual creation of VM anti-affinity rules so that VMware automatically maintains physical separation between the nodes of the same cluster, not just the nodes from the same HA pair.

Note: Anti-affinity rules require that DRS is enabled on the ESX cluster.

See the following example of how to create an anti-affinity rule for the ONTAP Select VMs. If the ONTAP Select cluster contains more than one HA pair, all nodes in the cluster must be included in this rule.
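The rule can be created in the vSphere web client or scripted with PowerCLI. The following PowerCLI sketch is illustrative only; the ESX cluster name and the VM name pattern are placeholders, and DRS must already be enabled:

# Placeholder names: adjust the ESX cluster name and the ONTAP Select VM name pattern
$selectNodes = Get-VM -Name "sdot-cluster01-*"
New-DrsRule -Cluster (Get-Cluster "ESX-Cluster01") -Name "sdot-cluster01-separate" -KeepTogether $false -VM $selectNodes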

Two or more ONTAP Select nodes from the same ONTAP Select cluster could potentially be found on the same ESX host for one of the following reasons:
• DRS is not present due to VMware vSphere license limitations, or DRS is not enabled.
• The DRS anti-affinity rule is bypassed because a VMware HA operation or an administrator-initiated VM migration takes precedence.

Note that ONTAP Deploy does not proactively monitor the ONTAP Select VM locations. However, a cluster refresh operation reflects this unsupported configuration in the ONTAP Deploy logs.

2.4 Software RAID Services for Local Attached Storage

Independent of hardware RAID configurations, ONTAP Select also provides a software RAID option. Software RAID is a RAID abstraction layer implemented within the ONTAP software stack. It provides the same functionality as the RAID layer within a traditional ONTAP platform such as FAS. The RAID layer performs drive parity calculations and provides protection against individual drive failures within an ONTAP Select node. A hardware RAID controller might not be available or might be undesirable in certain environments, such as when ONTAP Select is deployed on small form-factor commodity hardware. Software RAID expands the available deployment options to include such environments.

To enable software RAID in your environment, here are some points to remember:
• This feature is available starting with ONTAP Select 9.5 and Deploy 2.10, with the ESX hypervisor.
• It is available with a Premium license.
• It only supports SSD drives for ONTAP root and data disks.
• It requires a separate system disk for the ONTAP Select VM.

− Choose a separate disk, either an SSD or an NVMe drive, to create a datastore for the system disks (NVRAM, boot/CF card, coredump, and mediator in a multinode setup).

Note: The terms service disk and system disk are used interchangeably. Service disks are the VMDKs that are used within the ONTAP Select VM to service various items such as clustering, booting, and so on. Service disks are physically located on a single physical disk (collectively called the service/system physical disk) as seen from the host. That physical disk must contain a DAS datastore. ONTAP Deploy creates these service disks for the ONTAP Select VM during cluster deployment.

Note: With the current release, it is not possible to further separate the ONTAP Select system disks across multiple datastores or across multiple physical drives.

Note: Hardware RAID is not deprecated.

Software RAID Configuration for Local Attached Storage

When using software RAID, the absence of a hardware RAID controller is ideal. However, if a system does have an existing RAID controller, it must adhere to the following requirements:
• The hardware RAID controller must be disabled so that disks can be presented directly to the system (a JBOD). This change can usually be made in the RAID controller BIOS.
• Alternatively, the hardware RAID controller should be in SAS HBA mode. For example, some BIOS configurations allow an "AHCI" mode in addition to RAID, which can be chosen to enable JBOD mode. This enables a passthrough, so that the physical drives can be seen as-is on the host.

Depending on the maximum number of drives supported by the controller, an additional controller may be required. With the SAS HBA mode, ensure that the I/O controller (SAS HBA) supports a minimum speed of 6Gbps. However, NetApp recommends a 12Gbps speed.

No other hardware RAID controller modes or configurations are supported. For example, some controllers allow a RAID 0 support that can artificially enable disks to pass through, but the implications can be undesirable. The supported size of physical disks (SSD only) is between 200GB and 16TB.

Note: Administrators need to keep track of which drives are in use by the ONTAP Select VM and prevent inadvertent use of those drives on the host.
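To help with that bookkeeping, the physical drives visible to a host can be inventoried from the ESXi shell before any of them are assigned to ONTAP Select. This is a generic verification sketch rather than an ONTAP Deploy step:

esxcli storage core device list

The output includes each device's naa identifier, vendor, model, and size, which makes it easier to record which SSDs are reserved for the ONTAP Select RDMs.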

ONTAP Select Virtual and Physical Disks

For configurations with hardware RAID controllers, physical disk redundancy is provided by the RAID controller. ONTAP Select is presented with one or more VMDKs, from which the ONTAP admin can configure data aggregates. These VMDKs are striped in a RAID 0 format, because using ONTAP software RAID is redundant, inefficient, and ineffective due to the resiliency provided at the hardware level. Furthermore, the VMDKs used for system disks are in the same datastore as the VMDKs used to store user data.

When using software RAID, ONTAP Deploy presents ONTAP Select with a set of virtual disks (VMDKs) and physical disks (Raw Device Mappings [RDMs]).

Figure 6 shows this relationship in more detail, highlighting the difference between the virtualized disks used for the ONTAP Select VM internals and the physical disks used to store user data.

Figure 6) ONTAP Select software RAID: use of virtualized disks and RDMs.

The system disks (VMDKs) reside in the same datastore and on the same physical disk. The virtual NVRAM disk requires fast and durable media. Therefore, only NVMe and SSD-type datastores are supported.

Note: With the current release, it is not possible to further separate the ONTAP Select system disks across multiple datastores or multiple physical drives.

The data disks are presented to the ONTAP Select VM as raw disks through RDMs. RDMs contain metadata for managing and redirecting disk access to the physical device, which allows the host to pass SCSI commands from the VM directly to the physical disk drives. Each raw disk exposed to ONTAP Select is divided into three parts: a small root partition (stripe) and two equal-sized data partitions, seen as two data disks within the ONTAP Select VM. Partitions use the Root Data Data (RDD) scheme, as shown in Figure 7 for a single-node cluster and in Figure 8 for a node in an HA pair. P denotes a parity drive. DP denotes a dual parity drive. S denotes a spare drive.

Figure 7) RDD disk partitioning for single-node clusters.

Figure 8) RDD disk partitioning for multinode clusters (HA pairs).

ONTAP software RAID supports the following RAID types: RAID 4, RAID-DP, and RAID-TEC. These are the same RAID constructs used by FAS and AFF platforms. However, ONTAP Select HA uses a shared-nothing architecture that replicates each node's configuration to the other node. That means each node has its own root partition and a copy of its peer's root partition. Each raw disk has a single root partition, which means that ONTAP Select has a minimum number of disks required to support software RAID. The minimum number of disks varies depending on whether the ONTAP Select node is part of an HA pair, as well as on the RAID type. See Table 2 for the specific number of drives required by each configuration.

Physical and Virtual Disk Provisioning

To provide a more streamlined user experience, ONTAP Deploy automatically provisions the system (virtual) disks from the specified datastore (physical system disk) and attaches them to the ONTAP Select VM. This operation occurs automatically during the initial setup so that the ONTAP Select VM can boot. The RDMs are partitioned, and the root aggregate is automatically built. If the ONTAP Select node is part of an HA pair, the data partitions are automatically assigned to a local storage pool and a mirror storage pool. This assignment occurs automatically during both cluster-create and storage-add operations.

Because the data disks on the ONTAP Select VM are associated with the underlying physical disks, there are performance implications for creating configurations with a larger number of physical disks.

Best Practices
NetApp recommends eight to 12 drives as the optimal RAID group size. The maximum number of drives per RAID group is 24.

Note: The root aggregate's RAID group type depends on the number of disks available. ONTAP Deploy picks the appropriate RAID group type. If it has sufficient disks allocated to the node, it uses RAID-DP; otherwise, it creates a RAID 4 root aggregate.

When adding capacity to an ONTAP Select VM using software RAID, the administrator must consider the physical drive size and the number of drives required. For details, see the section "Increasing capacity for ONTAP Select with Software RAID." Similar to FAS and AFF systems, only drives with equal or larger capacities can be added to an existing RAID group. Larger-capacity drives are right-sized. If you are creating new RAID groups, the new RAID group size should match the existing RAID group size to make sure that the overall aggregate performance does not deteriorate.

Matching an ONTAP Select Disk to the Corresponding ESX Disk

ONTAP Select disks are usually labeled NET-x.y. You can use the following ONTAP command to obtain the disk UID:

::> disk show NET-1.1
Disk: NET-1.1
Model: Micron_5100_MTFD
Serial Number: 1723175C0B5E
UID: 500A0751:175C0B5E:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000
BPS: 512
Physical Size: 894.3GB
Position: shared
Checksum Compatibility: advanced_zoned
Aggregate: -
Plex: -

This UID can be matched with the device UID displayed in the Storage Devices tab for the ESX host. In the ESXi shell, you can enter the following command to blink the LED for a given physical disk (identified by its naa unique ID):

esxcli storage core device set -l locator -L <duration in seconds> -d <naa.unique-id>

Virtualized NVRAM

NetApp FAS systems are traditionally fitted with a physical NVRAM PCI card. This is a high-performing card containing nonvolatile flash memory that provides a significant boost in write performance. It does this by granting ONTAP the ability to immediately acknowledge incoming writes back to the client. It can also schedule the movement of modified data blocks back to slower storage media in a process known as destaging.

Commodity systems are not typically fitted with this type of equipment. Therefore, the functionality of the NVRAM card has been virtualized and placed into a partition on the ONTAP Select system boot disk. It is for this reason that placement of the system virtual disk of the instance is extremely important. For environments using ESX 6.5, ONTAP Select 9.5 uses a virtual NVMe driver for accessing the system disks, regardless of whether the underlying disk is SSD or NVMe. However, NetApp only supports NVMe for the physical system disk.

Best Practice
NetApp recommends using ESX 6.5 U2 or later and an NVMe disk for the datastore hosting the system disks. This configuration provides the best performance for the NVRAM partition.
Note that on installations using ESX 6.5 U2 and higher, the ONTAP Select VM utilizes the vNVMe driver regardless of whether the system disk resides on an SSD or an NVMe disk. This sets the VM hardware level to 13, which is compatible with ESX 6.5 and newer.

2.5 High Availability Architecture

Although customers are starting to move application workloads from enterprise-class storage appliances to software-based solutions running on commodity hardware, the expectations and needs around resiliency and fault tolerance have not changed. An HA solution providing a zero recovery point objective (RPO) protects the customer from data loss due to a failure from any component in the infrastructure stack.

A large portion of the SDS market is built on the notion of shared-nothing storage, with software replication providing data resiliency by storing multiple copies of user data across different storage silos. ONTAP Select builds on this premise by using the synchronous replication features (RAID SyncMirror®) provided by ONTAP to store an extra copy of user data within the cluster. This occurs within the context of an HA pair. Every HA pair stores two copies of user data: one on storage provided by the local node, and one on storage provided by the HA partner. Within an ONTAP Select cluster, HA and synchronous replication are tied together, and the functionality of the two cannot be decoupled or used independently. As a result, the synchronous replication functionality is only available in the multinode offering.

Note: In an ONTAP Select cluster, synchronous replication functionality is a function of the HA implementation, not a replacement for the asynchronous SnapMirror or SnapVault replication engines. Synchronous replication cannot be used independently from HA.

There are two ONTAP Select HA deployment models: the multinode clusters (four, six, or eight nodes) and the two-node clusters. The salient feature of a two-node ONTAP Select cluster is the use of an external mediator service to resolve split-brain scenarios. The ONTAP Deploy VM serves as the default
mediator for all the two-node HA pairs that it configures.

There are minimum version requirements for these HA configurations:
• Four-node HA is supported with all ONTAP Select and ONTAP Deploy releases.
• Two-node HA requires the minimum versions of ONTAP Select 9.2 and ONTAP Deploy 2.4.

• Six-node and eight-node clusters require minimum versions of ONTAP Select 9.3 and ONTAP Deploy 2.7.

The two architectures are represented in Figure 9 and Figure 10.

Figure 9) Two-node ONTAP Select cluster with remote mediator and using local attached storage.

Note: The two-node ONTAP Select cluster is composed of one HA pair and a mediator. Within the HA pair, data aggregates on each cluster node are synchronously mirrored, and, in the event of a failover, there is no loss of data.

Figure 10) Four-node ONTAP Select cluster using local attached storage.

Note: The four-node ONTAP Select cluster is composed of two HA pairs. Six-node and eight-node clusters are composed of three and four HA pairs, respectively. Within each HA pair, data aggregates on each cluster node are synchronously mirrored, and, in the event of a failover, there is no loss of data.

Note: Only one ONTAP Select instance can be present on a physical server when using DAS storage. ONTAP Select requires unshared access to the local RAID controller of the system and is

designed to manage the locally attached disks, which would be impossible without physical connectivity to the storage.

Two-Node HA Versus Multinode HA

Unlike FAS arrays, ONTAP Select nodes in an HA pair communicate exclusively over the IP network. That means that the IP network is a single point of failure (SPOF), and protecting against network partitions and split-brain scenarios becomes an important aspect of the design. The multinode cluster can sustain single-node failures because the cluster quorum can be established by the three or more surviving nodes. The two-node cluster relies on the mediator service hosted by the ONTAP Deploy VM to achieve the same result. The minimum version of the ONTAP Deploy VM required to support a two-node cluster with the mediator service is 2.4.

The heartbeat network traffic between the ONTAP Select nodes and the ONTAP Deploy mediator service is minimal and resilient, so the ONTAP Deploy VM can be hosted in a different data center than the ONTAP Select two-node cluster.

Note: The ONTAP Deploy VM becomes an integral part of a two-node cluster when serving as the mediator for that cluster. If the mediator service is not available, the two-node cluster continues serving data, but the storage failover capabilities of the ONTAP Select cluster are disabled. Therefore, the ONTAP Deploy mediator service must maintain constant communication with each ONTAP Select node in the HA pair. A minimum bandwidth of 5Mbps and a maximum round-trip time (RTT) latency of 125ms are required to allow proper functioning of the cluster quorum.

If the ONTAP Deploy VM acting as a mediator is temporarily or permanently unavailable, a secondary ONTAP Deploy VM (minimum version 2.4) can be used to restore the two-node cluster quorum. This results in a configuration in which the new ONTAP Deploy VM is unable to manage the ONTAP Select nodes, but it successfully participates in the cluster quorum algorithm. The communication between the ONTAP Select nodes and the ONTAP Deploy VM is done by using the iSCSI protocol over IPv4. The ONTAP Select node management IP address is the initiator, and the ONTAP Deploy VM IP address is the target. Therefore, it is not possible to support IPv6 addresses for the node management IP addresses when creating a two-node cluster. The ONTAP Deploy hosted mailbox disks are automatically created and masked to the proper ONTAP Select node management IP addresses at the time of two-node cluster creation. The entire configuration is performed automatically during setup, and no further administrative action is required. The ONTAP Deploy instance creating the cluster is the default mediator for that cluster.

An administrative action is required if the original mediator location must be changed. It is possible to recover a cluster quorum even if the original ONTAP Deploy VM is lost. However, NetApp recommends that you back up the ONTAP Deploy database after every two-node cluster is instantiated. For a complete list of steps required to configure a new mediator location, see the ONTAP Select 9 Installation and Cluster Deployment Guide.

Two-Node HA Versus Two-Node Stretched HA (MetroCluster SDS)

Starting with ONTAP Select 9.3 and ONTAP Deploy 2.7, it is possible to stretch a two-node, active/active HA cluster across larger distances and potentially place each node in a different data center.
The only distinction between a two-node cluster and a two-node stretched cluster (also referred to as MetroCluster SDS) is the network connectivity distance between nodes.

The two-node cluster is defined as a cluster for which both nodes are located in the same data center within a distance of 300m. In general, both nodes have uplinks to the same network switch or set of interswitch link (ISL) network switches.

Two-node MetroCluster SDS is defined as a cluster for which the nodes are physically separated (different rooms, different buildings, or different data centers) by more than 300m. In addition, each node's uplink connections are connected to separate network switches. The MetroCluster SDS does not require

dedicated hardware. However, the environment should adhere to the latency requirements (a maximum of 5ms for RTT and 5ms for jitter, for a total of 10ms) and physical distance requirement (a maximum of 10km).

MetroCluster SDS is a premium feature and requires a Premium license. The Premium license supports the creation of both small and medium VMs, as well as HDD and SSD media.

Note: Starting with ONTAP Select 9.5 and ONTAP Deploy 2.10, MetroCluster SDS is supported with both local attached storage (DAS) and shared storage (vNAS). Note that vNAS configurations usually have a higher innate latency because of the network between the ONTAP Select VM and shared storage. MetroCluster SDS configurations must provide a maximum of 10ms of latency between the nodes, including the shared storage latency. In other words, measuring only the latency between the Select VMs is not adequate, because shared storage latency is not negligible for these configurations.

Two-Node Stretched HA (MetroCluster SDS) Best Practices

Before you create a MetroCluster SDS, use the ONTAP Deploy connectivity checker functionality to make sure that the network latency between the two data centers falls within the acceptable range.

1. After installing ONTAP Deploy, define two ESX hosts (one in each data center) that are used to measure the latency between the two sites.
2. Select Administration (top of screen) > Network > Connectivity Checker (left panel). The default settings are appropriate.

Note: The connectivity checker does not mark the test as failed if the latency exceeds 10ms. Therefore, you must check the value of the latency instead of the status of the connectivity checker test run.

The following example shows connectivity checker output in which the latency between nodes is under 1ms.

Note: The connectivity checker does not check the latency between the ONTAP Select VM and the VM storage. When using external storage for MetroCluster SDS, the VM-to-storage latency is not negligible, and the total latency must be under 10ms RTT.

The connectivity checker has the additional benefit of making sure that the internal network is properly configured to support a large MTU size. Starting with ONTAP Select 9.5 and ONTAP Deploy 2.10.1, the default MTU size is determined by querying the upstream vSwitch. However, the default MTU value can be manually overwritten to account for network overlay protocol overhead. The internal network MTU can be configured to between 7,500 and 9,000. This is a requirement for all HA traffic, whether the ONTAP Select cluster consists of two, four, six, or eight nodes.
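Because the default internal MTU is derived from the upstream vSwitch, it can be worth confirming what the host currently reports before the cluster is created. For a standard vSwitch, this can be checked from the ESXi shell; this is a generic sketch, and distributed vSwitch settings are instead visible through vCenter or "esxcli network vswitch dvs vmware list":

esxcli network vswitch standard list

The MTU field in the output shows the value that ONTAP Deploy would discover for each vSwitch.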

There is an extra caveat when using virtual guest tagging (VGT) and two-node clusters. In two-node cluster configurations, the node management IP address is used to establish early connectivity to the mediator before ONTAP is fully available. Therefore, only external switch tagging (EST) and virtual switch tagging (VST) are supported on the port group mapped to the node management LIF (port e0a). Furthermore, if both the management and the data traffic are using the same port group, only EST and VST are supported for the entire two-node cluster.

Synchronous Replication

The ONTAP HA model is built on the concept of HA partners. As explained earlier in this document, ONTAP Select extends this architecture into the nonshared commodity server world by using the RAID SyncMirror (RSM) functionality that is present in ONTAP to replicate data blocks between cluster nodes, providing two copies of user data spread across an HA pair.

Starting with ONTAP Deploy 2.7 and ONTAP Select 9.3, a two-node cluster with a mediator can be used to span two data centers. For more information, see the section "Two-Node HA Versus Two-Node Stretched HA (MetroCluster SDS)."

Mirrored Aggregates

An ONTAP Select cluster is composed of two to eight nodes. Each HA pair contains two copies of user data, synchronously mirrored across nodes over an IP network. This mirroring is transparent to the user, and it is a property of the data aggregate, automatically configured during the data aggregate creation process.

All aggregates in an ONTAP Select cluster must be mirrored for data availability in the event of a node failover and to avoid an SPOF in case of hardware failure. Aggregates in an ONTAP Select cluster are built from virtual disks provided from each node in the HA pair and use the following disks:
• A local set of disks (contributed by the current ONTAP Select node)
• A mirrored set of disks (contributed by the HA partner of the current node)

Note: The local and mirror disks used to build a mirrored aggregate must be the same size.

These aggregates are referred to as plex 0 and plex 1 (to indicate the local and remote mirror pairs, respectively). The actual plex numbers can be different in your installation. This approach is fundamentally different from the way standard ONTAP clusters work. It applies to all root and data disks within the ONTAP Select cluster. Because the aggregate contains both local and mirror copies of data, an aggregate that contains N virtual disks offers N/2 disks' worth of unique storage; the second copy of data resides on its own unique disks.

Figure 11 shows an HA pair within a four-node ONTAP Select cluster. Within this cluster is a single aggregate (test) that uses storage from both HA partners. This data aggregate is composed of two sets of virtual disks: a local set, contributed by the owning ONTAP Select cluster node (Plex 0), and a remote set, contributed by the failover partner (Plex 1). Plex 0 is the bucket that holds all local disks. Plex 1 is the bucket that holds mirror disks, or disks responsible for storing a second replicated copy of user data. The node that owns the aggregate contributes disks to Plex 0, and the HA partner of that node contributes disks to Plex 1.

In Figure 11, there is a mirrored aggregate with two disks. The contents of this aggregate are mirrored across our two cluster nodes, with local disk NET-1.1 placed into the Plex 0 bucket and remote disk NET-2.1 placed into the Plex 1 bucket.
In this example, aggregate test is owned by the cluster node on the left and uses local disk NET-1.1 and HA partner mirror disk NET-2.1.
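Once a mirrored aggregate exists, its plex layout can be confirmed from the ONTAP CLI. The following is a verification sketch using the example aggregate name from above:

::> storage aggregate show-status -aggregate test

The output lists plex0 and plex1 together with the virtual disks contributed by the local node and its HA partner.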

Figure 11) ONTAP Select mirrored aggregate.

Note: When an ONTAP Select cluster is deployed, all virtual disks present on the system are automatically assigned to the correct plex, requiring no additional step from the user regarding disk assignment. This prevents the accidental assignment of disks to an incorrect plex and provides optimal mirror disk configuration.

Best Practice
Although the existence of the mirrored aggregate is needed to provide an up-to-date (RPO 0) copy of the primary aggregate, care should be taken that the primary aggregate does not run low on free space. A low-space condition in the primary aggregate can cause ONTAP to delete the common Snapshot™ copy used as the baseline for storage giveback. This works as designed to accommodate client writes. However, the lack of a common Snapshot copy on failback requires the ONTAP Select node to do a full baseline from the mirrored aggregate. This operation can take a significant amount of time in a shared-nothing environment. A good baseline for monitoring aggregate space utilization is up to 85%.

Write Path

Synchronous mirroring of data blocks between cluster nodes and the requirement for no data loss with a system failure have a significant impact on the path an incoming write takes as it propagates through an ONTAP Select cluster. This process consists of two stages:
• Acknowledgment
• Destaging

Writes to a target volume occur over a data LIF and are committed to the virtualized NVRAM partition, present on a system disk of the ONTAP Select node, before being acknowledged back to the client. On an HA configuration, an additional step occurs, because these NVRAM writes are immediately mirrored to the HA partner of the target volume's owner before being acknowledged. This process makes sure of the file-system consistency on the HA partner node if there is a hardware failure on the original node.

After the write has been committed to NVRAM, ONTAP periodically moves the contents of this partition to the appropriate virtual disk, a process known as destaging. This process only happens once, on the cluster node owning the target volume, and does not happen on the HA partner.

Figure 12 shows the write path of an incoming write request to an ONTAP Select node.

Figure 12) ONTAP Select write path workflow.

Incoming write acknowledgment includes the following steps:
1. Writes enter the system through a logical interface owned by ONTAP Select node A.
2. Writes are committed to the NVRAM of node A and mirrored to the HA partner, node B.
3. After the I/O request is present on both HA nodes, the request is then acknowledged back to the client.

ONTAP Select destaging from NVRAM to the data aggregate (ONTAP CP) includes the following steps:
1. Writes are destaged from virtual NVRAM to the virtual data aggregate.
2. The mirror engine synchronously replicates blocks to both plexes.

Disk Heartbeating

Although the ONTAP Select HA architecture leverages many of the code paths used by the traditional FAS arrays, some exceptions exist. One of these exceptions is in the implementation of disk-based heartbeating, a non-network-based method of communication used by cluster nodes to prevent network isolation from causing split-brain behavior. A split-brain scenario is the result of cluster partitioning, typically caused by network failures, whereby each side believes the other is down and attempts to take over cluster resources.

Enterprise-class HA implementations must gracefully handle this type of scenario. ONTAP does this through a customized disk-based method of heartbeating. This is the job of the HA mailbox, a location on physical storage that is used by cluster nodes to pass heartbeat messages. This helps the cluster determine connectivity and therefore define quorum in the event of a failover.

On FAS arrays, which use a shared storage HA architecture, ONTAP resolves split-brain issues in the following ways:
• SCSI persistent reservations
• Persistent HA metadata
• HA state sent over the HA interconnect

However, within the shared-nothing architecture of an ONTAP Select cluster, a node is only able to see its own local storage and not that of the HA partner. Therefore, when network partitioning isolates each side of an HA pair, the preceding methods of determining cluster quorum and failover behavior are unavailable.

Although the existing method of split-brain detection and avoidance cannot be used, a method of mediation is still required, one that fits within the constraints of a shared-nothing environment. ONTAP Select extends the existing mailbox infrastructure further, allowing it to act as a method of mediation in the event of network partitioning. Because shared storage is unavailable, mediation is accomplished through access to the mailbox disks over NAS. These disks are spread throughout the cluster, including the mediator in a two-node cluster, using the iSCSI protocol. Therefore, intelligent failover decisions can be made by a cluster node based on access to these disks. If a node can access the mailbox disks of other nodes outside of its HA partner, it is likely up and healthy.

Note: The mailbox architecture and the disk-based heartbeating method of resolving cluster quorum and split-brain issues are the reasons the multinode variant of ONTAP Select requires either four separate nodes or a mediator for a two-node cluster.

HA Mailbox Posting

The HA mailbox architecture uses a message post model. At repeated intervals, cluster nodes post messages to all other mailbox disks across the cluster, including the mediator, stating that the node is up and running. Within a healthy cluster, at any point in time, a single mailbox disk on a cluster node has messages posted from all other cluster nodes.

Attached to each Select cluster node is a virtual disk that is used specifically for shared mailbox access. This disk is referred to as the mediator mailbox disk, because its main function is to act as a method of cluster mediation in the event of node failures or network partitioning. This mailbox disk contains partitions for each cluster node and is mounted over an iSCSI network by other Select cluster nodes. Periodically, these nodes post health statuses to the appropriate partition of the mailbox disk. Using network-accessible mailbox disks spread throughout the cluster allows you to infer node health through a reachability matrix. For example, cluster nodes A and B can post to the mailbox of cluster node D, but not to the mailbox of node C. In addition, cluster node D cannot post to the mailbox of node C, so it is likely that node C is either down or network isolated and should be taken over.

HA Heartbeating

Like NetApp FAS platforms, ONTAP Select periodically sends HA heartbeat messages over the HA interconnect. Within the ONTAP Select cluster, this is performed over a TCP/IP network connection that exists between HA partners. Additionally, disk-based heartbeat messages are passed to all HA mailbox disks, including mediator mailbox disks. These messages are passed every few seconds and read back periodically. The frequency with which they are sent and received allows the ONTAP Select cluster to detect HA failure events within approximately 15 seconds, the same window available on FAS platforms. When heartbeat messages are no longer being read, a failover event is triggered.

Figure 13 shows the process of sending and receiving heartbeat messages over the HA interconnect and the
mediator disks from the perspective of a single ONTAP Select cluster node, node C.

Note: Network heartbeats are sent over the HA interconnect to the HA partner, node D, while disk heartbeats use mailbox disks across all cluster nodes: A, B, C, and D.

Figure 13) HA heartbeating in a four-node cluster: steady state.

3 Deployment and Management

This section describes the deployment and management aspects of the ONTAP Select product.

3.1 ONTAP Select Deploy

The ONTAP Select cluster is deployed using specialized tooling that provides the administrator with the ability to build the ONTAP cluster and manage various aspects of the virtualized server. This utility, called ONTAP Select Deploy, comes packaged inside an installation VM along with the ONTAP Select OS image. Bundling the deployment utility and the ONTAP Select bits inside a single VM allows NetApp to include all the necessary support libraries and modules. Bundling also helps reduce the complexity of the interoperability matrix between various versions of ONTAP Select and the hypervisor.

The ONTAP Deploy application can be accessed through the following methods:
• CLI
• REST API
• GUI

The ONTAP Deploy CLI is shell based and immediately accessible upon connecting to the installation VM using SSH. Navigation of this shell is like the ONTAP shell, with commands bundled into groups that provide related functionality (for example, network create, network show, and network delete).

For automated deployments and integration into existing orchestration frameworks, ONTAP Deploy can also be invoked programmatically through a REST API. All functionality available through the shell-based CLI is available through the API. The entire list of API calls is documented using the OpenAPI Specification (originally known as the Swagger Specification). ONTAP Deploy 2.8 uses v3 of the API. This version is not backward compatible with the prior versions of the API used with older ONTAP Deploy releases.
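As a simple illustration of the programmatic interface, a cluster inventory request can be issued with any HTTP client. The resource path below is only indicative of the v3 layout and should be confirmed against the published OpenAPI specification for your Deploy release; the host name and credentials are placeholders:

curl -k -u admin https://deploy.example.com/api/v3/clusters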

Deploy Upgrades

The Deploy utility can be upgraded separately from the Select cluster. Similarly, the Select cluster can be upgraded separately from the Deploy utility. See the upgrade section for the ONTAP Deploy and ONTAP Select interoperability matrix. Starting with ONTAP Deploy 2.8, an N-2 upgrade path is enforced. In other words, direct upgrades of ONTAP Deploy are supported from the two prior releases of ONTAP Deploy. This has no bearing on the versions of ONTAP Select that are running in the client environment, and no ONTAP upgrade is required.

Server Preparation

Although ONTAP Deploy provides the user with functionality that enables configuration of portions of the underlying physical server, there are several requirements that must be met before attempting to manage the server. This can be thought of as a manual preparation phase, because many of the steps are difficult to orchestrate through automation. This preparation phase involves the following tasks:

1. For local storage, configure the RAID controller and attached local storage, or, if using ONTAP software RAID, verify that the correct drive type and number of drives are available.
2. For VSAN or external-array-hosted datastores, verify that the configurations are supported by the VMware HCL. For external arrays, follow the specific vendor best practices.
3. Verify physical network connectivity to the server. Network resiliency, speed, and throughput are critical to the performance of the ONTAP Select VM.
4. Install the hypervisor.
5. Configure the virtual networking constructs (vSwitches and port groups).

Note: After the ONTAP Select cluster has been deployed, the appropriate ONTAP management tooling should be used to configure storage virtual machines (SVMs), LIFs, volumes, and so on. ONTAP Deploy does not provide this functionality.

The ONTAP Deploy utility and ONTAP Select software are bundled together into a single VM, which is then made available as an .OVA file for VMware vSphere. The bits are available from the NetApp Support site. This installation VM runs the Debian Linux OS and has the following properties:
• Two vCPUs
• 4GB RAM
• 40GB virtual disk

ONTAP Select Deploy Placement in the Environment

Careful consideration should be given to the placement of the ONTAP Deploy installation VM, because the Deploy VM is used to verify hypervisor minimum requirements, deploy ONTAP Select clusters, and apply the license. Optionally, it can be used to troubleshoot network connectivity between Select nodes during the setup process.

ONTAP Deploy must also be able to communicate with the ONTAP Select node and cluster management IP addresses as follows:
• Ping
• SSH (port 22)
• SSL (port 443)

ONTAP Deploy uses the VMware VIX API to communicate with vCenter and/or the ESX host as follows:
• HTTPS/SOAP on TCP port 443. This is the port for secure HTTP over TLS/SSL.
• Secondly, a connection to the ESX host is opened on a socket on TCP port 902. Data going over this connection is encrypted with SSL.
• In addition, ONTAP Deploy issues a ping command to verify that there is an ESX host responding at the IP address specified by the user.
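Because these ports must be reachable before a cluster can be created, a quick reachability test from the Deploy VM shell can save troubleshooting time later. This is a generic sketch; the host name is a placeholder, and it assumes the nc utility is present on the Debian-based installation VM:

ping -c 3 esx-host-01.example.com
nc -zv esx-host-01.example.com 443
nc -zv esx-host-01.example.com 902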

VM Placement

The ONTAP Select installation VM can be placed on any virtualized server in the customer environment. For four-node clusters, the ONTAP Deploy VM can be collocated on the same host as an ONTAP Select instance or on a separate virtualized server. For two-node clusters, for which the ONTAP Deploy VM is also the cluster mediator, the collocation model is not supported because it would become a cluster SPOF.

The ONTAP Deploy VM can be installed in the same data center as the ONTAP Select cluster, or it can be centrally deployed in a core data center. The only requirement is that network connectivity exists between the ONTAP Deploy VM and the targeted ESX host and the future ONTAP Select cluster management IP address.

Note: Creating an ONTAP Select cluster over the WAN can take a considerably longer amount of time, because the copying of the ONTAP Select binary files depends on the latency and bandwidth available between data centers. The maximum supported latency for creating remote ONTAP Select clusters is 500ms RTT. Deploying a two-node ONTAP Select cluster is supported on a WAN network in which the maximum latency and minimum bandwidth can support the more stringent mediator service traffic (minimum throughput 5Mbps; maximum latency 125ms RTT).

Figure 14 shows these deployment options.

Figure 14) ONTAP Select installation VM placement.

Note: Collocating the ONTAP Deploy VM and one of the ONTAP Select instances is not supported for two-node clusters.

Multiple ONTAP Select Deploy Instances

Depending on the complexity of the environment, it might be beneficial to have more than one ONTAP Deploy instance managing the ONTAP Select environment. For this scenario, make sure that each ONTAP Select cluster is managed by a single ONTAP Deploy instance. ONTAP Deploy stores cluster metadata within an internal database, so managing an ONTAP Select cluster using multiple ONTAP Deploy instances is not recommended.

When deciding whether to use multiple installation VMs, keep in mind that while ONTAP Deploy attempts to create unique MAC addresses by using a numeric hash based on the IP address of the installation VM, the uniqueness of the MAC addresses can only be guaranteed within that Deploy instance. Because there is no communication across Deploy instances, it is theoretically possible for two separate instances to assign multiple ONTAP Select network adapters the same MAC address.

Best Practice
To eliminate the possibility of having multiple Deploy instances assign duplicate MAC addresses, one Deploy instance per layer-2 network should be used to manage an existing Select cluster or node or to create a new Select cluster or node.

Note: Each ONTAP Deploy instance can generate up to 64,000 unique MAC addresses. Each ONTAP Select node consumes four MAC addresses for its internal communication network schema. Each Deploy instance is also limited to managing 100 Select clusters and 400 hosts (a host is equivalent to one hypervisor server).

For two-node clusters, the ONTAP Deploy VM that creates the cluster is also the default mediator, and it requires no further configuration. However, it is critical that the mediator service is continuously available for proper functioning of the storage failover capabilities. For configurations in which the network latency, bandwidth, or other infrastructure issues require the repositioning of the mediator service closer to the ONTAP Select two-node cluster, another ONTAP Deploy VM can be used to host the mediator mailboxes temporarily or permanently.

Best Practice
The ONTAP Select two-node cluster should be carefully monitored for EMS messages indicating that storage failover is disabled. These messages indicate a loss of connectivity to the mediator service and should be rectified immediately.

3.2 Licensing ONTAP Select

When deploying ONTAP Select in a production environment, you must license the storage capacity used by the cluster nodes. Each ONTAP Select license is based on a flexible, consumption-based licensing model designed to allow customers to pay only for the storage they need. With ONTAP Select's original capacity tiers model, you must purchase a separate license for each node. Beginning with ONTAP Select 9.5 using Deploy 2.10, you now have the option of using capacity pools licensing instead. In both cases, you must use ONTAP Select Deploy to apply the licenses to the ONTAP Select nodes that are created by each instance of the Deploy utility.

Feature Evolution

The features and functionality of ONTAP Select licensing have continued to evolve. As mentioned above, ONTAP Select 9.5 using Deploy 2.10 now includes support for capacity pools licensing. Several changes were introduced with ONTAP Deploy 2.8 and ONTAP Select 9.4. One such change is that the ONTAP Select root aggregate no longer counts against the capacity license. Also, the cluster create workflow in the web user interface now requires you to have a capacity license file at the time of deployment. With the capacity tiers model, it is no longer possible to create a production cluster using a serial number and then apply a capacity license in the future. There is a CLI override available for the rare case in which a production serial number is available but the corresponding license file is not yet available. In these situations, a valid license file must be applied within 30 days.
The biggest change introduced in ONTAP Deploy 2.8 and ONTAP Select 9.4 involves the license enforcement mechanism. With earlier versions of ONTAP Select, the virtual machines in a license

violation situation reboot at midnight every day. The updated enforcement mechanism relies on blocking aggregate operations (aggregate create and aggregate online). Although takeover operations are allowed, the giveback is blocked until the node comes into compliance with its capacity license.

When using a datastore to store the user data (in other words, when using a hardware RAID controller as opposed to ONTAP software RAID), the user has the option to consume only a portion of a datastore. This functionality can be useful when the server capacity exceeds the desired Select license.

Allocation Characteristics and Overhead

The capacity license relates to the total size of the virtual data disks (VMDKs) attached to the ONTAP Select VM when using hardware RAID controllers. It relates to the size of the data aggregates when using ONTAP software RAID. In the case of multinode clusters, the per-node capacity license must cover both the active data on that node and the RAID SyncMirror copy of the active data on its HA peer.

Note: The actual amount of data stored on ONTAP Select is not relevant in the capacity license conversation; it can vary depending on data type and storage efficiency ratios. The amount of raw storage (defined as physical spindles inside the server) is also irrelevant, because the datastore in which Select is installed can consume only a portion of the total space.

For VSAN and external storage arrays, there is an additional aspect to keep in mind. The total space consumed by the ONTAP Select VM varies depending on the FTT/FTM and storage efficiency settings enabled at the VSAN and external storage array level. In these configurations, the ONTAP Select capacity license is not an indication of how much physical space the ONTAP Select VM consumes.

Administration

You can manage the capacity licenses through the Deploy web user interface by clicking the Administration tab and then clicking Licenses. You can also display all the nodes in a cluster and their respective licensing status using the system license show-status CLI command.

Common Characteristics for the Storage Capacity Licenses

The capacity tier and capacity pool licenses have several common characteristics, including the following:
• Storage capacity for a license is purchased in 1TB increments.
• Both the standard and premium performance tiers are supported.
• The nodes in an HA pair must have the same storage and license capacity.
• You must upload the license files to the Deploy administration utility, which then applies the licenses based on the type.

However, there are also several differences between the licensing models, as described below.

3.3 Capacity Tiers Licensing

Capacity Tiers is the original licensing model provided with ONTAP Select. It continues to be supported with the latest ONTAP Select releases.

Storage Capacity Assigned to Each ONTAP Select Node

With Capacity Tiers, you must purchase a license for each ONTAP Select node; there is no concept of a cluster-level license. The assigned capacity is based on the purchase agreement. Any unused capacity cannot be moved to a different ONTAP Select node. The number of licenses would only exceed the number of nodes if a customer has purchased additional licenses for nodes that they are preparing to deploy.
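To confirm that each node remains in compliance with its assigned capacity, the command mentioned in the Administration section above can be run from the ONTAP Select cluster shell, for example:

::> system license show-status

The output lists every node in the cluster together with its licensing status.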

Summary of Licensing Characteristics

The Capacity Tiers licensing model has the following characteristics:
• License serial number. The license serial number is a nine-digit number generated by NetApp for each node. Each license is locked to a specific ONTAP Select node with a matching serial number.
• Node serial number. The node serial number is nine digits long and is the same as the license serial number.
• License duration. The license is perpetual, and renewal is not required.

3.4 Capacity Pools Licensing

Capacity Pools is a new licensing model provided beginning with ONTAP Select 9.5 using Deploy 2.10. It provides an alternative to the capacity tiers model. The Capacity Pools licensing model provides several benefits, including the following:
• Storage capacity shared across one or more nodes
• More efficient allocation of storage capacity
• Significantly reduced administrative overhead and lower cost
• Improved usage metrics

Leasing Storage Capacity from a Shared Pool

Unlike the Capacity Tiers model, with Capacity Pools you purchase a license for each shared pool. The nodes then lease capacity as needed from the single pool they are associated with. The License Manager (LM), a new software component introduced with ONTAP Select Deploy 2.10, manages the Capacity Pool licenses and leases. LM is bundled with the Deploy utility, as shown in Figure 15.

Figure 15) License Manager.

Every time a data aggregate is created, expanded, or changed, the ONTAP Select node must locate an active capacity lease or request a new lease from the LM. If a valid lease cannot be acquired, the data aggregate operation fails. The lease duration for each pool can be configured to between one hour and seven days, with a default of 24 hours. Leases are automatically renewed by the node. If a lease is not renewed for some reason, it expires and the capacity is returned to the pool.

Locking a Capacity Pool License to a License Manager Instance

After purchasing a capacity pool license from NetApp, you must associate the license serial number with a specific instance of LM. This is done through the License Lock ID (LLID), a unique 128-bit number identifying each LM instance (and therefore each Deploy instance). You can locate the LLID for your Deploy instance on the web user interface on the Administration page under System Settings. The LLID is also available in the Add Licenses section of the Getting Started page. You provide both the license serial number and the LLID when generating the license file. You can then upload the license file to the Deploy utility so that the capacity pool can be used with new ONTAP Select cluster deployments.

Summary of the Licensing Characteristics

The Capacity Pools licensing model has the following characteristics:
• License serial number. The license serial number is a nine-digit number generated by NetApp for each capacity pool. Each license is locked to a specific License Manager instance.
• Node serial number. The node serial number is twenty digits long, is generated by the License Manager, and is assigned to the node.
• License duration. The license is valid for a limited term (such as one year) and must be renewed.

3.5 Modifying ONTAP Select Cluster Properties

ONTAP Select cluster properties such as the cluster name, cluster management IP address, and node management IP addresses can be modified using ONTAP management tools such as System Manager. ONTAP Deploy is not notified when such modifications occur. Therefore, subsequent ONTAP Deploy management operations targeted at the ONTAP Select cluster fail. In a virtualized environment, the ONTAP Select VM name can also be changed, which would similarly result in ONTAP Deploy no longer being able to communicate with an ONTAP Select cluster.

Starting with ONTAP Deploy 2.6, the cluster refresh functionality allows ONTAP Deploy to recognize the following changes made to the ONTAP Select cluster:
• Networking configuration (IPs, netmasks, gateway, DNS, and NTP)
• ONTAP Select cluster or node names
• ONTAP Select version
• ONTAP Select VM name and state

The cluster refresh functionality works for any ONTAP Select node that is online and available (but has not been modified) at the time of upgrading to ONTAP Deploy 2.6. In other words, the older version of ONTAP Deploy must have knowledge of and access to the ONTAP Select node so that the ONTAP Deploy upgrade process can append some uniquely identifying information to that VM's metadata. After this unique identifier is stored in the VM's metadata and the ONTAP Deploy database, future changes to the ONTAP Select cluster or node properties can be synchronized with the ONTAP Deploy database by the cluster refresh operation. This process provides continued communication between ONTAP Deploy and the modified ONTAP Select VM.

3.6 ONTAP Management

Because ONTAP Select runs ONTAP, it supports all common NetApp management tools. As a result, after the product is deployed and ONTAP is configured, it can be administered using the same set of applications that a system administrator would use to manage FAS storage arrays. There is no special procedure required to build out an ONTAP configuration, such as creating SVMs, volumes, LIFs, and so on.

There are, however, several ONTAP Select management tasks that require the use of ONTAP Deploy. ONTAP Deploy is the only method to create ONTAP Select clusters. Therefore, issues encountered during cluster creation can only be investigated using Deploy. ONTAP Deploy communicates with the ONTAP Select clusters it created using the information configured at the time of deployment. This information includes the ESX host name or IP address and the ONTAP Select cluster management IP address. For two-node ONTAP Select clusters, the node management IP addresses are used for the iSCSI mediator traffic. Changing the ONTAP Select node management IP addresses for two-node clusters after deployment results in an immediate loss of storage failover capabilities for that ONTAP Select cluster. A new mediator location on the same or a different ONTAP Deploy VM must be configured immediately. Changing the ESX host name or IP address is not supported, except through VMware HA or vMotion, in which case ONTAP Deploy attempts to rehost the ONTAP Select VM as long as the new ESX host is managed by the same VMware vCenter Server.

After cluster creation, ONTAP Deploy can be used to complement the other NetApp management tools for troubleshooting purposes. The ONTAP Deploy CLI provides options for troubleshooting that are not available in the GUI. Most commands include a show option, which allows you to gather information about the environment. The ONTAP Deploy logs can contain valuable information to help troubleshoot cluster setup issues. The ONTAP Deploy GUI and CLIs allow you to generate a NetApp AutoSupport® bundle containing the ONTAP Deploy logs, and the GUI also allows you to download the bundle for immediate inspection. Finally, the Deploy GUI can be used to invoke node-specific AutoSupport bundles.

ONTAP Deploy plays an important role in the quorum service for two-node clusters as well as in troubleshooting of the environment. Therefore, the ONTAP Deploy database should be backed up regularly and after every change in the environment. Currently, it is not possible to rediscover an ONTAP Select cluster that was created by a different instance of ONTAP Deploy. Also, having an unmanaged cluster results in the loss of some important troubleshooting functionality. The ONTAP Deploy configuration database can be backed up by running the deploy backup create command from the ONTAP Deploy CLI, as sketched below.
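A minimal sketch of that backup step, run from the ONTAP Deploy CLI, follows. Only the deploy backup create command itself is taken from this document; any additional arguments and the location of the resulting backup file depend on the Deploy release and should be confirmed against the CLI help.

    # Back up the ONTAP Deploy configuration database (run from the Deploy CLI)
    deploy backup create

    # Copy the resulting backup file off the Deploy VM afterward; the exact
    # file location and transfer method depend on your environment.

Running this after every cluster create, storage add, or network change keeps the Deploy database consistent with the clusters it manages.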
4 Network Design Considerations

This section covers the various network configurations and best practices that should be considered when building an ONTAP Select cluster. As with the design and implementation of the underlying storage, care should be taken when making network design decisions, because these choices have a significant impact on both the performance and the resiliency of the ONTAP Select cluster.

ONTAP Select 9.5 introduces support for the VMXNET3 driver, and it is the default driver for all new installations. Prior versions of ONTAP Select use the E1000 driver. Upgrading to ONTAP Select 9.5 does not automatically change the network driver. A manual procedure that includes an ONTAP Select node reboot is required. Contact NetApp Technical Support for further instructions.

Note: The ESX E1000 driver reports the link speed as 1Gbps, but this does not affect the actual throughput that the host can provide; 10Gb NICs are fully supported at line speed. However, there are significant performance improvements when switching from the E1000 driver to the VMXNET3 driver. Therefore, NetApp recommends making the switch after upgrading to ONTAP Select 9.5.

In traditional FAS systems, ifgroups are used to provide aggregate throughput and fault tolerance using a single, logical, virtualized network interface configured on top of multiple physical network interfaces. ONTAP Select leverages the underlying hypervisor's virtualization of multiple physical network interfaces to achieve the same goals of throughput aggregation and resiliency. Therefore, the NICs that ONTAP Select manages are logical constructs, and configuring additional ifgroups does not achieve the goals of throughput aggregation or recovering from hardware failures. In fact, ifgroups are not supported with ONTAP Select.

4.1 Network Configuration: Multinode

The multinode ONTAP Select network configuration consists of two networks: an internal network, responsible for providing cluster and internal replication services, and an external network, responsible for providing data access and management services. End-to-end isolation of the traffic that flows within these two networks is extremely important in allowing you to build an environment that is suitable for cluster resiliency.

These networks are represented in Figure 16, which shows a four-node ONTAP Select cluster running on a VMware vSphere platform. Six- and eight-node clusters have a similar network layout. Each ONTAP Select instance resides on a separate physical server.

Note: Internal and external traffic is isolated using separate network port groups, which are assigned to each virtual network interface and allow the cluster nodes to share the same physical switch infrastructure.

Figure 16) Overview of an ONTAP Select multinode cluster network configuration.

Each ONTAP Select VM contains seven virtual network adapters (six adapters in versions before ONTAP Select 9.3) presented to ONTAP as a set of seven network ports, e0a through e0g. Although ONTAP treats these adapters as physical NICs, they are in fact virtual and map to a set of physical interfaces through a virtualized network layer. As a result, each hosting server does not require six physical network ports.

Note: Adding virtual network adapters to the ONTAP Select VM is not supported.

These ports are preconfigured to provide the following services:

• e0a, e0b, and e0g. Management LIFs and data LIFs
• e0c, e0d. Cluster network LIFs
• e0e. RSM
• e0f. HA interconnect

Ports e0a, e0b, and e0g reside on the external network. Although ports e0c through e0f perform several different functions, collectively they compose the internal Select network. When making network design decisions, these ports should be placed on a single layer-2 network. There is no need to separate these virtual adapters across different networks.

The relationship between these ports and the underlying physical adapters is illustrated in Figure 17, which depicts one ONTAP Select cluster node on the ESX hypervisor.

Figure 17) Network configuration of a single node that is part of a multinode ONTAP Select cluster.

Segregating internal and external traffic across different physical NICs prevents latencies from being introduced into the system due to insufficient access to network resources. Additionally, aggregation through NIC teaming makes sure that the failure of a single network adapter does not prevent the ONTAP Select cluster node from accessing the respective network.

Note that both the external network and internal network port groups contain all four NIC adapters in a symmetrical manner. The active ports in the external network port group are the standby ports in the internal network port group. Conversely, the active ports in the internal network port group are the standby ports in the external network port group.

LIF Assignment

With the introduction of IPspaces, ONTAP port roles have been deprecated. Like FAS arrays, ONTAP Select clusters contain both a default IPspace and a cluster IPspace. By placing network ports e0a, e0b, and e0g into the default IPspace and ports e0c and e0d into the cluster IPspace, those ports have essentially been walled off from hosting LIFs that do not belong. The remaining ports within the ONTAP Select cluster are consumed through the automatic assignment of interfaces providing internal services. They are not exposed through the ONTAP shell, as is the case with the RSM and HA interconnect interfaces.

Note: Not all LIFs are visible through the ONTAP command shell. The HA interconnect and RSM interfaces are hidden from ONTAP and are used internally to provide their respective services.

The network ports and LIFs are explained in detail in the following sections.

Management and Data LIFs (e0a, e0b, and e0g)

ONTAP ports e0a, e0b, and e0g are delegated as candidate ports for LIFs that carry the following types of traffic:

• SAN/NAS protocol traffic (CIFS, NFS, and iSCSI)
• Cluster, node, and SVM management traffic
• Intercluster traffic (SnapMirror and SnapVault)

Note: Cluster and node management LIFs are automatically created during ONTAP Select cluster setup. The remaining LIFs can be created post deployment.

Cluster Network LIFs (e0c, e0d)

ONTAP ports e0c and e0d are delegated as home ports for cluster interfaces. Within each ONTAP Select cluster node, two cluster interfaces are automatically generated during ONTAP setup using link-local IP addresses (169.254.x.x).

Note: These interfaces cannot be assigned static IP addresses, and additional cluster interfaces should not be created.

Cluster network traffic must flow through a low-latency, nonrouted layer-2 network. Due to cluster throughput and latency requirements, the ONTAP Select cluster is expected to be physically located within proximity (for example, multipack, single data center). Building four-node, six-node, or eight-node stretch cluster configurations by separating HA nodes across a WAN or across significant geographical distances is not supported. A stretched two-node configuration with a mediator is supported. For details, see the section "MetroCluster Software Defined Storage (Two-Node Stretched Cluster High Availability)."

Note: To make sure of maximum throughput for cluster network traffic, this network port is configured to use jumbo frames (7,500 to 9,000 MTU). This is not configurable, so for proper cluster operation, verify that jumbo frames are enabled on all upstream virtual and physical switches providing internal network services to the ONTAP Select cluster nodes. (A brief verification example follows the RAID SyncMirror discussion below.)

RAID SyncMirror Traffic (e0e)

Synchronous replication of blocks across HA partner nodes occurs using an internal network interface residing on network port e0e. This functionality occurs automatically, using network interfaces configured by ONTAP during cluster setup, and requires no configuration by the administrator. Because this port is reserved by ONTAP for internal replication traffic, neither the port nor the hosted LIF is visible in the ONTAP CLI or management tooling. This interface is configured to use an automatically generated link-local IP address, and the reassignment of an alternate IP address is not supported.

Note: This network port requires the use of jumbo frames (7,500 to 9,000 MTU).

Throughput and latency requirements that are critical to the proper behavior of the replication network dictate that ONTAP Select nodes be located within close physical proximity, so building a hot disaster recovery solution is not supported.
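The visible pieces of the internal network (the cluster LIFs and their ports) can be spot-checked from the ONTAP CLI after deployment. The following is a minimal sketch; the node name (select-node1) is an example, and cluster ping-cluster requires advanced privilege.

    # Show the automatically created cluster LIFs (link-local 169.254.x.x addresses)
    network interface show -role cluster

    # Confirm the MTU configured on the node's network ports
    network port show -node select-node1 -fields mtu

    # Exercise the cluster network paths, including large (jumbo) payloads
    set -privilege advanced
    cluster ping-cluster -node local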

HA Interconnect (e0f)

NetApp FAS arrays use specialized hardware to pass information between HA pairs in an ONTAP cluster. Software-defined environments, however, do not tend to have this type of equipment available (such as InfiniBand or iWARP devices), so an alternate solution is needed. Although several possibilities were considered, ONTAP requirements placed on the interconnect transport required that this functionality be emulated in software. As a result, within an ONTAP Select cluster, the functionality of the HA interconnect (traditionally provided by hardware) has been designed into the OS, using Ethernet as a transport mechanism.

Each ONTAP Select node is configured with an HA interconnect port, e0f. This port hosts the HA interconnect network interface, which is responsible for two primary functions:

• Mirroring the contents of NVRAM between HA pairs
• Sending and receiving HA status information and network heartbeat messages between HA pairs

HA interconnect traffic flows through this network port using a single network interface by layering remote direct memory access (RDMA) frames within Ethernet packets. Like RSM, neither the physical port nor the hosted network interface is visible to users from either the ONTAP CLI or management tooling. As a result, the IP address of this interface cannot be modified, and the state of the port cannot be changed.

Note: This network port requires the use of jumbo frames (7,500 to 9,000 MTU).

4.2 Network Configuration: Single Node

Single-node ONTAP Select configurations do not require the ONTAP internal network, because there is no cluster, HA, or mirror traffic. Unlike the multinode version of the ONTAP Select product, each ONTAP Select VM contains three virtual network adapters (two for releases before ONTAP Select 9.3), presented to ONTAP as network ports e0a, e0b, and e0c. These ports are used to provide management, data, and intercluster LIFs.

The relationship between these ports and the underlying physical adapters can be seen in Figure 18, which depicts one ONTAP Select cluster node on the ESX hypervisor.

Figure 18) Network configuration of a single-node ONTAP Select cluster.

Note: Even though two adapters are sufficient for a single-node cluster, NIC teaming is still required.

LIF Assignment

As explained in the multinode LIF assignment section of this document, IPspaces are used by ONTAP Select to keep cluster network traffic separate from data and management traffic. The single-node variant of this platform does not contain a cluster network. Therefore, no ports are present in the cluster IPspace.

Note: Cluster and node management LIFs are automatically created during ONTAP Select cluster setup. The remaining LIFs can be created post deployment.

Management and Data LIFs (e0a, e0b, and e0c)

ONTAP ports e0a, e0b, and e0c are delegated as candidate ports for LIFs that carry the following types of traffic:

• SAN/NAS protocol traffic (CIFS, NFS, and iSCSI)
• Cluster, node, and SVM management traffic
• Intercluster traffic (SnapMirror and SnapVault)

4.3 Networking: Internal and External

ONTAP Select Internal Network

The internal ONTAP Select network, which is only present in the multinode variant of the product, is responsible for providing the ONTAP Select cluster with cluster communication, HA interconnect, and synchronous replication services. This network includes the following ports and interfaces:

• e0c, e0d. Hosting cluster network LIFs
• e0e. Hosting the RSM LIF
• e0f. Hosting the HA interconnect LIF

The throughput and latency of this network are critical in determining the performance and resiliency of the ONTAP Select cluster. Network isolation is required for cluster security and to make sure that system interfaces are kept separate from other network traffic. Therefore, this network must be used exclusively by the ONTAP Select cluster.

Note: Using the Select internal network for traffic other than Select cluster traffic, such as application or management traffic, is not supported. There can be no other VMs or hosts on the ONTAP internal VLAN.

Network packets traversing the internal network must be on a dedicated VLAN-tagged layer-2 network. This can be accomplished by completing one of the following tasks (a configuration sketch appears at the end of this subsection):

• Assigning a VLAN-tagged port group to the internal virtual NICs (e0c through e0f) (VST mode)
• Using the native VLAN provided by the upstream switch where the native VLAN is not used for any other traffic (assigning a port group with no VLAN ID, that is, EST mode)

In all cases, VLAN tagging for internal network traffic is done outside of the ONTAP Select VM.

Note: Only ESX standard and distributed vSwitches are supported. Other virtual switches or direct connectivity between ESX hosts are not supported. The internal network must be fully opened; NAT or firewalls are not supported.

Within an ONTAP Select cluster, internal traffic and external traffic are separated using virtual layer-2 network objects known as port groups. Proper vSwitch assignment of these port groups is extremely important, especially for the internal network, which is responsible for providing cluster, HA interconnect, and mirror replication services. Insufficient network bandwidth to these network ports can cause performance degradation and can even affect the stability of the cluster node. Therefore, four-node, six-node, and eight-node clusters require that the internal ONTAP Select network use 10Gb connectivity; 1Gb NICs are not supported.
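The following is a minimal sketch of preparing a standard vSwitch for the internal network from the ESXi shell, assuming VST mode. The vSwitch name (vSwitch1), port group name (ONTAP-Internal), and VLAN ID (100) are examples only; the same settings can also be made in the vSphere client.

    # Raise the vSwitch MTU so the internal network can use jumbo frames
    esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000

    # Create a VLAN-tagged port group for the internal network (VST mode)
    esxcli network vswitch standard portgroup add --portgroup-name ONTAP-Internal --vswitch-name vSwitch1
    esxcli network vswitch standard portgroup set --portgroup-name ONTAP-Internal --vlan-id 100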

Tradeoffs can be made on the external network, however, because limiting the flow of incoming data to an ONTAP Select cluster does not affect its ability to operate reliably.

A two-node cluster can use either four 1Gb ports for internal traffic or a single 10Gb port instead of the two 10Gb ports required by the four-node cluster. In an environment in which conditions prevent the server from being fitted with four 10Gb NIC cards, two 10Gb NIC cards can be used for the internal network and two 1Gb NICs can be used for the external ONTAP network.

Internal Network Validation and Troubleshooting

Starting with Deploy 2.2, the internal network in a multinode cluster can be validated by using the network connectivity checker functionality. This function can be invoked from the Deploy CLI by running the network connectivity-check start command. Run the network connectivity-check show --run-id X (where X is a number) command to view the output of the test. An example invocation is shown at the end of this subsection.

This tool is only useful for troubleshooting the internal network in a multinode Select cluster. The tool should not be used to troubleshoot single-node clusters (including vNAS configurations), ONTAP Deploy-to-ONTAP Select connectivity, or client-side connectivity issues.

Starting with Deploy 2.5, the cluster create wizard (part of the ONTAP Deploy GUI) includes the internal network checker as an optional step available during the creation of multinode clusters. Given the important role that the internal network plays in multinode clusters, making this step part of the cluster create workflow improves the success rate of cluster create operations.

Starting with ONTAP Deploy 2.10, the MTU size used by the internal network can be set between 7,500 and 9,000. The network connectivity checker can also be used to test MTU sizes between 7,500 and 9,000. The default MTU value is set to the value of the virtual network switch. That default would have to be replaced with a smaller value if a network overlay such as VXLAN is present in the environment.
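A minimal sketch of the checker workflow from the ONTAP Deploy CLI is shown below. Only the two command names and the --run-id option are taken from this document; any additional arguments (for example, the hosts or MTU size to test) depend on the Deploy release and should be confirmed against the CLI help.

    # Start a validation run of the internal network for a multinode cluster
    network connectivity-check start

    # List the results of a completed run (run ID 1 is just an example)
    network connectivity-check show --run-id 1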

ONTAP Select External Network

The ONTAP Select external network is responsible for all outbound communications by the cluster and, therefore, is present on both the single-node and multinode configurations. Although this network does not have the tightly defined throughput requirements of the internal network, the administrator should be careful not to create network bottlenecks between the client and the ONTAP Select VM, because performance issues could be mischaracterized as ONTAP Select problems.

Note: In a manner similar to internal traffic, external traffic can be tagged at the vSwitch layer (VST) and at the external switch layer (EST). In addition, the external traffic can be tagged by the ONTAP Select VM itself in a process known as VGT. See the section "Data and Management Separation" for further details.

Table 6 highlights the major differences between the ONTAP Select internal and external networks.

Table 6) Internal versus external network quick reference.

Description              Internal Network                         External Network
Network services         Cluster; HA/IC; RAID SyncMirror (RSM)    Data; management; intercluster (SnapMirror and SnapVault)
Network isolation        Required                                 Optional
Frame size (MTU)         7,500 to 9,000 (2)                       1,500 (default); 9,000 (supported)
NIC aggregation (1)      Required before ONTAP Select 9.3         Required before ONTAP Select 9.3
IP address assignment    Autogenerated                            User-defined
DHCP support             No                                       No

(1) ONTAP Select 9.3 supports a single 10Gb link for two-node clusters; however, it is a NetApp best practice to make sure of hardware redundancy through NIC aggregation.
(2) Requires ONTAP Select 9.5 and ONTAP Deploy 2.10.

NIC Aggregation

To make sure that the internal and external networks have both the necessary bandwidth and resiliency characteristics required to provide high performance and fault tolerance, physical network adapter aggregation is used. Starting with ONTAP Select 9.3, two-node cluster configurations with a single 10Gb link are supported. However, the NetApp recommended best practice is to make use of NIC aggregation on both the internal and the external networks of the ONTAP Select cluster. NIC aggregation provides the ONTAP Select cluster with two major benefits:

• Isolation from a single physical port failure
• Increased throughput

NIC aggregation allows the ONTAP Select instance to balance network traffic across two physical ports. Link Aggregation Control Protocol (LACP)-enabled port channels are only supported with distributed vSwitches.

Best Practice

If a NIC has multiple application-specific integrated circuits (ASICs), select one network port from each ASIC when building network aggregation constructs through NIC teaming for the internal and external networks.

MAC Address Generation

The MAC addresses assigned to all ONTAP Select network ports are generated automatically by the included deployment utility. The utility uses a platform-specific organizationally unique identifier (OUI) specific to NetApp to make sure that there is no conflict with FAS systems. A copy of each address is then stored in an internal database within the ONTAP Select installation VM (ONTAP Deploy) to prevent accidental reassignment during future node deployments. At no point should the administrator modify the assigned MAC address of a network port.

4.4 Supported Network Configurations

Server vendors understand that customers have different needs and that choice is critical. As a result, when purchasing a physical server, there are numerous options available when making network connectivity decisions. Most commodity systems ship with various NIC choices that provide single-port and multiport options with varying permutations of 1Gb and 10Gb ports. Care should be taken when selecting server NICs, because the choices provided by server vendors can have a significant impact on the overall performance of the ONTAP Select cluster.

Link aggregation is a core construct used to provide sufficient bandwidth to both the external and internal ONTAP Select networks. LACP is a vendor-neutral standard that provides an open protocol for network endpoints to bundle groupings of physical network ports into a single logical channel.

When choosing an ONTAP Select network configuration, the use of LACP, which requires specialized hardware support, might be a primary consideration. Although LACP requires support from both the software virtual switch and the upstream physical switch, it can provide a significant throughput benefit to incoming client protocol traffic.

Table 7 lists the various supported configurations. The use of LACP is called out because environmental and hypervisor-specific dependencies prevent all combinations from being supported.

Table 7) Network configuration support matrix.

Server environment: Distributed vSwitch; two or more 10Gb physical ports.
  Select configuration:
  • A single LACP channel with all the ports.
  • The internal network uses a port group with VST or physical switch tagging (EST) to add VLAN tagging.
  • The external network uses a separate port group; EST, VST, and VGT are supported.
  • All the ports must be owned by the same vSwitch, and the vSwitch must support a large MTU size (1).
  Best practices:
  • The load-balancing policy at the port group level is "route based on IP hash"; the load-balancing policy on the link aggregation group (LAG) is "source and destination IP address and TCP/UDP port and VLAN."
  • The physical uplink switch supports LACP and supports a large MTU size on all ports (1).
  • LACP mode is set to Active on both the ESX side and the physical switches. The LACP timer is set to Fast (one second) on the physical switch, the port channel interfaces, the physical ports, and the VMNICs.
  • VMware recommends that STP be set to Portfast on the switch ports connected to the ESXi hosts.

Server environment: Standard vSwitch; 4 x 10Gb ports or 4 x 1Gb ports; the physical uplink switch does not support or is not configured for LACP and supports a large MTU size on all ports (1).
  Select configuration:
  • Do not use any LACP channels.
  • All the ports must be owned by the same vSwitch, and the vSwitch must support a large MTU size (1).
  • ONTAP Deploy supports configurations with a single port group for the internal network and a single port group for the external network. For best performance, each network should use two separate port groups. The procedure to switch from a single port group per network to two port groups per network is detailed in Section 4.5 under the standard vSwitch configuration.
  Best practices:
  • The load-balancing policy at the port group level is "route based on originating virtual port ID."
  • VMware recommends that STP be set to Portfast on the switch ports connected to the ESXi hosts.

Server environment: Standard vSwitch; 2 x 10Gb ports; the physical uplink switch does not support or is not configured for LACP and supports a large MTU size on all ports (1).
  Select configuration:
  • Do not use any LACP channels.
  • The internal network must use a port group with 1 x 10Gb active and 1 x 10Gb standby (1).
  • The external network uses a separate port group; its active port is the standby port of the internal port group, and its standby port is the active port of the internal port group.
  • All the ports must be owned by the same vSwitch, and the vSwitch must support a large MTU size (1).
  Best practices:
  • The load-balancing policy at the port group level is "route based on originating virtual port ID."
  • VMware recommends that STP be set to Portfast on the switch ports connected to the ESXi hosts.

(1) Starting with ONTAP Select 9.5 and ONTAP Deploy 2.10, the internal network supports an MTU size between 7,500 and 9,000.

Because the performance of the ONTAP Select VM is tied directly to the characteristics of the underlying hardware, increasing the throughput to the VM by selecting 10Gb-capable NICs results in a higher-performing cluster and a better overall user experience. When cost or form factor prevents the user from designing a system with four 10Gb NICs, two 10Gb NICs can be used. There are a number of other configurations that are also supported. For two-node clusters, 4 x 1Gb ports or 1 x 10Gb port are supported. For single-node clusters, 2 x 1Gb ports are supported. See Table 8 for minimum requirements and recommendations.

Table 8) Network minimum and recommended configurations.

Cluster type                            Minimum requirements    Recommendations
Single-node clusters                    2 x 1Gb                 2 x 10Gb
Two-node clusters / MetroCluster SDS    4 x 1Gb or 1 x 10Gb     2 x 10Gb
4/6/8-node clusters                     2 x 10Gb                4 x 10Gb

4.5 VMware vSphere: vSwitch Configuration

ONTAP Select supports the use of both standard and distributed vSwitch configurations. This section describes the vSwitch configuration and load-balancing policies that should be used in both two-NIC and four-NIC configurations.

Standard vSwitch

All vSwitch configurations require a minimum of two physical network adapters bundled into a single logical channel, referred to as NIC teaming. ONTAP Select 9.3 supports a single 10Gb link for two-node clusters; however, it is a NetApp best practice to make sure of hardware redundancy through NIC aggregation.

On a vSphere server, NIC teams are the aggregation construct used to bundle multiple physical network adapters into a single logical channel, allowing the network load to be shared across all member ports. It is important to remember that NIC teams can be created without support from the physical switch. Load-balancing and failover policies can be applied directly to a NIC team, which is unaware of the upstream switch configuration. In this case, policies are only applied to outbound traffic. To balance inbound traffic, the physical switch must be properly configured. Port channels are the primary way this is accomplished.

Note: Static port channels are not supported with ONTAP Select. LACP-enabled channels are only supported with distributed vSwitches.

Best Practice

To optimize load balancing across both the internal and the external ONTAP Select networks, use the "route based on originating virtual port" load-balancing policy.

For single-node clusters, ONTAP Deploy configures the ONTAP Select VM to use one port group for the external network and either the same port group or, optionally, a different port group for the cluster and node management traffic. For single-node clusters, the desired number of physical ports can be added to the external port group as active adapters.

For multinode clusters, ONTAP Deploy configures each ONTAP Select VM to use a port group for the internal network and a separate port group for the external network. The cluster and node management traffic can either use the same port group as the external traffic or, optionally, a separate port group. The cluster and node management traffic cannot share the same port group with internal traffic.

When using four physical ports per node (as part of a multinode cluster), you can use all four ports in two port groups (one internal port group and one external port group) and split these four physical ports between active and standby adapters. This default configuration is explained in the section "Standard vSwitch and Four Physical Ports per Node (Default Configuration)." Performance testing has shown that the default configuration might result in an uneven traffic distribution across the four physical links. For high-performance environments, NetApp recommends a manual assignment of the ONTAP Select virtual NICs to four port groups instead of two. This procedure is detailed in the section "Standard vSwitch and Four Physical Ports per Node (Advanced Configuration)." Both of these sections are applicable to multinode ONTAP Select clusters.

Standard vSwitch and Four Physical Ports per Node (Default Configuration)

The following examples show the configuration of a standard vSwitch and the two port groups responsible for handling internal and external communication services for the ONTAP Select cluster. The internal network is only present for multinode ONTAP Select clusters.

Note: The external network can use the internal network VMNICs during a network outage, because the internal network NICs should be part of the external port group and configured in standby mode. The opposite is the case for the internal network. Alternating the active and standby NICs between the two port groups is critical for the proper failover of the ONTAP Select VMs during a network outage.

Best Practice

If a NIC has multiple ASICs, select one network port from each ASIC for the active adapters and select the other network port from each ASIC for the standby adapters.
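The alternating active/standby layout described above can also be applied from the ESXi shell. The following is a minimal sketch for the default two-port-group layout; the vmnic numbers and port group names are examples only, and the same settings can equally be made in the vSphere client.

    # External port group: two uplinks active, the two internal uplinks as standby
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name ONTAP-External --active-uplinks vmnic0,vmnic1 \
        --standby-uplinks vmnic2,vmnic3 --load-balancing portid

    # Internal port group: the opposite active/standby assignment
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name ONTAP-Internal --active-uplinks vmnic2,vmnic3 \
        --standby-uplinks vmnic0,vmnic1 --load-balancing portid

The portid value corresponds to the "route based on originating virtual port" policy recommended in the best practice above.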

Figure 19) Default external port group configuration using a standard vSwitch and four physical ports.

Figure 20) Default internal port group configuration using a standard vSwitch and four physical ports.

Standard vSwitch and Four Physical Ports per Node (Advanced Configuration)

For environments with high performance requirements (especially write-intensive workloads), a further distribution of traffic between physical ports is recommended. The current version of ONTAP Deploy does not support this configuration, so the following manual procedure is required.

Two additional port groups must be created. In other words, there are two port groups for external traffic and two port groups for internal traffic. The internal network is only present for multinode clusters. Each port group has a single active port and three standby ports. The order of the ports in the standby list is important. After the four port groups are configured (two internal and two external), you must make a manual assignment of port groups to the ONTAP Select vmnics. NetApp recommends shutting down the ONTAP Select VM prior to making these changes. Given that this configuration is applicable to ONTAP Select nodes that are part of an HA pair, a storage failover or giveback allows customers to perform this procedure in a nondisruptive fashion.

Figure 21 shows the correct vmnic-to-port-group assignment for an ONTAP Select node that is part of a multinode cluster. For readability, the assignments are as follows:

• Network adapter 1: ONTAP-Management
• Network adapter 2: ONTAP-External
• Network adapter 3: ONTAP-Internal
• Network adapter 4: ONTAP-Internal2
• Network adapter 5: ONTAP-Internal
• Network adapter 6: ONTAP-Internal2
• Network adapter 7: ONTAP-External2

Figure 21) ONTAP Select vmnic-to-port-group assignments (advanced configuration for multinode clusters using four ports and a standard vSwitch).

External

Figure 22 and Figure 23 show the configurations of the external network port groups (ONTAP-External and ONTAP-External2). Note that in this setup the active adapters are from different network cards: vmnic 4 and vmnic 5 are dual ports on the same physical NIC, while vmnic 6 and vmnic 7 are similarly dual ports on a separate NIC. The order of the standby adapters provides a hierarchical failover, with the ports from the internal network being last.

The order of internal ports in the standby list is similarly swapped between the two external port groups. For readability, the assignments are as follows:

ONTAP-External: Active adapters: vmnic5. Standby adapters: vmnic7, vmnic4, vmnic6.
ONTAP-External2: Active adapters: vmnic7. Standby adapters: vmnic5, vmnic6, vmnic4.

Figure 22) Part 1: ONTAP Select external port group configurations (advanced configuration for multinode clusters using four ports and a standard vSwitch).

Figure 23) Part 2: ONTAP Select external port group configurations (advanced configuration for multinode clusters using four ports and a standard vSwitch).

Internal

Figure 24 and Figure 25 show the configurations of the internal network port groups (ONTAP-Internal and ONTAP-Internal2). Note that in this setup the active adapters are from different network cards: vmnic 4 and vmnic 5 are dual ports on the same physical ASIC, while vmnic 6 and vmnic 7 are similarly dual ports on a separate ASIC. The order of the standby adapters provides a hierarchical failover, with the ports from the external network being last.

The order of external ports in the standby list is similarly swapped between the two internal port groups. For readability, the assignments are as follows:

ONTAP-Internal: Active adapters: vmnic4. Standby adapters: vmnic6, vmnic7, vmnic5.
ONTAP-Internal2: Active adapters: vmnic6. Standby adapters: vmnic4, vmnic5, vmnic7.

Figure 24) Part 1: ONTAP Select internal port group configurations (advanced configuration for multinode clusters using four ports and a standard vSwitch).

Figure 25) Part 2: ONTAP Select internal port group configurations (advanced configuration for multinode clusters using four ports and a standard vSwitch).
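As with the default layout, the failover order for these four port groups can be set from the ESXi shell. The sketch below shows only the two external port groups with the assignments listed above; the two internal port groups follow the same pattern. The command form is a sketch to be verified against your ESXi release, and the same settings can be made in the vSphere client.

    # ONTAP-External: vmnic5 active, then vmnic7, vmnic4, vmnic6 in standby order
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name ONTAP-External --active-uplinks vmnic5 \
        --standby-uplinks vmnic7,vmnic4,vmnic6

    # ONTAP-External2: vmnic7 active, then vmnic5, vmnic6, vmnic4 in standby order
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name ONTAP-External2 --active-uplinks vmnic7 \
        --standby-uplinks vmnic5,vmnic6,vmnic4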

Standard vSwitch and Two Physical Ports per Node

When using only two physical ports, each port group should have an active adapter and a standby adapter configured opposite to each other. The internal network is only present for multinode ONTAP Select clusters. For single-node clusters, both adapters can be configured as active in the external port group.

The following example shows the configuration of a standard vSwitch and the two port groups responsible for handling internal and external communication services for a multinode ONTAP Select cluster. The external network can use the internal network VMNIC in the event of a network outage, because the internal network VMNICs are part of the external port group and configured in standby mode. The opposite is the case for the internal network. Alternating the active and standby VMNICs between the two port groups is absolutely critical to the proper failover of the ONTAP Select VMs during network outages.

Figure 26) Standard vSwitch with two physical ports per node.

Distributed vSwitch

When using distributed vSwitches in your configuration, LACP can be used to increase the throughput and resiliency of the network construct. The only supported LACP configuration requires that all the VMNICs are in a single LAG. The uplink physical switch must support an MTU size between 7,500 and 9,000 on all the ports in the channel. The internal and external ONTAP Select networks should be isolated at the port group level. The internal network should use a nonroutable (isolated) VLAN. The external network can use VST, EST, or VGT.

The following examples show the distributed vSwitch configuration using LACP.

Figure 27) LAG properties when using LACP.

Figure 28) External port group configurations using a distributed vSwitch with LACP enabled.

Figure 29) Internal port group configurations using a distributed vSwitch with LACP enabled.

Note: LACP requires that you configure the upstream switch ports as a port channel. Prior to enabling this on the distributed vSwitch, make sure that an LACP-enabled port channel is properly configured.

Best Practice

NetApp recommends that the LACP mode be set to active on both the ESX side and the physical switches. Furthermore, the LACP timer should be set to fast (1 second) on the physical switch, the port channel interfaces, the physical ports, and the VMNICs.

When using a distributed vSwitch with LACP, NetApp recommends that you configure the load-balancing policy to "route based on IP hash" on the port group and "source and destination IP address and TCP/UDP port and VLAN" on the LAG.

4.6 Physical Switch Configuration

Careful consideration should be taken when making connectivity decisions from the virtual switch layer to the physical switches. Separation of internal cluster traffic from external data services should extend to the upstream physical networking layer through isolation provided by layer-2 VLANs. This section covers upstream physical switch configurations based on single-switch and multiswitch environments.

Physical switch ports should be configured as trunk ports. ONTAP Select external traffic can be separated across multiple layer-2 networks in one of two ways. One method is by using ONTAP VLAN-tagged virtual ports with a single port group. The other method is by assigning separate port groups in VST mode to management port e0a and to data ports e0b and e0c/e0g, depending on the ONTAP Select release and the single-node or multinode configuration. If the external traffic is separated across multiple layer-2 networks, the uplink physical switch ports should have those VLANs in their allowed VLAN list.
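The upstream switch configuration that these recommendations imply looks roughly like the following sketch. Cisco NX-OS syntax is used purely as an example; the switch vendor, interface names, VLAN IDs, and MTU value are assumptions that must be adapted to your environment.

    interface port-channel10
      switchport mode trunk
      switchport trunk allowed vlan 10,20,30
      mtu 9216
    !
    ! Member port: LACP active with a fast (1-second) timer; Portfast behavior on the edge trunk
    interface Ethernet1/1
      channel-group 10 mode active
      lacp rate fast
      spanning-tree port type edge trunk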

ONTAP Select internal network traffic occurs using virtual interfaces defined with link-local IP addresses. Because these IP addresses are nonroutable, internal traffic between cluster nodes must flow across a single layer-2 network. Route hops between ONTAP Select cluster nodes are unsupported.

Best Practice

VMware recommends that STP be set to Portfast on the switch ports connected to the ESXi hosts. Not setting STP to Portfast on the switch ports can affect ONTAP Select's ability to tolerate uplink failures. When using LACP, the LACP timer should be set to fast (1 second). The load-balancing policy should be set to "route based on IP hash" on the port group and "source and destination IP address and TCP/UDP port and VLAN" on the LAG.

Shared Physical Switch

Figure 30 depicts a possible switch configuration used by one node in a multinode ONTAP Select cluster. In this example, the physical NICs used by the vSwitches hosting both the internal and external network port groups are cabled to the same upstream switch. Switch traffic is kept isolated using broadcast domains contained within separate VLANs.

Note: For the ONTAP Select internal network, tagging is done at the port group level. While the following example uses VGT for the external network, both VGT and VST are supported on that port group.

Figure 30) Network configuration using a shared physical switch.

Note: In this configuration, the shared switch becomes a single point of failure. If possible, multiple switches should be used to prevent a physical hardware failure from causing a cluster network outage.

Multiple Physical Switches

When redundancy is needed, multiple physical network switches should be used. Figure 31 shows a recommended configuration used by one node in a multinode ONTAP Select cluster. NICs from both the internal and external port groups are cabled into different physical switches, protecting the user from a single hardware switch failure. A virtual port channel is configured between the switches to prevent spanning tree issues.

Best Practice

When sufficient hardware is available, NetApp recommends using the multiswitch configuration shown in Figure 31, due to the added protection against physical switch failures.

Figure 31) Network configuration using multiple physical switches.

4.7 Data and Management Separation

ONTAP Select external network traffic is defined as data (CIFS, NFS, and iSCSI), management, and replication (SnapMirror) traffic. Within an ONTAP cluster, each style of traffic uses a separate logical interface that must be hosted on a virtual network port. On the multinode configuration of ONTAP Select, these are designated as ports e0a and e0b/e0g. On the single-node configuration, these are designated as e0a and e0b/e0c, while the remaining ports are reserved for internal cluster services.

NetApp recommends isolating data traffic and management traffic into separate layer-2 networks. In the ONTAP Select environment, this is done using VLAN tags. This can be achieved by assigning a VLAN-tagged port group to network adapter 1 (port e0a) for management traffic. Then you can assign a separate port group (or port groups) to ports e0b and e0c (single-node clusters) or e0b and e0g (multinode clusters) for data traffic.

If the VST solution described earlier in this document is not sufficient, collocating both data and management LIFs on the same virtual port might be required. To do so, use a process known as VGT, in which VLAN tagging is performed by the VM.

Note: Data and management network separation through VGT is not available when using the ONTAP Deploy utility. This process must be performed after cluster setup is complete.

There is an additional caveat when using VGT and two-node clusters. In two-node cluster configurations, the node management IP address is used to establish connectivity to the mediator before ONTAP is fully available. Therefore, only EST and VST tagging is supported on the port group mapped to the node management LIF (port e0a). Furthermore, if both the management and the data traffic are using the same port group, only EST/VST is supported for the entire two-node cluster.

Both configuration options, VST and VGT, are supported. Figure 32 shows the first scenario, VST, in which traffic is tagged at the vSwitch layer through the assigned port group. In this configuration, cluster and node management LIFs are assigned to ONTAP port e0a and tagged with VLAN ID 10 through the assigned port group. Data LIFs are assigned to port e0b and either e0c or e0g and given VLAN ID 20 using a second port group. The cluster ports use a third port group and are on VLAN ID 30.

Figure 32) Data and management separation using VST.

Figure 33 shows the second scenario, VGT, in which traffic is tagged by the ONTAP VM using VLAN ports that are placed into separate broadcast domains. In this example, virtual ports e0a-10/e0b-10/(e0c or e0g)-10 and e0a-20/e0b-20 are placed on top of the VM ports e0a and e0b. This configuration allows network tagging to be performed directly within ONTAP, rather than at the vSwitch layer. Management and data LIFs are placed on these virtual ports, allowing further layer-2 subdivision within a single VM port. The cluster VLAN (VLAN ID 30) is still tagged at the port group. This style of configuration is especially desirable when using multiple IPspaces.

Note: Group VLAN ports into separate custom IPspaces if further logical isolation and multitenancy are desired.

Note: To support VGT, the ESXi/ESX host network adapters must be connected to trunk ports on the physical switch. The port groups connected to the virtual switch must have their VLAN ID set to 4095 to enable trunking on the port group.
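A minimal sketch of the VGT scenario above, run after cluster setup, is shown below: the port group is opened for guest tagging on the ESXi host, and a VLAN port plus a data LIF are created in ONTAP. The node, SVM, LIF, and port group names and the addresses are examples only.

    # On the ESXi host: set the data port group to VLAN 4095 so the guest can tag its own traffic
    esxcli network vswitch standard portgroup set --portgroup-name ONTAP-Data --vlan-id 4095

    # In ONTAP: create a VLAN port for VLAN 20 on e0b and host a data LIF on it
    network port vlan create -node select-node1 -vlan-name e0b-20
    network interface create -vserver svm1 -lif svm1_data_v20 -role data -data-protocol nfs -home-node select-node1 -home-port e0b-20 -address 192.168.20.50 -netmask 255.255.255.0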

Figure 33) Data and management separation using VGT.

Best Practice

If data traffic spans multiple layer-2 networks and the use of VLAN ports is required, or when you are using multiple IPspaces, VGT should be used.

5 Use Cases

ONTAP Select is a flexible storage management solution that enables various use cases. This section describes some of these use cases.

5.1 Remote and Branch Offices

The ONTAP Select VM can be collocated with application VMs, making it an optimal solution for remote offices or branch offices (ROBOs). Using ONTAP Select to provide enterprise-class file services while allowing bidirectional replication to other ONTAP Select or FAS clusters enables resilient solutions to be built in low-touch or low-cost environments. ONTAP Select comes prepopulated with feature licenses for CIFS, NFS, and iSCSI protocol services as well as both SnapMirror and SnapVault replication technologies. Therefore, all of these features are available immediately upon deployment.

Starting with ONTAP Select 9.2 and ONTAP Deploy 2.4, all vSphere and VSAN licenses are supported.

An ONTAP Select two-node cluster with a remote mediator is an attractive solution for small data centers. In this configuration, HA functionality is provided by ONTAP Select. The minimum networking requirement for a two-node ONTAP Select ROBO solution is four 1Gb links. Starting with ONTAP Select 9.3, a single 10Gb network connection is also supported. The vNAS ONTAP Select solution running on VSAN (including the two-node VSAN ROBO configuration) is another option; in this configuration, the HA functionality is provided by VSAN. Finally, a single-node ONTAP Select cluster replicating its data to a core location can provide a set of robust enterprise data management tools on top of a commodity server.

Figure 34 depicts a common remote office configuration using ONTAP Select. Schedule-driven SnapMirror relationships periodically replicate the data from the remote office to a single consolidated engineered storage array located in the main data center. A minimal example of such a relationship is sketched after Figure 34.

Figure 34) Scheduled backup of remote office to corporate data center.
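The following ONTAP CLI sketch shows one such schedule-driven relationship, run from the destination cluster in the main data center. It assumes that cluster and SVM peering are already in place and that a data-protection destination volume exists; the SVM, volume, schedule, and policy names are examples.

    # Create a replication relationship from the ROBO volume to the data center copy
    snapmirror create -source-path robo_svm:eng_data -destination-path dc_svm:eng_data_dst -type XDP -schedule daily -policy MirrorAllSnapshots

    # Perform the baseline transfer; subsequent updates run on the daily schedule
    snapmirror initialize -destination-path dc_svm:eng_data_dst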

5.2 Private Cloud (Data Center)

Another common use case for ONTAP Select is providing storage services for private clouds built on commodity servers. Figure 35 shows how a storage farm provides compute and locally attached storage to the ONTAP Select VM, which provides storage services upstream to an application stack. The entire workflow, from the provisioning of SVMs to the deployment and configuration of application VMs, is automated through a private cloud orchestration framework.

This is a service-oriented private cloud model. Using the HA version of ONTAP Select creates the same ONTAP experience you would expect on higher-cost FAS arrays. Storage server resources are consumed exclusively by the ONTAP Select VM, with application VMs hosted on separate physical infrastructure.

Figure 35) Private cloud built on DAS.

5.3 MetroCluster Software Defined Storage (Two-Node Stretched Cluster High Availability)

Starting with ONTAP Select 9.3P2 and ONTAP Deploy 2.7, a two-node cluster can be stretched between two locations if certain minimum requirements are met. This architecture fits neatly in between hardware-based MetroCluster and single-data-center clusters (hardware or software defined). The requirements for ONTAP Select MetroCluster SDS highlight the general flexibility of software-defined storage solutions as well as the differences between MetroCluster SDS and hardware-based MetroCluster. No proprietary hardware is required.

Unlike MetroCluster, ONTAP Select MetroCluster SDS uses the existing network infrastructure and supports a network latency of up to 5ms RTT with a maximum jitter of up to 5ms, for a total of 10ms maximum latency. A maximum distance of 10km is also a requirement, although the latency profile is more important. Separation requirements in the market space have more to do with physical separation than with the actual distance. In some instances, this can mean different buildings; in other instances, it can mean different rooms in the same building. Regardless of the actual physical placement, what defines a two-node cluster as a MetroCluster SDS is that each node uses a separate uplink switch.

As part of the two-node HA configuration, a mediator is required to properly identify the active node during a failover and to avoid any split-brain scenario in which both nodes remain active independently during a network partition. This operation is identical to the regular two-node HA configuration previously available. For proper protection and failover during a site failure, the mediator should be in a different site from the two HA nodes. The maximum latency between the mediator and each ONTAP Select node cannot exceed 125ms.

With this solution, enterprise customers can confidently take advantage of the flexibility of a software-defined storage solution on commodity hardware. They can deploy with peace of mind, knowing that their data is protected with an enterprise-grade, 0 RPO solution.

ONTAP Select MetroCluster SDS provides the following benefits:

• MetroCluster SDS provides another dimension (data center to data center) of protection for ONTAP Select. Customers can now take advantage of this extra level of protection in addition to leveraging all the benefits of software-defined storage and ONTAP.
• MetroCluster SDS provides business-critical data protection with 0 RPO and automatic failover. Both the data storage and the application access points are automatically switched over to the surviving data center or node with zero intervention from IT.
• MetroCluster SDS is cost effective. It takes advantage of the existing networking infrastructure to enable stretched resiliency between the HA pair, and no additional hardware is required. It also provides active/active data access and data center redundancy in the same cluster.

Figure 36) MetroCluster SDS.

For more best practices and other requirements, see the sections "Two-Node HA Versus Two-Node Stretched HA (MetroCluster SDS)" and "Two-Node Stretched HA (MetroCluster SDS) Best Practices."

6 Upgrading ONTAP Select and ONTAP Deploy

This section contains important information about the maintenance of various aspects of an ONTAP Select cluster. It is possible to upgrade ONTAP Select and ONTAP Deploy independently of each other. Table 9 describes the support matrix for ONTAP Select and ONTAP Deploy.

Table 9) ONTAP Deploy versus ONTAP Select support matrix.

              Select 9.0      Select 9.1                    Select 9.2   Select 9.3   Select 9.4      Select 9.5
Deploy 2.7    Not supported   Supported (limited support)   Supported    Supported    Not supported   Not supported
Deploy 2.8    Not supported   Supported                     Supported    Supported    Supported       Not supported
Deploy 2.9    Not supported   Supported                     Supported    Supported    Supported       Not supported
Deploy 2.10   Not supported   Supported                     Supported    Supported    Supported       Supported

Note: ONTAP Deploy only manages the Select clusters that it has deployed. There is currently no functionality to discover ONTAP Select clusters installed using another instance of ONTAP Deploy. NetApp recommends backing up the ONTAP Deploy configuration every time a new cluster is deployed. Restoring the ONTAP Deploy database allows a new ONTAP Deploy instance to manage ONTAP Select clusters installed using another ONTAP Deploy VM. However, care should be taken so that one cluster is not managed by multiple ONTAP Deploy instances.

7 Increasing the ONTAP Select Capacity Using ONTAP Deploy

7.1 Increasing Capacity for ONTAP Select vNAS and DAS with Hardware RAID Controllers

ONTAP Deploy can be used to add and license additional storage for each node in an ONTAP Select cluster. The storage-add functionality in ONTAP Deploy is the only way to increase the storage under management; directly modifying the ONTAP Select VM is not supported. The following figure shows the "+" icon that initiates the storage-add wizard.

The following considerations are important for the success of the capacity-expansion operation. Adding capacity requires the existing license to cover the total amount of space (existing plus new). A storage-add operation that results in the node exceeding its licensed capacity fails. A new license with sufficient capacity should be installed first.

If the extra capacity is added to an existing ONTAP Select aggregate, then the new storage pool (datastore) should have a performance profile similar to that of the existing storage pool (datastore). Note that it is not possible to add non-SSD storage to an ONTAP Select node installed with an AFF-like personality (flash enabled). Mixing DAS and external storage is also not supported.

If locally attached storage (DAS) is added to a system to provide additional local storage pools, you must build an additional RAID group and LUN (or LUNs). Just as with FAS systems, care should be taken to make sure that the new RAID group performance is similar to that of the original RAID group if you are adding the new space to the same aggregate.

If you are creating a new aggregate, the new RAID group layout can be different, as long as the performance implications for the new aggregate are well understood.

The new space can be added to the same datastore as an extent if the total size of the datastore does not exceed the ESX-supported maximum datastore size. Adding a datastore extent to the datastore in which ONTAP Select is already installed can be done dynamically and does not affect the operations of the ONTAP Select node.

If the ONTAP Select node is part of an HA pair, some additional issues should be considered.

In an HA pair, each node contains a mirror copy of the data from its partner. Adding space to node 1 requires that an identical amount of space is added to its partner, node 2, so that all the data from node 1 is replicated to node 2. In other words, the space added to node 2 as part of the capacity-add operation for node 1 is not visible or accessible on node 2. The space is added to node 2 so that the node 1 data is fully protected during an HA event.

There is an additional consideration with regard to performance. The data on node 1 is synchronously replicated to node 2. Therefore, the performance of the new space (datastore) on node 1 must match the performance of the new space (datastore) on node 2. In other words, adding space on both nodes but using different drive technologies or different RAID group sizes can lead to performance issues, because of the RAID SyncMirror operation used to maintain a copy of the data on the partner node.

To increase user-accessible capacity on both nodes in an HA pair, two storage-add operations must be performed, one for each node. Each storage-add operation requires additional space on both nodes. The total space required on each node is equal to the space required on node 1 plus the space required on node 2.

The initial setup is with two nodes, each node having two datastores with 30TB of space in each datastore. ONTAP Deploy creates a two-node cluster, with each node consuming 10TB of space from datastore 1. ONTAP Deploy configures each node with 5TB of active space per node.

Figure 37 shows the results of a single storage-add operation for node 1. ONTAP Select still uses an equal amount of storage (15TB) on each node. However, node 1 has more active storage (10TB) than node 2 (5TB). Both nodes are fully protected because each node hosts a copy of the other node's data. There is additional free space left in datastore 1, and datastore 2 is still completely free.

Figure 37) Capacity distribution: allocation and free space after a single storage-add operation.

Two additional storage-add operations on node 1 consume the rest of datastore 1 and a part of datastore 2 (using the capacity cap). The first storage-add operation consumes the 15TB of free space left in datastore 1.
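The capacity bookkeeping in this example follows a simple rule: the space a node consumes from its datastores equals its own active capacity plus a mirror of its partner's active capacity. The following Python sketch reproduces the numbers above; the function and variable names are illustrative only and are not part of ONTAP Deploy.

    # Illustrative HA-pair capacity bookkeeping for ONTAP Select.
    # Assumption: every terabyte of active capacity on a node requires one
    # terabyte of mirror space on its HA partner (RAID SyncMirror).
    def consumed_tb(local_active_tb, partner_active_tb):
        """Space a node consumes: its own active data plus the partner's mirror copy."""
        return local_active_tb + partner_active_tb

    node1_active, node2_active = 5, 5                 # initial deployment: 5TB active per node
    print(consumed_tb(node1_active, node2_active))    # 10TB consumed per node from datastore 1

    node1_active += 5                                 # one 5TB storage-add operation for node 1
    print(consumed_tb(node1_active, node2_active))    # 15TB consumed on node 1
    print(consumed_tb(node2_active, node1_active))    # 15TB consumed on node 2 (mirror space only)

    node1_active = 50                                 # two further adds bring node 1 to 50TB active
    print(consumed_tb(node1_active, node2_active))    # 55TB consumed per node (fits in 2 x 30TB)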

Figure 38 shows the result of the second storage-add operation. At this point, node 1 has 50TB of active data under management, while node 2 has the original 5TB.

Figure 38) Capacity distribution: allocation and free space after two additional storage-add operations for node 1.

Starting with ONTAP Deploy 2.7 and ONTAP Select 9.3, the maximum VMDK size used during capacity-add operations is 16TB. The maximum VMDK size used during cluster create operations continues to be 8TB. ONTAP Deploy creates appropriately sized VMDKs depending on your configuration (single-node or multinode cluster) and the amount of capacity being added. However, the maximum size of each VMDK should not exceed 8TB during cluster create operations and 16TB during storage-add operations.

7.2 Increasing Capacity for ONTAP Select with Software RAID

The storage-add wizard can similarly be used to increase the capacity under management for ONTAP Select nodes using software RAID. The wizard presents only those DAS SSD drives that are available and can be mapped as RDMs to the ONTAP Select VM.

Though it is possible to increase the capacity license by a single TB, when working with software RAID it is not possible to physically increase the capacity by a single TB. Similar to adding disks to a FAS or AFF array, certain factors dictate the minimum amount of storage that can be added in a single operation.

Note that in an HA pair, adding storage to node 1 requires that an identical number of drives is also available on the node's HA partner (node 2). Both the local drives and the remote disks are used by one storage-add operation on node 1. That is to say, the remote drives are used to make sure that the new storage on node 1 is replicated and protected on node 2. In order to add locally usable storage on node 2, a separate storage-add operation and a separate and equal number of drives must be available on both nodes.

ONTAP Select partitions any new drives into the same root, data, and data partitions as the existing drives. The partitioning operation takes place during the creation of a new aggregate or during the expansion of an existing aggregate. The size of the root partition stripe on each disk is set to match the existing root partition size on the existing disks. Therefore, each one of the two equal data partition sizes can be calculated as the disk total capacity minus the root partition size, divided by two. The root partition stripe size is variable, and it is computed during the initial cluster setup as follows: the total root space required (68GB for a single-node cluster and 136GB for HA pairs) is divided across the initial number of disks minus any spare and parity drives. The root partition stripe size is maintained constant on all the drives being added to the system.

If you are creating a new aggregate, the minimum number of drives required varies depending on the RAID type and whether the ONTAP Select node is part of an HA pair.
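The partition sizing rules above can be expressed directly. The following Python sketch is illustrative only; the function names are hypothetical, and the example drive counts are arbitrary. It assumes the stated totals of 68GB of root space for a single-node cluster and 136GB for an HA pair.

    # Illustrative ONTAP Select software-RAID partition sizing, per the rules above.
    def root_stripe_gb(ha_pair, initial_disks, spares, parity_drives):
        """Root partition stripe per disk: total root space divided across the
        initial disks, excluding spare and parity drives."""
        total_root_gb = 136 if ha_pair else 68
        return total_root_gb / (initial_disks - spares - parity_drives)

    def data_partition_gb(disk_capacity_gb, root_stripe):
        """Each of the two equal data partitions: (raw capacity - root stripe) / 2."""
        return (disk_capacity_gb - root_stripe) / 2

    # Example: an HA pair built with 8 x 960GB drives, 1 spare and 2 parity drives.
    stripe = root_stripe_gb(ha_pair=True, initial_disks=8, spares=1, parity_drives=2)
    print(round(stripe, 1))                          # 27.2GB root stripe per disk
    print(round(data_partition_gb(960, stripe), 1))  # 466.4GB per data partition

Drives added later inherit the same root stripe size, which is why only their data partition size varies with drive capacity.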

If you are adding storage to an existing aggregate, some additional considerations are necessary. It is possible to add drives to an existing RAID group, assuming that the RAID group is not already at its maximum limit. Traditional FAS and AFF best practices for adding spindles to existing RAID groups also apply here, and creating a hot spot on the new spindle is a potential concern. In addition, only drives of equal or larger data partition sizes can be added to an existing RAID group. As explained above, the data partition size is not the same as the drive raw size. If the data partitions being added are larger than the existing partitions, the new drives are right-sized. In other words, a portion of the capacity of each new drive remains unutilized.

It is also possible to use the new drives to create a new RAID group as part of an existing aggregate. In this case, the RAID group size should match the existing RAID group size.

7.3 Single-Node to Multinode Upgrade and Cluster Expansions

Upgrading from the single-node, non-HA version of ONTAP Select to the multinode scale-out version is not supported. Migrating from a single-node version to a multinode version requires the provisioning of a new ONTAP Select cluster and the use of SnapMirror technology to copy existing data from the single-node cluster.

Expanding or reducing the number of nodes in a multinode ONTAP Select cluster is not a supported workflow at the time of writing this document.

8 ONTAP Select Performance

The performance numbers described in this section are intended as a rough estimate of the performance of an ONTAP Select cluster and are not a performance guarantee. The performance of an ONTAP Select cluster can vary considerably due to the characteristics of the underlying hardware and configuration. In fact, the specific hardware configuration is the biggest factor in the performance of a particular ONTAP Select instance. Here are some of the factors that affect the performance of a specific ONTAP Select instance:

• Core frequency. In general, a higher frequency is preferable.
• Single socket versus multisocket. ONTAP Select does not use multisocket features, but the hypervisor overhead for supporting multisocket configurations accounts for some amount of deviation in total performance.
• RAID card configuration and associated hypervisor driver. The default driver provided by the hypervisor might need to be replaced by the hardware vendor driver.
• Drive type and number of drives in the RAID group(s).
• Hypervisor version and patch level.

This document includes performance comparisons only when the testing was performed on the exact same test bed, to highlight the impact of a specific feature. In general, we document the hardware environment and run the highest-performing configuration possible on that platform.

8.1 ONTAP Select 9.0 Standard Four-Node with Direct-Attached Storage (SAS)

Reference Platform

ONTAP Select 9.0 (Small instance) hardware (per node):
• Dell R530:
− 8-core 2.4GHz Haswell
− 24GB RAM
− ESX 5.5u3
• 1 x Dell MD1420 drive enclosure:

− 23 x 600GB 10K RPM SAS drives (22 in use, 1 hot spare)
• PERC H830 RAID controller:
− 2GB NV cache

Client hardware:
• 4 x NFSv3 IBM 3650 clients

Configuration information:
• 1,500 MTU for data path between clients and Select cluster
• No storage efficiency features in use (compression, deduplication, Snapshot copies, SnapMirror, and so on)

Table 10 lists the throughput measured against read/write workloads on a single node (part of a four-node Small instance) ONTAP Select cluster. Performance measurements were taken using the SIO load-generating tool.

Table 10) Performance results for a single node (four-node Small instance) ONTAP Select cluster.

Description                     Sequential Read   Sequential Write   Random Read   Random Write
                                64KiB             64KiB              4KiB          4KiB
ONTAP 9 Select Standard,        549MBps           155MBps            19MBps        54MBps
SAS disks                       8,784 IOPS        2,480 IOPS         4,864 IOPS    13,824 IOPS

Sequential Read Details:
• SIO direct I/O enabled
• 1 x data NIC
• 1 x data aggregate (1TB):
− 64 volumes; 64 SIO procs/threads
− 32 volumes per node (64 total)
− 1 x SIO proc per volume; 1 x SIO thread per file
− 1 file per volume; files 12GB each
− Files previously created using mkfile

Note: With 100% sequential 64KiB I/O, each thread read through each file sequentially from beginning to end. Each measurement lasted for 300 seconds. Tests were purposefully sized so that the I/O never wrapped within a given file. Performance measurements were designed to force I/O from disk.

Sequential Write Details:
• SIO direct I/O enabled
• 1 x data NIC
• 1 x data aggregate (1TB):
− 64 volumes; 128 SIO procs/threads
− 32 volumes per node (64 total)
− 2 x SIO procs per volume; 1 x SIO thread per file
− 2 x files per volume; files are 30,720MB each

Note: Using 100% sequential 64KiB I/O, each thread wrote through each file sequentially from beginning to end. Each measurement lasted for 300 seconds. Tests were purposefully sized so that the I/O never wrapped within a given file. Performance measurements were designed to force I/O to disk.
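The IOPS and throughput figures in Table 10 through Table 13 are two views of the same measurement, related by the workload block size. A minimal conversion sketch in Python, assuming 1MBps = 1,024KiB/s as used in these tables:

    # Convert measured throughput to IOPS for a fixed block size.
    def iops(throughput_mbps, block_kib):
        return throughput_mbps * 1024 / block_kib

    print(iops(549, 64))   # 8,784 IOPS: 64KiB sequential read in Table 10
    print(iops(155, 64))   # 2,480 IOPS: 64KiB sequential write in Table 10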

Random Read Details:
• SIO direct I/O enabled
• 1 x data NIC
• 1 x data aggregate (1TB):
− 64 volumes, 64 SIO procs, and 512 threads
− 32 volumes per node (64 total)
− 64 SIO procs, each with 8 threads
− 1 x SIO proc per volume; 8 threads per file
− 1 file per volume; files are 8,192MB each
− Files previously created using mkfile

Note: Using 100% random 4KiB I/O, each thread randomly read through each file. Each measurement lasted for 300 seconds. Performance measurements were designed to force I/O from disk.

Random Write Details:
• SIO direct I/O enabled
• 1 x data NIC
• 1 x data aggregate (1TB):
− 64 volumes, 128 SIO procs, and 512 threads
− 32 volumes per node (64 total)
− 64 SIO procs, each with 8 threads
− 1 x SIO proc per volume; 8 threads per file
− 1 x file per volume; files are 8,192MB each

Note: Using 100% random 4KiB I/O, each thread randomly wrote through each file. Each measurement lasted for 300 seconds. Performance measurements were designed to force I/O to disk.

8.2 ONTAP Select 9.1 Medium Instance (Premium License) Four-Node with Direct-Attached Storage (SSD)

Reference Platform

ONTAP Select 9.1 (Premium) hardware (per node):
• Cisco UCS C240:
− 1 x 14-core 2.6GHz E5-2697
− 128GB RAM
− ESX 5.6
− 24 x 400GB SSDs
− Cisco RAID controller
− 2GB NV cache

Client hardware:
• 4 x NFSv3 IBM 3650 clients

Configuration information:
• 1,500 MTU for data path between clients and the ONTAP Select cluster
• No storage efficiency features in use (compression, deduplication, Snapshot copies, SnapMirror, and so on)

Table 11 lists the throughput measured against read/write workloads on a single node (part of a four-node Medium instance) ONTAP Select cluster. Performance measurements were taken using the SIO load-generating tool.

Table 11) Performance results for a single node (part of a four-node Medium instance) ONTAP Select cluster with DAS (SSD).

Description                     Sequential Read   Sequential Write   Random Read   Random Write
                                64KiB             64KiB              4KiB          4KiB
ONTAP 9.1 Select Medium         1,151MBps         233MBps            158MBps       89MBps
instance with DAS (SSD)         18,416 IOPS       3,728 IOPS         40,448 IOPS   22,784 IOPS

Sequential Read Details:
• SIO direct I/O enabled
• 1 x data NIC
• 1 x data aggregate (1TB):
− 64 volumes; 64 SIO procs/threads
− 32 volumes per node (64 total)
− 1 x SIO proc per volume; 1 x SIO thread per file
− 1 file per volume; files 12GB each
− Files previously created using mkfile

Note: Using 100% sequential 64KiB I/O, each thread read through each file sequentially from beginning to end. Each measurement lasted for 300 seconds. Tests were purposefully sized so that the I/O never wrapped within a given file. Performance measurements were designed to force I/O from disk.

Sequential Write Details:
• SIO direct I/O enabled
• 1 x data NIC
• 1 x data aggregate (1TB):
− 64 volumes; 128 SIO procs/threads
− 32 volumes per node (64 total)
− 2 x SIO procs per volume; 1 x SIO thread per file
− 2 x files per volume; files are 30,720MB each

Note: Using 100% sequential 64KiB I/O, each thread wrote through each file sequentially from beginning to end. Each measurement lasted for 300 seconds. Tests were purposefully sized so that the I/O never wrapped within a given file. Performance measurements were designed to force I/O to disk.

Random Read Details:
• SIO direct I/O enabled
• 1 x data NIC
• 1 x data aggregate (1TB):
− 64 volumes, 64 SIO procs, and 512 threads
− 32 volumes per node (64 total)
− 64 SIO procs, each with 8 threads
− 1 x SIO proc per volume; 8 threads per file
− 1 file per volume; files are 8,192MB each
− Files previously created using mkfile

Note: Using 100% random 4KiB I/O, each thread randomly read through each file. Each measurement lasted for 300 seconds. Performance measurements were designed to force I/O from disk.

Random Write Details:
• SIO direct I/O enabled
• 1 x data NIC
• 1 x data aggregate (1TB):
− 64 volumes, 128 SIO procs, and 512 threads
− 32 volumes per node (64 total)
− 64 SIO procs, each with 8 threads
− 1 x SIO proc per volume; 8 threads per file
− 1 x file per volume; files are 8,192MB each

Note: Using 100% random 4KiB I/O, each thread randomly wrote through each file. Each measurement lasted for 300 seconds. Performance measurements were designed to force I/O to disk.

8.3 ONTAP Select 9.2 Small Instance Single-Node with VSAN AF Storage

Reference Platform

ONTAP Select 9.2 (Standard) hardware (per node/four-node AF VSAN cluster):
• Dell R630:
− Intel Xeon CPU E5-2660 v4 at 2.00GHz
− 2 x sockets; 14 x CPUs per socket
− 56 x logical CPUs (HT enabled)
− 256GB RAM
− ESXi version: VMware ESXi 6.0.0 build 3620759
• VSAN datastore drives per host:
− 1 x Intel SSDSC2BX40: 372GB for the cache tier
− 4 x Intel SSDSC2BX01: 1.46TB for the capacity tier

Client hardware:
• 1 x NFSv3 Debian Linux VM deployed on the same VSAN cluster

Configuration information:
• 80GB workload distributed equally across four NFS volumes/mounts
• No storage efficiency features in use
• Separate 10GbE networks for NFS data traffic and VSAN internal traffic
• 1,500 MTU for NFS interfaces and 9,000 MTU for the VSAN interface
• Block size: 4KiB for random workloads; 64KiB for sequential workloads

Table 12 lists the throughput measured against read/write workloads on a single-node ONTAP Select small-instance cluster running on an all-flash VSAN datastore. Performance measurements were taken using the SIO load-generating tool.

Table 12) Performance results for a single-node ONTAP Select standard cluster on an AF VSAN datastore.

Description                        Sequential Read   Sequential Write   Random Read   Random Write
                                   64KiB             64KiB              4KiB          4KiB
ONTAP 9.2 Select Small instance    527MBps           63MBps             129MBps       34MBps
on all-flash VSAN                  8,427 IOPS        1,005 IOPS         32,899 IOPS   8,626 IOPS

8.4 ONTAP Select Premium 9.4 HA Pair with Direct-Attached Storage (SSD)

Reference Platform

ONTAP Select 9.4 (Premium) hardware (per node):
• Cisco UCS C240 M4S2:
− Intel Xeon CPU E5-2697 at 2.60GHz
− 2 x sockets; 14 x CPUs per socket
− 56 x logical CPUs (HT enabled)
− 256GB RAM
− VMware ESXi 6.5
• Drives per host:
− 24 x X371A NetApp 960GB SSD

Client hardware:
• 4 x NFSv3 IBM 3550m4 clients

Configuration information:
• 1,500 MTU for data path between clients and Select cluster
• No storage efficiency features in use (compression, deduplication, Snapshot copies, SnapMirror, and so on)

Table 13 lists the throughput measured against read/write workloads on an HA pair of ONTAP Select Premium nodes. Performance measurements were taken using the SIO load-generating tool.

Table 13) Performance results for a single node (part of a four-node Medium instance) ONTAP Select 9.4 cluster on DAS (SSD).

Description                 Sequential Read   Sequential Write   Random Read   Random Write   Random WR/RD (50/50)
                            64KiB             64KiB              8KiB          8KiB           8KiB
ONTAP 9.4 Select Medium     1,045MBps         251MBps            492MBps       141MBps        218MBps
instance with DAS (SSD)     16,712 IOPS       4,016 IOPS         62,912 IOPS   18,048 IOPS    27,840 IOPS

64K Sequential Read Details:
• SIO direct I/O enabled
• 2 x data NIC
• 1 x data aggregate (2TB)
• 64 volumes; 64 SIO procs/threads
• 32 volumes per node (64 total)
• 1 x SIO proc per volume; 1 x SIO thread per file
• 1 x file per volume; files are 12,000MB each

64K Sequential Write Details:
• SIO direct I/O enabled
• 2 x data NIC
• 1 x data aggregate (2TB)
• 64 volumes; 128 SIO procs/threads
• 32 volumes per node (64 total)
• 2 x SIO procs per volume; 1 x SIO thread per file
• 2 x files per volume; files are 30,720MB each

8K Random Read Details:
• SIO direct I/O enabled
• 2 x data NIC
• 1 x data aggregate (2TB)
• 64 volumes; 64 SIO procs/threads
• 32 volumes per node (64 total)
• 1 x SIO proc per volume; 8 x SIO threads per file
• 1 x file per volume; files are 12,228MB each

8K Random Write Details:
• SIO direct I/O enabled
• 2 x data NIC

• 1 x data aggregate (2TB)
• 64 volumes; 64 SIO procs/threads
• 32 volumes per node (64 total)
• 1 x SIO proc per volume; 8 x SIO threads per file
• 1 x file per volume; files are 8,192MB each

8K Random 50% Write 50% Read Details:
• SIO direct I/O enabled
• 2 x data NIC
• 1 x data aggregate (2TB)
• 64 volumes; 64 SIO procs/threads
• 32 volumes per node (64 total)
• 1 x SIO proc per volume; 20 x SIO threads per file
• 1 x file per volume; files are 12,228MB each

8.5 ONTAP Select Premium 9.5 HA Pair with Direct-Attached Storage (SSD)

Reference Platform

ONTAP Select 9.5 (Premium) hardware (per node):
• Cisco UCS C240 M4SX:
− Intel Xeon CPU E5-2620 at 2.1GHz
− 2 x sockets; 16 x CPUs per socket
− 128GB RAM
− VMware ESXi 6.5
• Drives per host:
− 24 x 900GB SSD

Client hardware:
• 5 x NFSv3 IBM 3550m4 clients

Configuration information:
• 1,500 MTU for data path between clients and Select cluster
• No storage efficiency features in use (compression, deduplication, Snapshot copies, SnapMirror, and so on)

Table 14 lists the throughput measured against read/write workloads on an HA pair of ONTAP Select Premium nodes using both software RAID and hardware RAID. Performance measurements were taken using the SIO load-generating tool.

Table 14) Performance results for a single node (part of a four-node Medium instance) ONTAP Select 9.5 cluster on DAS (SSD) with software RAID and hardware RAID.

Description                    Sequential Read   Sequential Write   Random Read   Random Write   Random WR/RD (50/50)
                               64KiB             64KiB              8KiB          8KiB           8KiB
ONTAP 9.5 Select Medium        1,714 MiB/s       412 MiB/s          391 MiB/s     251 MiB/s      309 MiB/s
instance with DAS (SSD),
hardware RAID
ONTAP 9.5 Select Medium        1,674 MiB/s       360 MiB/s          451 MiB/s     223 MiB/s      293 MiB/s
instance with DAS (SSD),
software RAID

64K Sequential Read Details:
• SIO direct I/O enabled
• 2 nodes
• 2 x data NIC per node
• 1 x data aggregate per node (2TB HWRAID; 8TB SWRAID)
• 64 SIO procs; 1 thread per proc
• 32 volumes per node
• 1 x file per proc; files are 12,000MB each

64K Sequential Write Details:
• SIO direct I/O enabled
• 2 nodes
• 2 x data NIC per node
• 1 x data aggregate per node (2TB HWRAID; 4TB SWRAID)
• 128 SIO procs; 1 thread per proc
• 32 volumes per node (HWRAID); 16 volumes per node (SWRAID)
• 1 file per proc; files are 30,720MB each

8K Random Read Details:
• SIO direct I/O enabled
• 2 nodes
• 2 x data NIC per node
• 1 x data aggregate per node (2TB HWRAID; 4TB SWRAID)
• 64 SIO procs; 8 threads per proc
• 32 volumes per node
• 1 file per proc; files are 12,228MB each

8K Random Write Details:
• SIO direct I/O enabled
• 2 nodes
• 2 x data NIC per node
• 1 x data aggregate per node (2TB HWRAID; 4TB SWRAID)
• 64 SIO procs; 8 threads per proc
• 32 volumes per node
• 1 file per proc; files are 8,192MB each

8K Random 50% Write 50% Read Details:
• SIO direct I/O enabled
• 2 nodes
• 2 x data NIC per node
• 1 x data aggregate per node (2TB HWRAID; 4TB SWRAID)
• 64 SIO procs; 20 threads per proc
• 32 volumes per node
• 1 file per proc; files are 12,228MB each

Where to Find Additional Information

To learn more about the information described in this document, refer to the following documents and/or websites:
• ONTAP Select product page
https://www.netapp.com/us/products/data-management-software/ontap-select-sds.aspx
• ONTAP Select Resources page
http://mysupport.netapp.com/ontapselect/resources
• ONTAP 9 Documentation Center
http://docs.netapp.com/ontap-9/index.jsp

Version History

Date             Document Version   Version History
June 2016        Version 1.0        Initial version.
August 2016      Version 1.1        Updated the networking sections 2.5 and 5.
December 2016    Version 1.2        • Added support for ONTAP Select 9.1 and the OVF evaluation method.
                                    • Consolidated the networking section.
                                    • Consolidated the deploy section.

March 2017       Version 1.3        • Added support for ONTAP Deploy 2.3, external arrays, and VSAN.
                                    • Added support for SATA and NL-SAS, along with datastore size considerations for larger-capacity media.
                                    • Added IOPS metrics to the performance table.
                                    • Added the network checker for internal network troubleshooting.
June 2017        Version 1.41       • Added support for ONTAP Deploy 2.4, ONTAP Select 9.2, and 2-node clusters.
                                    • Added VSAN performance information.
March 2018       Version 1.5        Added support for ONTAP Deploy 2.7 and ONTAP Select 9.3.
June 2018        Version 1.6        Added support for ONTAP Deploy 2.8 and ONTAP Select 9.4.
February 2019    Version 1.7        Added support for ONTAP Deploy 2.10 and ONTAP Select 9.5.

Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines the product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer's installation in accordance with published specifications.

Copyright Information

Copyright © 2019 NetApp, Inc. All Rights Reserved. Printed in the U.S. No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.

Data contained herein pertains to a commercial item (as defined in FAR 2.101) and is proprietary to NetApp, Inc. The U.S. Government has a non-exclusive, non-transferrable, non-sublicensable, worldwide, limited irrevocable license to use the Data only in connection with and in support of the U.S. Government contract under which the Data was delivered. Except as provided herein, the Data may not be used, disclosed, reproduced, modified, performed, or displayed without the prior written approval of NetApp, Inc. United States Government license rights for the Department of Defense are limited to those rights identified in DFARS clause 252.227-7015(b).

Trademark Information

NETAPP, the NETAPP logo, and the marks listed at http://www.netapp.com/TM are trademarks of NetApp, Inc. Other company and product names may be trademarks of their respective owners.
