Dell EMC Unisphere for PowerMax Online Help


Dell EMC™ Unisphere™ for PowerMax Version 9.0.0 Online Help (PDF version)

© Copyright 2012-2018 Dell Inc. or its subsidiaries. All rights reserved.

Published May 2018

Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.

Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA.

Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000    In North America 1-866-464-7381
www.DellEMC.com

CONTENTS

Tables... 21

Chapter 1: Introduction... 25
    Unisphere Online Help... 26
    Capacity information... 29

Chapter 2: Getting Started... 31
    Operating as the initial setup user... 32
    Viewing Home Dashboard view - All Storage Systems... 32
    Viewing Home Dashboard view - Specific Storage System... 34
    Viewing system performance view... 34
    Viewing the System Health Dashboard... 35
    Understanding the system health score... 37
    Viewing Storage Group Compliance view... 39
    Viewing Capacity dashboard view... 40
    Viewing Replication dashboard... 41
    Discovering storage systems... 43
    Refreshing storage system information... 44
    Viewing product version information... 44
    Searching for storage objects... 45
    Modifying server logging levels... 46
    Exiting the console... 46
    Getting help... 46

Chapter 3: Administration... 47
    Managing settings... 48
    Setting preferences... 49
    Backing up the database server... 50
    Viewing database backups... 51
    Deleting database backups... 51
    Alert settings... 51
    Alerts... 52
    Alert policies... 56
    Threshold alerts... 56
    Configuring email notifications... 59
    Editing subscriptions... 60
    Performance thresholds and alerts... 60
    Service level alert policies... 63
    Server alerts... 66
    Security... 67
    Authentication... 67
    Understanding user authorization... 70
    View Certificate dialog box... 76
    Local Users... 76
    Viewing user sessions... 80
    Roles and associated permissions... 80
    Link and launch... 83
    Creating link-and-launch client registrations... 83

    Editing link-and-launch client registrations... 84
    Deleting link-and-launch client registrations... 84
    Viewing link and launch client registrations... 84
    Managing Database Storage Analyzer (DSA) environment preferences... 85
    Managing data protection preferences... 85
    Viewing authentication authority information... 86
    Local User and Authorization operations... 87
    Link and Launch operations... 87
    Entering PIN number... 87
    Report operations... 87

Chapter 4: Storage Management... 89
    Understanding Storage Management... 90
    Tag and Untag operations... 91
    Viewing Storage Group Demand Reports... 91
    Viewing Service Level Demand Reports... 92
    Viewing CKD volumes... 92
    Viewing CKD volumes in CU image... 93
    Viewing Storage Group Compliance view... 94
    Dialog displayed when there is less than one week's data collected... 96
    Setting volume emulation... 96
    FAST association operations... 97
    Removing DATA volumes... 97
    Mapping volume operations... 97
    Rename operations... 98
    Provisioning storage... 98
    Using the Provision Storage wizard... 100
    Provisioning storage for mainframe... 104
    Using the Provision Storage wizard for mainframe... 104
    Provisioning storage... 107
    Using the Provision Storage wizard... 108
    Suitability Check restrictions... 111
    Creating storage groups... 112
    Adding volumes to storage groups... 114
    Copying volumes between storage groups... 114
    Moving volumes between storage groups... 115
    Removing volumes from storage groups... 116
    Storage Group operations... 116
    Expanding storage groups... 116
    Expanding ProtectPoint storage groups... 118
    Modifying storage groups... 119
    Renaming storage groups... 121
    Protecting storage groups... 122
    Converting storage groups to cascaded... 129
    Changing Storage Resource Pools for storage groups... 129
    Adding or removing cascaded storage groups... 130
    Renaming storage groups... 131
    Deleting storage groups... 131
    Setting host I/O limits... 132
    Splitting storage groups... 133
    Merging storage groups... 134
    Managing VP compression on thin volumes in storage groups... 134
    Viewing storage groups... 135
    Select Storage Resource Pool... 148
    Select SSID... 149

    Task in Progress... 149
    Select SRDF group... 149
    Editing storage group volume details... 149
    Editing storage group details... 150
    Modify Custom Capacity dialog box... 150
    Understanding FAST... 151
    Understanding service levels... 151
    Understanding Storage Resource Pool details... 153
    Service level compliance... 157
    Symmetrix tiers... 161
    FAST policies... 166
    Pinning and unpinning volumes... 173
    Time windows... 174
    Understanding Workload Planner... 176
    Managing volumes... 177
    Managing volumes... 178
    Creating volumes... 178
    Deleting volumes... 188
    Duplicating volumes... 188
    Assigning array priority to individual volumes... 189
    Assigning array priority to groups of volumes... 189
    Changing volume configuration... 190
    Expanding existing volumes... 191
    Mapping volumes... 192
    Unmapping volumes... 193
    Setting optimized read miss... 193
    Setting volume status... 194
    Setting volume attributes... 195
    Setting volume identifiers... 196
    Setting volume names... 196
    Setting copy pace (QoS) for device groups... 197
    QOS for replication... 197
    Setting copy pace (QoS) for storage groups... 197
    Setting copy pace (QoS) for volumes... 198
    Managing Meta Volumes... 199
    Viewing CKD volumes... 203
    Viewing CKD volume details... 204
    Viewing CKD volume front end paths... 207
    Viewing DLDEV volumes... 207
    Viewing DLDEV volume details... 207
    Viewing masking information... 208
    Viewing meta volumes... 208
    Viewing meta volume details... 209
    Viewing meta volume meta members... 212
    Viewing meta volume member details... 212
    Viewing other pool information... 214
    Viewing private volumes... 215
    Viewing private volume details... 215
    Viewing regular volumes... 217
    Viewing regular volume details... 218
    Viewing reserved volumes... 221
    Viewing reserved volume details... 221
    Viewing SAVE volumes... 222
    Viewing SAVE volume details... 222
    Viewing storage resource pool information... 223
    Viewing thin volumes... 223

    Viewing thin volume details... 223
    Viewing thin volume bound pool information... 223
    Viewing virtual volumes... 224
    Viewing virtual volume details... 225
    Viewing volume back end paths... 227
    Viewing volume FBA front end paths... 228
    Viewing volume RDF information... 228
    Select Volume Range dialog box... 229
    Advanced Options dialog... 230
    Viewing disk groups... 230
    Viewing disk group details... 230
    Viewing disks in disk group... 231
    Viewing disk details... 231
    Viewing disk hyper volumes... 232
    Viewing hyper volume details... 233
    Viewing volumes for disk... 234
    Viewing paths for disks... 234
    Viewing spare disks in disk group... 235
    Viewing spare disk details... 235
    Removing disks from disk groups... 236
    Deleting disk groups... 237
    Renaming disk groups... 237
    Creating DATA volumes... 237
    Activating and deactivating DATA volumes... 238
    Enabling and disabling DATA volumes... 238
    Start and stop draining DATA volumes... 239
    Viewing DATA volumes... 239
    Viewing DATA volume details... 239
    Creating thin pools... 240
    Expanding thin pools... 241
    Draining thin pools... 241
    Starting and stopping thin pool write balancing... 242
    Deleting thin pools... 243
    Adding or removing thin pool members... 243
    Enabling and disabling thin pool members... 244
    Managing thin pool allocations... 244
    Viewing thin pools... 245
    Viewing thin pool details... 246
    Viewing bound volumes for a thin pool... 247
    Viewing DATA volumes for a thin pool... 250
    Viewing details on DATA volumes in thin pools... 250
    Viewing other volumes for thin pools... 252
    Managing thin pool capacity... 253
    Allocate/Free/Reclaim dialogs... 254
    Creating or Expanding or Modifying thin pools... 255
    Creating thin volumes... 256
    Binding/Unbinding/Rebinding thin volumes... 257
    Understanding Virtual LUN Migration... 258
    Viewing VLUN migration sessions... 259
    Viewing VLUN migration session details... 259
    Terminating a VLUN migration session... 260
    VLUN Migration dialog box... 260
    Select VLUN Migration Session Target dialog box... 260
    Migrating regular storage group volumes... 261
    Migrating regular volumes... 261
    Migrating thin storage group volumes... 262

    Migrating thin volumes... 262
    Understanding Federated Tiered Storage... 263
    Viewing external storage... 263
    Virtualizing external LUNs... 264
    Virtualizing external LUNs... 265
    Removing external LUNs... 266
    Understanding storage templates... 267
    Creating storage templates... 267
    Viewing storage templates... 270
    Modifying storage templates... 271
    Deleting storage templates... 272
    Understanding FAST.X... 272
    Viewing external disks... 272
    Adding external disks... 274
    Removing external disks or External LUNs... 274
    Working with external disks... 275
    Start draining external disks... 275
    Stop draining external disks... 276
    Activating external disks... 276
    Viewing reservations... 277
    Viewing reservation details... 277
    Releasing reservations... 278
    Managing vVol... 278
    Viewing storage containers... 279
    Viewing storage container details... 280
    Creating storage containers... 281
    Modifying storage containers... 282
    Deleting storage containers... 282
    Viewing storage resources... 282
    Viewing storage resource details... 283
    Viewing storage resource related SRPs... 283
    Adding storage resources to storage containers... 284
    Modifying storage resources... 285
    Removing storage resources from storage containers... 285
    Viewing protocol endpoints... 285
    Viewing protocol endpoint details... 286
    Provisioning protocol endpoints to hosts... 286
    Deleting protocol endpoints... 287
    Configuring the VASA provider connection... 287
    Understanding compression... 287
    Viewing the SRP efficiency details... 288
    Viewing compressibility reports... 288
    Viewing a storage group's compression ratio... 289
    Viewing a volume's compression details... 289
    Viewing compression status using the VVol Dashboard... 290
    Viewing the compression efficiency dashboard... 290

Chapter 5: Host Management... 291
    Understanding Host Management... 292
    Creating hosts... 292
    Adding initiators to hosts... 293
    Adding initiator to host... 294
    Removing initiators from hosts... 294
    Modifying hosts... 294
    Renaming hosts/host groups... 295

    Setting host or host group port flags... 296
    Deleting hosts/host groups... 296
    Viewing hosts/host groups... 296
    Viewing host/host group details... 297
    Viewing host initiators... 298
    Host/Host group flags... 299
    Host I/O limits dialog box... 300
    Host Group filtering rules... 300
    Select Storage Resource Pool... 301
    Provisioning storage... 302
    Creating host groups... 302
    Adding hosts to host groups... 303
    Removing hosts from host groups... 303
    Modifying host groups... 304
    Renaming hosts/host groups... 304
    Setting host or host group port flags... 305
    Deleting hosts/host groups... 305
    Viewing hosts/host groups... 306
    Viewing host/host group details... 306
    Creating masking views... 307
    Renaming masking views... 308
    Deleting masking views... 308
    Viewing masking views... 308
    Viewing masking view connections... 309
    Viewing masking view details... 311
    Set Dynamic LUN Addresses... 311
    Setting initiator port flags... 311
    Setting initiator attributes... 312
    Renaming initiator aliases... 312
    Replacing initiators... 313
    Removing masking entries... 313
    Viewing initiators... 313
    Viewing initiator details... 314
    Viewing volumes associated with host initiator... 315
    Viewing details of a volume associated with initiator... 316
    Creating port groups... 316
    Deleting port groups... 317
    Adding ports to port groups... 317
    Removing ports from port groups... 318
    Renaming port groups... 319
    Viewing port groups... 319
    Viewing port groups details... 319
    Viewing ports in port group... 320
    Viewing port details... 321
    Volume Set Addressing... 321
    Viewing host IO limits... 321
    Managing storage for Mainframe... 322
    Provisioning storage for mainframe... 324
    Using the Provision Storage wizard for mainframe... 324
    Viewing splits... 327
    Viewing CU images... 328
    Viewing CU image details... 329
    Creating CKD volumes... 330
    Editing CKD volume capacities... 331
    Expanding CKD volumes... 331
    z/OS map from the CU image list view... 332

    z/OS unmap from the CU image list view... 333
    z/OS map from the volume list view... 333
    z/OS unmap from the volume list view... 334
    z/OS map from the Volumes (Storage Groups) list view... 335
    z/OS unmap from the Volumes (Storage Groups) list view... 335
    Adding an alias range to a CU image... 336
    Removing an alias range from a CU image... 337
    Setting the base address... 337
    Understanding All Flash Mixed FBA/CKD support... 337
    Mapping CKD volumes... 340
    Unmapping CKD volumes... 341
    Copying CU image mapping... 342
    Available Volume for EA/EF Mapping dialog box... 342
    Base Addresses in Use dialog box... 342
    Select SSID dialog box... 342
    Viewing CKD volumes in CU image... 342
    Creating PowerPath hosts... 343
    Viewing PowerPath hosts... 344
    Viewing PowerPath hosts details... 344
    Viewing PowerPath Host Virtual Machines... 345
    Viewing host cache adapters... 346

Chapter 6: Data Protection... 347
    Understanding Data Protection Management... 348
    Creating device groups... 348
    Adding volumes to device groups... 349
    Removing volumes from device groups... 350
    Setting consistency protection... 350
    Renaming device groups... 351
    Deleting device groups... 351
    Viewing device groups... 351
    Viewing device group details... 352
    Viewing volumes in device group... 353
    Understanding TimeFinder/Clone operations... 354
    Managing TimeFinder/Clone sessions... 354
    Creating clone copy sessions... 355
    Activating clone copy sessions... 357
    Recreating clone copy sessions... 358
    Creating clone snapshots... 359
    Modifying clone copy sessions... 360
    Restoring data from target volumes... 361
    Splitting clone volume pairs... 362
    Terminating clone copy sessions... 363
    Viewing clone pairs... 364
    Viewing clone pair details... 365
    Clone copy session options... 365
    Understanding TimeFinder/Snap operations... 367
    Managing TimeFinder/Snap sessions... 368
    Creating virtual copy sessions... 369
    Activating virtual copy sessions... 370
    Creating snapshots... 371
    Duplicating virtual copy sessions... 372
    Recreating virtual copy sessions... 373
    Restoring virtual copy sessions... 374
    Terminating virtual copy sessions... 375

    Viewing snap pair details... 376
    Viewing snap pairs... 376
    Snap session options... 377
    Set TimeFinder Snap Pairs dialog box... 378
    Managing TimeFinder/Mirror sessions... 378
    Creating Snapshots... 379
    Restoring BCV pairs... 380
    Splitting BCV pairs... 381
    Cancelling BCV pairs... 381
    Viewing mirror pairs... 382
    Viewing mirror pair details... 382
    TimeFinder/Mirror session options... 383
    Setting TimeFinder/Mirror pairs... 385
    Managing TimeFinder SnapVX... 385
    Creating snapshots... 387
    Modifying TimeFinder SnapVX snapshots... 389
    Linking to snapshots... 390
    Relinking to snapshots... 391
    Unlinking from snapshots... 392
    Restoring snapshots... 393
    Setting snapshots to automatically terminate... 393
    Setting "Secure" status on an existing snapshot... 394
    Terminating snapshots... 395
    Setting copy mode for snapshots... 396
    Viewing snapshots... 396
    Viewing snapshot details... 397
    Viewing snapshot links... 398
    Viewing snapshot link details... 399
    Viewing snapshot source volumes... 399
    Viewing snapshot source volume details... 400
    Viewing snapshot source volume linked volumes... 401
    RBAC roles for performing local and remote replication actions... 402
    Managing remote replication sessions... 402
    Creating SRDF connections... 403
    Creating SRDF pairs... 404
    Deleting SRDF pairs... 407
    Moving SRDF pairs... 408
    Setting SRDF mode... 409
    Viewing SRDF volume pairs... 410
    Viewing SRDF volume pair details... 412
    Viewing SRDF volume pair details... 414
    Viewing SRDF protected storage group pairs... 415
    Viewing SRDF protected storage group pair properties... 417
    Deleting SRDF pairs... 419
    Deleting SRDF pairs from the SRDF List Volumes View... 421
    Establishing SRDF pairs... 421
    Failing over... 422
    Failing back... 423
    Invalidating R1/R2 volumes... 424
    Making R1/R2 volumes ready... 425
    Making R1/R2 volumes not ready... 426
    Read/write disabling R2 volumes... 427
    Read/write enabling R1/R2 volumes... 428
    Resuming SRDF links... 429
    Read/write disabling R1/R2 volumes... 429
    Refreshing R1 or R2 volumes... 430

    Setting SRDF/A controls to prevent cache overflow... 431
    Setting consistency protection... 432
    Resetting original device identity... 432
    Restoring SRDF pairs... 433
    Setting bias location... 434
    Setting the SRDF GCM flag... 434
    Setting volume status... 435
    Splitting SRDF pairs... 436
    Suspending SRDF pairs... 436
    Swapping SRDF personalities... 438
    Updating R1 volumes... 438
    SRDF session options... 439
    SRDF session modes... 442
    RBAC roles for performing local and remote replication actions... 443
    Understanding Virtual Witness... 444
    Adding SRDF Virtual Witness instances... 445
    Removing SRDF Virtual Witness instances... 445
    Set state for SRDF Virtual Witness instances... 446
    Viewing SRDF Virtual Witness instances... 447
    Viewing SRDF Virtual Witnesses details... 447
    Creating SRDF/A DSE pools... 448
    Deleting SRDF/A DSE pools... 448
    Adding volumes to SRDF/A DSE pools... 449
    Removing volumes from SRDF/A DSE pools... 449
    Enabling all volumes in SRDF/A DSE pools... 449
    Disabling all volumes in SRDF/A DSE pools... 449
    Viewing SRDF/A DSE pools... 450
    Viewing SRDF DSE pool details... 450
    Creating TimeFinder/Snap pools... 451
    Adding volumes to TimeFinder/Snap pools... 452
    Enabling all volumes in TimeFinder/Snap pools... 452
    Disabling all volumes in TimeFinder/Snap pools... 452
    Deleting TimeFinder/Snap Pools... 452
    Removing volumes from TimeFinder/Snap pools... 453
    Viewing TimeFinder/Snap pools... 453
    Viewing TimeFinder/Snap pool details... 454
    Viewing SRDF group volumes... 454
    Viewing SRDF protected storage groups... 455
    Viewing related SRDF groups... 457
    Creating SRDF groups... 457
    Modifying SRDF groups... 459
    Setting SRDF/A DSE attributes... 461
    Setting SRDF/A group attributes... 461
    Setting SRDF/A pace attributes... 462
    Swapping SRDF groups... 463
    Setting consistency protection... 463
    Deleting SRDF groups... 464
    Viewing SRDF groups... 464
    Viewing SRDF group details... 465
    Viewing SRDF protected device groups... 467
    Resuming SRDF links... 468
    Viewing SRDF group volumes... 469
    SRDF/A control actions... 469
    RDFA flags... 470
    SRDF group modes... 471
    Understanding RecoverPoint... 471

    Tagging and untagging volumes for RecoverPoint (storage group level)... 472
    Tagging and untagging volumes for RecoverPoint (volume level)... 472
    Untagging RecoverPoint tagged volumes... 473
    Viewing RecoverPoint copies... 473
    Viewing RecoverPoint copy details... 474
    Viewing RecoverPoint sessions... 475
    Viewing RecoverPoint session details... 475
    Viewing RecoverPoint storage groups... 476
    Viewing RecoverPoint tagged volumes... 476
    Viewing RecoverPoint tagged volume details... 477
    Protecting storage groups using RecoverPoint... 479
    Viewing RecoverPoint volumes... 480
    Viewing RecoverPoint clusters... 481
    Viewing RecoverPoint cluster details... 482
    Viewing RecoverPoint splitters... 482
    Viewing RecoverPoint appliances... 483
    RecoverPoint systems... 483
    RecoverPoint consistency groups... 486
    RecoverPoint replication sets... 488
    RecoverPoint links... 489
    Creating Open Replicator copy sessions... 490
    Activating Open Replicator session... 491
    Recreating Open Replicator sessions... 492
    Restoring Open Replicator sessions... 492
    Renaming Open Replicator sessions... 492
    Removing Open Replicator sessions... 493
    Setting Open Replicator session background copy mode... 493
    Setting Open Replicator session donor update off... 493
    Setting Open Replicator session front end zero detection off... 493
    Setting Open Replicator session pace... 494
    Setting Open Replicator ceiling... 494
    Terminating Open Replicator sessions... 494
    Viewing Open Replicator sessions... 495
    Viewing Open Replicator session details... 496
    Viewing Open Replicator SAN View... 496
    Open Replicator session options... 497
    Open Replicator flags... 499
    Understanding non-disruptive migration (NDM)... 500
    Preparing a non-disruptive migration (NDM) session... 500
    Creating a non-disruptive migration (NDM) session... 502
    Viewing the non-disruptive migration (NDM) sessions list... 504
    Viewing migration details... 504
    Readying the migration target... 506
    Cutting over a migration session... 506
    Synchronizing data after non-disruptive migration (NDM) cutover... 507
    Committing a migration session... 507
    Cancelling a migration session... 508
    Recovering a migration session... 509
    Viewing migration environments... 509
    Setting up a migration environment... 510
    Removing a migration environment... 510
    Viewing the authorized users and groups details... 510
    Expanding remote volumes... 511

    Setting a device identity... 511
    Editing storage group volume details... 512
    Editing storage group details... 513
    Replication state severities... 513
    Managing space reclamation... 514
    Advanced Options dialog... 515

Chapter 7: Performance Management... 517
    Understanding Performance Management... 518
    Performance Dashboards... 518
    Viewing dashboards... 519
    Using default dashboards... 520
    Using the All Arrays overview dashboard... 521
    Creating a dashboard with charts... 522
    Editing a template dashboard... 523
    Copying a dashboard... 523
    Editing dashboards... 523
    Deleting dashboards... 523
    Running a report from the dashboard... 524
    Saving a dashboard as a template... 524
    Saving dashboard changes... 524
    Scheduling a report from the dashboard... 525
    Navigating to the Details view... 526
    Saving dashboards and charts... 526
    Utilization Threshold charts... 527
    Charts View... 527
    Customizing a chart... 528
    Customizing the tabbed Charts view... 530
    Editing charts... 532
    Copying a chart... 533
    Analyze view... 533
    Creating a dashboard from an Analyze view... 534
    Creating a template dashboard from an Analyze view... 535
    Changing the time range... 535
    Symmetrix systems view (Real Time)... 536
    FE Director view (Real Time)... 537
    BE Director (DA) view (Real Time)... 537
    External Director view (Real Time)... 538
    RDF Director view (Real Time)... 538
    Array systems view (Diagnostic)... 539
    Alerts view (Diagnostic)... 539
    FE Directors view (Diagnostic)... 540
    BE Directors (DA) view (Diagnostic)... 541
    External Directors view (Diagnostic)... 541
    RDF Directors view (Diagnostic)... 542
    IM Directors view (Diagnostic)... 542
    EDS Directors view (Diagnostic)... 543
    Cache Partitions view (Diagnostic)... 543
    Boards view (Diagnostic)... 543
    Disk Technologies view (Diagnostic)... 544
    Events view (Diagnostic)... 544
    Storage Groups view (Diagnostic)... 545
    Device Groups view (Diagnostic)... 546
    Databases view (Diagnostic)... 547
    Thin Pools view (Diagnostic)... 547

    Disk Groups view (Diagnostic)... 548
    External Disk Groups view (Diagnostic)... 549
    SRPs view (Diagnostic)... 549
    RDFA Groups view (Diagnostic)... 550
    RDFS Groups view (Diagnostic)... 550
    Snap Pools view (Diagnostic)... 551
    DSE Pools view (Diagnostic)... 551
    FAST VP Policies view (Diagnostic)... 552
    Disk Group Tiers view (Diagnostic)... 552
    Virtual Pool Tiers view (Diagnostic)... 553
    Storage Groups view (Diagnostic)... 554
    Hosts view (Diagnostic)... 555
    Initiators view (Diagnostic)... 555
    Masking Views view (Diagnostic)... 556
    Port Groups view (Diagnostic)... 556
    Host IO Limit by SG view (Diagnostic)... 556
    Host IO Limit by FE view (Diagnostic)... 557
    Symmetrix systems view (Historical)... 557
    Alerts view (Historical)... 558
    FE Directors view (Historical)... 559
    BE Directors (DA) view (Historical)... 559
    External Directors view (Historical)... 560
    RDF Directors view (Historical)... 560
    IM Directors view (Historical)... 561
    EDS Directors view (Historical)... 561
    Cache Partitions view (Historical)... 562
    Boards view (Historical)... 562
    Disk Technologies view (Historical)... 562
    Events view (Historical)... 563
    Storage Groups view (Historical)... 564
    Device Groups view (Historical)... 565
    Databases view (Historical)... 565
    Thin Pools view (Historical)... 566
    Disk Groups view (Historical)... 567
    External Disk Groups view (Historical)... 567
    SRPs view (Historical)... 568
    RDFA Groups view (Historical)... 568
    RDFS Groups view (Historical)... 569
    Snap Pools view (Historical)... 570
    DSE Pools view (Historical)... 570
    FAST VP Policies view (Historical)... 571
    Disk Group Tiers view (Historical)... 571
    Virtual Pool Tiers view (Historical)... 572
    Storage Groups view (Historical)... 572
    Hosts view (Historical)... 573
    Masking Views view (Historical)... 573
    Port Groups view (Historical)... 574
    Host IO Limit by SG view (Historical)... 574
    Host IO Limit by FE view (Historical)... 575
    Initiators view (Historical)... 575
    Heatmap... 575
    Viewing Heatmap Metrics Charts... 576
    Navigating from Heatmap to Analyze or Charts... 577
    Filtering heatmaps... 577
    Reports... 577
    Report operations... 578

    Creating performance reports... 579
    Copying performance reports... 580
    Creating queries using the Create Query wizard... 580
    Editing queries using the Edit Query wizard... 582
    Modifying performance reports... 583
    Deleting performance reports... 584
    Running performance reports... 584
    Scheduling performance reports... 584
    Cancelling a scheduled report... 585
    Copying performance reports... 586
    Viewing Real Time traces... 586
    Creating a Real Time trace... 587
    Modifying a Real Time Trace... 588
    Deleting a Real Time trace... 588
    Plan View... 588
    SRP projection dashboard... 589
    Thin pool projection dashboard... 590
    Viewing system registrations... 591
    Viewing system registration details... 592
    Registering storage systems... 592
    Removing a system registration... 593
    Viewing registered storage systems information... 593
    Changing registration details... 593
    Managing dashboard catalog... 594
    Viewing Performance databases... 594
    Viewing database details... 595
    Restoring a database... 597
    Backing up a database... 597
    Canceling a scheduled database backup... 598
    Editing a scheduled Performance database backup... 598
    Editing database retention settings... 599
    Deleting databases... 599
    Removing database backup files... 600
    Viewing Performance thresholds and alerts... 600
    Performance Threshold Alert operations... 601
    Creating a performance threshold alert... 601
    Editing a performance threshold alert... 602
    Deleting performance thresholds and alerts... 602
    Configuring SNMP notifications... 602
    Managing dashboard catalog... 603
    Configuring email notifications... 603
    About exporting and importing performance settings... 604
    Importing Performance settings... 604
    Exporting Performance settings... 605
    Exporting Performance Viewer settings... 605
    Metrics... 606
    BE Director (DA) metrics... 611
    BE Emulation metrics... 613
    BE Port metrics... 614
    Board metrics... 614
    Cache Partition metrics... 615
    DATA Volume metrics... 616
    Database metrics... 618
    Database by Pool metrics... 624
    Device Group metrics... 625
    Disk metrics... 630

    Disk Bucket metrics... 631
    Disk Group metrics... 632
    Disk Group tier metrics... 633
    Disk Technology metrics... 634
    DSE Pool metrics... 635
    DX Emulation metrics... 636
    DX Port metrics... 636
    EDS Director metrics... 637
    EDS Emulation metrics... 637
    External Director metrics... 638
    External Disk metrics... 640
    External Disk Group metrics... 641
    FAST VP Policy metrics... 642
    FE Director metrics... 643
    FE Emulation metrics... 650
    FE Port metrics... 650
    FE Port - FE metrics... 651
    FE Port - SG metrics... 651
    FICON Emulation metrics... 651
    FICON Emulation Threads metrics... 652
    FICON Port Threads metrics... 652
    Host metrics... 652
    Host IO Limit by FE metrics... 653
    Host IO Limit by SG metrics... 653
    IM Director metrics... 653
    IM Emulation metrics... 654
    Initiator metrics... 654
    Initiators by Port metrics... 654
    IP Interface metrics... 655
    iSCSI Target metrics... 656
    Masking View metrics... 656
    Metas metrics... 657
    Other - Pool Bound Volume metrics... 664
    Pool by Storage Group metrics... 669
    Port Group metrics... 670
    RDF Director metrics... 671
    RDF Emulation metrics... 673
    RDF Port metrics... 673
    RDF/A Group metrics... 673
    RDF/S Group metrics... 677
    SAVE Volume metrics... 682
    Snap Pool metrics... 688
    Spare Disk metrics... 688
    SRP metrics... 690
    Storage Group metrics... 692
    Storage Group by Pool metrics... 699
    Storage Group by Tier metrics... 700
    Thin Pool metrics... 701
    Thin Volume metrics... 703
    Tier by Storage Group metrics... 709
    Virtual Pool Tier metrics... 710
    Volume metrics... 712
    Viewing and managing metrics... 718
    Editing metrics... 719
    Metrics... 719
    Setting the time range for viewing data... 831

    Creating a template dashboard from an Analyze view... 831
    Filtering performance data... 832
    Filtering object lists... 832
    Filtering heatmaps... 832

Chapter 8: Database Storage Analyzer... 833
    Introduction... 834
    Database collection and retention policy... 834
    Mapping files... 835
    Viewing Databases page... 835
    Viewing Database Administration page... 837
    Registering a monitored environment... 839
    Adding an Oracle database... 839
    Registering Monitored Environment - Advanced Options... 840
    Adding monitored MS SQL server instances... 841
    Editing monitored Oracle databases... 843
    Editing monitored MS SQL server instances... 844
    Starting statistics collection... 845
    Stopping statistics collection... 845
    Running device mapping... 845
    Schedule device mapping... 845
    Removing monitored environment instance... 846
    Viewing the Performance Page... 846
    Viewing the Analytics Page... 849
    Viewing analytics details... 853
    Viewing database storage details... 853
    Viewing database details... 854
    Adding hints... 854
    Viewing hints... 855
    Editing hints... 856
    Enabling hints... 857
    Disabling hints... 857
    Removing hints... 858
    Viewing hint logs... 858
    Hint operations... 859

Chapter 9: VMware... 861
    Understanding Unisphere support for VMware... 862
    Viewing vCenters and ESXi information... 862
    Registering vCenter/ESXi... 863
    Editing vCenter/ESXi... 864
    Unregistering vCenter/ESXi... 864
    Rediscover vCenter/ESXi... 865
    Viewing ESXi server details... 865
    Viewing ESXi server masking views... 866
    Viewing ESXi server performance details... 866
    Viewing ESXi server virtual machines details... 867
    Viewing ESXi server virtual machine disks... 868

Chapter 10: System Management... 871
    Viewing Storage System details... 872
    Setting system attributes... 874
    Using the Emulation Management wizard... 877
    Setting CPU I/O resource distribution... 877

    Setting logging level preferences... 878
    Understanding eNAS... 878
    Discovering eNAS control stations... 879
    Managing File storage... 879
    Provisioning storage for file... 880
    Launching Unisphere for VNX... 882
    Managing file storage groups... 882
    Managing file masking views... 883
    Viewing file systems... 883
    Viewing file system details... 884
    Viewing file system storage pools... 885
    Viewing file system storage pool details... 885
    Manage file storage alerts... 887
    Viewing the system audit log... 889
    Viewing Symmetrix audit log details... 890
    Viewing system hardware... 891
    Viewing available ports... 891
    Viewing back-end directors... 892
    Viewing back-end director details... 893
    Viewing external directors... 893
    Viewing external director details... 894
    Viewing system front-end directors... 895
    Viewing system front end director details... 897
    Viewing RDF directors... 899
    Viewing RDF director details... 901
    Viewing RDF director SRDF groups... 903
    Viewing IM directors... 905
    Viewing EDS directors... 906
    Viewing failed drives... 907
    Viewing mapped front-end volumes... 907
    Enginuity Warning dialog box... 908
    Converting directors... 909
    Associating directors with ports... 910
    Setting director port attributes... 910
    Associating directors and ports... 912
    Disassociating directors and ports... 913
    Enabling and disabling director ports... 913
    Performing system health checks... 913
    Naming storage systems... 914
    Replacing failed drives... 914
    Managing jobs... 918
    Making configuration changes safely... 918
    Understanding task persistence... 919
    Previewing jobs... 920
    Scheduling jobs... 920
    Running jobs... 921
    Rescheduling jobs... 921
    Modifying jobs... 922
    Reordering tasks within a job... 922
    Grouping jobs... 922
    Un-grouping jobs... 922
    Stopping jobs... 923
    Deleting jobs... 923
    Viewing the job list... 923
    Viewing job details... 924
    Understanding licenses... 925

Installing licenses... 926
Removing host-based licenses... 927
Viewing Symmetrix entitlements... 927
Viewing host-based licenses... 928
Viewing license usage... 928
Viewing license file... 929
Viewing license file details... 930
Understanding access controls... 931
Opening access controls... 931
Creating access groups... 932
Adding access ID to access groups... 932
Removing access IDs from access groups... 933
Deleting access groups... 933
Viewing access groups... 934
Viewing access group details... 934
Creating access pools... 935
Modifying access pools... 935
Deleting access pools... 936
Viewing access pools... 936
Viewing access pool details... 937
Creating access control entries... 937
Deleting access control entries... 938
Viewing access control entries... 938
Viewing access control entry details... 939
Viewing access IDs... 939
Viewing access pool volumes... 940
Viewing access types... 940
Access types... 941
Modifying access types... 942
Understanding dynamic cache partitioning... 943
Enabling/Disabling dynamic cache partitioning... 943
Creating dynamic cache partitions... 943
Modifying dynamic cache partitions... 944
Assigning dynamic cache partitions... 945
Deleting dynamic cache partitions... 945
Running in analyze mode... 946
Viewing dynamic cache partitions... 947
Viewing dynamic cache partition details... 948
Viewing volumes assigned to dynamic cache partitions... 950
System management - iSCSI... 951
Creating an iSCSI target... 952
Modifying an iSCSI target... 953
Creating an IP interface... 954
Editing an IP interface... 954
Adding an IP route... 955
Deleting an iSCSI target... 955
Deleting an IP interface... 955
Removing an IP route... 956
Attaching an IP interface to an iSCSI target... 956
Attaching an iSCSI target to an IP interface... 957
Detaching an IP interface from an iSCSI target... 958
Disabling an iSCSI target... 958
Enabling an iSCSI target... 959
Setting port flags... 959
Viewing the iSCSI directors list... 960
Viewing the iSCSI director details... 961

Viewing IP interfaces list... 961
Viewing IP interfaces details... 962
Viewing iSCSI targets list... 964
Viewing iSCSI target details... 965
Viewing IP routes list... 966
Viewing the IP routes details... 966
Viewing iSCSI ports list... 967
Viewing iSCSI ports details... 968

TABLES
1 Service level compliance rules... 63
2 User roles and associated permissions... 81
3 Permissions for Local Replication, Remote Replication and Device Management roles... 82
4 Host/Host group flags... 299
5 TimeFinder/Clone session options... 366
6 TimeFinder/Snap session options... 377
7 TimeFinder/Mirror session options... 383
8 Array metrics... 606
9 BE director (DA) metrics... 611
10 BE emulation metrics... 614
11 BE port metrics... 614
12 Board metrics... 614
13 Cache partition metrics... 615
14 DATA volume metrics... 617
15 Database metrics... 618
16 Database by pool metrics... 624
17 Device group metrics... 625
18 Disk metrics... 630
19 Disk bucket metrics... 631
20 Disk group metrics... 632
21 Disk group tier metrics... 633
22 Disk technology metrics... 634
23 DSE pool metrics... 636
24 DX emulation metrics... 636
25 DX port metrics... 637
26 EDS director metrics... 637
27 EDS director metrics... 637
28 External director metrics... 638
29 External disk metrics... 640
30 External disk group metrics... 641
31 FAST VP policy metrics... 642
32 FE director metrics... 643
33 FE director metrics... 646
34 FE emulation metrics... 650
35 FE port metrics... 650
36 FE port (FE) metrics... 651
37 FE port (SG) metrics... 651
38 FICON emulation metrics... 651
39 FICON emulation threads metrics... 652
40 FICON port threads metrics... 652
41 Host metrics... 652
42 Host IO limit (by FE) metrics... 653
43 Host IO limit (by SG) metrics... 653
44 IM director metrics... 653
45 IM emulations metrics... 654
46 Initiator metrics... 654
47 Initiators (by port) metrics... 655
48 IP interface metrics... 655
49 iSCSI target metrics... 656
50 Masking view metrics... 656
51 Metas metrics... 657
52 Pool-bound volumes metrics... 664

53 Pool by storage group metrics... 669
54 Port group metrics... 670
55 RDF director metrics... 671
56 RDF emulation metrics... 673
57 RDF port metrics... 673
58 RDF/A group metrics... 673
59 RDF/S group metrics... 677
60 SAVE volume metrics... 682
61 Snap pool metrics... 688
62 Spare disk metrics... 688
63 SRP metrics... 690
64 Storage group metrics... 693
65 Storage group (by pool) metrics... 700
66 Storage group (by tier) metrics... 700
67 Thin pool metrics... 701
68 Thin volume metrics... 704
69 Tier (by storage group) metrics... 709
70 Virtual pool tier metrics... 710
71 Volume metrics... 712
72 Array metrics... 719
73 BE director (DA) metrics... 725
74 BE emulation metrics... 727
75 BE port metrics... 727
76 Board metrics... 728
77 Cache partition metrics... 728
78 DATA volume metrics... 730
79 Database metrics... 731
80 Database by pool metrics... 738
81 Device group metrics... 739
82 Disk metrics... 744
83 Disk bucket metrics... 745
84 Disk group metrics... 745
85 Disk group tier metrics... 746
86 Disk technology metrics... 748
87 DSE pool metrics... 749
88 DX emulation metrics... 750
89 DX port metrics... 750
90 EDS director metrics... 750
91 EDS director metrics... 751
92 External director metrics... 751
93 External disk metrics... 753
94 External disk group metrics... 755
95 FAST VP policy metrics... 756
96 FE director metrics... 756
97 FE director metrics... 759
98 FE emulation metrics... 763
99 FE port metrics... 763
100 FE port (FE) metrics... 764
101 FE port (SG) metrics... 764
102 FICON emulation metrics... 765
103 FICON emulation threads metrics... 765
104 FICON port threads metrics... 765
105 Host metrics... 765
106 Host IO limit (by FE) metrics... 766
107 Host IO limit (by SG) metrics... 766
108 IM director metrics... 767

109 IM emulations metrics... 767
110 Initiator metrics... 767
111 Initiators (by port) metrics... 768
112 IP interface metrics... 768
113 iSCSI target metrics... 769
114 Masking view metrics... 769
115 Metas metrics... 770
116 Pool-bound volumes metrics... 777
117 Pool by storage group metrics... 782
118 Port group metrics... 783
119 RDF director metrics... 784
120 RDF emulation metrics... 786
121 RDF port metrics... 786
122 RDF/A group metrics... 786
123 RDF/S group metrics... 790
124 SAVE volume metrics... 795
125 Snap pool metrics... 801
126 Spare disk metrics... 801
127 SRP metrics... 803
128 Storage group metrics... 805
129 Storage group (by pool) metrics... 812
130 Storage group (by tier) metrics... 813
131 Thin pool metrics... 814
132 Thin volume metrics... 816
133 Tier (by storage group) metrics... 822
134 Virtual pool tier metrics... 823
135 Volume metrics... 825
136 Task status before and after server shutdown... 919
137 Access types... 941


CHAPTER 1 Introduction
l Unisphere Online Help... 26
l Capacity information... 29

Unisphere Online Help
Unisphere is an HTML5 web-based application that allows you to configure and manage PowerMax, VMAX All Flash, and VMAX storage systems. The term Unisphere incorporates "Unisphere for PowerMax" for the management of PowerMax and All Flash storage systems running PowerMaxOS 5978, and "Unisphere for VMAX" for the management of VMAX All Flash and VMAX storage systems running HYPERMAX OS 5977 and Enginuity OS 5876.
An HTML5-based Unisphere provides a number of advantages:
l improved security
l modern user interface
l reduced application response times
Unisphere supports the following tasks, which are available from the items on the side panel and blue title bar.
The side panel has the following items when the All Systems view is selected:
l HOME - View the system dashboard of all storage systems being managed.
l PERFORMANCE - Monitors and manages storage system performance data (Dashboards, Charts, Analyze, Heatmap, Reports, Plan, Real Time traces, and Performance Database management). Refer to Understanding Performance Management on page 518 for more information.
l VMWARE - Views all the storage-related objects relevant to an ESXi server and also provides the ability to help troubleshoot storage performance issues related to the ESXi server. Refer to Understanding Unisphere support for VMware on page 862 for more information.
l DATABASES - Monitors and troubleshoots database performance issues. Refer to Introduction on page 834 for more information.
l EVENTS - Includes Alerts and Job List.
l SUPPORT - Displays support information.
Click the settings icon to set preferences. Click the panel toggle to hide the side panel, and click it again to display the side panel. Click HOME to return to the All Systems view.
The side panel has the following items when the storage system specific view is selected:
l HOME - View the system dashboard of all storage systems being managed.
l DASHBOARD - View the following dashboards for a selected storage system: Capacity and Performance, System Health, Storage Group Compliance, Capacity, and Replication.
l STORAGE - Manage storage (storage groups, service levels, templates, SRPs, volumes, external storage, VVols, FAST policies, tiers, thin pools, disk groups and VLUN migration). Refer to Understanding Storage Management on page 90 for more information.

l HOSTS - Manage hosts (hosts, masking views, port groups, initiators, XtremSW Cache Adapters, PowerPath Hosts, Mainframe, and CU images). Refer to Understanding Host Management on page 292 for more information.
l DATA PROTECTION - Manage data protection (storage groups, device groups, SRDF groups, migrations, virtual witness, open replicator, SRDF/A DSE pools, TimeFinder SNAP pools, and RecoverPoint systems). Refer to Understanding Data Protection Management on page 348 for more information.
l PERFORMANCE - Monitors and manages storage system performance data (Dashboards, Charts, Analyze, Heatmap, Reports, Plan, Real Time traces, and Performance Database management). Refer to Understanding Performance Management on page 518 for more information.
l SYSTEM - Includes Hardware, Symmetrix Properties, File (eNAS), and iSCSI.
l EVENTS - Includes Alerts, Job List, and Audit log.
l SUPPORT - Displays support information.
New and modified features/functionality in 9.0.0
l HTML5 support - An HTML5-based Unisphere provides a number of advantages:
n improved security
n modern user interface look and feel - use of browser functionality, bookmarks for links, back and forward buttons. Facilitates enhanced collaboration as you can share links to system views with colleagues.
n reduced application response times
n aligns with other Dell EMC products
l System Health Score - The System Health dashboard provides a single place from which you can quickly determine the health of the system. The System Health panel displays values for the following high level health or performance metrics: Configuration, Capacity, System Utilization, Storage Group Response Time and Service Level Compliance. It also displays an overall health score based on the lowest health score out of the five metrics. These five categories are for storage systems running HYPERMAX OS 5977 or higher. For storage systems running Enginuity OS 5876, the health score is based on four categories: Configuration, System Utilization, Capacity and storage group (SG) Response Time. The health score is calculated every five minutes. The overall value is always calculated from all metric values. If a health score category is seen as stale or unknown, then the overall health score is not updated. The previously calculated overall health score is displayed but its value is denoted as stale by setting the menu item to grey (refer to Understanding the system health score on page 37).
l Role Based Access Control (RBAC) - This feature provides a set of roles with more granular access that can be assigned to users in order to limit what resources can be accessed and what functions a user can perform on those resources. Additional roles are Device Management, Local Replication and Remote Replication, at the entire array or a storage group subset. This feature also supports, for tracking purposes, a full audit log of users and actions performed (refer to Adding authorization rules on page 72).
l Service Levels (Performance QoS) - Unisphere supports all service levels (Diamond, Platinum, Gold, Silver, Bronze and Optimized) for FBA SRPs containing internal disk groups on storage systems running PowerMaxOS 5978 and above. There are no changes to service level restrictions for CKD SRPs or SRPs containing external disk groups (refer to Viewing service levels on page 152).
l Compliance - A Compliance tab has been added to the Storage Group detailed view page (refer to Viewing Storage Group Compliance view on page 94).

l Noisy Neighbors - The Noisy Neighbors feature displays the following performance data for a selected storage group:
n FE Directors details - Name, % busy, and queue depth utilization.
n FE Port details - Name, % busy, and host I/Os per second.
n Related SGs - Name, response time, host I/Os per second, and host MBs per second.
(refer to Viewing ESXi server performance details on page 866).
l Data Reduction - Data is reduced using data compression and de-duplication (de-duplication applies for storage systems running PowerMaxOS 5978 or higher).
l Real Time Data Collection - This feature provides the ability to troubleshoot at a more granular level for a set number of storage groups, for a limited set of metrics, at a 30 second interval. This is limited to one array at a time, a maximum of 5 SGs at a time and a certain number of KPI metrics. The metrics reported on are Response Time, Host I/Os Per Sec, Host MBs Per Sec, Host Reads Per Sec, and Host Writes Per Sec.
l SRDF and Metro topology view - The SRDF and Metro topology view visually describes the layout of the SRDF connectivity of the selected storage system in Unisphere.
l Storage Templates - Using the configuration and performance characteristics of an existing storage group as a starting point, you can create templates that will pre-populate fields in the provisioning wizard and create a more realistic performance reservation in your future provisioning requests (refer to Creating storage templates on page 267).
l VMware integration - Unisphere support for VMware provides the storage admin access to all the storage-related objects relevant to an ESXi server and also provides the ability to help troubleshoot storage performance issues related to the ESXi server. You can, as a read only user, discover at the vCenter level as well as discovering an individual ESXi server. If a vCenter is discovered, then all ESXi servers under that vCenter are discovered.
All ESXi servers that do not have local storage on the Unisphere performing the discovery are filtered out. Once VMware information is added by a user, all other users of Unisphere are able to access this information. The minimum vCenter version supported is 5.5. The VMware feature supports a maximum of 75 ESXi servers and 2000 VMs per Unisphere for PowerMax install (refer to Understanding Unisphere support for VMware on page 862).
l Integration of Database Storage Analyzer into Unisphere - DSA for Oracle and SQL is now fully integrated with Unisphere; no separate login or page launch is required. The DB mapping procedure has been streamlined to make it more user friendly (refer to Introduction on page 834).
l Silent Install - This supports installations of Unisphere by invoking an automated script which handles the various steps involved. Included is a response file containing default values that the user can edit. Where there is not enough space or memory on a host, the install is aborted.
Using the help system
Clicking the help icon on the navigation bar results in the display of three options. Clicking the top option displays the Unisphere help home page. Clicking the middle option displays the Unisphere help for that screen (context-sensitive help). Clicking the bottom option (About) displays the Unisphere version number.
Finding information:

l Using the Contents tab—Click the book icon to expand the table of contents and display help topics.
l Using the Search tab—Click the Search tab in the navigation pane. Type a search word or phrase and a list of topics that contain the word or phrase displays in the navigation panel. Click on the name of the topic to display it in the View panel.
Your comments—Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your opinions to: content feedback.
Supporting documentation
Information on the installation of Unisphere for PowerMax can be found in the Unisphere for PowerMax Installation Guide located at the Dell EMC support website or the technical documentation page.
For information specific to this Unisphere product release, refer to the Unisphere for PowerMax Release Notes located at the Dell EMC support website or the technical documentation page.
Capacity information
Storage capacity can be measured using two different systems – base 2 (binary) and base 10 (decimal). Organizations such as the International System of Units (SI) recommend using the base 10 measurement to describe storage capacity. In base 10 notation, one megabyte (MB) is equal to 1 million bytes, and one gigabyte (GB) is equal to 1 billion bytes. Operating systems generally measure storage capacity using the base 2 measurement system. Unisphere and Solutions Enabler use the base 2 measurement system to display storage capacity along with the TB notation as it is more universally understood. In base 2 notation, one megabyte (MB) is equal to 1,048,576 bytes and one gigabyte (GB) is equal to 1,073,741,824 bytes.
Name      Abbreviation  Binary Power  Binary Value (in Decimal)  Decimal Power  Decimal Equivalent
kilobyte  KB            2^10          1,024                      10^3           1,000
megabyte  MB            2^20          1,048,576                  10^6           1,000,000
gigabyte  GB            2^30          1,073,741,824              10^9           1,000,000,000
terabyte  TB            2^40          1,099,511,627,776          10^12          1,000,000,000,000
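To make the arithmetic concrete, the table above can be reproduced with a short, generic Python sketch (the function and table names here are illustrative only; they are not part of Unisphere or Solutions Enabler):

```python
# Base 2 (binary) vs base 10 (decimal) capacity units.
# Unisphere and Solutions Enabler report capacity in base 2 units.

BINARY_UNITS = {"KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}
DECIMAL_UNITS = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}

def to_bytes(value, unit, base2=True):
    """Convert a capacity value in the given unit to bytes."""
    table = BINARY_UNITS if base2 else DECIMAL_UNITS
    return int(value * table[unit])

# One binary TB is roughly 10% larger than one decimal TB:
print(to_bytes(1, "TB"))               # 1099511627776
print(to_bytes(1, "TB", base2=False))  # 1000000000000
```

This difference is why a capacity figure shown by Unisphere (base 2) can appear smaller than the same capacity quoted by a drive vendor in base 10 units.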


CHAPTER 2 Getting Started
l Operating as the initial setup user... 32
l Viewing Home Dashboard view - All Storage Systems... 32
l Viewing Home Dashboard view - Specific Storage System... 34
l Viewing system performance view... 34
l Viewing the System Health Dashboard... 35
l Understanding the system health score... 37
l Viewing Storage Group Compliance view... 39
l Viewing Capacity dashboard view... 40
l Viewing Replication dashboard... 41
l Discovering storage systems... 43
l Refreshing storage system information... 44
l Viewing product version information... 44
l Searching for storage objects... 45
l Modifying server logging levels... 46
l Exiting the console... 46
l Getting help... 46

Operating as the initial setup user
When Unisphere is first installed, there is a single user called the Initial Setup User (ISU). This user can perform administrative tasks only on storage systems that do not have defined roles (authorization rules). Once an Administrator or SecurityAdmin is assigned to a storage system, the ISU will no longer be able to access or even see the system from the Unisphere console. Therefore, it is recommended that users not operate in this role for too long.
When logging in to Unisphere as the Initial Setup User (ISU), the "Initial setup user warning" message is displayed. It informs you that you can only access the listed storage systems because they do not have defined authorization rules. Once rules are defined for the storage systems, you will no longer be able to access or view the storage systems as the ISU. To continue to access/view a storage system while operating as the ISU, select the corresponding Assign Admin role to ISU option and click OK.
The main tasks of an ISU are:
l Creating local users on page 76
l Adding authorization rules on page 72
For more information on operating as the ISU, refer to the Unisphere for PowerMax Installation Guide.
Viewing Home Dashboard view - All Storage Systems
Before you begin
The user requires a minimum of Monitor permissions to perform this task. For an overview of Unisphere functionality, see Unisphere Online Help on page 26.
The Home Dashboard view (the default mode on login) provides an overall view of the status of all of the storage systems managed by Unisphere.
The following panels are displayed:
l Compliance
l Capacity
l Health score - see Understanding the system health score on page 37
l Throughput
l IOPS
l Efficiency
You can sort the view by the following:
l Compliance
l Capacity
l Health score - see Understanding the system health score on page 37
l Throughput
l IOPS
l Efficiency
To view the home dashboard view:

Procedure
1. From the main menu, click Unisphere for PowerMax.
The home dashboard view is displayed.
2. View the following parameters displayed in each storage system panel. Depending on the metric selected, some of the following items are displayed:
l Storage system ID - The serial number of the storage system.
l Storage system model - The model number of the storage system.
l The version of microcode on the storage system.
l Data chart - The information displayed in the chart depends on the selected metric.
l Capacity - Percentage of currently allocated capacity for the storage system.
l Compliance - Service level compliance data in the form of storage group counts for each compliance state (Critical, Marginal, Stable), as well as the total storage group count and the number of storage groups with no service level assigned.
l Performance - Current performance health score.
l Throughput - Current throughput for the system, in MB/second.
l IOPS - Current IOPS for the system.
l Efficiency - The overall efficiency ratio for the array. It represents the ratio of the sum of all TDEVs plus snapshot sizes (calculated based on the 128K track size) and the physical Used Storage (calculated based on the compressed pool track size).
3. (Optional) To view the alerts, click on any storage system panel and click VIEW ALERTS. The color reflects the highest severity alert for the associated storage system. You can also view the job list and navigate to the compliance view for the storage system by clicking the related icon on any storage system panel.
Note: you can view alerts for remote storage systems and storage systems that are not registered to collect performance data.
4. (Optional) Click the list view icon to view the system view in list format.
5. (Optional) Click the card view icon to view the system view in card view format.
6. (Optional) From a panel view or list view, click the storage system identity of the system you want to view in more detail.
7.
(Optional) To navigate to other areas, click on any of the following from the left hand panel:
l HOME
l PERFORMANCE
l VMWARE

l DATABASES
l EVENTS
l SUPPORT
Viewing Home Dashboard view - Specific Storage System
Before you begin
The user requires a minimum of Monitor permissions to perform this task. For an overview of Unisphere functionality, see Unisphere Online Help on page 26.
The Home Dashboard view for a specific storage system provides a view of the status of a specific storage system managed by Unisphere. The following panels are displayed:
l PERFORMANCE
l SYSTEM HEALTH
l SG COMPLIANCE
l CAPACITY
l REPLICATION
To view the home dashboard view:
Procedure
1. From the main menu, click Unisphere for PowerMax.
The home dashboard view for all storage systems is displayed.
2. Select a storage system.
The system performance dashboard is displayed by default (see Viewing system performance view on page 34).
3. (Optional) To navigate to other areas, click on any of the following from the left hand panel:
l HOME
l DASHBOARD
l STORAGE
l HOSTS
l DATA PROTECTION
l PERFORMANCE
l SYSTEM
l EVENTS
l SUPPORT
Viewing system performance view
Before you begin
The user requires a minimum of Monitor permissions to perform this task.

To view the system performance view:
Procedure
1. From the main menu, click Unisphere for PowerMax.
2. Select a storage system.
The home dashboard view for the selected storage system is displayed. The system performance dashboard is displayed by default.
3. View the capacity and performance data for the selected storage system. The following items are displayed:
l A Capacity panel displaying the following:
n A graphical representation of the system's subscribed and usable capacity (used = blue and free = grey) and the percentage used for both.
n The percentage of subscribed usable capacity.
n The overall efficiency ratio.
l A Performance panel displaying the following graphs over a four hour, one week, or two week period:
n Host IOs per sec in terms of read and write operations over time
n Latency in terms of read and write operations over time
n Throughput in terms of read and write operations over time
l A Capacity Trend panel displaying usable capacity and subscribed capacity in terabytes.
l The following control is available:
n VIEW PERFORMANCE DASHBOARD — Viewing dashboards on page 519
4. (Optional) To navigate to other areas, click on any of the following from the left hand panel:
l HOME
l DASHBOARD
l STORAGE
l HOSTS
l DATA PROTECTION
l PERFORMANCE
l SYSTEM
l EVENTS
l SUPPORT
Viewing the System Health Dashboard
The System Health dashboard provides a single place from which you can quickly determine the health of the system. You can also access hardware information. The System Health section displays values for the following five high level health or performance metrics: System Utilization, Configuration, Capacity, SG Response Time and Service Level Compliance. It also displays an overall health score based on these

five categories. The overall system health score is based on the lowest health score out of the categories System Utilization, Configuration, Capacity, SG Response Time and Service Level Compliance. See Understanding the system health score on page 37 for details on how these scores are calculated. These five categories are for systems running HYPERMAX OS 5977 or later. For systems running Enginuity 5876, the health score is based on the Hardware, Configuration, Capacity and SG Response Time scores. The health score is calculated every five minutes.
Note: The health score values for Hardware, SG Response and Service Level Compliance are not real-time; they are based on values within the last hour.
The Hardware section shows the director count for Front End, Back End, and SRDF directors as well as the available port count on the system. An alert status is indicated through a colored bell beside the title of the highest level alert in that category. If no alerts are present, then a green tick is displayed.
To view the system health dashboard:
Procedure
1. Select the storage system.
2. Optional: Hover over SYSTEM HEALTH to view the system health summary for the storage system.
3. Click SYSTEM HEALTH and view the following items:
l Introducing your Health Score - Understanding the system health score on page 37
l Health Score panel - The current score and the 30 day trend are displayed for the storage system health parameters - Total Issues, Configuration, Capacity, System Utilization, Service Level Compliance, and SG Response Time.
The following views are available by clicking on the associated panel item:
n VIEW ALERTS — Viewing alerts on page 52
n VIEW PERFORMANCE — Using default dashboards on page 520
n VIEW STORAGE GROUPS — Viewing storage groups on page 135
l Hardware panel - The storage system hardware is displayed in terms of the number of front end (FE) directors, SRDF directors, back end (BE) directors, available ports and cache partitions.
The following views are available by clicking on the associated panel item:
n Front End — Viewing system front-end directors on page 895
n RDF — Viewing RDF directors on page 899
n Back End — Viewing back-end directors on page 892
n Available Ports — Viewing available ports on page 891
l The following controls are available:
n VIEW SYMMETRIX PROPERTIES — Viewing Storage System details on page 872
n MANAGE EMULATION — Using the Emulation Management wizard on page 877 (For storage systems running HYPERMAX OS 5977 or higher)
n VIEW RESERVATIONS — Viewing reservations on page 277
n VIEW OTHER HARDWARE — Viewing dynamic cache partitions on page 947

n RUN HEALTH CHECK — Performing system health checks on page 913
n RUN DISK REPLACEMENT — Replacing failed drives on page 914 (For storage systems running Enginuity OS 5876)
Understanding the system health score
The System Health dashboard provides a single place from which you can quickly determine the health of the system. The System Health panel displays values for the following high level health or performance metrics: Configuration, Capacity, System Utilization, Storage Group Response Time and Service Level Compliance. It also displays an overall health score based on the lowest health score out of the five metrics. These five categories are for storage systems running HYPERMAX OS 5977 or higher. For storage systems running Enginuity OS 5876, the health score is based on four categories: Configuration, System Utilization, Capacity and storage group (SG) Response Time. The health score is calculated every five minutes. The overall value is always calculated from all metric values. If a health score category is seen as stale or unknown, then the overall health score is not updated. The previously calculated overall health score is displayed but its value is denoted as stale by setting the menu item to grey.
The Configuration health score is based on storage system hardware alerts in the system, like director and port alerts. The System Utilization, Capacity, storage group response time and service level compliance scores are based on performance information. The Configuration health score is calculated every five minutes and is based on the director and port alerts in the system at the time of calculation. Unisphere does not support alert correlation or auto clearing, so you are required to manually delete alerts that have been dealt with or are no longer relevant, as these will impact the hardware health score until such time as they are removed from Unisphere.
The Configuration health score is calculated as follows:
l Director out of service - 40 points reduced
l Director Offline - 20 points reduced
l Port Offline - 10 points reduced
The Capacity health score is based on the percentage of usable capacity that is used. Capacity levels are checked at the Array, SRP (only on storage systems running HYPERMAX OS 5977 or higher) and Thin Pool level (only on storage systems running Enginuity OS 5876). The capacity health scores are calculated as follows:
l Critical level: 95% - 30 points reduced
l Warning level: 80% - 10 points reduced
The System Utilization health score is calculated using the threshold limits of the following categories and metrics:
l FE_DIR: PERCENT_BUSY, QUEUE_DEPTH_UTILIZATION
l FE_PORT: PERCENT_BUSY
l BE_PORT_DA: PERCENT_BUSY
l BE_DIR_DA: PERCENT_BUSY
l RDF_PORT: PERCENT_BUSY
l RDF_DIR: PERCENT_BUSY

- BE_PORT_DX: PERCENT_BUSY
- BE_DIR_DX: PERCENT_BUSY
- IM_DIR: PERCENT_BUSY
- EDS_DIR: PERCENT_BUSY
- BOARD: UTILIZATION
- CP: WP
- DISK: PERCENT_BUSY

For each instance and metric of a particular category, the threshold information is looked up. If no threshold is set, the default thresholds are used. The default thresholds are:
- FE Port - Percent Busy - Critical 70, Warning 50
- FE Director - Percent Busy - Critical 70, Warning 50; Queue Depth Utilization - Critical 75, Warning 60
- BE Port DA - Percent Busy - Critical 70, Warning 55
- BE Director DA - Percent Busy - Critical 70, Warning 55
- RDF Port - Percent Busy - Critical 70, Warning 50
- RDF Director - Percent Busy - Critical 70, Warning 50
- BE Port DX - Percent Busy - Critical 70, Warning 55
- BE Director DX - Percent Busy - Critical 70, Warning 55
- IM Director - Percent Busy - Critical 70, Warning 55
- EDS Director - Percent Busy - Critical 70, Warning 55
- Board - Utilization - Critical 70, Warning 60
- Cache Partition - Percent Busy - Critical 75, Warning 55
- Disk - Percent Busy - Critical 70, Warning 55

The System Utilization score is calculated as follows:
- Critical level: - 30 points reduced
- Warning level: - 10 points reduced

Storage systems running HYPERMAX OS 5977 or higher: The Service Level Compliance health score is based on the WLP workload state. A reduction from the health score is made when storage groups that have a service level defined are not meeting the service level requirements. The Service Level Compliance score is calculated as follows:
- Underperforming: - 30 points reduced
- Marginal performing: - 10 points reduced

Storage systems running Enginuity OS 5876: The Storage Group Response health score is based on software category health scores.
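Taken together, the deduction rules described so far amount to a simple calculation: each category starts at 100, documented deductions are subtracted, and the overall score is the lowest category score. The sketch below is illustrative only — Unisphere does not expose the health score as an API, and the alert-name strings are hypothetical labels for the documented conditions:

```python
# Illustrative sketch of the documented health-score deductions.
# The alert names below are hypothetical labels, not Unisphere identifiers.
CONFIG_DEDUCTIONS = {
    "director_out_of_service": 40,
    "director_offline": 20,
    "port_offline": 10,
}

def configuration_score(open_alerts):
    """Start at 100 and subtract the documented deduction per open alert."""
    score = 100
    for alert in open_alerts:
        score -= CONFIG_DEDUCTIONS.get(alert, 0)
    return max(score, 0)

def capacity_score(percent_used):
    """95%+ used is critical (-30 points); 80%+ used is a warning (-10)."""
    if percent_used >= 95:
        return 100 - 30
    if percent_used >= 80:
        return 100 - 10
    return 100

def overall_health(category_scores):
    """The overall health score is the lowest of the category scores."""
    return min(category_scores)
```

For example, one offline director plus one offline port yields a Configuration score of 70, and if that is the lowest category, the overall health score is also 70.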
Certain key metrics are examined against threshold values; if they exceed a threshold, the health score is negatively affected. The storage group response score is calculated as follows:
- Storage Group: Read Response Time, Write Response Time, Response Time

  - Read Response Time: Critical: 30 points reduced; Warning: 20 points reduced
  - Write Response Time:
  - Response Time: 30 points reduced; Warning: 20 points reduced
- Database: Read Response Time, Write Response Time, Response Time

For each instance and metric of a particular category, the threshold information is looked up. If not found, default thresholds are used.

Viewing Storage Group Compliance view

Before you begin
The user requires a minimum of Monitor permissions to perform this task.
To view the Storage Group (SG) Compliance view:

Procedure
1. Select a storage system.
   The system performance dashboard is displayed by default (see Viewing system performance view on page 34).
2. Optional: Hover over SG COMPLIANCE to view the storage health summary for the storage system.
3. Click SG COMPLIANCE and view the following items:
- Compliance panel—Displays how well the storage system's workload is complying with the overall service level. Storage group compliance information displays for storage systems registered with the Performance component. The total number of storage groups is listed, along with information about the number of storage groups performing according to service level targets. Possible values are:
  - Critical—Number of storage groups performing well below service level targets.
  - Marginal—Number of storage groups performing below service level targets.
  - Stable—Number of storage groups performing within the service level targets.
  - No Status—Number of storage groups without a status.
- Storage Groups panel—The storage groups are listed in a view that can be filtered.
- The following controls are available:
  - VIEW COMPLIANCE REPORT — Viewing compliance reports on page 157 (For storage systems running HYPERMAX OS 5977 or higher)
  - VIEW ALL STORAGE GROUPS — Viewing storage groups on page 135
  - VIEW FAST STORAGE GROUPS — Viewing FAST storage groups on page 173 (For storage systems running Enginuity OS 5876)

  - PROVISION STORAGE — Using the Provision Storage wizard on page 100
  - EXCLUDE DATA — Managing Data Exclusion Windows on page 158 (For storage systems running HYPERMAX OS 5977 or higher)

Viewing Capacity dashboard view

Before you begin
The user requires a minimum of Monitor permissions to perform this task.
To view the Capacity dashboard view:

Procedure
1. Select a storage system.
   The system performance dashboard is displayed by default (see Viewing system performance view on page 34).
2. Optional: Hover over CAPACITY to view the capacity summary for the storage system.
3. Click CAPACITY, select the system or an SRP instance (not applicable for systems running Enginuity 5876), and view the following items:

System running Enginuity 5876:
A graphical representation of the system's physical and virtual capacity (used = blue and free = grey) and the percentage used for both.

System running HYPERMAX OS 5977 or PowerMaxOS 5978 - System selected and Show Detailed selected:
- A graphical representation of the system's subscribed, snapshot, and usable capacity (used = blue and free = grey) and the percentage used for each.
- A textual representation of the system's subscribed usable capacity.
- System Usage is displayed if you turn on the Show Detailed slider. The information is displayed in terms of System Meta Data used, Replication Meta Data, and Front End Meta Data. You can click Analyze Trend to analyze trends across metrics particular to capacity and usage. Trending is shown for Metadata usage, Subscribed Capacity, Snapshot Capacity, and Usable Capacity.
  - Metadata trending will capture System, Replication, Front-end and Back-end.
  - Subscribed capacity trending will capture all (non-shared and shared) allocated against total subscribed capacity.
  - Snapshot capacity trending will capture all (shared and non-shared) modified capacity against total snapshot capacity.
  - Usable capacity trending will capture all (user, system, temp) used capacity against total usable capacity.

Note: The data shown depends on the code level the system is running. Front-end metadata is not shown for systems running HYPERMAX OS 5977.

- Efficiency is also displayed in terms of Overall Efficiency Ratio, Data Reduction (Ratio and Enabled Percent), Virtual Provisioning savings, and Snapshot savings.

System running HYPERMAX OS 5977 or higher - SRP instance selected:
- A graphical representation of the system's subscribed, snapshot, and usable capacity (used = blue and free = grey) and the percentage used for each.
- A textual representation of the system's subscribed usable capacity.
- Headroom details are also displayed. Headroom is displayed by default as an overall figure but can also be filtered to display headroom for OLTP, OLTP + Replication, DSS, DSS + Replication, and None. The headroom displayed depends on the system's code level: systems running HYPERMAX OS 5977 show only Diamond service levels and a combination of workload types. Systems running PowerMaxOS 5978 show headroom for the different SLO types (workload types are not supported by this code level).
- Efficiency is also displayed in terms of Overall Efficiency Ratio, Data Reduction (Ratio and Enabled Percent), Virtual Provisioning savings, and Snapshot savings.

The following controls are available from the Actions panel (for storage systems running HYPERMAX OS 5977 or higher when an SRP instance is selected):
- STORAGE GROUP DEMAND — Viewing Storage Group Demand Reports on page 91
- SERVICE LEVEL DEMAND — Viewing Service Level Demand Reports on page 92
- COMPRESSIBILITY — Viewing compressibility reports on page 288

Viewing Replication dashboard

Before you begin
The user requires a minimum of Monitor permissions to perform this task.
The Replication dashboard provides Storage Group Summary protection information, summarizing the worst states of various replication technologies and counts of management objects participating in these technologies. Storage systems running Enginuity 5876 also display a Device Group Summary, with counts of various replication technologies using device groups.
To view the Replication dashboard:

Procedure
1. From the main menu, click Unisphere for PowerMax.
2. Select a storage system.
   The system performance dashboard is displayed by default (see Viewing system performance view on page 34).
3. Click REPLICATION and view the following items:
- A Storage Group Summary panel is displayed. For systems running HYPERMAX OS 5977 and higher, summary information for SRDF, SRDF/Metro and SnapVX is displayed. For systems running Enginuity 5876,

summary information for SRDF and Device Groups is displayed. To view the storage groups that are in the states indicated, you can click the row, which brings you to the technology's Storage Group list view, filtered to show only the applicable storage groups for the selected state.
- A visual display of SRDF topology: The SRDF Topology view visually describes the layout of the SRDF connectivity of the selected storage system in Unisphere. It calculates this with a maximum of two hops. For example, if Symm A has SRDF groups to Symm B, which has SRDF groups to Symm C, and a fourth storage system, Symm D, has SRDF groups to Symm C but is not connected to Symm A or Symm B, then Symm D is not shown, as it is outside the two-hop count for the array that Unisphere is currently managing. All types of SRDF groups are used to calculate this view.
  Two components make up the topology view: nodes and edges. A node is a storage system, and the edges are the connectivity between the storage systems. The edges are color coded in the familiar traffic-light system. The colors are Green, Yellow, and Red. Green edges indicate that the state of the connectivity between two nodes (arrays) is Good. Yellow indicates that the connectivity between two nodes is degraded; degraded in this case means that one or more SRDF groups between the two arrays are either in a Transmit Idle state, or have some ports in an SRDF group that are offline. Red indicates that the state of the connectivity between two nodes is Critical; critical in this case means that one or more SRDF groups between the two arrays are Offline, or one or more SRDF groups contain ports that are all offline.
  The edges are also drawn differently depending on the modes of the SRDF groups between the two arrays. A legend is available under the view:
  Edges drawn with short dashes and short gaps between the dashes indicate that all the SRDF groups between the two arrays are Metro or Synchronous SRDF groups.
  Edges drawn with longer dashes and a short gap between the dashes indicate that all the SRDF groups between the two arrays are Asynchronous.
  Edges that are solid indicate that there is a mix of Asynchronous, Synchronous, and SRDF/Metro SRDF groups between the two arrays.
  Edges drawn with short dashes and a long distance between the dashes indicate that the SRDF groups between the two arrays are other SRDF groups than those mentioned above, including empty SRDF groups, Virtual Witness, Adaptive Copy, and so on.
  The nodes are drawn with some basic information about the array, including the Symmetrix ID and, if set, the user-defined nice name of the array. An icon specific to the model of the array is also drawn into the node.
- A visual display of Migration Environments: The Migration Environments topology view visually describes the layout of the migration environments of the currently selected storage system.
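The SRDF edge color and dash rules above follow a fixed precedence, which can be sketched as below. This is an illustrative sketch only; the state and mode strings are assumptions, not values exposed by Unisphere:

```python
# Illustrative sketch of the documented SRDF topology edge rules.
# State and mode names here are hypothetical labels for the documented cases.
def edge_color(group_states):
    """Classify an edge from the states of the SRDF groups it represents."""
    if any(s in ("offline", "all_ports_offline") for s in group_states):
        return "red"      # Critical: a group is offline, or all its ports are
    if any(s in ("transmit_idle", "some_ports_offline") for s in group_states):
        return "yellow"   # Degraded
    return "green"        # Good

def edge_dash_style(group_modes):
    """Pick the dash style from the SRDF modes of the groups on the edge."""
    modes = set(group_modes)
    if modes <= {"metro", "synchronous"}:
        return "short dashes, short gaps"   # all Metro or Synchronous
    if modes == {"asynchronous"}:
        return "long dashes, short gaps"    # all Asynchronous
    if modes <= {"metro", "synchronous", "asynchronous"}:
        return "solid"                      # mix of async, sync, and Metro
    return "short dashes, long gaps"        # other (empty, witness, adaptive copy)
```

For example, an edge whose groups are all online draws green, while a single offline group anywhere on the edge turns it red regardless of the other groups' states.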

  The edges are color coded using the familiar traffic-light system of Red, Yellow, and Green; Red in this case means the migration environment is in an invalid state, and Green means it is in a valid state. The color of the edge can also be dictated by the worst state of any migrations using this environment. The edges are all drawn in a solid full line.
  The nodes are drawn with some basic information about the storage system, including the ID and, if set, the user-defined nice name of the storage system. An icon specific to the model of the array is drawn into the node.
- Both topology views have the following controls:
  Re-center - brings the topology view's nodes and edges back into full view.
  - Zooms the view in to see nodes and edges.
  - Zooms the view out to view more of the topology view.
  - Allows the user to view a fuller screen view of the topology. Clicking this opens a popup that takes up most of the visible space on the screen. All functionality of this view is the same as the view embedded on the page.
  - Clicking this displays a Layout Manager which provides layout change options.
- The following controls are available:
  - CREATE SNAPSHOT — Creating snapshots on page 387 (For storage systems running HYPERMAX OS 5977 or higher)
  - CREATE SRDF GROUP — Creating SRDF groups on page 457
  - CREATE MIGRATION ENVIRONMENT — Setting up a migration environment on page 510
  - CREATE VIRTUAL WITNESS — Adding SRDF Virtual Witness instances on page 445 (For storage systems running HYPERMAX OS 5977 or higher)
  - CREATE DEVICE GROUP — Creating device groups on page 348 (For storage systems running Enginuity OS 5876)

Discovering storage systems

Discovery refers to the process by which storage system, volume-level configuration, and status information is retrieved. Discovered configuration and status data for all storage systems, as well as their directors and volumes, is maintained in a configuration database file on each host.
Once you have discovered your environment, you can direct information requests to retrieve system-level (high-level) data or volume-level (low-level) information from it.
To discover a storage system:

Procedure
1. Go to the Unisphere for PowerMax HOME view.
2. Select a storage system.
3. Click the arrow next to the storage system ID in the title bar and select DISCOVER SYSTEMS.
4. Read the warning stating the operation may take some time and click OK to confirm if you wish to proceed.

Refreshing storage system information

Unisphere refreshes all of the storage system data from its database. This operation does not discover new storage systems; it only refreshes data for existing systems.
To refresh a storage system:

Procedure
1. Select a storage system.
2. In the Dashboard, click the refresh icon in the title bar.
3. Click OK in the System Refresh Confirmation dialog.

Viewing product version information

Procedure
1. Select SUPPORT to open the Support view.
   The following Latest Software properties display:
   - Installed Unisphere Version
   - Latest Available Unisphere Version
   - Installed Solutions Enabler Version
   - Latest Available Solutions Enabler Version
   The following Solutions Enabler properties display:
   - Connection Type
   - Net Connection Security Level
   - Net Protocol
   - Net Address
   - Net Port
   - Node Name
   - OS Type
   - OS Name
   - OS Version
   - OS Release
   - Machine Type
   - System Time

   - Num Symm Pdevs
   - SYMAPI Build Version
   - SYMAPI Runtime Version
   - Library Type
   - 64 bit Libraries
   - Multithread Libraries
   - Server Processor
   - Storage Daemon
   - GNS
   - Storage Daemon GK Mgmt
   - Storage Daemon Caching
   - Storage Daemon Emulation
   - Storage Daemon EM Caching
   - VMware Guest
   - Type of SYMAPI Database
   - SYMAPI Lib Version which discovered DB
   - SYMAPI Lib Version which wrote DB
   - Minimum Edit Level of SYMAPI Lib Required
   - Database Sync Time
   - DG Modify Time
   - Device in Multiple Device Groups

The following operations are available from the Actions panel:
- PRODUCT SUPPORT PAGE —clicking this brings you to the product support page.
- SERVICE CENTER —clicking this brings you to the service center.
- MODIFY SERVER LOGGING — Modifying server logging levels on page 46.

Searching for storage objects

This procedure explains how to search for objects (storage groups, hosts, initiators) across all managed storage systems.

Procedure
1. Click the search icon in the title bar.
2. Select the type of object (Storage Group, Initiator, Host, Virtual Machine, or ESXi Server).
3. Depending on the object you are looking for, type the following:
   - Storage Group —Type all or part of the storage group name.

   - Initiator —Type all or part of the initiator name.
   - Host —Type all or part of the host name.
   - Virtual Machine —Type all or part of the virtual machine name.
   - ESXi Server —Type all or part of the ESXi Server name.
   - Select All Symmetrix or a specific storage system identifier.
4. Click Find.
   Results include the object Name, the Object Type, and the associated storage system (Symmetrix ID).
5. To view object details, click the object name to open its Details view.
6. Click Clear to clear the results of the search.

Modifying server logging levels

This procedure explains how to set the severity level of the alerts to log in the debug log. Once set, Unisphere will only log events with the specified severity.

Procedure
1. Select SUPPORT.
2. In the Actions panel, select MODIFY SERVER LOGGING.
3. Select a Server Logging level (WARN, INFO or DEBUG) and click OK.

Exiting the console

To exit the console, click the user icon in the title bar, select Sign Out, and click OK to confirm.

Getting help

Clicking the help icon in the title bar and selecting Help opens the entire help system. Clicking the help icon in a dialog box, wizard page, or view opens a help topic specifically for that dialog, page, or view.

CHAPTER 3 Administration

- Managing settings ... 48
- Setting preferences ... 49
- Backing up the database server ... 50
- Viewing database backups ... 51
- Deleting database backups ... 51
- Alert settings ... 51
- Server alerts ... 66
- Security ... 67
- Viewing user sessions ... 80
- Roles and associated permissions ... 80
- Link and launch ... 83
- Managing Database Storage Analyzer (DSA) environment preferences ... 85
- Managing data protection preferences ... 85
- Viewing authentication authority information ... 86
- Local User and Authorization operations ... 87
- Link and Launch operations ... 87
- Entering PIN number ... 87
- Report operations ... 87

Managing settings

Before you begin
- To perform this operation, you must be a StorageAdmin or higher.
This procedure explains how to manage system settings.

Procedure
1. Select the settings icon to open the Settings panel.
   The following categories of settings are displayed (the Preferences settings are displayed by default - see Setting preferences on page 49):
   - Preferences
   - System and Licences
   - Users and Groups
   - Symmetrix Access Control
   - Management
   - Data Protection
   - Performance
   - Unisphere Databases
   - DSA Environment
   - Alerts
2. Click one of the following categories to view or modify its settings.
   - Preferences — Setting preferences on page 49
   - System and Licences > License Usage — Viewing license usage on page 928
   - System and Licences > Solutions Enabler — Viewing host-based licenses on page 928
   - System and Licences > Symmetrix Entitlements — Viewing Symmetrix entitlements on page 927
   - Users and Groups > Authentication — Viewing authentication authorities on page 69
   - Users and Groups > Local Users — Viewing local users on page 78
   - Users and Groups > User Sessions — Viewing user sessions on page 80
   - Users and Groups > Authorized Users and Groups — Viewing the authorized users and groups list on page 75
   - Symmetrix Access Control > Access Control Entries — Viewing access control entry details on page 939
   - Symmetrix Access Control > Access Groups — Viewing access group details on page 934
   - Symmetrix Access Control > Access Pools — Viewing access pools on page 936

   - Management > Symmetrix Attributes — Setting system attributes on page 874
   - Management > Link and Launch — Viewing link and launch client registrations on page 84
   - Data Protection — Managing data protection preferences on page 85
   - Performance > System Registrations — Viewing system registrations on page 591
   - Performance > Dashboard Catalog — Managing dashboard catalog on page 594
   - Performance > Real Time Traces — Viewing Real Time traces on page 586
   - Performance > Metrics — Viewing and managing metrics on page 718
   - Performance > Import Settings — Importing Performance settings on page 604
   - Performance > Export Settings — Exporting Performance settings on page 605
   - Performance > Export PV Settings — Exporting Performance Viewer settings on page 605
   - Unisphere Databases > Performance Databases — Viewing Performance databases on page 594
   - Unisphere Databases > System Database — Viewing database backups on page 51
   - DSA Environment — Managing Database Storage Analyzer (DSA) environment preferences on page 85
   - Alerts > Alert Policies — Configuring alert policies on page 56
   - Alerts > Compliance Alert Policies — Viewing compliance alerts policies on page 65
   - Alerts > Performance Thresholds and Alerts — Viewing Performance thresholds and alerts on page 62
   - Alerts > Symmetrix Thresholds and Alerts — Viewing threshold alerts on page 58
   - Alerts > Notifications — Configuring alert notifications on page 55

Setting preferences

Before you begin
Only a user with Administrator permission can set preferences.
To set system preferences:

Procedure
1. Select the settings icon to open the Settings panel.
2. Select Preferences to open the Preferences page.
3. Modify any number of the following preferences:
   Unisphere 360 Support —This setting enables or disables Unisphere 360 integration from the Unisphere side. Setting the checkbox to "disabled"

prevents Unisphere 360 from being able to enroll this Unisphere and disconnects any instance of Unisphere 360 that had previously enrolled it.
   Initial Setup User Warning —This setting enables or disables the display of the warning when permissions are not configured during initial setup.
   Introduction to Health Score Card —This setting enables or disables the display of the health score guide in the System Health dashboard.
   Custom Welcome Screen Message —Type a message to display to users during login. For example, you may want to notify logging-in users about a software upgrade. Messages can be up to 240 characters.
   Solutions Enabler Debug —Specify the debug level. Set the following parameters:
   - Debug —Set the level of debugging to write to the debug file.
   - Debug2 —Set the secondary level of debugging to write to the debug file.
   - Debug Filename —Enter the debug file name.
   Note: Changing the debug level from the default value of 0 might substantially increase the size of the log files and affect your system's performance.
4. Click APPLY.

Backing up the database server

Before you begin
To perform this operation, you must be an Administrator.
This procedure explains how to back up all the data currently on the database server, including Database Storage Analyzer, Workload Planner, performance, and infrastructure data. Database backups enable you to recover from system crashes. You can only restore the database to the same version and same operating system. For example, a V8.0.1 database on Windows can only be restored to a V8.0.1 on Windows.

Procedure
1. Select the settings icon to open the Settings panel.
2. Select Unisphere Databases > System Database.
3. Click Backup to open the Database Backup dialog box.
4. In File Name, type a description of the backup. Note that the final file name consists of a time stamp and your custom description.
5. Click OK.
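The resulting backup name simply joins a timestamp with the description you typed. A minimal sketch of that naming scheme follows; the exact timestamp format used by Unisphere is an assumption here:

```python
# Illustrative sketch of the TimeStamp_CustomName backup naming scheme.
# The "%Y%m%d%H%M%S" timestamp format is an assumption, not Unisphere's exact format.
from datetime import datetime

def backup_file_name(description, now=None):
    """Combine a timestamp with the user-supplied description."""
    stamp = (now or datetime.now()).strftime("%Y%m%d%H%M%S")
    return f"{stamp}_{description}"
```

For example, a backup described as "weekly" taken at noon on 1 May 2018 would be named something like 20180501120000_weekly.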

Viewing database backups

Before you begin
To perform this operation, you must be a Monitor.

Procedure
1. Select the settings icon to open the Settings panel.
2. Select Unisphere Databases > System Database.
   The following properties display:
   - Name —Name of the backup in the form TimeStamp_CustomName.
   - Status —Status of the backup.
   - Start Time —Time the backup started.
   - End Time —Time the backup ended.
   - Description —Message related to the backup.
   The following controls are available:
   - Backup — Backing up the database server on page 50
   - Deleting database backups on page 51

Deleting database backups

Before you begin
To perform this operation, you must be an Administrator.

Procedure
1. Select the settings icon to open the Settings panel.
2. Select Unisphere Databases > System Database.
3. Select one or more backups and click the delete icon.
4. Click OK.

Alert settings

You can configure Unisphere to monitor storage systems for specific events or error conditions. When an event or error of interest occurs, Unisphere displays an alert and, if configured to do so, notifies you of the alert by way of email, SNMP, or Syslog. In addition to alerting you of specific events or errors, Unisphere also generates a number of server alerts. For more information, refer to Server alerts on page 66.
The procedures in this section explain how to configure and use the alert functionality.

Alerts

Viewing alerts

Before you begin
- For alert (event) descriptions, refer to the Events and Alerts Guide.
- In addition to alerting you of specific events or errors, Unisphere also generates a number of server alerts. For more information, refer to Server alerts on page 66.
- The maximum number of alerts Unisphere displays is 10,000. Once this threshold is reached, Unisphere deletes the oldest alert for each subsequent alert it receives.
This procedure explains how to view alerts for a particular storage system or all the visible storage systems. This procedure also applies to storage container alerts, which can be viewed by navigating to STORAGE > VVol dashboard and clicking STORAGE CONTAINER ALERTS from within the Actions panel.

Procedure
1. Do the following, depending on whether you want to view the alerts for a particular storage system, or for all storage systems.
   For a particular storage system:
   a. Select the storage system.
   b. Select EVENTS > Alerts to open the system's Alerts list view.
   For all visible storage systems:
   a. Select Home and then select the alerts icon to open the Alerts list view.
2. (Optional) Use the alert filter to view a subset of the listed alerts. For more information on the alert filter, refer to Filtering alerts on page 53.
In both cases, the following properties display:
- State —State of the alert. Possible values are New or Acknowledged.
- Severity —Severity of the alert. Possible values are:
  - Fatal
  - Critical
  - Warning—The following events map to this severity:
    - The component is in a degraded state of operation.
    - The storage array is no longer present (during certain operations).
    - The component is in an unknown state.
    - The component is (where possible) in a write-disabled state.
  - Information—The component is no longer present (during certain operations).
  - Normal—The component is now (back) in a normal state of operation.

- Type —Type of alert. Possible values are Array, Performance, Server, System, and File.
- Symmetrix —Storage system reporting the alert. This field only appears when viewing alerts for all Symmetrix systems. This field appears blank for server alerts, because server alerts are specific to the server or runtime environment and are not associated with a specific object or storage system.
- Object —Component to which the alert is related. This field appears blank for server alerts.
- Description —Description of the alert.
- Created —Date/time the alert was created.
- Acknowledged —Date/time the alert was acknowledged.
The following controls are available:
- Viewing alert details on page 54.
- Acknowledge — Acknowledging alerts on page 53.
- Deleting alerts on page 54.

Filtering alerts

Procedure
1. Select EVENTS > Alerts, or select Home and then select the alerts icon, to open the Alerts list view.
2. Use the filter tool to narrow the listed alerts to only those that meet the specified criteria:
   - State —Filters the list for alerts with the specified state.
   - Severity —Filters the list for alerts with the specified severity.
   - Type —Filters the list for alerts with the specified type.
   - Symmetrix —Filters the list based on the storage system identity.
   - Object —Filters the list for alerts for the specified object.
   - Description —Filters the list for alerts with the specified description.
   - Created —Filters the list based on when the alert was created.
   - Acknowledged —Filters the list for alerts that have been acknowledged.

Acknowledging alerts

Procedure
1. Select EVENTS > Alerts, or select Home and then select the alerts icon, to open the Alerts list view.
2. Select one or more alerts and click Acknowledge.
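Conceptually, the filter tool keeps only the alerts that match every criterion you supply. The sketch below illustrates that matching logic; it is a hypothetical model, not Unisphere code, and the alert records are made-up examples:

```python
# Illustrative model of matching alerts against filter criteria.
def filter_alerts(alerts, **criteria):
    """Return the alerts whose fields match every supplied criterion.

    alerts: list of dicts with keys such as "state", "severity", "type".
    """
    return [a for a in alerts
            if all(a.get(field) == value for field, value in criteria.items())]

# Hypothetical example records.
alerts = [
    {"state": "New", "severity": "Critical", "type": "Array"},
    {"state": "Acknowledged", "severity": "Warning", "type": "Performance"},
    {"state": "New", "severity": "Warning", "type": "Array"},
]

# Combine criteria to narrow the list, as the filter tool does.
critical_new = filter_alerts(alerts, state="New", severity="Critical")
```

Supplying several criteria narrows the list cumulatively: here only the first alert is both New and Critical, while filtering on type="Array" alone would match two alerts.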

Viewing alert details

Procedure
1. Select EVENTS > Alerts, or select Home and then select the alerts icon, to open the Alerts list view.
2. Select an alert and click the view icon to open the Alert Details view.
   The following properties display:
   Alert ID — Unique number assigned by Unisphere.
   State — State of the alert. Possible values are New or Acknowledged.
   Severity — Alert's severity. Possible values are:
   - Fatal
   - Critical
   - Warning
   - Information
   - Normal
   Type — Type of alert. Possible values are Array, Performance, and System.
   Symmetrix — ID of the storage system generating the alert.
   Object — Object to which the alert is related. For more information, click the object to open its details view.
   Created — Date/time the alert was created.
   Description — Description of the alert.
   Acknowledged — Shows the date on which the alert was acknowledged (if it has been).

Deleting alerts

Procedure
1. Select EVENTS > Alerts, or select Home and then select the alerts icon, to open the Alerts list view.

2. Select one or more alerts and click the delete icon.

Configuring alert notifications

Before you begin
- To perform this operation, you must be an Administrator or StorageAdmin.
- Unisphere employs the following throttling algorithms to prevent alert flurries from straining the system:
  Storage system event throttling — When a storage system raises an alert flurry, the alert infrastructure packages all the alerts into a single notification.
  Generic throttling — When the number of alerts generated by a non-storage-system event exceeds a set threshold, the alert infrastructure ignores subsequent alerts from the source.
This procedure explains how to configure Unisphere to notify you when a storage system generates an alert.

Procedure
1. Do one of the following:
   - To enable alert notifications:
     a. Select the settings icon to open the Settings panel.
     b. Select Alerts > Notifications to open the Notifications page.
     c. In the Configure panel, click the method you want to use to deliver the notifications (see Configuring email notifications on page 59 or Configuring SNMP notifications on page 602). (Not applicable for Syslog. For Syslog, refer to Setting up the event daemon for monitoring in the Solutions Enabler Installation and Configuration Guide for instructions.)
     d. In the panel, move the slider bar to the right to enable the configured method you want to use to deliver the notifications. Possible methods are:
        Syslog — Forwards alert notifications to a remote syslog server.
        Email — Forwards alert notifications to an email address.
        SNMP — Forwards alert notifications to a remote SNMP listener.
     e. In the Alerts panel, do the following for each storage system from which you want to receive notifications:
        - Select the System Level and Performance Level severities in which you are interested.
        - To clear your selection, click a previously clicked item.

     – Once satisfied, click APPLY.

Alert policies

Configuring alert policies

Before you begin
- To perform this operation, you must be an Administrator or StorageAdmin.
- To receive alert notifications, you must first configure alert notifications.
- For alert (event) descriptions, refer to the Solutions Enabler Installation Guide.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Alert Policies. Select Array.
3. Select All or a specific storage system from the drop-down list.
The following properties display:
Name — Policy name. For alert (event) descriptions, refer to the Solutions Enabler Installation Guide.
Type — Type of alert policy. Possible values are:
- Array for array-based alerts.
- SMAS for application-based alerts.
- File for eNAS-based alerts.
Enabled — Whether the policy is Enabled or Disabled.
Notifications — Icon indicating the method to use when delivering the alert notification (email, SNMP, or Syslog). None indicates that Unisphere is not configured to deliver an alert notification for the corresponding policy.
4. To enable alert reporting for a particular event, configure alert notifications, select the Enabled checkbox for that event, and click APPLY.
5. To disable alert reporting for a particular event, clear the Enabled checkbox for that event and click APPLY.

Threshold alerts

Managing threshold alerts

Before you begin
- For alert (event) descriptions, refer to the Events and Alerts Guide.
- Pool utilization thresholds are enabled by default on every storage system.
- To receive utilization threshold alerts, you must enable alerts on the storage system.

- To receive alert notifications, you must first configure the alert notifications feature.
Certain alerts are associated with a numerical value. This value is compared with a set of threshold values, which determine whether the alert is delivered and, if so, with what severity. This procedure explains how to manage the alert threshold feature.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Symmetrix Threshold and Alerts.
3. Do the following, depending on whether you are creating, editing, or deleting thresholds:
- Creating:
  - Click Create.
  - Select the storage system on which to create the threshold.
  - Select the Category of threshold to assign. Possible values are:
    DSE Pool Utilization — Threshold event that reflects the allocated capacity (as a percentage) within a DSE pool. This category only applies to Enginuity 5876.
    DSE Spill Duration — Threshold event that reflects how long (in minutes) an SRDF spillover has been occurring. This category only applies to Enginuity 5876.
    Snap Pool Utilization — Threshold event that reflects the allocated capacity (as a percentage) within a snap pool. This category only applies to Enginuity 5876.
    Thin Pool Utilization — Threshold event that reflects the allocated capacity (as a percentage) within a virtual pool.
    FAST VP Policy Utilization — Threshold event that reflects the allocated capacity (as a percentage) of all the pools in all the tiers in a FAST VP policy. This category only applies to Enginuity 5876.
    Storage Resource Pool Utilization — Threshold event that reflects the allocated capacity (as a percentage) within an SRP. This category only applies to storage systems running HYPERMAX OS 5977 or higher.
    Local Replication Utilization — Threshold event that indicates that the local replication resource usage has exceeded the threshold. This category only applies to storage systems running HYPERMAX OS 5977 Q1 2016 SR or higher.
    System Meta Data Utilization — Threshold event that indicates that the system meta data usage has exceeded the threshold. This category only applies to storage systems running HYPERMAX OS 5977 Q1 2017 SR or higher.

    Storage Container Utilization — Threshold event that indicates that the storage container utilization has exceeded the threshold. This category only applies to storage systems running HYPERMAX OS 5977 or higher.
    Frontend Meta Data Usage — Threshold event that indicates that the front-end meta data usage has exceeded the threshold. This category only applies to storage systems running PowerMaxOS 5978 or higher.
    Backend Meta Data Usage — Threshold event that indicates that the back-end meta data usage has exceeded the threshold. This category only applies to storage systems running PowerMaxOS 5978 or higher.
  - Select the pools (Instances to enable) on which to create the threshold.
  - Enable (select) or disable (clear) alerts for the threshold.
  - Specify a threshold value (percentage of utilization) for each severity level: Warning, Critical, and Fatal.
  - Click OK.
- Editing:
  - Hover over a threshold and click the edit icon.
  - Select a threshold and specify a new threshold value (percentage of utilization) for any number of the severity levels: Warning, Critical, and Fatal.
  - Enable (select) or disable (clear) alerts for the threshold.
  - Click OK.
- Deleting:
  - Hover over a threshold and click the delete icon.

Viewing threshold alerts

Before you begin
- For alert (event) descriptions, refer to the Events and Alerts Guide.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Symmetrix Threshold and Alerts.
3. Select All or a specific storage system.
The following properties display:
- Name — Category on which the threshold is defined. Possible values are:
  - DSE Pool Utilization — Threshold event that reflects the allocated capacity (as a percentage) within a DSE pool.
  - DSE Spill Duration — Threshold event that reflects how long (in minutes) an SRDF spillover has been occurring.
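The per-severity utilization thresholds described in Managing threshold alerts (a Warning, Critical, and Fatal percentage per category) can be sketched as a simple classification; the function name and the demo threshold values are illustrative assumptions, not Unisphere's implementation:

```python
def classify_utilization(percent_used, thresholds):
    """Return the alert severity for a pool utilization value.

    `thresholds` maps severity name -> utilization percentage; the most
    severe breached level wins. Returns None when no threshold is breached.
    (Illustrative sketch only.)
    """
    for severity in ("Fatal", "Critical", "Warning"):  # most severe first
        limit = thresholds.get(severity)
        if limit is not None and percent_used >= limit:
            return severity
    return None

# Hypothetical threshold values for a Thin Pool Utilization category.
demo = {"Warning": 65, "Critical": 80, "Fatal": 90}
print(classify_utilization(70, demo))   # Warning
print(classify_utilization(95, demo))   # Fatal
print(classify_utilization(50, demo))   # None
```

Checking the most severe level first matters: a 95% utilization breaches all three thresholds but should raise only the Fatal alert.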

  - Snap Pool Utilization — Threshold event that reflects the allocated capacity (as a percentage) within a snap pool.
  - Thin Pool Utilization — Threshold event that reflects the allocated capacity (as a percentage) within a virtual pool.
  - FAST VP Policy Utilization — Threshold event that reflects the allocated capacity (as a percentage) of all the pools in all the tiers in a FAST VP policy.
  - Local Replication Utilization — Threshold event that indicates that the local replication resource usage has exceeded the threshold. This category only applies to storage systems running HYPERMAX OS 5977 Q1 2016 SR or higher.
  - System Meta Data Utilization — Threshold event that indicates that the system meta data usage has exceeded the threshold. This category only applies to storage systems running HYPERMAX OS 5977 Q1 2017 SR or higher.
  - Storage Container Utilization — Threshold event that indicates that the storage container utilization has exceeded the threshold. This category only applies to storage systems running HYPERMAX OS 5977 or higher.
  - Storage Resource Pool Utilization — Threshold event that reflects the allocated capacity (as a percentage) within an SRP. This category only applies to storage systems running HYPERMAX OS 5977 or higher.
- Warning — Percentage of utilization at which point a warning alert is issued.
- Critical — Percentage of utilization at which point a critical alert is issued.
- Fatal — Percentage of utilization at which point a fatal alert is issued.
- Custom — Whether the policy has been customized.
- Enabled — Whether the policy is Enabled or Disabled.
- Notifications — Whether the alert notification option is enabled (Email, SYSLOG, or SNMP) or disabled (NONE) for the alert.
The following controls are available:
- Create — Managing threshold alerts on page 56
- Additional icon controls — Managing threshold alerts on page 56

Configuring email notifications

You can configure email addresses to which notifications, alerts, and reports are sent.
You can configure a single email address for all notification instances, or you can use different email addresses for different notifications on different storage systems.

Procedure
1. To set up email notification:
   a. Select the Settings icon to open the Settings panel.
   b. Click Alerts > Notifications.

   c. In the Email section, click Configure.
   d. In the Outgoing Mail Server (SMTP) section, specify the following details:
      - IP Address/Host
      - Server Port
   e. In the User Information section, specify the Sender E-mail Address.
   f. In the Recipients section, click Create and specify the address you want to add.
   g. Select one or more system or performance level indicators or reports to enable email notifications for the relevant level of system or performance notifications.
   h. Click APPLY.

Editing subscriptions

Procedure
1. Select the Settings icon to open the Settings panel.
2. Click Alerts > Notifications.
3. Select a storage system and click Edit to open the Edit Subscriptions dialog.
4. Tick one or more of the checkboxes (System Notifications, Performance Notifications, and Reports) and click OK.

Performance thresholds and alerts

Creating a performance threshold alert

You can use the default system values for thresholds and alerts, or create your own. When you set threshold values, you can optionally view them when you create charts for performance metrics in the Diagnostic view.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Performance Thresholds and Alerts.
3. Select a storage system.
4. Select the category for which you want to create a threshold or alert.
5. Click Create.
The Create Threshold and Alert wizard displays.
6. Select the Array, Category, and Metrics.
7. Select instances from the Available Instances list and click the arrow to move them to the Instances to Enable list.

8. Add a value for the Warning Threshold, or for both the Warning Threshold and the Critical Threshold.
9. Click NEXT.
10. To add an alert for each configured threshold, complete the following steps:
    a. Select Enable Alert.
    b. For each threshold you are configuring, specify values for the following fields:
       Severity — The following values are available:
       - Information
       - Warning
       - Critical
       Occurrence — The number of threshold breaches that must occur within the sample set before the alert is triggered. For example, if the threshold is breached 3 times out of 5 samples, an alert is initiated.
       Samples — The number of data samples over which occurrences are counted. For example, if the threshold is breached 3 times out of 5 samples, an alert is initiated.
    c. (Optional) If required, select any additional configuration options. For some group categories, you can choose to enable the alert for the individual components of the group. For example, when the Disk Group category is selected, you have the option to enable the alert for the disk.
11. Click OK.

Editing a performance threshold alert

When you edit a threshold and alert setting, a symbol displays in the Custom column of the alerts list to indicate that the value has changed from the default.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Performance Thresholds and Alerts.
3. Navigate to the threshold alert to be edited by selecting the appropriate storage system and category.
4. Hover over an item from the table and click the edit icon.
5. Edit the settings.
6. Click OK.
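The Occurrence and Samples settings above (for example, 3 breaches out of the last 5 samples) can be sketched as a sliding-window check. The function and the window handling are illustrative assumptions, not Unisphere internals:

```python
from collections import deque

def should_alert(values, threshold, occurrence, samples):
    """Return True if `threshold` is breached at least `occurrence` times
    within any window of the last `samples` values (illustrative sketch)."""
    window = deque(maxlen=samples)  # keeps only the most recent samples
    for v in values:
        window.append(v)
        if sum(1 for x in window if x >= threshold) >= occurrence:
            return True
    return False

# Threshold 90 breached 3 times within the last 5 samples -> alert.
readings = [10, 95, 40, 96, 20, 97]
print(should_alert(readings, threshold=90, occurrence=3, samples=5))  # True
```

A sliding window means old breaches age out: isolated spikes spread across a long collection period do not trigger the alert.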

Deleting performance thresholds and alerts

Before you begin
You can delete only custom values. You cannot delete default thresholds.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Performance Thresholds and Alerts.
3. Navigate to the threshold or alert to be deleted by selecting the appropriate category in the Category section.
4. Select one or more rows and click the delete icon.
5. Click OK.

Viewing performance thresholds and alerts

You can configure a warning threshold and a critical threshold value for each metric.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Performance Thresholds and Alerts.
3. Select All or a storage system.
4. Select the category for which you want to view the configured thresholds and alerts.
The thresholds and alerts configured for that category are displayed, according to metric. Any metrics that include a custom threshold or alert are highlighted with a tick mark in the Custom column.
The following properties display:
- Name — The metric name.
- Warning — The warning threshold.
- Alert — Indicates if a warning alert has been generated. The icon displayed corresponds to the alert type.
- Critical — The critical threshold.
- Alert — Indicates if a critical alert has been generated. The icon displayed corresponds to the alert type.
- KPI — Indicates if the metric is a KPI.
- Custom — Indicates if a custom threshold or alert has been generated.
5. Click APPLY.
The following controls are available:
- Create — Creating a performance threshold alert on page 60

- Delete — Deleting performance thresholds and alerts on page 62

Service level alert policies

Creating service level compliance alert policies

This procedure explains how to configure Unisphere to alert you when the performance of a storage group, relative to its service level target, changes. Once configured, Unisphere assesses the performance of the storage group every 30 minutes, and delivers the appropriate alert level.

When assessing the performance of a storage group, Workload Planner calculates its weighted response time for the past 4 hours and for the past 2 weeks, and then compares the two values to the maximum response time associated with its given service level. If both calculated values fall within (under) the service-level-defined response time band, the compliance state is STABLE. If one of them is in compliance and the other is out of compliance, the compliance state is MARGINAL. If both are out of compliance, the compliance state is CRITICAL.

The following table details the state changes that generate an alert and the alert level.

Table 1 Service level compliance rules

State change             | Alert generated | Alert level
ANY STATE > NONE         | No              | -
NONE > STABLE            | No              | -
NONE > MARGINAL          | Yes             | Warning
NONE > CRITICAL          | Yes             | Critical
STABLE > MARGINAL        | Yes             | Warning
STABLE > CRITICAL        | Yes             | Critical
STABLE > STABLE          | No              | -
MARGINAL > STABLE        | Yes             | Info
MARGINAL > CRITICAL      | Yes             | Critical
MARGINAL > MARGINAL      | No              | -
CRITICAL > STABLE        | Yes             | Info
CRITICAL > MARGINAL      | Yes             | Warning
CRITICAL > CRITICAL      | Yes             | Critical

Note: When a storage group configured for compliance alerts is deleted or renamed, the compliance alerts will need to be deleted manually. For instructions, refer to Deleting compliance alert policies on page 65.
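The compliance evaluation and the transition rules in Table 1 can be sketched as follows. The function names and the response-time inputs are illustrative assumptions, not the Workload Planner implementation:

```python
def compliance_state(rt_4h, rt_2w, sl_max_rt):
    """STABLE if both weighted response times are under the service level
    target, MARGINAL if exactly one is, CRITICAL if neither is."""
    in_band = sum(1 for rt in (rt_4h, rt_2w) if rt <= sl_max_rt)
    return {2: "STABLE", 1: "MARGINAL", 0: "CRITICAL"}[in_band]

# Alert level for a state transition, per Table 1 (None = no alert raised;
# transitions absent from the table generate no alert).
_TRANSITIONS = {
    ("NONE", "MARGINAL"): "Warning", ("NONE", "CRITICAL"): "Critical",
    ("STABLE", "MARGINAL"): "Warning", ("STABLE", "CRITICAL"): "Critical",
    ("MARGINAL", "STABLE"): "Info", ("MARGINAL", "CRITICAL"): "Critical",
    ("CRITICAL", "STABLE"): "Info", ("CRITICAL", "MARGINAL"): "Warning",
    ("CRITICAL", "CRITICAL"): "Critical",
}

def alert_for_transition(old, new):
    """Return the alert level for a compliance state change, or None."""
    return _TRANSITIONS.get((old, new))

print(compliance_state(4.2, 3.1, 5.0))             # STABLE
print(compliance_state(6.0, 3.1, 5.0))             # MARGINAL
print(alert_for_transition("STABLE", "CRITICAL"))  # Critical
```

Note the asymmetry in the table: a recovery to STABLE raises only an Info alert, while repeated CRITICAL assessments keep raising Critical alerts.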

Before you begin
- The storage system must be running HYPERMAX OS 5977 or higher and be registered for performance stats collection.
- The storage group must:
  - Be either a child or a standalone storage group. Parent storage groups are not supported.
  - Be associated with a service level other than Optimized.
  - Contain volumes other than gatekeepers.
  - Be in a masking view.
  - Not have a policy currently associated with it.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Compliance Alert Policies.
3. Click Create.
4. Select the storage system on which the storage groups are located.
5. Select one or more storage groups and click the arrow.
6. (Optional) By default, service level compliance policies are configured to generate alerts for all service level compliance states. To change this default behavior, clear any of the states for which you do not want to generate alerts:
   - Critical — Storage group performing well below service level targets.
   - Marginal — Storage group performing below service level target.
   - Stable — Storage group performing within the service level target.
7. Click OK.

Editing compliance alert policies

Before you begin
The storage system must be running HYPERMAX OS 5977 or higher.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Compliance Alert Policies.
3. Select the policy, and then select (enable) or clear (disable) any of the compliance states. Unisphere generates alerts only for enabled compliance states.
4. Click APPLY.

Deleting compliance alert policies

Before you begin
The storage system must be running HYPERMAX OS 5977 or higher.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Compliance Alert Policies.
3. Select one or more policies and click the delete icon.
4. Click OK.

Viewing compliance alert policies

This procedure explains how to view compliance alert policies set on storage systems running HYPERMAX OS 5977 or higher.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Compliance Alert Policies.
3. Select All or a specific storage system.
The following properties display:
- Name — Policy name.
- Compliance State — Enabled compliance states:
  - Critical — Storage group performing well below service level targets.
  - Marginal — Storage group performing below service level target.
  - Stable — Storage group performing within the service level target.
- Notifications — Method to use when delivering the alert notification (email, SNMP, or Syslog). None indicates that Unisphere is not configured to deliver an alert notification for the corresponding policy. To enable alert reporting for a particular event, see Configuring compliance alert notifications on page 66.
The following controls are available:
- Create — Creating service level compliance alert policies on page 63
- Delete — Deleting compliance alert policies on page 65

Configuring compliance alert notifications

Before you begin
- The storage system must be running HYPERMAX OS 5977 or higher.
- The storage system must be configured to deliver alert notifications, as described in Configuring alert notifications on page 55.
This procedure explains how to configure Unisphere to notify you when a storage group generates a compliance alert.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Alerts > Compliance Alert Policies.
3. Select one or more policies and click Notify.
4. Select (enable) the method you want to use to deliver the notifications. Possible methods are:
   - Enable Email — Forwards alert notifications to an email address.
   - Enable SNMP — Forwards alert notifications to a remote SNMP listener.
   - Enable Syslog — Forwards alert notifications to a remote syslog server.
   Note: The storage system must already be configured to deliver alerts in the desired method, as described in Configuring alert notifications on page 55.
5. Click APPLY.

Server alerts

Unisphere generates server alerts under the conditions listed below. Checks are run on 10-minute intervals and alerts are raised on 24-hour intervals from the time the server was last started. Note that these time intervals also apply to discover operations. That is, performing a discover operation will not force the delivery of these alerts.

Note: Runtime alerts are not storage system-specific. They can be deleted as long as the user has Administrator or StorageAdmin rights on at least one storage system. A user with a Monitor role is not allowed to delete the server alerts.

Total memory on the Unisphere server — Thresholds: 12 GB (0 - 64,000 volumes), 16 GB (64,000 - 128,000 volumes), 20 GB (128,000 - 256,000 volumes). Alert details: System memory <# GB> is below the minimum requirement of <# GB>.

Free disk space on the Unisphere installed directory — Thresholds: 100 GB (0 - 64,000 volumes), 140 GB (64,000 - 128,000 volumes), 180 GB (128,000 - 256,000 volumes). Alert details: Free disk space <# GB> is below the minimum requirement of <# GB>.
Number of managed storage systems — Threshold: 20. Alert details: Number of managed arrays <#> is over the maximum supported number of <#>.
Number of managed volumes — Threshold: 256,000. Alert details: Number of managed volumes <#> is over the maximum supported number of <#>. Note that Solutions Enabler may indicate a slightly different number of volumes than indicated in this alert.
Number of gatekeepers — Threshold: 6. Alert details: Number of gatekeepers <#> on Symmetrix (SymmID) is below the minimum requirement of 6.

Security

Authentication

Login authentication

When you log in, Unisphere checks the following locations for validation:
- Windows — The user has a Windows account on the server. (Log in to Unisphere with your Windows Domain\Username and Password.)
- LDAP-SSL — The user account is stored on an LDAP-SSL server. (Log in to Unisphere with your LDAP-SSL Username and Password.) The Unisphere Administrator or SecurityAdmin must set the LDAP-SSL server location in the LDAP-SSL Configuration dialog box.
- Local — The user has a local Unisphere account. Local user accounts are stored locally on the Unisphere server host. (Log in to Unisphere with your Username and Password.) The Initial Setup User, Administrator, or SecurityAdmin must create a local Unisphere user account for each user.

Logging in

The Login dialog box contains the following elements:
- Username — user name (refer to Login authentication on page 67).
- Password — password.
This dialog box may also include a login message. The login message feature enables Administrators and StorageAdmins to display a message to users during login. For example, an administrator may want to notify users about a software upgrade.
- Login — Opens the console.
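The volume-count tiers in the server alert listings above can be sketched as a lookup. The function is an illustrative assumption; only the tier boundaries and minimums come from the listings:

```python
# Minimum-resource tiers from the server alert listings:
# (max volumes, minimum memory GB, minimum free disk GB)
_TIERS = [
    (64_000, 12, 100),
    (128_000, 16, 140),
    (256_000, 20, 180),
]

def minimum_requirements(num_volumes):
    """Return (min_memory_gb, min_disk_gb) for a managed-volume count
    (illustrative sketch of the tiering, not Unisphere's own code)."""
    for max_volumes, mem_gb, disk_gb in _TIERS:
        if num_volumes <= max_volumes:
            return mem_gb, disk_gb
    raise ValueError("over the maximum supported number of volumes (256,000)")

print(minimum_requirements(50_000))   # (12, 100)
print(minimum_requirements(200_000))  # (20, 180)
```

A server managing more volumes therefore needs proportionally more memory and free disk space before the corresponding server alerts stop firing.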

Configuring authentication authorities

Before you begin
- If configuring authentication to use LDAP, obtain the LDAP-SSL server bind distinguished name (DN) and password from your LDAP Administrator.
This procedure explains how to configure Unisphere to authenticate users.

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Users and Groups > Authentication.
3. Select the Authentication Authority to use during login. Possible values are:
   - Local Directory — You can disable this if enabled and enable this if disabled. When enabled, users can log in as a user from the CST local directory.
   - LDAP-SSL — You can disable this if enabled and enable this if disabled. When enabled, users can log in as a user from the configured LDAP directory.
   - Windows OS/AD — You can disable this if enabled and enable this if disabled. When enabled, users can log in as a user from the Windows local host and/or from the Active Directory domain. This option only applies to Windows installations.
4. If you select the Windows OS/AD authority and click Modify, as an option you can limit authentication to members of a specific Windows OS/AD group. To do this, select the Limit authentication to members of a specific Windows OS/AD group(s) checkbox option, and type the Group Name(s), separated by commas.
5. Click Next.
6. If you are configuring LDAP-SSL, click Enable and do the following:
   a. Specify values for the following parameters and click Next:
      - Server (IP or Hostname) — IP address or hostname of the LDAP server to use for authentication. Only alphanumeric characters are allowed. Values longer than 40 characters will wrap.
      - Port — Port number of the LDAP service. Typically, this value is 389 for LDAP and 636 for LDAPS. Valid values range from 1 through 65,535.
      - Bind DN — Distinguished name of the privileged account used to perform operations, such as searching users and groups, on the LDAP directory. Only alphanumeric characters are allowed. Values longer than 60 characters will wrap.
      - Bind Password — Password of the privileged account. Only alphanumeric characters are allowed. Values longer than 15 characters will wrap.
      - User Search Path — Distinguished name of the node at which to begin user searches. Only alphanumeric characters are allowed. Values longer than 40 characters will wrap.
      - User Object Class — Object class identifying users in the LDAP hierarchy. Only alphanumeric characters are allowed. Values longer than 15 characters will wrap.

      - User ID Attribute — Attribute identifying the user login ID within the user object. Only alphanumeric characters are allowed. Values longer than 15 characters will wrap.
      - Group Search Path — Distinguished name of the node at which to begin group searches. Only alphanumeric characters are allowed. Values longer than 40 characters will wrap.
      - Group Object Class — Object class identifying groups in the LDAP hierarchy. Only alphanumeric characters are allowed. Values longer than 15 characters will wrap.
      - Group Name Attribute — Attribute identifying the group name. Only alphanumeric characters are allowed. Values longer than 15 characters will wrap.
      - Group Member Attribute — Attribute indicating group membership for a user within the group object. Only alphanumeric characters are allowed. Values longer than 15 characters will wrap.
   b. Optional: To upload an SSL certificate, click Choose File, locate the certificate, and click Open. To view the contents of the certificate, click VIEW CERTIFICATE. To clear the file selection, click CLEAR.
   c. Optional: To limit authentication to only members of specific LDAP groups, select the Limit Authentication to members of LDAP group(s) option, and then type the Group Name(s), separated by commas.
   d. Click Next.
7. Click OK.

Viewing authentication authorities

Procedure
1. Select the Settings icon to open the Settings panel.
2. Select Users and Groups > Authentication.
Use the Authentication page to view and manage authentication settings.
The following properties display:
Authentication — The following authentication types are displayed:
- Local Directory — When enabled, users can log in as a user from the CST local directory.
- Windows OS/AD — When enabled, users can log in as a user from the Windows local host and/or from the Active Directory domain. This property only displays for Windows installations.
- LDAP-SSL — When enabled, users can log in as a user from the configured LDAP directory.
The following controls are available:
- View — Hover over an authentication type and click the icon to view the authentication authority information (see Viewing authentication authority information on page 86).

- Enable — This control changes the status of Local Directory or Windows OS/AD from disabled to enabled. This control also changes the status of LDAP-SSL (see Configuring authentication authorities on page 68).
- Disable — This control changes the status of Local Directory, Windows OS/AD, or LDAP-SSL from enabled to disabled.
- Modify — Configuring authentication authorities on page 68

Understanding user authorization

User authorization is a tool for restricting the management operations users can perform on a storage system or with the Database Storage Analyzer application. By default, user authorization is enabled for Unisphere users, regardless of whether it is enabled on the Symmetrix system.

When configuring user authorization, an Administrator or SecurityAdmin maps individual users or groups of users to specific roles on storage systems or Database Storage Analyzer, which determine the operations the users can perform. These user-to-role-to-storage system/Database Storage Analyzer mappings (known as authorization rules) are maintained in the symauth users list file, which is located on either a host or storage system, depending on the storage operating environment.

Note: If there are one or more users listed in the symauth file, users not listed in the file are unable to access or even see storage systems from the Unisphere console.

Roles

The following lists the available roles. Note that you can assign up to four of these roles per authorization rule. For a more detailed look at the permissions that go along with each role, see Roles and associated permissions on page 80. A user cannot change their own role so as to remove Administrator or SecurityAdmin privileges from themselves.

- None — Provides no permissions.
- Monitor — Performs read-only (passive) operations on a storage system, excluding the ability to read the audit log or Access Control definitions.
- StorageAdmin — Performs all management (active or control) operations on a Symmetrix system and modifies GNS group definitions, in addition to all Monitor operations.
- Administrator — Performs all operations on a storage system, including security operations, in addition to all StorageAdmin and Monitor operations.
- SecurityAdmin — Performs security operations on a Symmetrix system, in addition to all Monitor operations.
- Auditor — Grants the ability to view, but not modify, security settings for a Symmetrix system (including reading the audit log, symacl list, and symauth), in addition to all Monitor operations. This is the minimum role required to view the Symmetrix audit log.
- DSA Admin — Collects and analyzes database activity with Database Storage Analyzer.
- Local Replication — Performs local replication operations (SnapVX or legacy Snapshot, Clone, BCV). To create Secure SnapVX snapshots, a user needs to have

Storage Admin rights at the array level. This role also automatically includes Monitor rights.
- Remote Replication — Performs remote replication (SRDF) operations involving devices and pairs. Users can create, operate upon, or delete SRDF device pairs, but cannot create, modify, or delete SRDF groups. This role also automatically includes Monitor rights.
- Device Management — Grants user rights to perform control and configuration operations on devices. Storage Admin rights are required to create, expand, or delete devices. This role also automatically includes Monitor rights.

In addition to these user roles, Unisphere includes an administrative role, the Initial Setup User. This user, defined during installation, is a temporary role that provides administrator-like permissions for the purpose of adding local users and roles to Unisphere. For more information, see Operating as the initial setup user on page 32.

Individual and group roles

Users gain access to a storage system or component either directly through a role assignment and/or indirectly through membership in a user group that has a role assignment. If a user has two different role assignments (one as an individual and one as a member of a group), the permissions assigned to the user are combined. For example, if a user is assigned a Monitor role and a StorageAdmin role through a group, the user is granted Monitor and StorageAdmin rights.

User IDs

Users and user groups are mapped to their respective roles by IDs. These IDs consist of a three-part string in the form:

Type:Domain\Name

Where:
- Type — Specifies the type of security authority used to authenticate the user or group. Possible types are:
  - L — Indicates a user or group authenticated by LDAP. In this case, Domain specifies the domain controller on the LDAP server. For example, L:danube.com\Finance indicates that user group Finance logged in through the domain controller danube.com.
  - C — Indicates a user or group authenticated by the Unisphere server. For example, C:Boston\Legal indicates that user group Legal logged in through Unisphere server Boston.
  - H — Indicates a user or group authenticated by logging in to a local account on a Windows host. In this case, Domain specifies the hostname. For example, H:jupiter\mason indicates that user mason logged in on host jupiter.
  - D — Indicates a user or group authenticated by a Windows domain. In this case, Domain specifies the domain or realm name. For example, D:sales\putman indicates that user putman logged in through the Windows domain sales.
- Name — Specifies the username relative to that authority. It cannot be longer than 32 characters, and spaces are allowed if delimited with quotes. Usernames can be for individual users or user groups.
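A minimal sketch of parsing these three-part IDs and ranking how specifically a rule matches a login: an exact Type:Domain\Name match wins over a wildcard-domain match (Type:*\Name), which wins over an unqualified username. The function and the numeric ranks are illustrative assumptions:

```python
def match_rank(rule_id, login_type, login_domain, login_name):
    """Rank how specifically an authorization-rule ID matches a login.
    3 = exact, 2 = wildcard-domain (Type:*\\name), 1 = unqualified name,
    0 = no match. (Illustrative sketch of the precedence rules.)"""
    if ":" in rule_id:
        rtype, rest = rule_id.split(":", 1)
        rdomain, rname = rest.split("\\", 1)
        if rtype != login_type or rname != login_name:
            return 0
        if rdomain == login_domain:
            return 3
        return 2 if rdomain == "*" else 0
    # No "Type:Domain" prefix: an unqualified username.
    return 1 if rule_id == login_name else 0

rules = ["D:sales\\putman", "D:*\\putman", "putman"]
best = max(rules, key=lambda r: match_rank(r, "D", "sales", "putman"))
print(best)  # D:sales\putman
```

When several rules match the same login, picking the highest rank reproduces the "most specific mapping wins" behavior; a rank of 0 for every rule corresponds to the user being assigned a role of None.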

Within role definitions, IDs can be either fully qualified (as shown above), partially qualified, or unqualified. When the Domain portion of the ID string is an asterisk (*), the asterisk is treated as a wildcard, meaning any host or domain. When configuring group access, the Domain portion of the ID must be fully qualified. For example:
l D:ENG\jones—Fully qualified path with a domain and username (for individual domain users).
l D:ENG.xyz.com\ExampleGroup—Fully qualified domain name and group name (for domain groups).
l D:*\jones—Partially qualified ID that matches username jones within any domain.
l H:HOST\jones—Fully qualified path with a hostname and username.
l H:*\jones—Partially qualified ID that matches username jones within any host.
l jones—Unqualified username that matches any jones in any domain on any host.
In the event that a user is matched by more than one mapping, the user authorization mechanism uses the more specific mapping. If an exact match (for example, D:sales\putman) is found, that is used; if a partial match (for example, D:*\putman) is found, that is used; if an unqualified match (for example, putman) is found, that is used; otherwise, the user is assigned a role of None.
Authorization
Adding authorization rules
Before you begin
l To perform this operation, you must be the Initial Setup User (set during installation), or a SecurityAdmin.
To add authorization rules:
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Authorized Users and Groups.
3. Click Create.
4. Optional: Select an authentication Authority. Possible values are:
l Local Directory—Specifies to authenticate the user against the local directory repository.
l Windows AD—Specifies to authenticate the user against the Active Directory domain.
l LDAP-SSL—Specifies to authenticate the user against an LDAP directory.
5. Do the following depending on the authority:
l Local Directory: Select the user Name.
l Windows AD or LDAP-SSL:
a. Specify whether the rule is for an individual User or for a user Group.
b. Optional: Type the name of the Domain used to authenticate the user/group. Possible values are based on the authentication authority:
72 Dell EMC Unisphere for PowerMax 9.0.0 Online Help (PDF version)

Authority — Domain Name
Local Directory — Unisphere server hostname
Windows OS — Unisphere server hostname
Windows AD — Unisphere server domain
LDAP-SSL — LDAP server domain
c. Type the Name of the user or group. User/group names can only contain alphanumeric characters.
6. For Database Storage Analyzer, select None, Read Only, or Admin. By default, Database Storage Analyzer permissions are set to None.
7. On the Roles tab, select the object and up to four roles.
8. Optional: Click the DSA Fast Hinting role. This role is only enabled for DSA administrators.
9. If you choose a Local Replication, Remote Replication or Device Management role, click Select Storage Group(s) and in the edit dialog that opens choose between:
a. Wildcard—A simple wildcard syntax can be used with the storage group component name to allow a single rule to apply to multiple storage groups, as follows:
abc — Exactly these characters
? — Any one character
* — Any zero or more characters
+ — Zero or more additional occurrences of the previous match
[a-z0-9] — Any of these characters
[!a-z] — Anything but one of these characters
All SG name comparisons are case-insensitive. The following examples show how patterns are interpreted:
This pattern — Matches these storage groups — Does not match these storage groups
tg_* — tg_DB_SG1 or tg_newSG — tgNewSG or TG_sg_db
prod_sg? — prod_sg1, prod_sga or Prod_sg2 — prod_sg12 or prod_sgab
prod_sg[0-9]+ — prod_sg1 or prod_sg12 — prod_sga or prod_sgab
The only allowed characters are a-zA-Z0-9_- along with the *+?[]! wildcard characters above. The only roles that can be assigned against storage groups are Local Replication, Remote Replication and Device Management. Storage groups do not have to exist at the time that a matching Role Based Access Control (RBAC) rule for them is defined. These storage group-level RBAC rules are only applicable to parent and stand-alone SGs, not child SGs.
Child SGs are protected by the RBAC rules, if any, on their parent SG.

Note: Unisphere for PowerMax does not support RBAC Device Group management.
b. Storage Group—Select specific storage groups.
c. Once your input or selection is complete, click Save.
10. Click OK.
Editing authorization rules
Before you begin
l To perform this operation, you must be the Initial Setup User (set during installation), or a SecurityAdmin on all authorized storage systems.
To modify authorization rules:
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Authorized Users and Groups.
3. Select a storage system ID from the drop-down list.
4. Select a rule and click Modify.
5. On the Roles tab, add or remove roles from any of the available objects, being sure to not exceed the four roles/object limit.
6. Click OK.
Removing authorization rules
Before you begin
To perform this operation, you must be the Initial Setup User (set during installation), or a SecurityAdmin on all authorized storage systems.
Note: To remove an authorization rule on a single object, select Modify.
To remove authorization rules:
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Authorized Users and Groups.
3. Select a storage system ID from the drop-down list.
4. Select a rule and click Remove.
5. Click OK.
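The storage-group wildcard syntax used in the Adding authorization rules procedure maps naturally onto regular expressions. The sketch below is illustrative only (translate_sg_pattern and sg_matches are not Unisphere functions); it assumes the semantics documented above: ? is any one character, * is zero or more, + repeats the previous match, [..] and [!..] are character classes, and comparisons are case-insensitive.

```python
import re

# Illustrative translation of the documented SG wildcard syntax into a
# Python regular expression. Not part of Unisphere; a sketch of the
# documented matching semantics. Assumes a well-formed pattern
# (every "[" has a closing "]", and "+" follows another token).
def translate_sg_pattern(pattern):
    out = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == "?":
            out.append(".")                 # any one character
        elif ch == "*":
            out.append(".*")                # zero or more characters
        elif ch == "+":
            out.append(out[-1] + "*")       # zero or more repeats of the previous match
        elif ch == "[":
            j = pattern.index("]", i)
            body = pattern[i + 1:j]
            if body.startswith("!"):        # [!a-z] -> negated class [^a-z]
                body = "^" + body[1:]
            out.append("[" + body + "]")
            i = j
        else:
            out.append(re.escape(ch))       # literal character
        i += 1
    # \Z anchors the end; match() anchors the start; SG names compare case-insensitively
    return re.compile("(?:%s)\\Z" % "".join(out), re.IGNORECASE)

def sg_matches(pattern, sg_name):
    return bool(translate_sg_pattern(pattern).match(sg_name))
```

With this sketch, tg_* matches tg_DB_SG1 but not tgNewSG, and prod_sg[0-9]+ matches prod_sg12 but not prod_sga, mirroring the examples in the procedure.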

Viewing authorization rules
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Authorized Users & Groups.
After you finish
Use the Authorized Users & Groups list view to view and manage authorization rules.
The following properties display:
l Name—User or group name.
l Authority—Authentication authority. Possible values are:
n Local Directory—Directory of users and encrypted passwords stored in a CST .xml file (users only, no groups).
n Windows OS—Local Windows users and groups.
n Windows AD—Windows Active Directory users and groups that are accessed through the SMAS server's domain.
n LDAP-SSL—Users and groups on an LDAP server that have been configured using the Configure Authorization wizard.
n Unsupported—Not supported.
l Authentication Domain—Domain name. Possible values are based on the authentication authority:
Authority — Domain name
Local Directory — Unisphere server hostname
Windows OS — Unisphere server hostname
Windows AD — Unisphere server domain
LDAP-SSL — LDAP server domain
Virtualization domain — Virtualization domain
Any authority — Any
The following controls are available:
l Create—Adding authorization rules on page 72
l Modify—Editing authorization rules on page 74
l Remove—Removing authorization rules on page 74
Viewing the authorized users and groups list
To view local user details, refer to Viewing local users details on page 78.
To view the authorized users and groups list:
Procedure
1. Select Settings to open the Settings panel.

2. Select Users and Groups > Authorized Users & Groups.
3. Select your required storage system ID from the drop-down list.
4. To see more information on a user, select the user and, on the right-hand side of the row, click the information icon.
The following controls are available:
l Create—Adding authorization rules on page 72
l Modify—Editing authorization rules on page 74
l Remove—Removing authorization rules on page 74
Viewing the authorized users and groups details
To view the authorized users and groups details:
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Authorized Users & Groups.
3. Select your required storage system ID from the drop-down list.
4. To see more information on a user, select the user and, on the right-hand side of the row, click the information icon.
5. View the following information in the information dialog: name, authority, domain, storage system identity, roles, and component name.
View Certificate dialog box
Use this dialog box to view the contents of an SSL certificate.
Local Users
Creating local users
Before you begin
To perform this operation, you must be the Initial Setup User (set during installation), or a SecurityAdmin on at least one storage system.
This procedure explains how to create local users. Local users have accounts stored locally in the user database on the Unisphere server host.
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Local Users.
3. Click Create to open the Create Local User dialog box.
4. Type a User Name. User names are case-sensitive and can only contain alphanumeric characters.
5. Optional: Type a Description.

6. Type and confirm a user Password. Passwords cannot exceed 16 characters. There are no restrictions on special characters in passwords. However, these characters should not be used when creating user names: \ : ,
7. Select the storage system, click the Roles tab, and select one or more roles - up to four can be selected.
8. For Database Storage Analyzer, select None, Read Only, or Admin. By default, Database Storage Analyzer permissions are set to Read Only.
9. Optional: Click the DSA Fast Hinting role. This role is only enabled for DSA administrators. It allows a user to create and modify FAST hints.
10. Click OK.
Editing local users
Before you begin
l To perform this operation, you must be the Initial Setup User (set during installation), or a SecurityAdmin on all authorized storage systems.
l Users cannot remove the SecurityAdmin role from themselves.
This procedure explains how to edit the roles associated with a user or group.
To edit local users:
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Local Users.
3. Select a user and click Modify.
4. Optional: Type a new Description.
5. On the Roles tab, add or remove roles from any of the available objects, being sure to not exceed the four roles/object limit.
6. Click OK.
Deleting local users
Before you begin
l To perform this operation, you must be the Initial Setup User (set during installation), or a SecurityAdmin on all authorized storage systems.
l Users cannot remove the SecurityAdmin role from themselves.
This procedure explains how to delete local users and all fully-qualified authorization rules (rules in the format L:HostName\UserName).
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Local Users.
3. Select a user and click Delete.

4. Click OK.
Changing local user passwords
Before you begin
l To perform this operation, you must be the Initial Setup User (set during installation), or a SecurityAdmin on at least one storage system.
This procedure explains how to change a local user's password.
To change local directory user passwords:
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Local Users.
3. Select a user and click Change Password.
4. Type the user's Old Password.
5. Type a New Password and Confirm Password.
6. Click OK.
Viewing local users
Before you begin
l To perform this operation, you must be the Initial Setup User (set during installation), or a Monitor on at least one storage system.
To view users with a local Unisphere account:
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Local Users.
3. Use the Local Users list view to view and manage local users.
The following properties display:
l User Name—User or group name.
l Description—Optional description.
The following controls are available:
l Viewing local users details on page 78.
l Create—Creating local users on page 76.
l Modify—Editing local users on page 77.
l Change Password—Changing local user passwords on page 78.
l Deleting local users on page 77.
Viewing local users details
This procedure explains how to view the details of a local user.

Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Local Users.
3. Select a user, hover over the row, and click the information icon to see the details view.
The following properties display:
l Name—User or group name.
l Authority
l Domain
l Symmetrix ID
l Roles
l Component Name
Viewing authorization rules
This procedure explains how to view the authorization rules associated with users and groups.
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Authorized Users & Groups.
3. Select the user and click the information icon to open the user's details view.
4. The following properties display:
l Name—User or group name.
l Authority—Authentication authority. Possible values are:
n Local Directory—Directory of users and encrypted passwords stored in a CST .xml file (users only, no groups).
n Windows OS—Local Windows users and groups.
n Windows AD—Windows Active Directory users and groups that are accessed through the SMAS server's domain.
n LDAP-SSL—Users and groups on an LDAP server that have been configured using the Configure Authorization wizard.
l Authentication Domain—Domain name. Possible values are based on the authentication authority:
Authority — Domain name
Local Directory — Unisphere server hostname
Windows OS — Unisphere server hostname
Windows AD — Unisphere server domain

LDAP-SSL — LDAP server domain
Virtualization domain — Virtualization domain
Any authority — Any
The following controls are available:
n Create—Adding authorization rules on page 72
n Modify—Editing authorization rules on page 74
n Delete—Removing authorization rules on page 74
Viewing user sessions
This procedure explains how to view active user sessions for a storage system.
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > User Sessions.
The following properties display:
l User Name—Name of the individual or group. An asterisk indicates the current user.
l Start Time—Date and time that the user logged in to the console.
l IP Address—Address of the console.
Roles and associated permissions
The following tables detail the permissions that go along with each role in Unisphere.
Note: The Unisphere Initial Setup User has all permissions on a storage system until an Administrator or SecurityAdmin is added to the storage system.
The roles and the acronyms used for them in these tables are:
l Administrator (AD)
l StorageAdmin (SA)
l Monitor (MO)
l SecurityAdmin (SecA)
l Auditor (AUD)
l None
l PerfMonitor (PM)
l Database Storage Analyzer Admin (DSA)

l Local Replication
l Remote Replication
l Device Management
Table 2 User roles and associated permissions

Permissions — MO / AD / SA / SecA / AUD / None / PM / DSA
Create/delete user accounts — No / Yes / No / Yes / No / No / No / No
Reset user password — No / Yes / No / Yes / No / No / No / No
Create roles — No / Yes / No / Yes (self-excluded) / No / No / No / No
Change own password — Yes / Yes / Yes / Yes / Yes / Yes / Yes / Yes
Manage storage systems — No / Yes / Yes / No / No / No / No / No
Discover storage systems — No / Yes / Yes / No / No / No / No / No
Add/show license keys — No / Yes / Yes / No / No / No / No / No
Set alerts and Optimizer monitoring options — No / Yes / Yes / No / No / No / No / No
Release storage system locks — No / Yes / Yes / No / No / No / No / No
Set Access Controls — No / Yes / No / Yes / No / No / No / No
Set replication and reservation preferences — No / Yes / Yes / No / No / No / No / No
View the storage system audit log — Yes / Yes / Yes / No / Yes / No / No / No
Access performance data — Yes / Yes / Yes / Yes / No / No / Yes / Yes
Start data traces — Yes / Yes / Yes / Yes / No / No / Yes / Yes
Set performance thresholds/alerts — No / Yes / Yes / No / No / No / Yes / No
Create and manage performance dashboards — Yes / Yes / Yes / Yes / No / No / Yes / Yes

Table 2 User roles and associated permissions (continued)

Permissions — MO / AD / SA / SecA / AUD / None / PM / DSA
Collect and analyze database activity with DSA — No / No / No / No / No / No / No / Yes

Table 3 Permissions for Local Replication, Remote Replication and Device Management roles

Permissions — Local Replication / Remote Replication / Device Management
Create/delete user accounts — No / No / No
Reset user password — No / No / No
Create roles — No / No / No
Change own password — Yes / Yes / Yes
Manage storage systems — No / No / No
Discover storage systems — No / No / No
Add/show license keys — No / No / No
Set alerts and Optimizer monitoring options — No / No / No
Release storage system locks — No / No / No
Set Access Controls — No / No / No
Set replication and reservation preferences — No / No / No
View the storage system audit log — No / No / No
Access performance data — Yes / Yes / Yes
Start data traces — Yes / Yes / Yes
Set performance thresholds/alerts — No / No / No
Create and manage performance dashboards — Yes / Yes / Yes
Collect and analyze database activity with Database Storage Analyzer — No / No / No

Table 3 Permissions for Local Replication, Remote Replication and Device Management roles (continued)

Permissions — Local Replication / Remote Replication / Device Management
Perform control, configuration and expand operations on devices — No / No / Yes
Create or delete devices — No / No / No
Perform local replication operations (SnapVX, legacy Snapshot, Clone, BCV) — Yes / No / No
Create Secure SnapVX snapshots — Yes / No / No
Create, operate upon or delete SRDF device pairs — No / Yes / No
Create, modify or delete SRDF groups — No / No / No
Link and launch
Creating link-and-launch client registrations
Before you begin
To perform this operation, you must be an Administrator or SecurityAdmin. Link-and-launch is not supported with X.509 certificate-based user authentication.
This procedure explains how to register other applications with the SMAS server. Once registered, users of the registered applications can launch Unisphere without logging in.
Procedure
1. Select Settings to open the Settings panel.
2. Select Management > Link and Launch.
3. Click Create.
4. Type a unique Client ID. Client IDs can be up to 75 alphanumeric characters.
5. Type the Password associated with the client ID. Passwords can be up to 75 alphanumeric characters.

6. Retype the password to confirm it.
7. Click OK.
Editing link-and-launch client registrations
Before you begin
To perform this operation, you must be an Administrator or SecurityAdmin.
This procedure explains how to change the password associated with a registered application.
Procedure
1. Select Settings to open the Settings panel.
2. Select Management > Link and Launch.
3. Select a registration, and click Edit.
4. Type the Current Password.
5. Type the New Password. Passwords can be up to 75 alphanumeric characters.
6. Retype the new password to confirm it.
7. Click OK.
Deleting link-and-launch client registrations
Before you begin
To perform this operation, you must be an Administrator or SecurityAdmin.
Procedure
1. Select Settings to open the Settings panel.
2. Select Management > Link and Launch.
3. Select a registration, and click Delete.
4. Click OK.
Viewing link and launch client registrations
Procedure
1. Select Settings to open the Settings panel.
2. Select Management > Link and Launch.
After you finish
The Link and Launch list view allows you to view and manage link and launch client registrations.
The following property displays:
l Client ID—Unique client ID.
The following controls are available:

l Create—Creating link-and-launch client registrations on page 83
l Edit—Editing link-and-launch client registrations on page 84
l Delete—Deleting link-and-launch client registrations on page 84
Managing Database Storage Analyzer (DSA) environment preferences
Before you begin
Only a user with Administrator permission can specify DSA environment preferences.
To specify DSA environment preferences:
Procedure
1. Select Settings to open the Settings panel.
2. Select DSA Environment to open the DSA Environments page.
3. Select an environment from the Environments drop-down list.
4. Modify any number of the following:
l Data retention for 5 min data—Number of days (between 15 and 30) to retain 5 minute data.
l Data retention for hourly data—Number of months (between 12 and 36) to retain hourly data.
l Data retention for daily data—Number of months (between 12 and 36) to retain daily data.
l First threshold for DB read response time (ms)—First threshold for DB read response time.
l Second threshold for DB read response time (ms)—Second threshold for DB read response time.
5. Click APPLY.
Managing data protection preferences
Before you begin
Only a user with Administrator permission can specify data protection preferences.
To specify data protection preferences:
Procedure
1. Select Settings to open the Settings panel.
2. Select Data Protection to open the Data Protection page.
3. Modify any number of the following:
l Clone Copy Mode—Select the default behavior for creating clone sessions. Possible values are:
n No Copy No Diff—Creates a nondifferential (full) copy session without a full background copy.

n Copy No Diff—Creates a nondifferential (full) copy session in the background.
n PreCopy No Diff—Creates a nondifferential (full) copy session in the background before the activate starts.
n Copy Diff—Creates a differential copy session in the background. In differential copy sessions, only those volume tracks that have changed since the full clone was performed are copied (that is, only new writes to the source volume are copied).
n PreCopy Diff—Creates a differential copy session in the background before the activate starts. In differential copy sessions, only those volume tracks that have changed since the full clone was performed are copied (that is, only new writes to the source volume are copied).
n VSE No Diff—Creates a VP Snap Copy session.
l Clone Target—Select the default target volume.
l Protection Setup Wizard SRDF Communication Protocol—Select the default SRDF communication protocol, GigE or Fibre Channel.
l Protection Setup Wizard SRDF Number of Ports—Select the default number of ports to use with SRDF.
4. Click APPLY.
Viewing authentication authority information
Procedure
1. Select Settings to open the Settings panel.
2. Select Users and Groups > Authentication.
3. Hover over an authentication type and click the information icon.
The authentication authority information is displayed. For LDAP-SSL, the following is displayed when LDAP is enabled:
l Server—Hostname or IP address of the LDAP server used for authentication.
l Port—Port number of the LDAP service. Typically, this value is 389 for LDAP and 636 for LDAPS.
l Bind DN—Distinguished name (DN) of the privileged account used to perform operations, such as searching users and groups, on the LDAP directory.
l User Search Path—Distinguished name of the node at which to begin user searches.
l User Object Class—Object class identifying users in the LDAP hierarchy.
l User ID Attribute—Attribute identifying the user login ID within the object.
l Group Search Path—Distinguished name of the node at which to begin group searches.
l Group Object Class—Object class identifying groups in the LDAP hierarchy.

l Group Name Attribute—Attribute identifying the group name.
l Group Member Attribute—Attribute indicating group membership for a user within the group object.
l Limit Authentication to Group—Name of authenticated LDAP group.
l Status—Status of authentication (enabled or disabled).
l Limited Authentication Group(s)—Limited Authentication Group(s) names.
Local User and Authorization operations
l Modify Local User (see Editing local users on page 77).
l Viewing local users details (see Viewing local users details on page 78).
l Modify Authorization rules (see Editing authorization rules on page 74).
l Viewing the authorized users and groups details (see Viewing the authorized users and groups details on page 76).
Link and Launch operations
l Creating link-and-launch client registrations (see Creating link-and-launch client registrations on page 83).
l Editing link-and-launch client registrations (see Editing link-and-launch client registrations on page 84).
Entering PIN number
To enter the PIN number:
Procedure
1. Select Settings to open the Settings panel.
2. Select one of the following:
l Symmetrix Access Control > Access Control Entries
l Symmetrix Access Control > Access Groups
l Symmetrix Access Control > Access Pools
A warning is displayed if you have read-only access.
3. Click Enter PIN.
4. Enter the PIN number.
5. Click OK.
Report operations
The following report operations are available:
l Create Compliance reports (see Creating Compliance Reports on page 157).
l Create performance reports (see Creating performance reports on page 579).

l Modify performance reports (see Modifying performance reports on page 583).
l Copy performance reports (see Copying performance reports on page 580).

89 CHAPTER 4 Storage Management
l Understanding Storage Management...90
l Tag and Untag operations...91
l Viewing Storage Group Demand Reports...91
l Viewing Service Level Demand Reports...92
l Viewing CKD volumes...92
l Viewing CKD volumes in CU image...93
l Viewing Storage Group Compliance view...94
l Dialog displayed when there is less than one week's data collected...96
l Setting volume emulation...96
l FAST association operations...97
l Removing DATA volumes...97
l Mapping volume operations...97
l Rename operations...98
l Provisioning storage...98
l Creating storage groups...112
l Understanding FAST...151
l Managing volumes...177
l Viewing disk groups...230
l Creating DATA volumes...237
l Creating thin pools...240
l Creating thin volumes...256
l Understanding Virtual LUN Migration...258
l Understanding Federated Tiered Storage...263
l Understanding storage templates...267
l Understanding FAST.X...272
l Viewing reservations...277
l Managing vVol...278
l Understanding compression...287
Storage Management 89

Understanding Storage Management
Storage Management covers the following areas:
l Storage Groups - Management of storage groups. Storage groups are a collection of devices stored on the array that are used by an application, a server, or a collection of servers. Storage groups are used to present storage to hosts in masking/mapping, Virtual LUN Technology, FAST, and various base operations.
l Service Levels - Management of service levels. A service level is the response time target for a storage group. The service level sets the storage array with the desired response time target for a storage group. It automatically monitors and adapts to the workload in order to maintain the response time target. The service level includes an optional workload type so it can be fine-tuned to meet performance levels.
l Templates - Management of templates. Using the configuration and performance characteristics of an existing storage group as a starting point, you can create templates that will pre-populate fields in the provisioning wizard and create a more realistic performance reservation in your future provisioning requests.
l Storage Resource Pools - Management of Storage Resource Pools. Fully Automated Storage Tiering (FAST) provides automated management of storage array disk resources to achieve expected service levels. FAST automatically configures disk groups to form a Storage Resource Pool (SRP) by creating thin pools according to each individual disk technology, capacity and RAID type.
l Volumes - Management of volumes. A storage volume is an identifiable unit of data storage. Storage groups are sets of volumes.
l External Storage - Management of external storage. FAST.X attaches external storage to storage systems and directs workload movement to these external arrays while having access to array features such as local replication, remote replication, storage tiering, data management, and data migration.
In addition, it simplifies multi-vendor or Dell EMC storage array management.
l vVols - Management of vVols. VMware vVols allow data replication, snapshots, encryption and so on to be controlled at the VMDK level instead of the LUN level, where these data services are performed on a per-VM (application level) basis from the storage array.
l FAST Policies - Management of FAST policies. A FAST policy is a set of one to three DP tiers or one to four VP tiers, but not a combination of both DP and VP tiers. Policies define a limit for each tier in the policy. This limit determines how much data from a storage group associated with the policy is allowed to reside on the tier.
l Tiers - Management of storage tiers. FAST automatically moves active data to high-performance storage tiers and inactive data to low-cost, high-capacity storage tiers.
l Thin Pools - Management of thin pools. Storage systems are pre-configured at the factory with virtually provisioned devices. Thin Provisioning helps reduce cost, improve capacity utilization, and simplify storage management. Thin Provisioning presents a large amount of capacity to a host and then consumes space only as needed from a shared pool. Thin Provisioning ensures that thin pools can expand in small increments while protecting performance, as well as allowing non-disruptive shrinking of thin pools to help reuse space and improve capacity utilization.

l Disk Groups - Management of disk groups. A disk group is a collection of physical drives within the storage array that share the same performance characteristics.
l VLUN Migration - Management of VLUN migration. Virtual LUN Migration (VLUN Migration) enables transparent, nondisruptive data mobility for both disk group provisioned and virtually provisioned storage system volumes between storage tiers and between RAID protection schemes. Virtual LUN can be used to populate newly added drives or move volumes between high performance and high capacity drives, thereby delivering tiered storage capabilities within a single storage system. Migrations are performed while providing constant data availability and protection.
Tag and Untag operations
The following tag and untag operations are available:
l Storage Group level - RecoverPoint tag and untag (see Tagging and untagging volumes for RecoverPoint (storage group level) on page 472).
l Volume level - RecoverPoint tag and untag (see Tagging and untagging volumes for RecoverPoint (volume level) on page 472).
l Data Protection > Open Replicator > RecoverPoint volumes untag (see Untagging RecoverPoint tagged volumes on page 473).
Viewing Storage Group Demand Reports
This procedure explains how to view storage groups on an SRP and their associated workloads.
Before you begin: This feature requires HYPERMAX OS 5977 or higher.
To view storage group demand reports:
Procedure
1. Select the storage system.
2. Select CAPACITY to open the CAPACITY dashboard.
3. Select an SRP instance from the drop-down menu and in the Actions panel, click STORAGE GROUP DEMAND.
Some or all of the following properties display:
l Storage Group—Name of the storage group.
l Subscription (GB)—Amount of SRP capacity to which the storage group subscribed.
l Allocated (GB)—The amount of allocated pool capacity (in GB).
l Allocated (%)—The percentage of allocated pool capacity.
l Used (GB)—The amount of used pool capacity (in GB).
l Snapshot Allocated (GB)—The amount allocated to snapshots.
l Compression Ratio—The compression ratio.
l SNAP Used (GB)—The amount used by snapshots.
l Snapshot Compression Ratio—The snapshot compression ratio.

- Emulation — Emulation type. This displays only if the storage system is capable of containing CKD devices.

The following control is available:

- Export Report — Exports the report to a PDF file.

Viewing Service Level Demand Reports

This procedure explains how to view the demand that each service level is placing on the SRP.

Before you begin: This feature requires HYPERMAX OS 5977 or higher.

To view service level demand reports:

Procedure
1. Select the storage system.
2. Select CAPACITY to open the CAPACITY dashboard.
3. Select an SRP instance from the drop-down menu and, in the Actions panel, click SERVICE LEVEL DEMAND.

Results
Some or all of the following properties display:

- Service Level — Name of the service level.
- Allocated (GB) — Total space that the service level has allocated on the SRP, in GB.
- Allocated (%) — Percentage of space that the service level has allocated on the SRP.
- Subscription (GB) — Total space that the service level has subscribed on the SRP, in GB.
- Subscription (%) — Percentage of space that the service level has subscribed on the SRP.

Viewing CKD volumes

This procedure explains how to view CKD volumes from the Hosts > Mainframe dashboard. To see the CKD volumes in a CU image, see Viewing CKD volumes in CU image on page 93.

Procedure
1. Select the storage system.
2. Select HOSTS > Mainframe and click CKD Volumes in the Summary panel.

The CKD Volumes list view is displayed. Use this list view to view and manage the volumes. The following properties display; however, not all properties may be available for every volume type:

- Name — Assigned volume name.
- Type — Type of volume.
- Allocated % — Percentage of the volume that is allocated.
- Capacity (GB) — Volume capacity in gigabytes.
- Status — Volume status.
- Emulation — Emulation type for the volume.
- Host Paths — Number of masking records for the volume.
- Reserved — Indicates whether the volume is reserved.
- Split — The name of the associated split.
- CU Image — The number of the associated CU image.
- Base Address — Base address.

The following controls are available; however, not all controls may be available for every volume type:

- Viewing CKD volume details on page 204
- Create — Creating volumes on page 178
- Expand — Expanding existing volumes on page 191
- Deleting volumes on page 188
- Create SG — Creating storage groups on page 112
- Set Volumes > Emulation — Setting volume emulation on page 96
- Set Volumes > Attribute — Setting volume attributes on page 195
- Set Volumes > Identifier — Setting volume identifiers on page 196
- Set Volumes > Status — Setting volume status on page 194
- Set Volumes > Replication QoS — QOS for replication on page 197
- Set Volumes > Set SRDF GCM — Setting the SRDF GCM flag on page 434
- Set Volumes > Reset SRDF/Metro Identity — Resetting original device identity on page 432
- Allocate/Free/Reclaim > Start — Managing thin pool allocations on page 244
- Allocate/Free/Reclaim > Stop — Managing thin pool allocations on page 244
- Configuration > Change Volume Configuration — Changing volume configuration on page 190
- Configuration > Duplicate Volume — Duplicating volumes on page 188
- Configuration > z/OS Map — z/OS map from the volume list view on page 333
- Configuration > z/OS Unmap — z/OS unmap from the volume list view on page 334

Viewing CKD volumes in CU image

Procedure
1. Select the storage system.
2. Select Hosts > CU Images.
3. Select the CU image and click to view its details.
4. In the details panel, click the number in the Number of Volumes field to open the CKD Volumes list view.
5. Use the CKD Volumes list view to display and manage CKD volumes in a CU image.

Results
- Name — Symmetrix volume name.
- Type — Volume configuration.
- Status — Volume status.
- Capacity (GB) — Volume capacity in GB.
- Emulation — Emulation type.
- UCB Address — Unit control block address (address used by z/OS to access this volume).
- Volser — Volume serial number (disk label (VOL1) used when the volume was initialized).

The following controls are available:
- Viewing CU image details on page 329
- z/OS Map — z/OS map from the volume list view on page 333
- z/OS Unmap — z/OS unmap from the volume list view on page 334

Viewing Storage Group Compliance view

Before you begin
The user requires a minimum of Monitor permissions to perform this task.

Definitions:

- Workload Skew - Skew is represented by capacity and load pairs. There are two sources of skew for a storage group: device statistics and SG_PER_POOL chunks. An algorithm in WLP merges these two lists to produce a usable skew profile. A skew profile is only useful if there are multiple chunks. If an SG has a single device, there is not enough data to calculate skew, and the corresponding storage group per pool metrics can be used. Similarly, if an array has only one pool, the device statistics are more meaningful for skew.
- Workload Mixture - The mixture is the distribution of various I/O types as percentages of the total IOPS. These are useful for determining, for example, whether a workload is read-heavy or write-heavy, and whether I/Os are mostly random or mostly sequential.

To view the Storage Group (SG) Compliance view:

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups view.

3. Select a storage group and click to view its details.
4. Select VIEW ALL DETAILS.
5. Select the Compliance tab.

Charts are displayed for the following:

- Response Time chart - This chart displays wait time weighted response time and (if applicable) the target service level response time band. The following explains the data in the chart.
  - Actual: running I/O to Storage Group - Wait time weighted response time is calculated in buckets and displayed. If a bucket has no data, 0 is displayed.
  - Actual: no I/O to Storage Group - 0s are displayed.
  - Planned: SLO Response Time Max and SLO Response Time Min are displayed as a data band across the timeline, labeled "Planned". If the service level is Optimized, no plan is displayed, because there is no response time band for Optimized.
  - Excluded Data: If a recurring exclusion has been set via the Exclusion Windows dialog, the windows are represented by vertical gray plot bands.
  - Last Processed: A 2px dotted plot line marks the most recent SPA HOURLY timestamp processed by SPA for a given metric. It is not represented in the legend, but hovering shows the associated timestamp. In normal processing, this acts as a "Where am I" indicator; if WLP stops processing for some reason, it is a subtle debugging helper.
- IOPS chart - This chart toggles between IO/sec and MB/sec, displaying IO rate weighted metric values, "planned" values, and (if set) Host I/O Limits. The following explains the data in the chart.
  - Actual: running I/O to Storage Group - IO rate weighted total IOPS (or total MBPS) are calculated in buckets and displayed. If a bucket has no data, 0 is displayed.
  - Actual: no I/O to Storage Group - 0s are displayed.
  - Planned: Host I/O Limits for standalone SG - The Host I/O Limit is displayed as a static value across the timeline. The Host I/O Limit is only shown on the chart it impacts. For example, if an MBPS host I/O limit is set and the user has IOPS selected, they won't see anything unless they toggle to MBPS.
  - Planned: Host I/O Limits for child SG, no limit for the parent SG - The Host I/O Limit is displayed as a static value across the timeline. The Host I/O Limit is only shown on the chart it impacts. For example, if an MBPS host I/O limit is set and the user has IOPS selected, they won't see anything unless they toggle to MBPS.
  - Planned: No Host I/O Limit for child SG, limit for parent SG - If a cascaded SG has a host I/O limit set at the parent but no direct limit of its own, the host I/O limit of any given child would be the parent limit minus whatever the siblings are using.
  - Planned: Host I/O Limits for child SG and parent SG - If a cascaded SG has a host I/O limit set at the parent and a direct limit of its own, the host I/O limit of any given child would be the more limiting of the parent limit minus whatever the siblings are using, or the child SG's own limit.
  - Excluded Data: If a recurring exclusion has been set via the Exclusion Windows dialog, the windows are represented by vertical gray plot bands.

  - Last Processed: A 2px dotted plot line marks the most recent SPA HOURLY timestamp processed by SPA for a given metric. It is not represented in the legend, but hovering shows the associated timestamp. In normal processing, this acts as a "Where am I" indicator; if WLP stops processing for some reason, it is a subtle debugging helper.
- Workload Skew chart - This chart compares actual workload skew - represented by cumulative capacity and load percentages (ordered by access density) - to planned skew. If there is no I/O data, Actual is displayed as 50% skew - a straight line from (0,0) to (100,100). If there is one device in the SG and only one thin pool, the merged device and SG per pool skew profile does not provide enough data points, and Actual is likewise displayed as 50% skew - a straight line from (0,0) to (100,100). If I/O is running to the SG, the skew is a logarithmic curve (or a stepped line graph in some cases).
- I/O Mixture chart - This chart compares the actual workload mixture to the planned workload mixture. The inner pie represents the actual I/O distribution. The outer donut represents the planned mixture. If there is no I/O to the storage group, the mixture distribution will be equal percentages for each I/O type (20% read hit, 20% sequential write, and so on) and the tooltip will show the corresponding I/O sizes as 0 kB.

Show Plan
Select the slider to turn on or turn off the display of the plan. The plan is a reference point used for comparison, and is a two-week expiring performance reservation for subsequent provisioning suitability calculations.
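The charting math behind the Workload Skew chart can be sketched generically: order the capacity/load chunks by access density (load per unit of capacity), then accumulate capacity and load percentages. The sketch below is an illustrative reduction of the description above, not the actual WLP merge algorithm, and the input pairs are made up:

```python
# Sketch: build a cumulative skew profile from (capacity_gb, load_iops)
# chunks, densest first, as the Workload Skew chart plots them.
# Illustrative only -- not the actual WLP data model or algorithm.

def skew_profile(chunks):
    """Return cumulative (capacity %, load %) points, densest chunks first."""
    total_cap = sum(c for c, _ in chunks)
    total_load = sum(l for _, l in chunks)
    if not total_cap or not total_load:
        # No I/O data: the chart shows 50% skew, a straight line
        # from (0, 0) to (100, 100).
        return [(0.0, 0.0), (100.0, 100.0)]
    ordered = sorted(chunks, key=lambda c: c[1] / c[0], reverse=True)
    points, cap_acc, load_acc = [(0.0, 0.0)], 0.0, 0.0
    for cap, load in ordered:
        cap_acc += cap
        load_acc += load
        points.append((round(100 * cap_acc / total_cap, 1),
                       round(100 * load_acc / total_load, 1)))
    return points

# The densest 10% of capacity carrying 90% of the load yields a
# sharply bowed (highly skewed) curve.
profile = skew_profile([(10, 900), (40, 80), (50, 20)])
```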
The following controls are available:
- Exclude Data - Managing Data Exclusion Windows on page 158
- Save As a Template - Creating storage templates on page 267
- Reset Workload Plan - Resetting Workload Plan on page 177
- Set Host I/O Limits - Setting host I/O limits on page 132

Dialog displayed when there is less than one week's data collected
This dialog is displayed when at least one week of data has not been collected for the selected storage group. It is recommended that you wait until you have at least one week of data. Alternatively, if you wish to proceed, select the Autofill the template workload with averages from the stats collected so far checkbox and click OK.

Setting volume emulation

Before you begin
You cannot set attributes for DATA volumes. Setting emulation for CKD volumes is not supported. If attempting to set attributes for multiple volumes of type FBA and CKD, a warning is displayed stating that the action will be applied only to FBA volumes. Setting emulation is not supported on masked/mapped volumes.

To set volume emulation:

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Select one of the volume type tabs.
4. Select a volume and click Set Volumes > Emulation.
5. Select the Emulation type.
6. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

FAST association operations

The following FAST association operations are available:
- Associating storage groups with FAST policies (see Associating storage groups with FAST policies on page 169).
- Associating FAST policies with storage groups (see Associating FAST policies with storage groups on page 168).
- Reassociating FAST policies and storage groups (see Reassociating FAST polices and storage groups on page 170).

Removing DATA volumes

This procedure explains how to remove DATA volumes on storage systems running Enginuity version 5876.

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the thin pool and click to open its Details view.
4. Click the number next to Number of Data Volumes.
5. Select a data volume and click Remove.
6. Click OK.

Mapping volume operations

The following mapping volume operations are available:
- Mapping volumes (see Mapping volumes on page 192).
- Unmapping volumes (see Unmapping volumes on page 193).
- Mapping CKD volumes (see Mapping CKD volumes on page 340).

- Unmapping CKD volumes (see Unmapping CKD volumes on page 341).
- z/OS map from the volume list view (see z/OS map from the volume list view on page 333).
- z/OS unmap from the volume list view (see z/OS unmap from the volume list view on page 334).
- z/OS map from the Volumes (Storage Groups) list view (see z/OS map from the Volumes (Storage Groups) list view on page 335).
- z/OS unmap from the Volumes (Storage Groups) list view (see z/OS unmap from the Volumes (Storage Groups) list view on page 335).
- z/OS map FBA volumes from the Volumes (Storage Groups) list view (see z/OS map FBA volumes from the Volumes (Storage Groups) list view (HYPERMAX OS 5977 or higher) on page 338).
- z/OS map from the CU image list view (see z/OS map from the CU image list view on page 332).
- z/OS unmap from the CU image list view (see z/OS unmap from the CU image list view on page 333).

Rename operations

The following rename operations are available:
- Rename disk groups (see Renaming disk groups on page 237).
- Rename storage tiers (see Renaming tiers on page 163).

Provisioning storage

With the release of HYPERMAX OS 5977 and the next generation storage systems, Unisphere introduces support for service level provisioning. Service level provisioning simplifies storage management by automating many of the tasks associated with provisioning storage. It eliminates the need for storage administrators to manually assign physical resources to their applications. Instead, storage administrators specify the storage performance and capacity required for the application and let the system provision the workload appropriately.

By default, storage systems running HYPERMAX OS 5977 or higher are pre-configured with a single Storage Resource Pool (SRP) containing all the physical disks on the system organized into disk groups by technology, capacity, rotational speed, and RAID protection type. Unisphere allows storage administrators to view all the SRPs configured on the system and the demand that storage groups are placing on them.

In addition, storage systems are also pre-configured with a number of service levels and workloads, which storage administrators use to specify the performance objectives for the application they are provisioning. When provisioning storage for an application, storage administrators assign the appropriate SRP, service level, and workload to the storage group containing the application's LUNs.
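Outside the UI, this SRP / service level / workload association is typically expressed as a single storage-group create request. The sketch below builds such a request in the general style of the Unisphere REST sloprovisioning resource; the endpoint path, field names, and values are assumptions for illustration only — consult the Unisphere for PowerMax REST API guide for the authoritative schema:

```python
# Hypothetical sketch of a storage-group create request associating an SRP,
# service level, and workload. Endpoint path and field names are assumed,
# not authoritative; the hostname and array ID are placeholders.
BASE = "https://unisphere.example.com:8443/univmax/restapi/90/sloprovisioning"

def build_create_sg_request(array_id, sg_name, srp, slo, workload,
                            num_vols, vol_gb):
    """Build (url, body) for a hypothetical storage-group create call."""
    url = f"{BASE}/symmetrix/{array_id}/storagegroup"
    body = {
        "storageGroupId": sg_name,          # the new storage group's name
        "srpId": srp,                       # Storage Resource Pool
        "sloBasedStorageGroupParam": [{
            "sloId": slo,                   # service level, e.g. "Gold"
            "workloadSelection": workload,  # e.g. "OLTP" or "DSS"
            "num_of_vols": num_vols,
            "volumeAttribute": {"volume_size": str(vol_gb),
                                "capacityUnit": "GB"},
        }],
    }
    return url, body

url, body = build_create_sg_request("000197800123", "Payroll_SG",
                                    "SRP_1", "Gold", "OLTP", 4, 100)
```

In a real client, the body would be sent as JSON in a POST with the Unisphere credentials; here only the request construction is shown.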

Unisphere provides the following methods for provisioning storage:

Recommended: This method relies on wizards to step you through the provisioning process, and is best suited for novice and advanced users who do not require a high level of customization (that is, the ability to create their own volumes, storage groups, and so on).

Advanced: This method, as its name implies, is for advanced users who want the ability to control every aspect of the provisioning process.

This section provides the high-level steps for each method, with links to the relevant help topics for more detail. Regardless of the method you choose, once you have completed the process you will have a masking view, in which the volumes in the storage group are masked to the host initiators and mapped to the ports in the port group.

Before you begin: The storage system must already be configured. For instructions on provisioning storage systems running Enginuity 5876 or higher, refer to Provisioning storage on page 107.

To provision storage (Recommended method):
1. Creating hosts on page 292 - Use the Create Host dialog box to group host initiators (HBAs).
2. Using the Provision Storage wizard on page 100 - Use the Provision Storage wizard, which will step you through the process of creating the storage group, port group, and masking view.

To provision storage (Advanced method):
1. Creating hosts on page 292 - Use the Create Host dialog box to group host initiators (HBAs).
2. Creating volumes on page 178 - Create one or more volumes on the storage system.
3. Use the Create Storage Group dialog box to add the volumes you just created to a storage group, and associate the storage group with a storage resource pool, a service level, and a workload.
4. Creating port groups on page 316 - Group Fibre Channel and/or iSCSI front-end directors.
5. Creating masking views on page 307

Associate the host, storage group, and port group into a masking view.

Using the Provision Storage wizard

Before you begin
- The storage system is running HYPERMAX OS 5977 or higher.
- The user must have Administrator or StorageAdmin permission.
- There are multiple ways to open the Provision Storage wizard. Depending on the method you use, some of the steps listed below may not apply. For example, if you open the wizard from the Hosts view, the step on selecting a host does not apply; selecting a storage group in the Storage Groups list view and clicking Provision Storage opens the wizard on the Select Host/Host Group page because you are starting out with a storage group. If you open the wizard from the Provisioning Templates view, the steps on selecting the Service Level and Workload Type do not apply. When opening the wizard from the Provisioning Templates view, also note the following:
  - Based on the selected template, the appropriate fields (service level, workload type, size and number of volumes) will be filled in with values from the template. If the service level is not available on the default SRP on the selected storage system, it will default to the default service level (Diamond for all-flash arrays, Optimized for hybrid arrays).
  - When creating a storage group from the first page without adding it to a masking view, the storage group will be associated with the template but will be marked invalid (not included in the usage count for that template) until it is added to a masking view.
  - If the selected template has host I/O limits defined, the limits will be set based on the provisioning request:
    - Standalone SG: The limits will be set and can be modified.
    - Cascaded SG: The limits will be set on each of the children, but the parent will have no limit set.

This procedure explains how to use the Provision Storage wizard to provision storage systems running HYPERMAX OS 5977. You can also use a subset of the steps to simply create a storage group, without actually provisioning it.

The maximum number of storage groups allowed on a storage system running HYPERMAX OS 5977 is 16,384. For HYPERMAX OS 5977 or higher, the maximum number of child storage groups allowed in a cascaded configuration is 64.

For instructions on provisioning storage systems running Enginuity 5876, refer to Using the Provision Storage wizard on page 108.

To use the Provision Storage wizard:

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.

3. Do one of the following:
- Select the storage group and click Create to open the Provision Storage wizard.
- Select the storage group and click Provision Storage to Host to open the Provision Storage wizard (go to step 8).
4. Type a Storage Group Name. Storage group names must be unique from other storage groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and dashes (-) are allowed. Storage group names are case-insensitive.
5. If required, select an Emulation type.
6. Select a Storage Resource Pool. To create the storage group outside of FAST control, select None. External storage resource pools are listed below the External heading.
7. Optional: Add one or more storage groups by hovering over the area to the right of the volume capacity and selecting the add control.
8. Optional: Create a storage group with multiple volume sizes or edit the storage group by hovering over the area to the right of the volume capacity and selecting the edit control (see Editing storage group volume details on page 149).
9. Select the Service Level to set on the storage group. Service levels specify the characteristics of the provisioned storage, including average response time, workload type, and priority. This field defaults to None if you set the Storage Resource Pool to None. Possible values are:

Service level        | Performance type  | Use case
Diamond              | Ultra high        | HPC, latency sensitive
Platinum             | Very high         | Mission critical, high rate OLTP
Gold                 | High              | Very heavy I/O, database logs, datasets
Silver               | Price/Performance | Database datasets, virtual applications
Bronze               | Cost optimized    | Backup, archive, file
Optimized (Default)  | Optimized         | Places the most active data on the highest performing storage and the least active on the most cost-effective storage.

For all-flash storage systems running HYPERMAX OS 5977, the only service level available is Diamond and it is selected by default.

10. Select the Workload Type to assign.

Note: Workload type is not supported for CKD storage groups.

Note: Starting with Unisphere 9.0, workloads are not supported on PowerMaxOS 5978 and higher.

Workload types are used to refine the service level (that is, narrow the latency range). Possible values are OLTP or DSS, where the OLTP workload is focused on optimizing performance for small block I/O and the DSS workload is focused on optimizing performance for large block I/O. The Workload Type can also specify whether to account for any overhead associated with replication (OLTP_Rep and DSS_Rep).

11. Type the number of Volumes and select the Capacity of each.

Note: The maximum volume size supported on a storage system running HYPERMAX OS 5977 is 64 TB. It is possible to create an empty storage group with no volumes.

12. Optional: To set host I/O limits for the storage groups, click Set Host I/O Limits to open the Host I/O Limits dialog box. For instructions on setting the limits, refer to the help page for the dialog box. When done, close the dialog box to return to the wizard.
13. Compression is enabled by default on All Flash systems running the HYPERMAX OS 5977 Q3 2016 Service Release or higher when you are creating a storage group or storage container. To disable it, uncheck the Compression check box. For more information, refer to Understanding compression.
14. To create a storage group without actually provisioning it, click one of the following; otherwise, click Next and continue with the remaining steps in this procedure:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
15. Specify the host/host group to use by selecting an existing host/host group, or do the following to create a new host/host group. When done, click Next.
- To create a new host, click Create Host to open the Create Host dialog box. For instructions on creating a host, refer to the dialog's help page.
- To create a new host group, click Create Host Group to open the Create Host Group dialog box. For instructions on creating a host group, refer to the dialog's help page.
16. Select whether to use a New or an Existing port group, and then do the following depending on your selection:

New:
a. Optional: Edit the suggested Port Group Name by highlighting it and typing a new name over it. Port group names must be unique from other port groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and dashes (-) are allowed. Port group names are case-insensitive.
b. Select the ports to use. To view host-invisible ports (unmasked and unmapped), select Include ports not visible to the host. If a Fibre or iSCSI host was not selected, select the appropriate filter to filter the port list by iSCSI virtual ports or FC ports based on the selected host. If an empty host was selected, the Fibre radio button is selected by default. The Dir-Port table is filtered to show only FC or iSCSI ports, depending on the radio button selection. The following properties display:
- Dir-Port — Storage system director and port in the port group.
- Identifier — Port identifier.
- Initiators Logged In — Number of initiators logged into the fabric.
- PGs — Number of port groups where the port is a member.
- Mappings — Number of mappings.
- % Busy — Percentage of time that the port is busy.
c. Click Next.

Existing: Select the port group and click Next.

17. Optional: Edit the suggested name for the Masking View by highlighting it and typing a new name over it. Verify the rest of your selections. To change any of them, click Back. Note that some changes may require you to make additional changes to your configuration.
18. Optional: To receive alerts when the performance of the storage group changes relative to its service level target, select Enable Compliance Alerts. For more information on compliance alerts, refer to Creating service level compliance alerts policies on page 63.
19. Optional: Click Set Host I/O Limits. This option is not displayed when you select an existing storage group and click Provision; it is displayed when you click Create.
20. Optional: Determine if the storage system can handle the updated service level:
- Click Run Suitability Check. The Suitability Check dialog box opens, indicating the suitability of the change. For information on interpreting the results, refer to the dialog's help page. This option is only available under certain circumstances. For more information, refer to Suitability Check on page 111.
- Click OK to close the message.
- If your updates are found to be unsuitable, modify the settings and run the check again until the suitability check passes.
21. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
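The host I/O limit behavior for cascaded storage groups, as described for the IOPS chart in the compliance view, reduces to simple arithmetic: a child with no direct limit inherits whatever the parent limit leaves after the siblings' share, and a child with its own limit gets the more restrictive of the two. A minimal sketch with illustrative names — not Unisphere's actual implementation:

```python
# Sketch of the effective host I/O limit seen by a child SG in a cascaded
# configuration. Names and units (IOPS) are illustrative.

def effective_child_limit(parent_limit, siblings_demand, child_limit=None):
    """Effective host I/O limit for a child SG under a limited parent."""
    # What the parent limit leaves over after the siblings' usage.
    remainder = max(parent_limit - sum(siblings_demand), 0)
    if child_limit is None:
        return remainder                 # child inherits the remainder
    return min(remainder, child_limit)   # the more limiting value wins

# Parent capped at 10,000 IOPS with siblings consuming 6,000:
# an uncapped child may use up to 4,000, while a child with its own
# 2,500 IOPS limit stays at 2,500.
```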

Provisioning storage for mainframe

With the release of HYPERMAX OS 5977 Q1 2016, Unisphere introduces support for service level provisioning for mainframe. Service level provisioning simplifies storage system management by automating many of the tasks associated with provisioning storage. It eliminates the need for storage administrators to manually assign physical resources to their applications. Instead, storage administrators specify the service level and capacity required for the application and the system provisions the storage group appropriately.

You can provision CKD storage to a mainframe host using the Provision Storage wizard. For specific instructions, refer to Using the Provision Storage wizard for mainframe on page 104. The storage system must be running HYPERMAX OS 5977 Q1 2016, or higher, and have at least one FICON director configured.

To provision storage for Open Systems, refer to Using the Provision Storage wizard on page 100.

Mapping CKD devices to CU images

You can map CKD devices to front-end EA/EF directors. Addressing on EA and EF directors is divided into logical control unit images, referred to as CU images. Each CU image has its own unique SSID and contains a maximum of 256 devices (numbered 0x00 through 0xFF). When mapped to an EA or EF port, a group of devices becomes part of a CU image.

For more information about how to map CKD devices to CU images, see the following tasks:
- z/OS map from the CU image list view on page 332
- z/OS map from the volume list view on page 333

Using the Provision Storage wizard for mainframe

Before you begin
- The storage system must be running HYPERMAX OS 5977 Q1 2016, or higher, and have at least one FICON director configured.
- Depending on the type of configuration selected, not all of the steps listed below might be required.

To provision storage to mainframe:

Procedure
1. Select the storage system.
2. Select Hosts > Mainframe to open the Mainframe Dashboard.
3. In the Actions panel, click Provision Storage. The Provision Storage wizard for mainframe is displayed.
4. In the Create Storage Group page, type a Storage Group Name. Storage group names must be unique from other storage groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and dashes (-) are allowed. Storage group names are case-insensitive. If you want to create an empty storage group, proceed to the final step after typing the storage group name.

5. Select a Storage Resource Pool. To create the storage group outside of FAST control, select None. External storage resource pools are listed below the External heading.
6. Select an Emulation type. Available values are CKD-3390 and CKD-3380.
7. Select the Service Level to set on the storage group. Service levels specify the characteristics of the provisioned storage, including average response time, workload type, and priority. This field defaults to None if you set the Storage Resource Pool to None. Available values are:

Service level        | Performance level | Use case
Diamond              | Ultra high        | HPC, latency sensitive
Bronze               | Cost optimized    | Backup, archive, file
Optimized (Default)  |                   | Places the most active data on the highest performing storage and the least active on the most cost-effective storage.

For all-flash storage systems, the only service level available is Diamond and it is selected by default.

8. Type the number of Volumes and select either a Model or Volume Capacity. Selecting a Model type automatically updates the Volume Capacity value. Alternatively, you can type the Volume Capacity.

Note: The maximum CKD volume size supported is 1182006 cylinders or 935.66 GB. It is possible to create an empty storage group with no volumes.

9. (Optional) Configure volume options:

Note: When using this option, Unisphere uses only new volumes when creating the storage group; it will not use any existing volumes in the group.

a. Hover the cursor on the service level and click the edit control.
b. Edit the Volume Identifier. The following options are available:

None
Do not set a volume identifier.

Name Only
All volumes will have the same name. Type the name in the Name field.

106 Storage Management Name and VolumeID All volumes will have the same name with a unique volume ID appended to them. When using this option, the maximum number of characters Name allowed is 50. Type the name in the field. Name and Append Number All volumes will have the same name with a unique decimal suffix appended to them. The suffix will start with the value specified for the Append Number and increment by 1 for each additional volume. Valid Append Numbers must be from 0 to 1000000. When using this option, the maximum number of characters allowed is 50. Type the name in the Name field. c. To Allocate capacity for each volume you are adding to the storage group, select this option. You can use the this option only for newly created volumes, not existing volumes. d. If you selected to allocate capacity in the previous step, you can mark the allocation as persistent by selecting Persist preallocated capacity through reclaim or copy . Persistent allocations are unaffected by standard reclaim operations and any TimeFinder/Clone, TimeFinder/Snap, or SRDF copy operations. . e. Click OK 10. (Optional) To add a child storage group, do one of the following: l On all-flash storage systems, click . Add Storage Group l On all other storage systems click Add Service Level . , Specify a , Volumes , and Model/Volume Capacity . Name Service Level Repeat this step for each additional child storage group. The maximum number of child storage groups allowed is 64. 11. To create a storage group, without actually provisioning it, click one of the Next following; otherwise, click and continue with the remaining steps in this procedure: l Click to add this task to the job list, from which you can Add to Job List schedule or run the task at your convenience. For more information, refer to Previewing jobs on page 920 Scheduling jobs on page 920 and l Add to Job List , and click Run Now to perform the operation now. 
12. On the Expand CU Image page, select whether to use a New or an Existing CU image, and then do the following depending on your selection:
    - New:
      a. Specify the following information for the new CU image:
         - CU Image Number
         - SSID
         - Base Address
      b. Select a Split with which to associate the CU image.
    - Existing:
      a. Select a CU image.

      b. To specify a new value for the base address, click Set Base Address. For more information about setting the base address, refer to Setting the base address on page 337.
13. Click Next.
14. On the Review page, review the summary information displayed. If the storage system is registered for performance, you can subscribe to compliance alerts for the storage group and run a suitability check to ensure that the load being created is appropriate for the storage system. To enable compliance alerts, select Enable Compliance Alerts. To run a suitability check, click Run Suitability Check.
15. Do one of the following:
    - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
    - Expand Add to Job List, and click Run Now to perform the operation now.

Provisioning storage

Provisioning storage refers to the process by which you make storage available to hosts. Unisphere provides the following methods for provisioning storage on storage systems running Enginuity 5876:
- Recommended: This method relies on wizards to step you through the provisioning process, and is best suited for novice and advanced users who do not require a high level of customization (that is, the ability to create their own volumes, storage groups, and so on).
- Advanced: This method, as its name implies, is for advanced users who want the ability to control every aspect of the provisioning process.
This section provides the high-level steps for each method, with links to the relevant help topics for more detail. Regardless of the method you choose, once you have completed the process you will have a masking view, in which the volumes in the storage group are masked to the host initiators and mapped to the ports in the port group.
Before you begin:

The storage system must already be configured.
To provision storage:

Recommended method:
1. Use the Create Host dialog box to group host initiators (HBAs).
2. Use the Provision Storage wizard, which will step you through the process of creating the storage group, port group, and masking view, and to optionally associate the storage group with a FAST policy.

Advanced method:
1. Creating hosts on page 292: Use the Create Host dialog box to group host initiators (HBAs).
2. Creating volumes on page 178: Create one or more volumes on the storage system.
3. Use the Create Storage Group wizard to create a storage group. If you want to add the volumes you created in step 2, be sure to set the wizard's Storage Group Type to Empty, and then complete Adding volumes to storage groups on page 114.
4. Creating port groups on page 316: Group Fibre Channel and/or iSCSI front-end directors.
5. Creating masking views on page 307: Associate the host, storage group, and port group into a masking view.
6. Associate the storage group with a FAST policy. Optional: Associate the storage group you created in step 3 with an existing FAST policy and assign a priority value for the association.

Using the Provision Storage wizard

Before you begin
The storage system is running Enginuity OS version 5876 and must already be configured, and you must already have a host. For instructions on creating a host, refer to Creating hosts on page 292.
Note the following recommendations:
- Port groups should contain four or more ports.
- Each port in a port group should be on a different director.
There are multiple ways to open the Provision Storage wizard. Depending on the method you use, some of the following steps may not apply. For example, selecting a storage group in the Storage Groups list view and clicking Provision Storage to Host will open the wizard on the Select Host/Host Group page because you are starting out with a storage group.
This procedure explains how to use the Provision Storage wizard to provision storage systems running Enginuity OS 5876. The wizard steps you through the provisioning process, and is best suited for novice and advanced users who do not require a high level of customization (that is, the ability to create their own volumes, storage groups, and so on). In addition, you can also use a subset of the steps to simply create a storage group, without actually provisioning it.
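The naming rules that recur throughout these procedures (storage group and port group names must be unique on the storage system, cannot exceed 64 characters, allow only alphanumeric characters, underscores, and dashes, and are case-insensitive) can be expressed as a small validator. This is a hypothetical helper for illustration, not a Unisphere API:

```python
import re

# Illustrative sketch of the documented naming rules for storage groups
# and port groups: at most 64 characters, alphanumerics/underscore/dash
# only, case-insensitive uniqueness. Not a Unisphere API.
_NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def validate_group_name(name, existing_names=()):
    """Return None if the name is valid, else a reason string."""
    if not _NAME_RE.fullmatch(name):
        return "names are limited to 64 alphanumeric, underscore, or dash characters"
    # Names are case-insensitive, so uniqueness is checked case-insensitively.
    if name.lower() in {n.lower() for n in existing_names}:
        return "name already exists on this storage system"
    return None

print(validate_group_name("Payroll_SG-01"))                     # None (valid)
print(validate_group_name("payroll_sg-01", ["Payroll_SG-01"]))  # rejected as a duplicate
print(validate_group_name("bad name!"))                         # rejected: invalid characters
```

The same check applies wherever this chapter asks you to type a storage group, port group, or masking view name.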

The maximum number of storage groups allowed on a storage system running Enginuity 5876 is 8,192. For Enginuity 5876 or higher, the maximum number of child storage groups allowed in a cascaded configuration is 32.
For users who want the ability to control every aspect of the provisioning process, refer to the Advanced procedure in Using the Provision Storage wizard on page 108. For instructions on provisioning storage systems running HYPERMAX OS 5977, refer to Using the Provision Storage wizard on page 100.
To use the Provision Storage wizard:
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.
3. Do one of the following:
   - Click Create to open the Provision Storage wizard.
   - Select the storage group and click Provision Storage to Host to open the Provision Storage wizard (go to step 8).
4. Type a Storage Group Name.
   Storage group names must be unique from other storage groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and dashes (-) are allowed. Storage group names are case-insensitive.
5. Select the Storage Group Type.
6. Do the following, depending on the storage group type:
   - Standard Storage Group:
     Select the Volume Type to add to the storage group and click NEXT. Then do the following, depending on the volume type:
     - Virtual Volumes:
       a. Select the Emulation type for the volumes to add to the storage group.
       b. Optional: Select the Thin Pools containing the volumes to add to the storage group.
       c. Type the number of volumes and enter volume capacity information.
       d. Optional: To add more volumes, hover the cursor over the volume and click the add icon.
       e. Optional: To remove a previously added volume, hover the cursor over it and click the remove icon.
       f. Optional: To edit a volume, hover the cursor over the volume and click the edit icon (see Editing storage group details on page 150).
     - Regular Volumes:
       a. Select the Disk Technology on which the storage group will reside.

       b. Select the Emulation type for the volumes to add to the storage group.
       c. Select the Protection level for the volumes to add to the storage group.
       d. Type the number of volumes and enter volume capacity information.
       e. Optional: To add more volumes, hover the cursor over the volume and click the add icon.
       f. Optional: To remove a previously added volume, hover the cursor over it and click the remove icon.
       g. Optional: To edit a volume, hover the cursor over the volume and click the edit icon (see Editing storage group details on page 150).
   - Empty Storage Group:
     Note: It is possible to create an empty storage group with no volumes.
     Do one of the following:
     - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
     - Expand Add to Job List, and click Run Now to perform the operation now.
7. If you want to create a storage group without actually provisioning it, click one of the following; otherwise, click NEXT and continue with the remaining steps in this procedure:
   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   - Expand Add to Job List, and click Run Now to perform the operation now.
8. Specify the host/host group to use by selecting an existing host/host group, or by doing the following to create a new host or host group:
   - To create a new host, click Create Host (see Creating hosts on page 292).
   - To create a new host group, click Create Host Group (see Creating host groups on page 302).
9. Click NEXT.
10. Select whether to use a new or an existing port group, and then do the following depending on your selection. When done, click NEXT.
    - New:
      a. Optional: Edit the suggested port group name by highlighting it and typing a new name over it.
         Port group names must be unique from other port groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and dashes (-) are allowed. Port group names are case-insensitive.

      b. Select the ports to use. To view host-invisible ports (unmasked and unmapped), click the Include ports not visible to the host slider. The following properties display:
         - Dir-Port: Storage system director and port in the port group.
         - Identifier: Identifier.
         - Initiators: Number of initiators logged into the fabric.
         - PGs: Number of port groups where the port is a member.
         - Mappings: Number of volumes mapped to the port.
      c. Click NEXT.
    - Existing: Select the port group and click NEXT.
11. Optional: Edit the suggested name for the masking view by highlighting it and typing a new name over it.
12. Optional: To set host I/O limits for the storage groups, click Set Host I/O Limits. For information about setting the limits, refer to Setting host I/O limits on page 132.
    Verify the rest of your selections. To change any of them, click BACK. Note that some changes may require you to make additional changes to your configuration.
13. Do one of the following:
    - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
    - Expand Add to Job List, and click Run Now to perform the operation now.

Suitability Check restrictions

The Suitability Check option is only available when:
- The storage system is running HYPERMAX OS 5977 or higher.
- The storage system is registered with the performance data processing option for statistics.
- The workloads have been processed.
- All the storage groups involved have a service level and SRP set.
- The target SRP does not contain only external disk groups (such as XtremIO).
- The storage system is local.
- The storage group is not in a masking view (only for the local provisioning wizard). The storage group should be in a masking view for the Modify SG case.

Suitability Check

The Suitability Check option is only available when the storage system is running HYPERMAX OS 5977 or higher. This message indicates whether the storage system can handle the updated service level. Results are indicated with one of two icons: one indicates suitable, the other indicates non-suitable.

In both cases, results are displayed in a bar chart by component (Front End, Back End, Cache), along with a score from 0 to 100 (viewed by hovering the cursor over the bar) indicating the component's expected availability on the target storage system after the change. The current score for the component is shown in gray, with the additional load for the component shown in green or red, indicating suitability. The additional score is red if the current and additional loads total more than 100.

Creating storage groups

This procedure explains how to create storage groups on storage systems running HYPERMAX OS 5977 or later. In addition to the method described below, you can also create a storage group using the Provision Storage wizard, as described in Using the Provision Storage wizard on page 100. For instructions on creating storage groups on storage systems running Enginuity 5876, refer to Using the Provision Storage wizard on page 108.
Before you begin:
- The storage system is running HYPERMAX OS 5977 or higher.
- The user must have Administrator or StorageAdmin permission.
- The maximum number of storage groups allowed on a storage system running HYPERMAX OS 5977 is 16,384.
- For HYPERMAX OS 5977 or higher, the maximum number of child storage groups allowed in a cascaded configuration is 64.
- A storage group can contain up to 4,096 volumes.
- A volume can belong to multiple storage groups if only one of the groups is under FAST control.
- You cannot create a storage group containing both CKD volumes and FBA volumes.
To create a storage group:
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Select one or more volumes, click the more-actions icon, and select Create SG.
4. Type a Storage Group Name.
   Storage group names must be unique from other storage groups on the system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and dashes (-) are allowed. Storage group names are case-insensitive.
5. To create the storage group outside of FAST control, set Storage Resource Pool to None; otherwise, leave this field set to the default.
6. Select the Service Level to set on the SG. Service level policies specify the characteristics of the provisioned storage, including maximum response time, workload type, and priority. This field defaults to None if you set the Storage Resource Pool to None. Possible values are:

   Service level         Performance type    Use case
   Diamond               Ultra high          HPC, latency sensitive
   Platinum              Very high           Mission critical, high rate OLTP
   Gold                  High                Very heavy I/O, database logs, datasets
   Silver                Price/Performance   Database datasets, virtual applications
   Bronze                Cost optimized      Backup, archive, file
   Optimized (Default)                       Places the most active data on the highest performing storage and the least active on the most cost-effective storage.
   None

   For all-flash storage systems running HYPERMAX OS 5977, the only service level available is Diamond and it is selected by default.
7. Refine the service level by selecting the Workload Type to assign to it.
   Note: Workload type is not supported for CKD storage groups.
   Note: Starting with Unisphere 9.0, workloads are not supported on PowerMaxOS 5978 and higher.
   Possible values for the Workload Type are:
   - OLTP
   - OLTP+REP
   - DSS
   - DSS+REP
   The workload type does not apply when the service level is Optimized or None.
8. Click OK to create the storage group now, or click Advanced Options to continue setting the advanced options, as described in the remaining steps.
9. Compression is enabled by default on All Flash systems running the HYPERMAX OS 5977 Q3 2016 Service Release or higher when you are creating a storage group or storage container. To disable the feature, uncheck the Compression check box. For more information, refer to Understanding compression.
10. Optional: Click the Enable Mobility ID checkbox to assign Mobility IDs to the volumes in the storage group. If you leave the checkbox unchecked, Compatibility IDs will be assigned to the volumes instead.
11. Optional: Select Allocate Full Volume capacity.
12. Optional: Click the Persist preallocated capacity through reclaim or copy checkbox.

13. If you selected to allocate capacity in the previous step, you can mark the allocation as persistent by selecting Persist preallocated capacity through reclaim or copy. Persistent allocations are unaffected by standard reclaim operations and any TimeFinder/Clone, TimeFinder/Snap, or SRDF copy operations.
14. Do one of the following:
    - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
    - Expand Add to Job List, and click Run Now to perform the operation now.

Adding volumes to storage groups

This procedure explains how to add volumes to existing storage groups.
Before you begin:
- A storage group can contain up to 4,096 volumes.
- A volume can belong to more than one storage group.
To add volumes to storage groups:
Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.
3. Select the storage group and open its details view.
4. Click the number next to Volumes.
5. Click Add Volumes to SG to open the Add Volumes to Storage Group wizard.
6. Locate the volumes by selecting or typing values for any number of the following criteria:
   - Capacity equal to: Filters the list for volumes with a specific capacity and capacity type.
   - Volume ID: Filters the list for a volume with a specific ID.
   - Volume Identifier Name: Filters the list for the specified volume name.
   - Volume configuration: Filters the list for the specified volume configuration.
   - Emulation: Filters the list for the specified volume emulation.
   - Exclude Volumes in use: Tick the checkbox to filter the list to exclude volumes in use.
7. Click NEXT to run the query. Results are displayed on the next page in the wizard.
8. Select the volumes and click OK.

Copying volumes between storage groups

This procedure explains how to copy volumes between storage groups.
Before you begin:

- Storage groups require Enginuity 5876 or HYPERMAX OS 5977 or later.
- The user must have StorageAdmin permission.
To copy volumes between storage groups:
Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.
3. Select the storage group and open its details view.
4. Click Volumes.
5. Select one or more volumes, click the more-actions icon, and click Copy Volumes To SG to open the Copy Volumes to Storage Group dialog box.
6. Select the Target Storage Group Name.
7. Click OK.

Moving volumes between storage groups

This procedure explains how to move volumes between storage groups.
Before you begin:
- Storage groups require Enginuity 5876 or HYPERMAX OS 5977 or later.
- The user must have StorageAdmin permission.
- To perform this operation without disrupting the host's ability to view the volumes, at least one of the following conditions must be met:
  - Each storage group must be a child of the same parent storage group, and the parent storage group must be associated with a masking view.
  - Each storage group must be associated with a masking view, and both masking views must contain a common initiator group and a common port group. In this scenario, the port groups can be different, but they must both contain the same set of ports, or the target port group can contain a superset of the ports in the source port group.
  - The source storage group is not in a masking view.
To move volumes between storage groups:
Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.
3. Select the storage group and open its details view.
4. Click Volumes.
5. Select one or more volumes, click the more-actions icon, and click Move Volumes to SG to open the Move Volumes to Storage Group dialog box.
6. Select the Target Storage Group Name.
7. Optional: By default, the operation fails if at least one of the conditions above is not met. To override this default behavior, select Use force flag.

8. Click OK.

Removing volumes from storage groups

This procedure explains how to remove volumes from storage groups.
Before you begin: Storage groups require Enginuity 5876 or HYPERMAX OS 5977 or higher.
To remove volumes from storage groups:
Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.
3. Select the storage group and open its details view.
4. Click Volumes.
5. Select one or more volumes and click Remove Volumes to open the Remove Volumes dialog box.
6. To unbind the volumes, select Unbind or Unmap, depending on the storage operating environment.
7. Do one of the following:
   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   - Expand Add to Job List, and click Run Now to perform the operation now.

Storage Group operations

The following storage group operations are available:
- Expanding Storage Group (see Expanding storage groups on page 116).
- Modifying Storage Group (see Modifying storage groups on page 119; 5977 or greater).

Expanding storage groups

This procedure explains how to increase the amount of storage in a group accessible to the masking view or in the FAST policy.
Before you begin:
- This procedure requires Enginuity OS 5876.
- In this procedure you can optionally name the volumes you are adding to the storage group. For more information, refer to Setting volume names on page 196.
- Empty storage groups are not displayed while creating a cascaded storage group.
To expand a storage group:
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.
3. Select the storage group and click Expand to open the Expand Storage Group wizard.

4. Select a method for expanding the storage group. Possible values are:
   - Virtual Volumes: Expands the group using virtual volumes.
   - Regular Volumes: Expands the group using regular volumes.
   - Copy Volume: Expands the group by copying the configuration of volumes already in the group.
5. Click NEXT.
6. Do the following, depending on the method you are using:
   - Virtual Volumes:
     a. Select the Emulation type for the volumes to add to the storage group.
     b. Optional: Select the Thin Pools containing the volumes to add to the storage group.
     c. Type the number of volumes and enter volume capacity information.
     d. Optional: To add more volume sizes, hover the cursor over the volume and click the add icon.
     e. Optional: To remove a previously added volume, hover the cursor over it and click the remove icon.
     f. Optional: To edit a volume, hover the cursor over the volume and click the edit icon (see Editing storage group details on page 150).
   - Regular Volumes:
     a. Select the Disk Technology on which the storage group will reside.
     b. Select the Emulation type for the volumes to add to the storage group.
     c. Select the Protection level for the volumes to add to the storage group.
     d. Type the number of volumes and enter volume capacity information.
     e. Optional: To add more volume sizes, hover the cursor over the volume and click the add icon.
     f. Optional: To remove a previously added volume, hover the cursor over it and click the remove icon.
     g. Optional: To edit a volume, hover the cursor over the volume and click the edit icon (see Editing storage group details on page 150).
   - Copy Volume:
     a. Select the Disk Technology on which the storage group will reside.
     b. Select the Emulation type for the volumes to add to the storage group.
     c. Select the Protection level for the volumes to add to the storage group.
     d. Specify the capacity by typing the number of volumes and entering volume capacity information.
     e. Optional: Hover the cursor over the volume and click the edit icon (see Editing storage group details on page 150).
7. Do one of the following:

   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   - Expand Add to Job List, and click Run Now to perform the operation now.

Expanding ProtectPoint storage groups

Before you begin
- This feature requires HYPERMAX OS 5977 or higher.
- You must have StorageAdmin permission.
- The Data Domain appliance must be connected and zoned to the storage system.
- Provide the Data Domain Admin with the number and size of volumes that you added to the production storage group and request that they provide you with double the number of similar volumes (masked/visible to the storage system). For example, if the production storage group contains 10 volumes, the Data Domain Admin should provide you with the LUN numbers of 20 similar volumes.
- CKD devices are not supported by ProtectPoint.
This procedure explains how to increase the amount of storage in a storage group protected by ProtectPoint.
To expand protected storage groups:
Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.
3. Select the storage group, click the more-actions icon, and click Expand ProtectPoint to open the Expand ProtectPoint wizard.
4. Select the Point In Time Copy to expand and click Next.
5. Select the external LUNs to add to the backup storage group and click Add to Group. Select the same number of external LUNs as the number of volumes added to the production storage group.
6. Click Next and select the Restore Storage Group.
7. Select the external LUNs to add to the restore storage group and click Add to Group. Select the same number of external LUNs as the number of volumes added to the production storage group.
8. Click Next and verify your selections. To change any of them, click Back. Some changes may require additional configuration changes.
9. Do one of the following:
   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   - Expand Add to Job List, and click Run Now.
   Once the job has completed, provide the following information to the Data Domain Admin:
   - The LUN numbers added to the backup storage group.

   - The LUN numbers added to the restore storage group.
   - The name of the point in time copy.

Modifying storage groups

This procedure explains how to modify storage groups on storage systems running HYPERMAX OS 5977 or later.
Before you begin:
- You must be an Administrator or StorageAdmin.
- The maximum number of storage groups allowed on a storage system is 16,384.
- A storage group can contain up to 4,096 volumes.
- A volume can belong to more than one storage group.
- A volume can belong to multiple storage groups if only one of the groups is under FAST control.
To modify a storage group:
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.
3. Select the storage group and click Modify to open the Modify Storage Group dialog box.
4. Do any number of the following:
   a. Change the Storage Group Name by highlighting it and typing a new name over it.
      Storage group names must be unique from other storage groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and dashes (-) are allowed. Storage group names are case-insensitive.
      Note the following about renaming storage groups:
      - If renaming a storage group with workload on it, you will have to wait some time before the workload is visible in the storage group's Details view.
      - When renaming a storage group with configured compliance alerts, the compliance alerts need to be deleted manually. For instructions, refer to Deleting compliance alerts policies on page 65.
   b. Change the Storage Resource Pool by selecting the new pool from the drop-down menu. Setting this property to None creates the storage group outside of FAST control. External storage resource pools are listed below the External heading.
   c. Change the Service Level for the storage group. Service levels specify the characteristics of the provisioned storage, including maximum response time, workload type, and priority. This field defaults to None if you set the Storage Resource Pool to None. Possible values are:

      Service level         Performance type    Use case
      Diamond               Ultra high          HPC, latency sensitive

      Platinum              Very high           Mission critical, high rate OLTP
      Gold                  High                Very heavy I/O, database logs, datasets
      Silver                Price/Performance   Database datasets, virtual applications
      Bronze                Cost optimized      Backup, archive, file
      Optimized (Default)                       Places the most active data on the highest performing storage and the least active on the most cost-effective storage.

      For all-flash storage systems running HYPERMAX OS 5977, the only service level available is Diamond and it is selected by default.
   d. Change the Workload Type assigned to the service level.
      Note: Starting with Unisphere 9.0, workloads are not supported on PowerMaxOS 5978 and higher.
   e. Add or remove Volumes.
   f. Do the following to change the capacity of the storage group, depending on whether the group contains volumes of the same capacity or mixed capacities:
      - If the group contains volumes of the same capacity, do one of the following:
        - Type or select an increased number of volumes in the Volumes drop-down menu.
        - Type or select an increased unit capacity of the volumes and/or change the unit in the Volume Capacity drop-down menus.
        Note: In mixed FBA/CKD All Flash systems, volume capacity defaults to GB for FBA storage groups and Cyl for CKD storage groups.
      - If the group contains volumes of mixed capacities, click Edit to open the Modify Custom Capacity dialog box. Change the number of Volumes by capacity, and click OK. You can only use the Allocate capacity for each volume option for newly created volumes, not existing volumes. The Total Capacity and Additional Capacity figures are updated to reflect any changes.
      Note: The maximum volume size supported on a storage system running HYPERMAX OS 5977 is 64 TB. All Flash systems running the HYPERMAX OS 5977 Q2 2017 Service Release or higher support a maximum CKD device size of up to 1,182,006 cylinders.
   g. SRDF storage group volume capacity can be expanded using the controls. In the case of SRDF storage groups, you need to specify an SRDF group number so that the dialog allowing you to expand remote volumes can also be displayed (see Expanding remote volumes on page 511).
   h. Optional: Add one or more storage groups by hovering over the area to the right of the volume capacity and selecting the add icon.
   i. Optional: Create a storage group with multiple volume sizes or edit the storage group by hovering over the area to the right of the volume capacity and selecting the edit icon (see Editing storage group volume details on page 149).
   j. Optional: To add a child storage group, do one of the following:
      - On all-flash storage systems, click Add Storage Group.
      - On all other storage systems, click Add Service Level.
      - Modify any of the service level parameters, as described earlier in this procedure.
5. Compression is enabled by default on All Flash systems running the HYPERMAX OS 5977 Q3 2016 Service Release or higher when you are creating a storage group or storage container. To disable the feature, uncheck the Enable Compression check box. In a cascaded setup, changes are passed to each of the child storage groups. For more information on compression, refer to Understanding compression.
6. Optional: To determine if the storage system can handle the updated service level:
   a. Click Run Suitability Check. The Suitability Check dialog box opens, indicating the suitability of the change. For information on interpreting the results, refer to the dialog's help page. This option is only available under certain circumstances. For more information, refer to Suitability Check restrictions on page 111.
   b. Click OK to close the message.
   c. If your updates are found to be unsuitable, modify the settings and run the check again until the suitability check passes.
7. Do one of the following:
   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Renaming storage groups

This procedure explains how to rename storage groups.

Before you begin:
• Storage group names must be unique from other storage groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Storage group names are case-insensitive.
• Storage groups require Enginuity 5876, or HYPERMAX OS 5977 or later.

To rename a storage group:

Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.
3. Select the storage group and click Rename.
4. Type the new name.
5. Click OK.

Protecting storage groups

The Protect Storage Group wizard guides you through the process of protecting your storage group. Depending on the capabilities of the storage system, the following options may be available:
• SnapVX — For instructions, refer to Creating snapshots on page 387. This is the default method for storage systems running HYPERMAX OS 5977 or higher.
• TimeFinder/Clone — For instructions, refer to Protecting storage groups using TimeFinder/Clone on page 122. This is the default method for storage systems running Enginuity 5876.
• ProtectPoint — For instructions, refer to Protecting storage groups using ProtectPoint on page 124. This method is only available on storage systems running HYPERMAX OS 5977 or later.
• RecoverPoint — For instructions, refer to Protecting storage groups using RecoverPoint on page 125. This method is only available for storage systems running Enginuity 5876.
• SRDF — For instructions, refer to Protecting storage groups using SRDF on page 126. This method is available for storage systems, subject to connectivity rules.
• SRDF/Metro — For instructions, refer to Protecting storage groups using SRDF/Metro on page 127. This method is only available for storage systems running HYPERMAX OS 5977 or higher.

Protecting storage groups using TimeFinder/Clone

Before you begin:
• This feature requires Enginuity 5876.163.105 or later. This feature does not apply to storage systems running HYPERMAX OS 5977 or later.
• The storage group must contain only thin volumes (except gatekeepers under 10 MB) and they must all be of the same type, either BCVs or standard thin volumes (TDEVs). This restriction also applies to cascaded storage groups; that is, all volumes in the parent and child storage groups must be thin and of the same type.
• The SYMAP_ALLOW_DEV_INT_MULTI_GRPS option must be enabled. For instructions on enabling the option, refer to "Editing the Options file" in the Solutions Enabler Installation Guide.
• Meta volumes are not supported.

To protect storage groups using TimeFinder/Clone:

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.

3. Select the storage group and click Protect.
4. If not already selected, select Point In Time Using Clone.
5. Click NEXT.
6. Type the name of the device group that will hold the target volumes (Device Group Name).
7. Select the thin pool to which the target volumes will be bound (Bind to Pool). If the source storage group contains thin volumes bound to different thin pools, or if it is a cascaded storage group with child storage groups containing volumes bound to different thin pools, selecting a single thin pool will result in all target volumes being bound to that single pool.
8. Clear the Create Replica Storage Group option if you do not want a storage group created for the target volumes. Leaving the option selected allows you to optionally change the name of the replica storage group (Storage Group Name). Changing the name will also change the target volume storage group name.
9. z/OS Only: If the storage group contains CKD volumes, type a New SSID for the target, or click Select ... to open a dialog from which you can select an SSID.
10. Select the mode in which to create the clone session (Clone Copy Mode). The mode you specify here will override the default mode specified in the preferences. Possible values are:
• No Copy No Diff — Creates a nondifferential (full) copy session without a full background copy.
• Copy No Diff — Creates a nondifferential (full) copy session in the background.
• PreCopy No Diff — Creates a nondifferential (full) copy session in the background before the activate starts.
• Copy Diff — Creates a differential copy session in the background. In differential copy sessions, only those volume tracks that have changed since the full clone was performed are copied (that is, only new writes to the source volume will be copied).
• PreCopy Diff — Creates a differential copy session in the background before the activate starts.
In differential copy sessions, only those volume tracks that have changed since the full clone was performed are copied (that is, only new writes to the source volume will be copied).
• VSE No Diff — Creates a VP Snap Copy session.
11. Select the type of volumes to use as the targets (Clone Targets).
12. Click NEXT.
13. Verify your selections, and then do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.
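The six clone copy modes above combine two independent choices: whether the session is differential, and when (or whether) the background copy runs. The sketch below is illustrative only — the function and its argument names are invented for this example and are not part of any Dell EMC tool — but it encodes the documented combinations, including the fact that differential sessions always copy in the background and that VP Snap (VSE) sessions are always nondifferential:

```python
# Illustrative only: maps the two documented choices for a clone session
# onto the six Clone Copy Mode labels listed above. The function name and
# arguments are invented for this sketch; this is not a Dell EMC API.

def clone_copy_mode(differential: bool, timing: str) -> str:
    """timing: "nocopy", "copy", "precopy", or "vse" (VP Snap)."""
    if timing == "vse":
        if differential:
            # Only "VSE No Diff" appears in the documented mode list.
            raise ValueError("VP Snap sessions are nondifferential (VSE No Diff)")
        return "VSE No Diff"
    prefixes = {"nocopy": "No Copy", "copy": "Copy", "precopy": "PreCopy"}
    if timing not in prefixes:
        raise ValueError(f"unknown timing {timing!r}")
    if differential and timing == "nocopy":
        # There is no "No Copy Diff" mode: differential sessions copy
        # changed tracks in the background (Copy Diff / PreCopy Diff).
        raise ValueError("differential sessions require a background copy")
    return f"{prefixes[timing]} {'Diff' if differential else 'No Diff'}"
```

For example, `clone_copy_mode(True, "precopy")` returns "PreCopy Diff", while requesting a differential session with no background copy raises an error, mirroring the absence of that combination from the dialog.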

Protecting storage groups using ProtectPoint

Before you begin:
• The storage system must be running HYPERMAX OS 5977.
• You must have StorageAdmin permission.
• The Data Domain appliance must be connected and zoned to the storage system.
• Provide the Data Domain Admin the number and size of volumes in the production storage group and request that they provide you with double the number of similar volumes (masked/visible to the storage system). For example, if the production storage group contains 10 volumes, the Data Domain Admin should provide you with the LUN numbers of 20 similar volumes.
• CKD devices are not supported by ProtectPoint.

To protect storage groups using ProtectPoint:

Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.
3. Select the storage group and click Protect.
4. Select Backup Using ProtectPoint.
5. Click NEXT.
6. Click OK, and click NEXT.
7. Type the name of the point in time copy (Point In Time Copy Name).
8. Type a name for the Backup Storage Group, or leave the system-generated suggestion.
9. Select the external LUNs to add to the backup storage group and click Add to Storage Group. Note that the external LUNs you select must match in number and capacity the volumes in the production storage group.
10. Click NEXT.
11. Type a name for the New Restore Storage Group, or leave the system-generated suggestion.
12. Select the external LUNs to add to the restore storage group and click Add to Storage Group. Note that the external LUNs you select must match in number and capacity the volumes in the production storage group.
13. Click NEXT.
14. Verify your selections. To change any of them, click BACK. Note that some changes may require you to make additional changes to your configuration.
15. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

16. Once the job completes successfully, provide the following information to the Data Domain Admin:
• The LUN numbers used in the backup storage group
• The LUN numbers used in the restore storage group
• The name of the point in time copy

Protecting storage groups using RecoverPoint

Before you begin:
• RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
• To perform this operation you must be a StorageAdmin.
• The storage group being replicated must be masked to the host.
• The storage group being replicated must not contain any volumes that are already tagged for RecoverPoint.
• Connectivity to the RecoverPoint system/cluster is available.
• RecoverPoint 4.1 is set up and operational. For each cluster in the setup, gatekeepers and repository volumes must be configured in their relevant masking view. RecoverPoint uses a default journal masking view naming convention.
• Depending on the options selected as part of the Protect Storage Group wizard and the existing configuration, some values for some options might populate automatically.

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.
3. Select the storage group and click Protect.
4. On the Select Technology page, select Remote Replication using RecoverPoint.
5. Click NEXT.
6. On the Configure RecoverPoint page, specify the following information:
• RecoverPoint System — RecoverPoint system.
• RecoverPoint Group Name — Name of the RecoverPoint group.
• RecoverPoint Cluster — RecoverPoint cluster.
• Production Name — Name of the production.
• Data Initiator Group — Data initiator group.
• Journal Thin Pool — Journal thin pool.
• Journal Port Group — Journal port group.
• Journal Initiator Group — Journal initiator group.
7. Click NEXT.
8. On the Add Copies page, specify the following information:
• RecoverPoint Cluster — RecoverPoint cluster.

• Copy Name — Name of the RecoverPoint copy.
• Mode — Specify whether the mode is Synchronous or Asynchronous.
• Array — Storage system.
• Target Storage Group — Specify whether the RecoverPoint copy targets a new storage group or an existing group.
• Copy Storage Group — Name of storage group to be copied.
• Data Thin Pool — Name of data thin pool.
• Data Port Group — Name of data port group.
• Journal Thin Pool — Name of journal thin pool.
• Journal Port Group — Name of journal port group.
9. Click Add Copy. The Copy Summary table lists the copy.
10. Click NEXT.
11. On the FINISH page, verify your selections. To change any of them, click BACK. Some changes may require you to make additional changes to your configuration.
12. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, then click Run Now to perform the operation now.

Protecting storage groups using SRDF

This procedure explains how to protect storage groups using SRDF.

Before you begin:
• You must have StorageAdmin permission.
• Connectivity to the remote storage system must be available.
• All storage systems involved must be discoverable and manageable from the console.
• The SRDF wizard in Unisphere 8.1 and higher releases supports the mandatory creation of a storage group and the optional creation of a device group. The storage group may contain non-concurrent SRDF devices of any one SRDF type, or may contain non-SRDF devices.
• The wizard performs the following validation check to determine if the selected storage group can be SRDF protected: volumes in the storage group must all be TDEVs, or all volumes in the storage group must be R1s and in the same SRDF group, or all volumes must be R2s and in the same SRDF group.
• The SRDF wizard in Unisphere 8.2 and higher releases supports the creation of SRDF protection for CKD storage groups.
• Set the default number of ports to use with SRDF. To set this number, refer to Managing data protection preferences on page 85.

To protect storage groups using SRDF:

Procedure
1. Select the storage system.

2. Under STORAGE, select Storage Groups.
3. Select the storage group and click Protect.
4. Select Remote Replication Using SRDF.
5. Click NEXT.
6. Select the Remote Array ID. To update the list of remote systems, click Scan.
7. Select the Replication Mode. For more information, refer to SRDF session modes on page 442.
8. Select Automatic to automatically select an SRDF group, or Manual to select an SRDF group from a list.
9. Optional: To not start pair mirroring, clear the Establish Pairs option.
10. Do the following, depending on the storage operating environment (target system):

For HYPERMAX OS 5977 or later:
Optional: Change the Remote Storage Group Name, and optionally select a Remote Service Level. Changing the name will also change the target volume storage group name.

For Enginuity 5876:
a. Optional: Change the Remote Storage Group Name.
b. Select the Remote Thin Pool to which the target volumes will be bound. If the source storage group contains thin volumes bound to different thin pools, or if it is a cascaded storage group with child storage groups containing volumes bound to different thin pools, selecting a single thin pool will result in all target volumes being bound to that single pool.
c. Optional: Select the Remote FAST Policy. This is the FAST policy associated with the remote storage group.
d. z/OS Only: If the storage group contains CKD volumes, type a New SSID for the target, or click Select ... to open a dialog from which you can select an SSID.

11. For HYPERMAX OS 5977 or later, select the Create Device Group check box and select the Device Group Name that will hold the target volumes.
12. Click NEXT.
13. Verify your selections. To change any of them, click BACK. Note that some changes may require you to make additional changes to your configuration.
14. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Protecting storage groups using SRDF/Metro

This procedure explains how to protect storage groups using SRDF/Metro, in order to improve support for host applications in high availability environments.

Before you begin:

• SRDF/Metro requires HYPERMAX OS 5977 or later.
• You must have StorageAdmin permission.
• Connectivity to the remote storage system must be available.
• All storage systems involved must be discoverable and manageable from the console.
• CKD devices are not supported by SRDF/Metro.
• You are not allowed to set RDF devices in the non-Metro RDF mirror to operate in Synchronous mode.
• For systems running PowerMaxOS 5978 or higher, the create pair operation is blocked if the device ID types of each individual SRDF device pair are not the same (both Compatibility ID or both Mobility ID) on both sides. Device type ID conversion from a Compatibility ID to a Mobility ID is not allowed on a device once it is part of an SRDF/Metro session. Candidate arrays are restricted to those running PowerMaxOS 5978 or higher if the source storage group has devices with Mobility ID in them.
• Protecting a storage group using SRDF/Metro from the protection wizard is allowed when one or more of the devices in the storage group have the GCM flag set.

To protect storage groups using SRDF/Metro:

Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.
3. Select the storage group and click Protect. The Select Protection Type page displays.
4. Select High Availability using SRDF/Metro.
5. Click NEXT.
6. Select the Remote Array ID. To update the list of remote arrays, click Scan.
7. Optional: To stop the initiation of pair mirroring, clear the Establish Pairs option.
8. If Establish Pairs is checked, choose Protected by Witness or Bias. If Witness is unavailable on the local or remote array, the option is disabled and Bias is selected by default. If available, Witness is selected by default. For storage systems running HYPERMAX OS 5977 Q3 2016 or higher, when the Witness radio button is selected, the Witness Candidate (Remote Array) field displays a list of physical and virtual witness instances which are enabled.
Disabled virtual witness instances are not displayed.
9. Optional: Change the Remote Storage Group Name, and optionally select a Remote Service Level. Changing the name will also change the target volume storage group name.
10. Optional: To disable compression, clear the Compression option.
11. Click NEXT.
12. Verify your selections. To change any of them, click BACK. Note that some changes may require you to make additional changes to your configuration.
13. Do one of the following:

• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Converting storage groups to cascaded

This procedure explains how to non-disruptively convert a standalone storage group to a cascaded storage group. Once complete, the original storage group will serve as the parent to a new child storage group.

Before you begin:
• You must have Administrator or StorageAdmin permission.
• The storage system must be running HYPERMAX OS 5977 or later.

To convert storage groups:

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.
3. Select the storage group and click SG Maintenance > Convert to Cascaded.
4. Type a new name over the system-suggested child storage group name. Storage group names must be unique from other storage groups on the system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Storage group names are case-insensitive.
5. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Changing Storage Resource Pools for storage groups

This procedure explains how to change the Storage Resource Pool of a parent storage group with child service levels using different Storage Resource Pools. In eNAS environments, you can also perform this operation from the File Storage Groups page (System > System Dashboard > File Dashboard > File Storage Groups).

Before you begin:
• The storage system must be running HYPERMAX OS 5977 or later.
• You must have Administrator or StorageAdmin permission.

To change the Storage Resource Pool for storage groups:

Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.

3. Select the storage group and select Change SRP to open the Change SRP dialog box.
4. Select the new SRP.
5. (Optional) Change the Service Level for the SG. Service levels specify the characteristics of the provisioned storage, including maximum response time, workload type, and priority. This field defaults to None if you set the Storage Resource Pool to None. Possible values are:

Service level          Performance type     Use case
Diamond                Ultra high           HPC, latency sensitive
Platinum               Very high            Mission critical, high rate OLTP
Gold                   High                 Very heavy I/O, database logs, datasets
Silver                 Price/Performance    Database datasets, virtual applications
Bronze                 Cost optimized       Backup, archive, file
Optimized (Default)                         Places the most active data on the highest performing storage and the least active on the most cost-effective storage.

For all-flash storage systems, the only service level available is Diamond and it is selected by default.
6. (Optional) Refine the service level by selecting the Workload Type to assign it. (This step is not applicable for storage systems running PowerMaxOS 5978.)
7. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Adding or removing cascaded storage groups

This procedure explains how to add or remove child storage groups from parent storage groups.

To add or remove cascaded storage groups:

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Group list view.
3. Select the parent storage group and open its Details view.
4. Click the number next to Storage Groups to open the child Storage Groups list view.

5. Do the following, depending on whether you are adding or removing storage groups:
• Adding storage groups:
a. Click Add.
b. Select one or more storage groups.
c. Do one of the following:
   – Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   – Expand Add to Job List, and click Run Now to perform the operation now.
• Removing storage groups:
a. Select one or more storage groups and click Remove.
b. Click OK.

Renaming storage groups

This procedure explains how to rename storage groups.

Before you begin:
• Storage group names must be unique from other storage groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Storage group names are case-insensitive.
• Storage groups require Enginuity 5876, or HYPERMAX OS 5977 or later.

To rename a storage group:

Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.
3. Select the storage group and click Rename.
4. Type the new name.
5. Click OK.

Deleting storage groups

This procedure explains how to delete storage groups.

Before you begin:
• Storage groups require Enginuity 5876 or HYPERMAX OS 5977 or later.
• You cannot delete a storage group that is part of a masking view or associated with a FAST policy.
• Before you can delete a child storage group, you must first remove it from its parent.
• When a storage group with configured compliance alert policies (requires HYPERMAX OS 5977 or higher) is deleted or renamed, the compliance alerts need to be deleted manually. For instructions, refer to Deleting compliance alerts on page 65.

To delete a storage group:

Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.
3. Select the storage group and select Delete.
4. Click OK.

Setting host I/O limits

Host I/O limits (quotas) is a feature that can be used to limit the amount of front-end (FE) bandwidth and I/Os per second (IOPS) that can be consumed by a set of storage volumes over a set of director ports. The storage system monitors the bandwidth and I/Os against the set of volumes over the set of director ports to ensure that they do not exceed the user-specified maximum bandwidth or maximum IOPS. This feature allows you to place limits on the FE bandwidth and IOPS consumed by applications on the storage system.

Host I/O limits are defined as storage group attributes: the maximum bandwidth (in MB per second) and the maximum IOPS (in I/Os per second). For a cascaded storage group, a host I/O limit can be added for the parent and/or the child storage group. If set for both, the child limits cannot exceed those of the parent.

The host I/O limit for a storage group can be either active or inactive; only an active host I/O limit can limit the FE bandwidth and IOPS of the volumes in a storage group. The host I/O limit becomes active when a provisioning view is created using the storage group and becomes inactive when the view is deleted. When a view is created on a parent storage group with a host I/O limit, the limit is shared among all the volumes in all child storage groups. The host I/O limit of the storage group applies to all the director ports of the port group in the provisioning view. The host I/O limit is divided equally among all the directors in the port group, independent of the number of ports on each director.
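Two of the rules above lend themselves to a small worked example: a child storage group's limit may not exceed its parent's, and an active limit is split equally among the directors of the port group, regardless of how many ports each director contributes. The Python sketch below is illustrative only; the "director:port" labels (for example, "FA-1D:4") and the function names are invented for this example and are not a Dell EMC API:

```python
# Illustrative sketch (not a Dell EMC API) modeling two documented rules
# for host I/O limits on storage groups.

def validate_child_limit(parent_limit: float, child_limit: float) -> None:
    # For cascaded storage groups, a child limit cannot exceed the parent's.
    if child_limit > parent_limit:
        raise ValueError("child host I/O limit cannot exceed the parent limit")

def per_director_share(limit_mb_per_sec: float, ports: list[str]) -> dict[str, float]:
    """Divide a bandwidth limit equally among the directors in a port group.

    `ports` are hypothetical "director:port" labels, e.g. "FA-1D:4".
    The split is per director, independent of ports per director.
    """
    directors = sorted({p.split(":")[0] for p in ports})
    return {d: limit_mb_per_sec / len(directors) for d in directors}
```

With a 1,000 MB/sec limit over ports FA-1D:4, FA-1D:5, and FA-2D:4, each of the two directors gets 500 MB/sec even though FA-1D contributes two ports — which is why the recommendation is to configure only one port per director in the port group.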
For this reason, it is recommended that you configure only one of the ports of a director in the port group.

Before you begin:
• The storage system must be running Enginuity 5876.159.102 or later, or HYPERMAX OS 5977 or later.
• For Enginuity 5876.159.102 up to HYPERMAX OS 5977, the maximum number of quotas per array is 2,000. For HYPERMAX OS 5977 and later, the maximum number is 16,000.
• For more information on setting host I/O limits, refer to the Solutions Enabler Array Management CLI Product Guide. This guide is part of the Solutions Enabler Complete Documentation Set.

To set host I/O limits:

Procedure
1. Select a storage system.
2. Under STORAGE, select Storage Groups.

3. Select the storage group and select Set Host I/O Limits to open the Set Host I/O Limits dialog box.
4. Select and type values for one or both of the following:
• MB/Sec — Maximum bandwidth (in MB per second). Valid values range from 1 MB/sec to 100,000 MB/sec.
• IO/Sec — Maximum IOPS (in I/Os per second). Valid values range from 100 IO/sec to 2,000,000 IO/sec, in increments of 100.
5. To configure a dynamic distribution of host I/O limits, set Dynamic Distribution to one of the following; otherwise, leave this field set to Never (default). This feature requires Enginuity 5876.163.105 or later.
• Always — Enables full dynamic distribution mode. When enabled, the configured host I/O limits will be dynamically distributed across the configured ports, thereby allowing the limits on each individual port to adjust to fluctuating demand.
• OnFailure — Enables port failure capability. When enabled, the fraction of configured host I/O limits available to a configured port will adjust based on the number of ports currently online.
6. Click OK.

Splitting storage groups

This procedure explains how to split cascaded storage groups on storage systems. Unisphere supports the splitting of storage groups in two different ways:
• During the split operation, a specified child storage group is removed from the parent storage group. A new masking view is created on this child storage group with the same initiator groups and port groups as the parent storage group masking view.
• During the split operation, a new storage group with the user-specified name is created. The user-specified devices from the source standalone storage group are moved to the newly created storage group, and a new masking view is created on the new storage group using the same initiator groups and port groups as the source standalone storage group masking view.

To split a storage group:

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.
3. Select the storage group and click SG Maintenance > Split From.
4. Do one of the following:
• When splitting a child storage group from its parent masking view and moving it to a standalone masking view, select a child storage group and specify a new masking view name.
• When splitting a standalone storage group into two storage groups, each with a masking view, specify a new storage group name, specify a new masking view name, and select the volumes to be added to the new storage group.

5. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Merging storage groups

This procedure explains how to merge storage groups in order to create a cascaded storage group. Unisphere supports merging the masking views of a source standalone storage group and a target storage group which have a common initiator group and port group. The target storage group may be a parent storage group or a standalone storage group.

In the case of the target being a parent storage group, during the merge operation the source standalone storage group is added to the target parent storage group and uses the parent storage group masking view. In the case of the target storage group being a standalone storage group, all devices in the source standalone storage group are moved to the target storage group. The source standalone storage group and its masking view are deleted.

To merge storage groups:

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.
3. Select the storage group and click SG Maintenance > Merge Into.
4. Select the target storage group.
5. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Managing VP compression on thin volumes in storage groups

The following explains how to manage VP compression on the thin volumes in a storage group.

Before you begin: This feature requires Enginuity 5876.159.102 or higher. This feature is not supported on storage systems running HYPERMAX OS 5977 or later.
To manage VP compression on storage groups:

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Virtual panel.
3. Select a volume and click VP Compression.

4. Select one of the following compression operations:
• CompressStart — Starts compressing the thin volumes in the storage group.
• CompressStop — Stops compressing the thin volumes in the storage group.
• UncompressStart — Starts uncompressing the thin volumes in the storage group.
• UncompressStop — Stops uncompressing the thin volumes in the storage group.
5. Click OK.

Viewing storage groups

This procedure explains how to view storage groups on a storage system running HYPERMAX OS 5977 or higher. There are multiple ways to view the same information. Depending on the method you use, some of the properties and controls may not apply. For information on viewing cascaded storage groups, see Viewing cascaded storage groups on page 141.

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups list view.

The following properties display:
• Name — Name of the storage group.
• Compliance — How well the storage group is complying with its service level, if applicable. Possible values are:
  – Critical — Storage group is performing well below service level targets.
  – Marginal — Storage group is performing below service level target.
  – Stable — Storage group is performing within the service level target.
  – Storage group has no assigned service level.
  – Compliance information is being collected.
• SRP — Name of the SRP that the storage group belongs to, if any.
• Service Level — Name of the service level associated with the storage group. If there is no service level associated with the group, this field displays N/A.
• Capacity (GB) — Total capacity of the storage group in GB.
• Emulation — Emulation associated with the storage group.

The following controls are available:
• Viewing storage group details on page 140.

   - Create — Using the Provision Storage wizard on page 100.
   - Modify — Modifying storage groups on page 119.
   - Provision — Using the Provision Storage wizard on page 100.
   - Protect — Protecting storage groups on page 122.
   - Set Host I/O Limits — Setting host I/O limits on page 132.
   - Set Volumes > Set Volume Status — Setting volume status on page 194.
   - Set Volumes > Replication QoS — Setting copy pace (QoS) for storage groups on page 197.
   - Migrate — Creating a non-disruptive migration (NDM) session on page 502.
   - Allocate/Free/Reclaim > Start — Managing thin pool allocations on page 244.
   - Allocate/Free/Reclaim > Stop — Managing thin pool allocations on page 244.
   - SG Maintenance > Convert to Cascaded — Converting storage groups to cascaded on page 129.
   - SG Maintenance > Split From — Splitting storage groups on page 133.
   - SG Maintenance > Merge Into — Merging storage groups on page 134.
   - SG Maintenance > Remove — Adding or removing cascaded storage groups on page 130.
   - Change SRP — Changing Storage Resource Pools for storage groups on page 129.
   - Delete — Deleting storage groups on page 131.
   - Expand ProtectPoint — Managing thin pool allocations on page 244.

Viewing storage groups

This procedure explains how to view storage groups on a storage system running Enginuity 5876. There are multiple ways to view the same information. Depending on the method you use, some of the properties and controls may not apply.
To view storage groups associated with a FAST policy, see Viewing storage groups for FAST policies on page 171.
For information on viewing cascaded storage groups, see Viewing cascaded storage groups on page 141.

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups list view.
   The list view allows you to view and manage storage groups on a storage system.
   The following properties display:
   - Name — Name of the storage group.
   - FAST Policy —Policy associated with the storage group.
   - Capacity (GB) —Total capacity of the storage group in GB.
   - Emulation —Emulation type.
   - Masking Views — Number of masking views associated with the storage group.

   The following controls are available:
   - [icon] — Viewing storage group details on page 140.
   - Create — Using the Provision Storage wizard on page 108.
   - Expand — Expanding storage groups on page 116.
   - Provision Storage to Host — Using the Provision Storage wizard on page 108.
   - Protect — Protecting storage groups on page 122.
   - Set Host I/O Limits — Setting host I/O limits on page 132.
   - FAST > Associate — Associating FAST policies with storage groups on page 168.
   - FAST > Disassociate — Disassociating FAST policies and storage groups on page 170.
   - FAST > Reassociate — Reassociating FAST policies and storage groups on page 170.
   - FAST > Pin — Pinning and unpinning volumes on page 173.
   - FAST > Unpin — Pinning and unpinning volumes on page 173.
   - FAST > Bind — Binding/Unbinding/Rebinding thin volumes on page 257.
   - FAST > Unbind — Binding/Unbinding/Rebinding thin volumes on page 257.
   - FAST > Rebind — Binding/Unbinding/Rebinding thin volumes on page 257.
   - Migrate — Creating a non-disruptive migration (NDM) session on page 502.
   - Allocate/Free/Reclaim > Start — Managing thin pool allocations on page 244.
   - Allocate/Free/Reclaim > Stop — Managing thin pool allocations on page 244.
   - SG Maintenance > Split From — Splitting storage groups on page 133.
   - SG Maintenance > Merge Into — Merging storage groups on page 134.
   - RecoverPoint > Tag — Tagging and untagging volumes for RecoverPoint (storage group level) on page 472.
   - RecoverPoint > Untag — Tagging and untagging volumes for RecoverPoint (storage group level) on page 472.
   - Delete — Deleting storage groups on page 131.
   - Rename — Renaming storage groups on page 121.
   - Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945.
   - VP Compression — Managing VP compression on thin volumes in storage groups on page 134.
   - Replication QOS — QOS for replication on page 197.
   - Assign Symmetrix Priority — Assigning array priority to individual volumes on page 189.
   - VLUN Migration — Migrating regular storage group volumes on page 261.
   - Set Optimized Read Miss — Setting optimized read miss on page 193.

Storage Group details
- Viewing storage group details on storage systems running HYPERMAX OS 5977 or later (see Viewing storage group details on page 138).
- Viewing storage group details on storage systems running Enginuity OS 5876 (see Viewing storage group details on page 140).

Viewing storage group details

This procedure explains how to view configuration details for storage groups on storage systems running HYPERMAX OS 5977 or later. To view storage groups on a storage system running Enginuity OS 5876, refer to Viewing storage group details on page 140.
In eNAS operating environments, there are multiple ways to view the same information. Depending on the method you use, some of the properties and controls may not apply.

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups list view.
3. Select the storage group and click [icon].
   The following properties display:
   - SRP —Name of the SRP that the storage group belongs to, if any.
   - Compliance —How well the storage group is complying with its service level, if applicable.
   - Service Level —Service level associated with the storage group. If there is no service level associated with the group, this field displays N/A.
   - Volumes —Number of volumes in the storage group.
   - Child Storage Groups —Number of child storage groups.
   - Masking Views —Number of masking views associated with the storage group.
   - SnapVX Snapshots —Number of SnapVX snapshots associated with the storage group.
   - SRDF —SRDF information.
   - Symmetrix ID —Identity of the storage system.
   - Capacity (GB) —Total capacity of the storage group in GB.
   - VP Saved —Percentage of space saved on the storage group.
   - Compression —If compression is enabled on this storage group, a tick is displayed; if it is disabled, a horizontal dash is displayed.
   - Compression Ratio —Current compression ratio for the storage group.
   - Last Updated —Timestamp of the most recent changes to the storage group.
   - Host I/O Limit —Whether the host I/O limit feature is enabled. For more information, see Setting host I/O limits on page 132.
   - Host I/O Limit (MB/Sec) —Maximum bandwidth (in MB per second). Valid values range from 1 MB/sec to 100,000 MB/sec.

   - Host I/O Limit (IO/Sec) —Maximum IOPS (in I/Os per second). Valid values range from 100 IO/sec to 100,000 IO/sec.
   - Emulation —Emulation type.
   - Workload Type —Workload type.
   - Dynamic Distribution —When enabled, the configured host I/O limits are dynamically distributed across the configured ports, allowing the limits on each individual port to adjust to fluctuating demand.
   - Is Child —Indicates whether the storage group is a child storage group.
   - Parent Storage Group(s) —Number of storage groups of which this storage group is a child. This field only displays for child storage groups.
   - RecoverPoint —Indicates RecoverPoint usage.
   Links are also provided to views for objects contained in and associated with the storage group. Each link is followed by a number indicating the number of objects in the corresponding view. For example, clicking the number next to Volumes opens a view listing the volumes contained in the storage group.
4. Click VIEW ALL DETAILS.
   A view with two tabs, Details and Volumes, is displayed. Clicking the Volumes tab displays a view of the volumes in the storage group (see Viewing volumes in storage groups on page 142). Clicking the Details tab displays a view with two panels, a Properties panel and a Capacity panel.
   The Properties panel displays the following:
   - Symmetrix ID —Identity of the storage system.
   - Compliance —How well the storage group is complying with its service level, if applicable.
   - Service Level —Service level associated with the storage group. If there is no service level associated with the group, this field displays N/A.
   - Workload Type —Type of the workload associated with the storage group.
   - SRP —Storage Resource Pool (SRP) containing the storage group.
   - Masking Views —Number of masking views associated with the storage group.
   - Emulation —Emulation type.
   - Last Updated —Timestamp of the most recent changes to the storage group.
   - Host I/O Limit —Whether the host I/O limit feature is enabled. For more information, see Setting host I/O limits on page 132.
   - SnapVX Snapshots —Number of SnapVX snapshots associated with the storage group.
   - SRDF —SRDF information.
   - Is Child —Indicates whether the storage group is a child storage group.
   - Child Storage Groups —Number of child storage groups.
   - RecoverPoint —Indicates RecoverPoint usage.
   The Capacity panel displays the following:
   - Capacity (GB) —Total capacity of the storage group in GB.

   - Volumes —Number of volumes in the storage group.
   - Allocated Capacity —Allocated capacity of the storage group.
   - VP Saved —Percentage of space saved on the storage group.
   - Compression —If compression is enabled on this storage group, a tick is displayed; if it is disabled, a horizontal dash is displayed.
   - Compression Ratio —Current compression ratio for the storage group.
   The following controls are available:
   - Storage Group operations on page 116.
   - Set Host I/O Limits — Setting host I/O limits on page 132.

Viewing storage group details

This procedure explains how to view storage groups on a storage system running Enginuity OS 5876. There are multiple ways to view the same information. Depending on the method you use, some of the properties and controls may not apply.

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups list view.
3. Select the storage group and click [icon].
   The following properties display:
   - Symmetrix ID —Identity of the storage system.
   - FAST Policy —Policy associated with the storage group.
   - Capacity (GB) —Capacity of the storage group in GB.
   - Volumes —Number of volumes in the storage group.
   - Child Storage Groups —Number of child storage groups.
   - Masking Views —Number of masking views associated with the storage group.
   - SRDF —Number of SRDFs associated with the storage group.
   - Emulation —Emulation type.
   - VP Saved —Percentage of space saved on the storage group.
   - Last Updated —Timestamp of the most recent changes to the storage group.
   - Host I/O Limit —Whether the host I/O limit feature is enabled. For more information, see Setting host I/O limits on page 132.
   - Host I/O Limit (MB/Sec) —Maximum bandwidth (in MB per second). Valid values range from 1 MB/sec to 100,000 MB/sec.
   - Host I/O Limit (IO/Sec) —Maximum IOPS (in I/Os per second). Valid values range from 100 IO/sec to 100,000 IO/sec.
   - Dynamic Distribution —When enabled, the configured host I/O limits are dynamically distributed across the configured ports, allowing the limits on each individual port to adjust to fluctuating demand.

   - Is Child —Indicates whether the storage group is a child storage group.
   - Parent Storage Group(s) —Number of storage groups of which this storage group is a child. This field only displays for child storage groups.
   Links are also provided to views for objects contained in and associated with the storage group. Each link is followed by a number indicating the number of objects in the corresponding view. For example, clicking the number next to Volumes opens a view listing the volumes contained in the storage group.
4. Click VIEW ALL DETAILS.
   A view with two tabs, Details and Volumes, is displayed. Clicking the Volumes tab displays a view of the volumes in the storage group (see Viewing volumes in storage groups on page 142). Clicking the Details tab displays a view with two panels, a Properties panel and a Capacity panel.
   The Properties panel displays the following:
   - Symmetrix ID —Identity of the storage system.
   - Masking Views —Number of masking views associated with the storage group.
   - Emulation —Emulation type.
   - Last Updated —Timestamp of the most recent changes to the storage group.
   - Host I/O Limit —Whether the host I/O limit feature is enabled. For more information, see Setting host I/O limits on page 132.
   - SRDF —SRDF information.
   - Is Child —Indicates whether the storage group is a child storage group.
   - Child Storage Groups —Number of child storage groups.
   - RecoverPoint —Indicates RecoverPoint usage.
   The Capacity panel displays the following:
   - Capacity (GB) —Total capacity of the storage group in GB.
   - Volumes —Number of volumes in the storage group.
   - Allocated Capacity —Allocated capacity of the storage group.
   - VP Saved —Percentage of space saved on the storage group.
   The following controls are available:
   - Storage Group operations on page 116.
   - Set Host I/O Limits — Setting host I/O limits on page 132.

Viewing cascaded storage groups

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups list view.

3. Select the storage group and click [icon] to open its Details view.
4. Click the number next to Child Storage Groups to open the Storage Groups list view.
5. Optional: Use the Child Storage Groups list view to view and manage cascaded storage groups.
6. The following properties (depending on the storage operating environment) display:
   - Name —Name of the storage group.
   - Compliance —Indicates compliance status.
   - SRP —SRP associated with the storage group.
   - Service Level —Service level associated with the storage group.
   - Capacity (GB) —Total capacity of the storage group in GB.
   - Emulation —Emulation type.
   - Masking Views —Number of masking views associated with the storage group.
7. The following controls are available:
   - [icon] — Viewing storage group details on page 140
   - Add — Adding or removing cascaded storage groups on page 130
   - Remove — Adding or removing cascaded storage groups on page 130

Viewing volumes in storage groups

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups list view.
3. Select the storage group and click [icon] to open its Details view.
4. Click the number next to Volumes to open the Volumes list view.
   Use the Volumes list view to view and manage the volumes in a storage group.
5. The following properties display:
   - Volume —Assigned volume name.
   - Type —Type of volume.
   - Allocated % —Percentage allocated.
   - Capacity (GB) —Volume capacity in gigabytes.
   - Emulation —Emulation type for the volume.
   - Status —Volume status.
   - Pinned —Whether the volume is pinned. Pinning volumes prevents any automated process, such as FAST or Optimizer, from moving them.
   The following controls are available, depending on the storage operating environment:

   - Create — Creating volumes on page 178.
   - Add Volumes to SG — Adding volumes to storage groups on page 114.
   - Remove Volumes — Removing volumes from storage groups on page 116.
   - Expand — Expanding existing volumes on page 191.
   - Copy Volumes to SG — Copying volumes between storage groups on page 114.
   - Move Volumes to SG — Moving volumes between storage groups on page 115.
   - Set Volumes > Emulation — Setting volume emulation on page 96.
   - Set Volumes > Set Volume Attributes — Setting volume attributes on page 195.
   - Set Volumes > Set Volume Identifiers — Setting volume identifiers on page 196.
   - Set Volumes > Set Volume Status — Setting volume status on page 194.
   - Set Volumes > Replication QoS — QOS for replication on page 197.
   - Allocate/Free/Reclaim > Start — Managing thin pool allocations on page 244.
   - Allocate/Free/Reclaim > Stop — Managing thin pool allocations on page 244.
   - Configuration > Change Volume Configuration — Changing volume configuration on page 190.
   - Configuration > Map — Mapping volumes on page 192.
   - Configuration > Unmap — Unmapping volumes on page 193.
   - Configuration > z/OS Map — z/OS map from the Volumes (Storage Groups) list view on page 335 and z/OS map FBA volumes from the Volumes (Storage Groups) list view (HYPERMAX OS 5977 or higher) on page 338.
   - Configuration > z/OS Unmap — z/OS unmap from the Volumes (Storage Groups) list view on page 335 and z/OS unmap FBA volumes from the Volumes (Storage Groups) list view on page 339.
   - Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945 (only available on storage systems running 5876).
   - Assign Symmetrix Priority — Assigning array priority to individual volumes on page 189 (only available on storage systems running 5876).
   - Pin — Pinning and unpinning volumes on page 173 (only available on storage systems running 5876).
   - Unpin — Pinning and unpinning volumes on page 173 (only available on storage systems running 5876).

Click [icon] to view the Volume in Storage Group details view.
The following properties display:
   - Masking Info —Number of masking views associated with the storage group.
   - Storage Groups —Number of associated storage groups.

   - SRP —Number of associated SRPs.
   - FBA Front End Paths —Number of associated FBA front-end paths.
   - RDF Info —RDF information.
   - Volume Name —Volume name.
   - Physical Name —Physical name.
   - Volume Identifier —Volume identifier.
   - Type —Volume configuration.
   - Encapsulated Volume —Whether the external volume is encapsulated. Relevant for external disks only.
   - Encapsulated WWN —World Wide Name for the encapsulated volume. Relevant for external disks only.
   - Encapsulated Device Flag —Encapsulated device flag.
   - Encapsulated Device Array —Encapsulated device array.
   - Encapsulated Device Name —Encapsulated device name.
   - Status —Volume status.
   - Reserved —Whether the volume is reserved.
   - Capacity (GB) —Volume capacity in GB.
   - Capacity (MB) —Volume capacity in MB.
   - Capacity (CYL) —Volume capacity in cylinders.
   - Compression Ratio —Volume compression ratio.
   - Emulation —Volume emulation.
   - AS400 Gatekeeper —AS400 gatekeeper indication.
   - Symmetrix ID —Symmetrix system on which the volume resides.
   - Symmetrix Vol ID —Symmetrix volume name/number.
   - HP Identifier Name —User-defined volume name (1-128 alphanumeric characters), applicable to HP-mapped devices. This value is mutually exclusive of the VMS ID.
   - VMS Identifier Name —Numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID.
   - Nice Name —Nice name generated by Symmetrix Enginuity.
   - WWN —World Wide Name of the volume.
   - External Identity WWN —External identity World Wide Name of the volume.
   - DG Name —Name of the device group in which the volume resides, if applicable.
   - CG Name —Name of the consistency group in which the volume resides, if applicable.
   - Attached BCV —Defines the attached BCV to be paired with the standard volume.
   - Attached VDEV TGT Volume —Volume to which this source volume would be paired.
   - RDF Type —RDF configuration.

   - Geometry - Type —Method used to define the volume's geometry.
   - Geometry - Number of Cylinders —Number of cylinders.
   - Geometry - Sectors per Track —Number of sectors per track, as defined by the volume's geometry.
   - Geometry - Tracks per Cylinder —Number of tracks per cylinder, as defined by the volume's geometry.
   - Geometry - 512 Block Bytes —Number of 512-byte blocks, as defined by the volume's geometry.
   - Geometry - Capacity (GB) —Geometry capacity in GB.
   - Geometry - Limited —Indicates whether the volume is geometry limited.
   - SSID —Subsystem ID.
   - Capacity (Tracks) —Capacity in tracks.
   - SA Status —Volume SA status.
   - Host Access Mode —Host access mode.
   - Pinned —Whether the volume is pinned.
   - RecoverPoint Tagged —Indicates whether the volume is tagged for RecoverPoint.
   - Service State —Service state.
   - Defined Label Type —Type of user-defined label.
   - Dynamic RDF Capability —RDF capability of the volume.
   - Mirror Set Type —Mirror set for the volume and the volume characteristic of the mirror.
   - Mirror Set DA Status —Volume status information for each member in the mirror set.
   - Mirror Set Invalid Tracks —Number of invalid tracks for each mirror in the mirror set.
   - Priority QoS —Priority value assigned to the volume. Valid values are 1 (highest) through 16 (lowest).
   - Copy Pace - RDF —Copy pace priority during RDF operations.
   - Copy Pace - Mirror Copy —Copy pace priority during mirror operations.
   - Copy Pace - Clone —Copy pace priority during clone operations.
   - Copy Pace - VLUN —Copy pace priority during virtual LUN operations.
   - Dynamic Cache Partition Name —Name of the cache partition.
   - Compressed Size (GB) —Compressed size (GB).
   - Compressed Percentage —Compressed percentage.
   - Compressed Size Per Pool (GB) —Compressed size per pool (GB).
   - XtremSW Cache Attached —Indicates whether XtremSW cache is attached to the volume.
   - Base Address —Base address.
   - AS400 Gatekeeper —AS400 gatekeeper indication.
   - Mobility ID Enabled —Mobility ID enabled indication.
   - GCM —GCM flag set indication.

   - Optimized Read Miss —Cacheless read miss status.
   - Persistent Allocation —Persistent allocation indication.
   - PowerPath Hosts —Number of PowerPath hosts.
   - Mounted —Mounted indication.
   - Process —Process.
   - Last time used —Last time used.

Viewing Storage Group Compliance view

Before you begin
The user requires a minimum of Monitor permissions to perform this task.

Definitions:
- Workload Skew — Skew is represented by capacity and load pairs. There are two sources of skew for a storage group: device statistics and SG_PER_POOL chunks. An algorithm in WLP merges these two lists to produce a usable skew profile. A skew profile is only useful if there are multiple chunks. If an SG has a single device, there is not enough data to calculate skew, and the corresponding storage-group-per-pool metrics can be used. Similarly, if an array has only one pool, the device statistics are more meaningful for skew.
- Workload Mixture — The mixture is the distribution of various I/O types as percentages of the total IOPS. This is useful for determining, for example, whether a workload is read-heavy or write-heavy, and whether I/Os are mostly random or mostly sequential.

To view the Storage Group (SG) Compliance view:

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups view.
3. Select a storage group and click [icon] to view its details.
4. Select VIEW ALL DETAILS.
5. Select the Compliance tab.
   Charts are displayed for the following:
   - Response Time chart — This chart displays wait-time-weighted response time and (if applicable) the target service level response time band. The following explains the data in the chart:
     - Actual: running I/O to storage group — Wait-time-weighted response time is calculated in buckets and displayed. If a bucket has no data, 0 is displayed.
     - Actual: no I/O to storage group — 0s are displayed.
     - Planned — SLO Response Time Max and SLO Response Time Min are displayed as a data band across the timeline, labeled "Planned". If the service level is Optimized, no plan is displayed, because there is no response time band for Optimized.
     - Excluded Data — If a recurring exclusion has been set via the Exclusion Windows dialog, the windows are represented by vertical gray plot bands.
     - Last Processed — A 2 px dotted plot line marks the most recent SPA HOURLY timestamp processed by SPA for a given metric. It is not represented in the legend, but hovering over it shows the associated timestamp. During normal processing it acts as a "Where am I" indicator; if WLP stops processing for some reason, it is a subtle debugging aid.

   - IOPS chart — This chart toggles between IO/sec and MB/sec, displaying I/O-rate-weighted metric values, "planned" values, and (if set) host I/O limits. The following explains the data in the chart:
     - Actual: running I/O to storage group — I/O-rate-weighted total IOPS (or total MBPS) are calculated in buckets and displayed. If a bucket has no data, 0 is displayed.
     - Actual: no I/O to storage group — 0s are displayed.
     - Planned: host I/O limits for standalone SG — The host I/O limit is displayed as a static value across the timeline. The host I/O limit is only shown on the chart it impacts. For example, if an MBPS host I/O limit is set and IOPS is selected, nothing is shown until you toggle to MBPS.
     - Planned: host I/O limits for child SG, no limit for the parent SG — The host I/O limit is displayed as a static value across the timeline. The host I/O limit is only shown on the chart it impacts. For example, if an MBPS host I/O limit is set and IOPS is selected, nothing is shown until you toggle to MBPS.
     - Planned: no host I/O limit for the child SG, limit for the parent SG — If a cascaded SG has a host I/O limit set at the parent but no direct limit of its own, the host I/O limit of any given child is the parent limit minus whatever the siblings are using.
     - Planned: host I/O limits for child SG and parent SG — If a cascaded SG has a host I/O limit set at the parent and a direct limit of its own, the host I/O limit of any given child is the more limiting of the parent limit minus whatever the siblings are using, or the child SG's own limit.
     - Excluded Data — If a recurring exclusion has been set via the Exclusion Windows dialog, the windows are represented by vertical gray plot bands.
     - Last Processed — A 2 px dotted plot line marks the most recent SPA HOURLY timestamp processed by SPA for a given metric. It is not represented in the legend, but hovering over it shows the associated timestamp. During normal processing it acts as a "Where am I" indicator; if WLP stops processing for some reason, it is a subtle debugging aid.
   - Workload Skew chart — This chart compares actual workload skew, represented by cumulative capacity and load percentages (ordered by access density), to planned skew. If there is no I/O data, Actual is displayed as 50% skew: a straight line from (0,0) to (100,100). If there is one device in the SG and only one thin pool, the merged device and SG-per-pool skew profile does not provide enough data points, and Actual is again displayed as 50% skew. If I/O is running to the SG, the skew is a logarithmic curve (or a stepped line graph in some cases).
   - I/O Mixture chart — This chart compares the actual workload mixture to the planned workload mixture. The inner pie represents the actual I/O distribution; the outer donut represents the planned mixture. If there is no I/O to the storage group, the mixture distribution shows equal percentages for each I/O type (20% read hit, 20% sequential write, and so on) and the tooltip shows the corresponding I/O sizes as 0 kB.
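The planned host I/O limit lines for cascaded groups described above follow a simple rule: a child with no direct limit inherits its parent's limit minus sibling usage, and a child with a direct limit gets the more limiting of the two values. A minimal sketch of that arithmetic (the function name and the use of None for an unset limit are illustrative, not part of the product):

```python
def effective_child_limit(parent_limit, child_limit, sibling_usage):
    """Effective host I/O limit for a child SG in a cascaded group.

    parent_limit, child_limit: configured limits, or None if unset.
    sibling_usage: list of current usage values of the sibling child SGs.
    """
    headroom = None
    if parent_limit is not None:
        # Parent limit minus whatever the siblings are using (never negative).
        headroom = max(parent_limit - sum(sibling_usage), 0)
    if child_limit is None:
        return headroom          # child inherits the remaining parent headroom
    if headroom is None:
        return child_limit       # no parent limit: the child's own limit applies
    return min(child_limit, headroom)  # the more limiting of the two applies
```

For example, with a parent limit of 10,000 IO/sec and siblings using 5,000, an unlimited child is planned at 5,000, while a child with a direct limit of 4,000 is planned at 4,000.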

Select the Show Plan slider to turn the display of the plan on or off. The plan is the reference point used for comparison, and is a two-week expiring performance reservation for subsequent provisioning suitability calculations.

The following controls are available:
- Exclude Data — Managing Data Exclusion Windows on page 158
- Save As a Template — Creating storage templates on page 267
- Reset Workload Plan — Resetting Workload Plan on page 177
- Set Host I/O Limits — Setting host I/O limits on page 132

Viewing storage group performance details

Before you begin
- The storage system is running HYPERMAX OS 5977 or higher.
- To perform this operation, a Monitor role is required.
- The storage system must be local and registered for performance.

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups view.
3. Select a storage group and click [icon] to view its details.
4. Select VIEW ALL DETAILS.
5. Optional: Choose a day or time to retrieve performance-related data.
6. Select the Performance tab.
   Charts are displayed for the following:
   - Read and Write response times
   - Host MBs read and written per second
   - Host reads and writes per second
   - FE Directors — name, % busy, and queue depth utilization
   - FE Ports — name, % busy, and host I/Os per second
   - Related SGs — name, response time, host I/Os per second, and host MBs per second

Select Storage Resource Pool
Use this dialog box to select a Storage Resource Pool for the operation.
Note: To create the storage group outside of FAST control, set Storage Resource Pool to None; otherwise, leave this field set to the default.
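The Workload Skew chart in the Compliance view above plots cumulative capacity percentage against cumulative load percentage, with chunks ordered by access density (busiest first). As an illustration only, and not the actual WLP merge algorithm, a skew profile could be computed from (capacity, load) chunk pairs like this:

```python
def skew_profile(chunks):
    """Cumulative (capacity %, load %) curve, ordered by access density.

    chunks: list of (capacity, load) pairs with positive capacities,
    e.g. merged device and SG-per-pool statistics.
    """
    total_cap = sum(c for c, _ in chunks)
    total_load = sum(l for _, l in chunks)
    # Busiest chunks (highest load per unit of capacity) are accumulated first.
    ordered = sorted(chunks, key=lambda p: p[1] / p[0], reverse=True)
    points, cap_cum, load_cum = [(0.0, 0.0)], 0.0, 0.0
    for cap, load in ordered:
        cap_cum += cap
        load_cum += load
        points.append((100 * cap_cum / total_cap, 100 * load_cum / total_load))
    return points
```

A uniform workload yields the straight 50%-skew line from (0,0) to (100,100) described above, while a workload where 10% of the capacity serves 90% of the load produces a sharply bowed curve.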

Select SSID
Use this dialog box to select an SSID for the operation.

Task in Progress
Use this dialog box to monitor the progress of a configuration change operation.
Procedure
1. To view detailed information, click Show Task Details.
   Once a task completes, a success or failure message displays.

Select SRDF group
Use this dialog box to select an SRDF group.

Editing storage group volume details

To edit storage group details for a storage system running HYPERMAX OS 5977 or higher:

Procedure
1. Click the Volume Config tab.
2. To name the volumes you are adding to the storage group, select one of the following Volume Identifiers and type a Name:
   Note: This option is only available when modifying storage groups with new volumes. When modifying storage groups with some new and some existing volumes, the identifiers are only applied to the new volumes.
   - None —Allows the system to name the volumes (default).
   - Name Only —All volumes have the same name.
   - Name + VolumeID —All volumes have the same name with a unique Symmetrix volume ID appended to them. When using this option, the maximum number of characters allowed is 50.
   - Name + Append Number —All volumes have the same name with a unique decimal suffix appended to them. The suffix starts with the value specified for the Append Number and increments by 1 for each additional volume. Valid Append Numbers must be from 0 to 1000000. When using this option, the maximum number of characters allowed is 50.
3. Optional: Click the Enable Compression checkbox.
4. Optional: Click the Enable Mobility ID checkbox to assign Mobility IDs to the volumes in the storage group. If you leave the checkbox unchecked, Compatibility IDs are assigned to the volumes instead.
5. Optional: Click the Allocate capacity for each volume checkbox.
6. Optional: Click the Persist preallocated capacity through reclaim or copy checkbox.
7. Click the Volume Size tab.

8. Enter a volume size, capacity, and capacity unit.
9. Optional: Add one or more volume sizes by hovering over the area to the right of the volume capacity and selecting [icon].
10. Optional: Click [icon] to remove a volume size.
11. Click APPLY.
    The Storage Group page in the wizard displays Mixed Capacities in the capacity row. Click [icon] to reopen this dialog.

Editing storage group details

To edit storage group details for a storage system running Enginuity 5876:

Procedure
1. To name the volumes you are adding to the storage group, select one of the following Volume Identifiers and type a Name:
   Note: This option is only available when expanding storage groups with new volumes. When expanding storage groups with some new and some existing volumes, the identifiers are only applied to the new volumes.
   - None —Allows the system to name the volumes (default).
   - Name Only —All volumes have the same name.
   - Name + VolumeID —All volumes have the same name with a unique Symmetrix volume ID appended to them. When using this option, the maximum number of characters allowed is 50.
   - Name + Append Number —All volumes have the same name with a unique decimal suffix appended to them. The suffix starts with the value specified for the Append Number and increments by 1 for each additional volume. Valid Append Numbers must be from 0 to 1000000. When using this option, the maximum number of characters allowed is 50.
   - To only use BCVs in the storage group, select Use BCV volumes.
   - To only use volumes from a specific disk group, select the Disk Group (applicable for regular volumes only).
2. Click OK.

Modify Custom Capacity dialog box
Use this dialog box to modify the capacity of a storage group with mixed capacities. To modify the capacity of the storage group, type new values for the volumes and click OK to return to the Modifying Storage Groups dialog box.
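The volume-naming options described in the two editing dialogs above can be summarized in a short sketch. The function name, the count and start parameters, and the placeholder volume IDs are illustrative assumptions for this example, not a product API:

```python
def volume_names(identifier, name, count, start=0, volume_ids=None):
    """Generate volume names per the Volume Identifiers options above.

    volume_ids stands in for system-assigned Symmetrix volume IDs and is
    only consulted for the "Name + VolumeID" option.
    """
    if identifier == "None":
        return [None] * count              # the system names the volumes
    if identifier == "Name Only":
        return [name] * count              # every volume gets the same name
    if identifier == "Name + VolumeID":
        # Same name with a unique Symmetrix volume ID appended.
        return [f"{name}{vid}" for vid in volume_ids[:count]]
    if identifier == "Name + Append Number":
        # Unique decimal suffix, starting at the Append Number value.
        if not 0 <= start <= 1000000:
            raise ValueError("Append Number must be from 0 to 1000000")
        return [f"{name}{start + i}" for i in range(count)]
    raise ValueError(f"unknown identifier: {identifier}")
```

For example, "Name + Append Number" with name "data_" and Append Number 5 yields data_5, data_6, data_7 for three volumes.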

Understanding FAST
Note: This section describes FAST operations for storage systems running HYPERMAX OS 5977 or higher.
Fully Automated Storage Tiering (FAST) automates management of storage system disk resources on behalf of thin volumes. FAST automatically configures disk groups to form a Storage Resource Pool by creating thin pools according to each individual disk technology, capacity, and RAID type.
FAST technology moves the most active parts of your workloads (hot data) to high-performance flash disks and the least-frequently accessed storage (cold data) to lower-cost drives, leveraging the best performance and cost characteristics of each different drive type. FAST delivers higher performance using fewer drives to help reduce acquisition, power, cooling, and footprint costs. FAST is able to factor in RAID protections to ensure write-heavy workloads go to RAID 1 and read-heavy workloads go to RAID 6. This process is entirely automated and requires no user intervention.
FAST further provides the ability to deliver variable performance levels through service levels. Thin volumes can be added to storage groups, and the storage group can be associated with a specific service level to set performance expectations. FAST monitors the storage group's performance relative to the service level and automatically provisions the appropriate disk resources to maintain a consistent performance level.

Understanding service levels
A service level is the response time target for the storage group. The service level allows you to set the storage array with the desired response time target for the storage group. It automatically monitors and adapts to the workload in order to maintain (or meet) the response time target. The service level includes an optional workload type so you can further tune expectations for the workload storage group to provide just enough flash to meet your performance objective.
Renaming Service Levels
Before you begin
- To perform this operation, you must be a StorageAdmin.
- This feature requires HYPERMAX OS 5977 or higher.
- The service level name must be unique from other service levels on the storage system and cannot exceed 32 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. However, service level names cannot start or end with an underscore or hyphen.
Once a service level is renamed, all active management and reporting activities are performed on the newly named service level. The original, pre-configured service level name is maintained in the Service Level view for future reference. All other references to the original service level display the new name.
Procedure
1. Select the storage system.

2. Select Storage > Service Levels to open the Service Level view.
3. Hover over the service level name and click the edit icon.
4. Type the new name over the existing name and click the check mark to complete the renaming process. To cancel the renaming, click the X.

Reverting to original service level names
Before you begin
- To perform this operation, you must be a StorageAdmin.
- This feature requires HYPERMAX OS 5977 or higher.
- The service level name must be unique from other service levels on the storage system and cannot exceed 32 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. However, service level names cannot start or end with an underscore or hyphen.
Procedure
1. Select the storage system.
2. Select Storage > Service Levels to open the Service Level view.
3. Hover over the service level name and click the edit icon.
4. Type the original, pre-configured name over the existing name and click the check mark to complete the renaming process. To cancel the renaming, click the X.

Viewing service levels
Before you begin
This feature requires HYPERMAX OS 5977 or higher.
A service level is the response time target for the storage group. The service level allows you to set the storage array with the desired response time target for the storage group. It automatically monitors and adapts to the workload in order to maintain (or meet) the response time target. The service level includes an optional workload type so you can further tune expectations for the workload storage group to provide just enough flash to meet your performance objective.
Procedure
1. Select the storage system.
2. Select STORAGE > Service Levels to open the Service Level view.
For all-flash storage systems running HYPERMAX OS 5977, the only service level available is Diamond. Available service levels are displayed in card format.
Each service level card shows the service level name (display name if it has been renamed), the expected average response time (in ms), and available headroom.
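Unisphere also exposes these objects through its REST API. The sketch below shows how the service-level list might be retrieved programmatically; the host name and credentials are placeholders, and the endpoint path and response field are assumptions based on the Unisphere for PowerMax 9.0 sloprovisioning resource family, so verify them against the REST API documentation for your release:

```python
import base64
import json
import ssl
import urllib.request

BASE = "https://unisphere.example.com:8443/univmax/restapi"  # placeholder host

def slo_url(symmetrix_id, version="90"):
    """Build the service-level (SLO) list URL for one storage system."""
    return f"{BASE}/{version}/sloprovisioning/symmetrix/{symmetrix_id}/slo"

def list_service_levels(symmetrix_id, user, password):
    """GET the SLO list and return the service-level names."""
    req = urllib.request.Request(slo_url(symmetrix_id))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    ctx = ssl._create_unverified_context()  # lab sketch only; verify certs in production
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp).get("sloId", [])  # assumed response field

if __name__ == "__main__":
    print(list_service_levels("000197900123", "smc", "smc"))
```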

Clicking and selecting a card displays a table that gives more details on the workload types available for each service level. The None workload type is selected by default. The details table is only visible for FBA service levels. The columns in the table include workload type, target response time, headroom, I/O density, I/O size, write %, skew, and usage count for each workload type. All columns are sortable, and all columns (except workload type) can be hidden. Hover near the write % to view a pop-out showing more details on the mixture; similarly, hover near the skew % to view a pop-out showing more details about the skew.
3. Optional: To rename a service level, hover over the service level card and click the edit icon. Type the new name over the existing name and click the check mark to complete the renaming process. To cancel the renaming, click the X.
4. Optional: To provision storage using a service level, select the service level card and a corresponding workload type and click Provision.
This opens the Provision Storage to Host wizard, with the service level and the workload type populated by default. For the CKD Provisioning wizard, only the service level is selected by default. For more information on using the wizard, refer to Using the Provision Storage wizard on page 100.

Changing service level
This functionality only applies to storage systems running HYPERMAX OS 5977 or higher and does not apply to all-flash storage systems. For all-flash storage systems running HYPERMAX OS 5977 and higher, the only service level available is Diamond.
To change the service level:
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups view.
3. Select a storage group where performance data is available and click it to view its details.
4. Double-click the Compliance icon in the Details panel (alternatively, select VIEW ALL DETAILS in the Details panel and select the Compliance tab).
5. Click VIEW DETAILS (located at the top right of the Response Time panel).
6. Click Change Service Level.
7. Change the service level.
8. Click OK.

Understanding Storage Resource Pool details
A Storage Resource Pool is a collection of data pools that provides FAST a domain for capacity and performance management. By default, a single default Storage Resource Pool is factory pre-configured. Additional Storage Resource Pools can be created with

a service engagement. FAST performs all its data movements within the boundaries of the Storage Resource Pool.

Modifying Storage Resource Pool details
Before you begin
- This feature requires HYPERMAX OS 5977 or higher.
- You must have Administrator or StorageAdmin permission.
Procedure
1. Select the storage system.
2. Select Storage > Storage Resource Pools to open the Storage Resource Pools view.
3. Click Modify.
4. Modify any number of the following:
- Storage Resource Pool Name — Name of the storage resource pool. To change this value, type a new name and click Apply. The name of the storage resource pool must be unique and cannot exceed 32 characters. It can include only alphanumeric, underscore, and hyphen characters, but cannot begin with an underscore or hyphen character.
- Description — Optional description of the pool. To change this value, type a new description and click Apply. The description cannot exceed 127 characters. It can contain only alphanumeric, hyphen, underscore, space, period, and comma characters.
- Reserved Capacity % (0 - 80) — The percentage of the capacity of the Storage Resource Pool to be reserved for volume write I/O activities. Valid values for the percentage are from 1 to 80; NONE disables it. For example, if you set the reserved capacity on a Storage Resource Pool to 30%, then the first 70% of the pool capacity is available for general purpose operations (host I/O allocations, local replication tracks, and SRDF/A DSE allocations) and the final 30% of the pool capacity is reserved strictly for volume write I/O activities. Note that existing TimeFinder snapshot sessions created on volumes in the Storage Resource Pool are invalid if the free capacity of the Storage Resource Pool, as a percentage of the usable capacity, goes below the reserved capacity.
- Usable by RDFA DSE — Specifies whether the Storage Resource Pool can be used for SRDF/A DSE operations.
This field does not display for external SRPs. The maximum amount of storage from a Storage Resource Pool that can be used for DSE is controlled by the system-wide dse_max_cap setting, as described in the Solutions Enabler SRDF Family CLI User Guide.
5. Click OK.

Viewing Storage Resource Pools
Before you begin
- This feature requires HYPERMAX OS 5977 or higher.
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Resource Pools to open the Storage Resource Pools view.

The following properties display:
- Name — Name of the storage resource pool.
- Used Usable Capacity (%) — Used usable capacity, expressed as a percentage.
- Total Usable Capacity (TB) — Total usable capacity.
- Allocated Subscribed Capacity (%) — Allocated subscribed capacity, expressed as a percentage.
- Total Subscribed Capacity (TB) — Total subscribed capacity.
The following controls are available:
- Modify — Modifying Storage Resource Pool details on page 154
- Add EDisks — Adding external disks on page 274
Click a Storage Resource Pool to view the following details:
- Name — Name of the storage resource pool.
- Description — Description.
- Default Emulation — The default emulation for the pool (FBA or CKD).
- Overall Efficiency — The current compression efficiency on this storage resource pool.
- Compression State — Indicates whether compression is enabled or disabled for this storage resource pool.
- Effective Used Capacity (%) — The effective used capacity, expressed as a percentage.
- Usable Capacity (TB) — Usable capacity of all the disk groups in the Storage Resource Pool, excluding any external disk groups used for FTS encapsulation.
- Allocated Capacity (TB) — Sum of the volume allocations, snapshot allocations, and SRDF/A DSE allocations on the Storage Resource Pool.
- Free Capacity (GB) — Difference between the usable and allocated capacities.
- Subscription (TB) — Percentage of the configured sizes of all the thin volumes subscribed against the Storage Resource Pool.
- Reserved Capacity % (0 - 80) — Percentage of the Usable Capacity that will be reserved for non-snapshot activities. Existing TimeFinder snapshot sessions created on volumes in the Storage Resource Pool can go invalid if the Free Capacity of the Storage Resource Pool, as a percentage of the Usable Capacity, goes below the Reserved Capacity.
- Usable by RDFA DSE — Specifies whether the Storage Resource Pool can be used for SRDF/A DSE operations. This field does not display for external SRPs.
- FBA Service Levels — Number of FBA service levels.
- Disk Groups — Number of disk groups.
The panel also provides links to views displaying objects contained in the pool. Each link is followed by a number indicating the number of objects in the corresponding view. For example, clicking the number next to Disk Groups opens a view listing the disk groups in the storage resource pool.

Changing Storage Resource Pools for storage groups
This procedure explains how to change the Storage Resource Pool of a parent storage group with child service levels using different Storage Resource Pools. In eNAS environments, you can also perform this operation from the File Storage Groups page on the File System Dashboard (File > Dashboard > File Storage Groups).
Before you begin:
- The storage system must be running HYPERMAX OS 5977 or later.
- You must have Administrator or StorageAdmin permission.
To change the Storage Resource Pool for storage groups:
Procedure
1. Select the storage system.
2. Under STORAGE, select Storage Groups.
3. Select the storage group, click the more-actions menu, and select Change SRP to open the Change SRP dialog box.
4. Select the new SRP.
5. (Optional) Change the Service Level for the SG. Service levels specify the characteristics of the provisioned storage, including maximum response time, workload type, and priority. This field defaults to None if you set the Storage Resource Pool to None. Possible values are:
- Diamond — Ultra high performance — HPC, latency sensitive
- Platinum — Very high performance — Mission critical, high rate OLTP
- Gold — High performance — Very heavy I/O, database logs, datasets
- Silver — Price/Performance — Database datasets, virtual applications
- Bronze — Cost optimized — Backup, archive, file
- Optimized (Default) — Places the most active data on the highest performing storage and the least active on the most cost-effective storage
For all-flash storage systems, the only service level available is Diamond and it is selected by default.
6. (Optional) Refine the service level by selecting the Workload Type to assign to it. (This step is not applicable for storage systems running PowerMaxOS 5978.)
7. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.

- Expand Add to Job List, and click Run Now to perform the operation now.

Service level compliance
Each service level and workload type has a response time band associated with it. When a storage group (workload) is said to be compliant, it means that it is operating within the associated response time band. When assessing the compliance of a storage group, Workload Planner calculates its weighted response time for the past 4 hours and for the past 2 weeks, and then compares the two values to the maximum response time associated with its given service level. If both calculated values fall within (under) the service level's defined response time band, the compliance state is STABLE. If one of them is in compliance and the other is out of compliance, the compliance state is MARGINAL. If both are out of compliance, the compliance state is CRITICAL.

Creating Compliance Reports
This procedure explains how to create compliance reports. Compliance reports allow you to view storage group performance against service levels over a period of time.
Before you begin:
- This feature requires HYPERMAX OS 5977 or later.
- The user must have StorageAdmin permissions or higher.
To create Compliance Reports:
Procedure
1. Select the storage system.
2. Select SG COMPLIANCE.
3. Within the Compliance panel, click VIEW COMPLIANCE REPORT.
4. Click Schedule.
5. On the General tab, do any number of the following:
a. Type a Name for the report.
b. Type a Description for the report.
c. Select the time zone in which the report will be generated (Generated Time Zone).
6. On the Schedule tab, do any number of the following:
a. Select the First Runtime.
b. Select the Day(s) to Run.
c. Select the number of days that the report should be retained.
7. Optional: On the Email tab, select Send report to and type an email address.
8. Click OK.
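The STABLE/MARGINAL/CRITICAL assessment described under Service level compliance can be sketched as follows (function and argument names are illustrative; response times are in ms):

```python
def compliance_state(rt_4h_ms, rt_2wk_ms, max_rt_ms):
    """Compare the 4-hour and 2-week weighted response times against the
    service level's maximum response time, per the rules described above."""
    in_band = [rt <= max_rt_ms for rt in (rt_4h_ms, rt_2wk_ms)]
    if all(in_band):
        return "STABLE"    # both values within the response time band
    if any(in_band):
        return "MARGINAL"  # one in compliance, one out of compliance
    return "CRITICAL"      # both out of compliance

print(compliance_state(2.1, 2.4, 3.0))  # → STABLE
print(compliance_state(3.5, 2.4, 3.0))  # → MARGINAL
print(compliance_state(3.5, 4.0, 3.0))  # → CRITICAL
```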
Viewing compliance reports
This procedure explains how to view storage group performance against service levels over a period of time.
Before you begin:
- This feature requires HYPERMAX OS 5977 or later.

- The user must have StorageAdmin permissions or higher.
To view service level compliance reports:
Procedure
1. Select the storage system.
2. Select SG COMPLIANCE.
3. Within the Compliance panel, click VIEW COMPLIANCE REPORT.
4. Customize the report by doing the following:
a. Select the time period. For the time period you select, the storage group's compliance is assessed in 30-minute intervals, and then its overall compliance state is displayed based on the method described in Service level compliance on page 157. For example, if you select Last 24 hours, the storage group's compliance state is assessed 48 times, and then its calculated compliance state is displayed in this report.
b. Select whether to view the compliance information as a chart or as numbers.
The following properties display:
- Storage Group — Name of the storage group.
- Service Level — Service level associated with the storage group.
- % Stable — Percentage of time the storage group performed within the service level target.
- % Marginal — Percentage of time the storage group performed below the service level target.
- % Critical — Percentage of time the storage group performed well below the service level target.
The following controls are available:
- Export — Save the report as a PDF file.
- Schedule — Creating Compliance Reports on page 157.
- Monitor — Performance Dashboards on page 518.

Save Report Results dialog box
Use this dialog box to save service level compliance reports in PDF format.

Managing Data Exclusion Windows
This procedure explains how to manage Data Exclusion Windows for calculating headroom and suitability.
Peaks in storage system statistics can occur due to:
- anomalies or unusual events
- recurring maintenance during off-hours that fully loads the storage system
Due to the way this data is condensed and used, unexpected headroom and suitability results can occur.
There are two ways to improve the handling of these cases:
- One-time exclusion period — when the one-time exclusion period value is set, all statistics before this time are ignored. This helps resolve the first case above,

where a significant one-time peak distorts the results due to reliance on two weeks of data points. This is set system-wide for all components.
- Recurring exclusion period — You can select one or more 4-hour windows to use in admissibility checks. This is set system-wide for all components. Recurring exclusion periods are repeating periods of selected weekday or time slot combinations where collected data is ignored for the purposes of compliance and admissibility considerations. The data is still collected and reported, but it is not used in those calculations.
Before you begin:
- This feature requires HYPERMAX OS 5977 or higher.
- The user must have StorageAdmin permissions or higher.
To manage Data Exclusion Windows:
Procedure
1. Select the storage system.
2. Select SG Compliance.
3. In the Compliance panel, select Actions > EXCLUDE DATA.
Results
The Compliance Settings page allows you to view and set the one-time exclusion period and recurring exclusion periods for a selected storage system. It consists of two panels. The One-time Exclusion Period panel displays 84 component utilizations (two weeks' worth of data) in a single chart that allows you to set the one-time exclusion period value from a given time slot, resulting in all time slots prior to the selected time slot being ignored for the purposes of calculating compliance and admissibility values. The Recurring Exclusion Periods panel displays the same data, but in a one-week format that allows you to select repeating recurring exclusion periods during which any collected data is ignored.
Each bar in the chart represents a utilization score calculated for that time slot. The score itself is the highest value of four component types; that is, the "worst performing" of the four components is the one that determines the overall value returned. The exact type and identifier of the selected component can be seen in the tool tip for a specific bar.
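The per-slot utilization score just described, where the "worst performing" component determines the bar, can be sketched as follows (the component identifiers are hypothetical examples, not values from a real system):

```python
def slot_score(utilizations):
    """utilizations: mapping of component identifier -> utilization (%).
    Returns the worst-performing component and its value, which is what
    the bar height and tool tip would show; (None, None) means no bar."""
    if not utilizations:
        return None, None  # no data collected in this time slot
    worst = max(utilizations, key=utilizations.get)
    return worst, utilizations[worst]

slot = {"FE Port 1D:4": 42.0, "BE Port 2B:0": 88.5,
        "RDF Port 3E:8": 17.0, "Thin Pool pool_1": 61.2}
print(slot_score(slot))  # → ('BE Port 2B:0', 88.5)
```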
The four component types that are represented in the bars are:
- Front End Port
- Back End Port
- RDF Port
- Thin Pool
The bars in both panels represent the same data using the same color-coding scheme. The colors of the bars signify the following states:
- Green represents a utilization value that meets the best practice limit.
- Red represents a utilization value that exceeds the best practice limit.
- Blue represents a utilization value that is being ignored before the one-time exclusion period.
- Gray represents a utilization value that is being ignored as part of a recurring exclusion period.
- No bar: if no data was collected or calculated during a time slot, there is no bar present.
The One-time Exclusion Period panel consists of a chart with the component utilization value as the y-axis and the time slot as the x-axis. Each time slot

is a four-hour window during which data was collected and a utilization score was calculated. There is also a horizontal line representing the best practice utilization of 100%. The x-axis is labeled with the dates of the time slots; that is, the midnight time slots are labeled with their date and other time slots are blank.
The top right of this panel has a filter which allows you to include all components used in utilization calculations or filter for only those used in headroom calculations. This can be helpful when headroom values are causing suitability problems in other areas, but those issues are masked by other component utilizations on this chart. The filters are: All components, for suitability (the default selection) and Back-end components only, for headroom. When you select a value, the page is reloaded with data from the server, filtered according to the selection made. Both charts are updated to reflect this data.
You can select and set a time slot before which all collected data will be ignored. You select the time slot by clicking the desired bar. The selected bar and all previous bars are changed to the one-time exclusion period coloring reflecting this selection. In addition, the one-time exclusion period selection is also dynamically displayed in the Recurring Exclusion Periods chart as selections are made. If you try to set the selection to the last bar on the right, an error is displayed and the action is not allowed.
You can deselect a selected bar by clicking it again. The chart then reverts to the value set when the page was loaded.
One-time exclusion period bars are only displayed in the Recurring Exclusion Periods chart under these conditions:
- Both buckets corresponding to the Recurring Exclusion Periods chart slot are before the one-time exclusion period.
- One of the buckets is before the one-time exclusion period and the other bucket has no data collected.
The panel has two buttons to set and clear any changes made:
- Set One-time Exclusion — writes the selected one-time exclusion period value to the database. This value will then be in effect and will be shown in all future views of this page. This button is enabled when a one-time exclusion period is selected. Clicking OK confirms the operation.
- Clear One-time Exclusion — clears any previously set one-time exclusion value. This button is only enabled if a one-time exclusion value is set when the page is first loaded. Clicking OK confirms the operation.
The Recurring Exclusion Periods panel consists of seven charts, one for each day of the week. Each chart has a bar for each four-hour time slot during which data is collected and a utilization score is calculated. Each bar represents two bars shown in the One-time Exclusion Period panel chart; the bar shown in this chart is the highest-value ("worst performing") bar of the two. One-time exclusion period bars are only displayed in the Recurring Exclusion Periods chart under the following conditions:
- Both buckets corresponding to the Recurring Exclusion Periods chart slot are in the one-time exclusion period.
- One of the buckets is in the one-time exclusion period and the other bucket has no data collected.

In this panel, you click a time slot to select or deselect it; clicking a selected time slot deselects it. As selections are made, both charts are dynamically updated with the appropriate color coding.
The panel has two buttons to set and clear any changes made:
- Set Recurring Exclusions — writes the selected recurring exclusion period value(s) to the database. These values will then be in effect and will be shown in all future views of this page. This button is enabled when a recurring exclusion period is selected. Clicking OK confirms the operation.
- Clear Recurring Exclusions — clears any previously set recurring exclusion period values. This button is only enabled if a recurring exclusion period value is set when the page is first loaded. Clicking OK confirms the operation.
At the bottom of the page is a panel that contains the legend indicating the meanings associated with the different bar colors. On the right-hand side of this panel is text detailing the last time a one-time exclusion period or recurring exclusion period was changed. If you hover over this text, the name of the user (fully qualified user name) that performed the last update operation is displayed. If the database has never had a one-time exclusion period or recurring exclusion period set, the field and tool tip text display "Not yet modified".
Alerts
A system alert is generated each time a user changes a one-time exclusion period value or a recurring exclusion period value.

Symmetrix tiers
Creating tiers
Before you begin
- This feature requires Enginuity 5876.
- The maximum number of tiers that can be defined on a storage system is 256.
- When a disk group or thin pool is specified, its technology type must match the tier technology.
- Disk groups can only be specified when the tier include type is static.
- A standard tier cannot be created if it will:
  - Lead to a mix of static and dynamic tier definitions in the same technology.
  - Partially overlap with an existing tier.
Two tiers partially overlap when they share only a subset of disk groups. For example, TierA partially overlaps with TierB when TierA contains disk groups 1 and 2 and TierB contains only disk group 2. (Creating TierA will fail.)
To create a tier:
Procedure
1. Select the storage system.
2. Select STORAGE > Tiers to open the Tiers list view.
3. Click Create to open the Create Tier dialog box.
When this dialog box first opens, the chart displays the configured and unconfigured space on the selected storage system. Once you select a disk group or thin pool, this chart displays the configured and unconfigured space of the selected object.
4. Type a Tier Name.

Tier names must be unique and cannot exceed 32 characters. Only alphanumeric characters, hyphens (-), and underscores (_) are allowed; however, the name cannot start with a hyphen or an underscore. Each tier name must be unique per Symmetrix system (across both DP and VP tier types), ignoring differences in case.
5. If the storage system on which you are creating the tier is licensed to perform FAST and FAST VP operations, select a Tier Type. Possible values are:
- DP Tier — A disk group tier is a set of disk groups with the same technology type. A disk group tier has a disk technology type and a protection type. To add a disk group to a tier, the group must only contain volumes on the tier's disk technology type and match the tier protection type.
- VP Tier — A virtual pool tier is a set of thin pools. A virtual pool tier has a disk technology type and a protection type. To add a thin pool to a tier, the thin pool must only contain DATA volumes on the tier's disk technology type and match the tier protection type.
6. If creating a VP tier, select the Emulation type of the thin pools to include in the tier. Only thin pools containing volumes of this emulation type will be eligible for inclusion in the tier.
7. Select the type of Disk Technology on which the tier will reside. Only disk groups or thin pools on this disk technology will be eligible for inclusion in the tier.
8. If you selected External disk technology for the tier, then select the type of External Technology.
9. Select the RAID Protection Level for the tier.
10. Depending on the type of tier you are creating, select the disk groups or virtual pools to include in the tier.
11. Optional: Select Include all future disk groups on matching technology for this tier. Tiers created in this manner are considered dynamic tiers.
Tiers created without this option are considered static tiers.
12. Click OK.

Modifying tiers
Before you begin
- This feature requires Enginuity 5876.
- You can only modify tiers that are not part of a policy. For instructions on removing a tier from a policy, refer to Modifying FAST policies.
- You cannot create blank tiers in Unisphere (that is, tiers without disk groups or thin pools); however, you can use Unisphere to add disk groups or thin pools to blank tiers that were created in Solutions Enabler.
To modify a tier:
Procedure
1. Select the storage system.
2. Select STORAGE > Tiers to open the Tiers list view.

3. Select the tier and click Modify.
4. Add or remove disk groups/thin pools by selecting/clearing the corresponding check box.
5. Click OK.

Renaming tiers
Before you begin
- This feature requires Enginuity 5876.
- Tier names must be unique and cannot exceed 32 characters. Only alphanumeric characters, hyphens (-), and underscores (_) are allowed; however, the name cannot start with a hyphen or an underscore. Each tier name must be unique per storage system (across both DP and VP tier types), ignoring differences in case.
To rename a tier:
Procedure
1. Select the storage system.
2. Select STORAGE > Tiers to open the Tiers list view.
3. Select the tier, click the more-actions menu, and click Rename.
4. Type a new name for the tier.
5. Click OK.

Deleting tiers
Before you begin
- This feature requires Enginuity 5876.
- You cannot delete tiers that are already part of a policy. To delete such a tier, you must first remove the tier from the policy. For instructions, refer to Modifying FAST policies.
To delete a tier:
Procedure
1. Select the storage system.
2. Select STORAGE > Tiers to open the Tiers list view.
3. Select the tier and click Delete.
4. Click OK.

Viewing Symmetrix tiers
Before you begin
This feature requires Enginuity 5876.
Procedure
1. Select the storage system.
2. Select STORAGE > Tiers to open the Tiers list view.
The Tiers list view allows you to view and manage the tiers on a Symmetrix system.

The following properties display:
- Name — Name of the tier.
- Type — Tier type. Possible values are:
  - Disk Group — A disk group tier is a set of disk groups with the same technology type. A disk group tier has a disk technology type and a protection type. To add a disk group to a tier, the group must only contain volumes on the tier's disk technology type and match the tier protection type.
  - Virtual Pool — A virtual pool tier is a set of thin pools. A virtual pool tier has a disk technology type and a protection type. To add a thin pool to a tier, the thin pool must only contain DATA volumes on the tier's disk technology type and match the tier protection type.
- Technology — Disk technology on which the tier resides.
- Emulation — Emulation type of the thin pools in the tier.
- Protection — RAID protection level assigned to the volumes in the tier.
- Used Capacity — Amount of storage that has already been used on the tier, in GB.
- Capacity (GB) — Amount of free/unused storage on the tier, in GB.
The following controls are available:
- View Details — Viewing Symmetrix tier details on page 164
- Create — Creating tiers on page 161
- Modify — Modifying tiers on page 162
- Delete — Deleting tiers on page 163
- Rename — Renaming tiers on page 163

Viewing Symmetrix tier details
Before you begin
This feature requires Enginuity 5876.
Procedure
1. Select the storage system.
2. Select STORAGE > Tiers to open the Tiers list view.
3. Select the tier and click it to open its Details panel or its Tier Demand Report panel.
The Tier Demand Report panel provides a graphic representation of the tier's used capacity over free space.
The following properties display in the Details panel:
- Name — Name of the tier. [OutOfTier]: if, on a given technology, there exist volumes that do not reside on any tier, they are shown as [OutOfTier]. This can happen when the

protection type of volumes does not match the tier protection type, or when tiers are only defined on a subset of disk groups in a technology.
- Is Static — Whether the tier is static (Yes) or dynamic (No). With a dynamic tier, the FAST controller automatically adds all future disk groups on matching disk technology to the tier. Tiers without this option enabled are considered static.
- Type — Tier type. Possible values are:
  - DP — A disk group tier is a set of disk groups with the same technology type. A disk group tier has a disk technology type and a protection type. To add a disk group to a tier, the group must only contain volumes on the tier's disk technology type and match the tier protection type.
  - VP — A virtual pool tier is a set of thin pools. A virtual pool tier has a disk technology type and a protection type. To add a thin pool to a tier, the thin pool must only contain DATA volumes on the tier's disk technology type and match the tier protection type.
- Technology — Disk technology on which the tier resides.
- Disk Location — Internal or external.
- RAID Protection — RAID protection level assigned to the volumes in the tier.
- Attribute — Status of the tier on the technology type. Possible values are:
  - Tier in a FAST Policy associated with storage groups.
  - Tier in a FAST Policy unassociated with storage groups.
  - Tier not in any FAST Policy.
- Total Capacity (GB) — Total amount of storage on the tier, in GB.
- Free Capacity (GB) — Unconfigured space in Gigabytes in this tier. Free capacity for each disk group in the tier only counts toward tier free capacity if the disk group has enough usable disks to support the tier target protection type.
- FAST Usage (GB) — Sum of hypers of all volumes in FAST storage groups with matching RAID protection that reside on this tier.
- FAST Free (GB) — If the tier is in a FAST policy associated with a storage group, the FAST Free capacity in Gigabytes is the sum of FAST Usage, Free capacity, and the space occupied by not-visible devices (unmapped/unmasked). If the tier is not in any FAST policy, or is only in policies none of which are associated with a storage group, then the FAST Free capacity is the same as FAST Usage.
- Maximum SG Demand (GB) — The calculated upper limit for the storage group on the tier.
- Excess (GB) — Difference between FAST Free and Max SG Demand. If the tier is not in a FAST policy, or is only in policies none of which are associated with a storage group, then this value is Not Applicable.
- Number of Thin Pools — Number of thin pools. Clicking the number next to Number of Thin Pools opens a view listing the associated thin pools.
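Taken together, these capacity fields obey a simple arithmetic relationship. The sketch below is illustrative only (the function and parameter names are hypothetical and are not part of any Dell EMC interface); it assumes capacities are reported in GB as described above:

```python
def fast_free_gb(in_associated_policy, fast_usage_gb, free_gb, not_visible_gb):
    """FAST Free: FAST Usage + Free capacity + space of not-visible
    (unmapped/unmasked) devices when the tier is in a FAST policy associated
    with a storage group; otherwise it equals FAST Usage alone."""
    if in_associated_policy:
        return fast_usage_gb + free_gb + not_visible_gb
    return fast_usage_gb

def excess_gb(in_associated_policy, fast_free, max_sg_demand_gb):
    """Excess: FAST Free minus Maximum SG Demand, or None ("Not Applicable")
    when the tier is not in a policy associated with a storage group."""
    if not in_associated_policy:
        return None
    return fast_free - max_sg_demand_gb

# Example: a tier in a policy that is associated with a storage group.
ff = fast_free_gb(True, fast_usage_gb=500.0, free_gb=200.0, not_visible_gb=50.0)
print(ff)                          # 750.0
print(excess_gb(True, ff, 600.0))  # 150.0
```

A negative Excess in this model would indicate that the storage group's calculated demand exceeds what FAST can use on the tier.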

Viewing thin pools in a storage tier
Procedure
1. Select the storage system.
2. Select STORAGE > Tiers to open the Tiers list view.
3. Select the tier and click Details to open its view.
4. Click the number next to Number of Thin Pools to open the tier's Thin Pool view.
This view allows you to view and manage a tier's thin pools.
The following properties display:
- Name — Pool name.
- Technology — Disk technology type.
- Configuration — Protection configuration.
- Emulation — Pool emulation type, based on the first volume added to the pool.
- Allocated Capacity — Percent capacity allocated to the pool.
- Enabled Capacity (GB) — Pool capacity in Gigabytes.
The following controls are available:
- Viewing thin pool details on page 246
- Create — Creating thin pools on page 240
- Modify
- Expand — Expanding thin pools on page 241
- Delete — Deleting thin pools on page 243
- Start Write Balancing — Starting and stopping thin pool write balancing on page 242
- Stop Write Balancing — Starting and stopping thin pool write balancing on page 242
- Bind — Binding/Unbinding/Rebinding thin volumes on page 257

FAST policies
Creating FAST policies
Before you begin
- This feature requires Enginuity 5876.
- The maximum number of policies allowed per storage system is 256.
- Policies must contain either disk group tiers or virtual pool tiers, but not a combination of both disk group and virtual pool tiers.
- Disk group tier policies can contain from one to three tiers.
- Virtual pool tier policies can contain from one to four tiers. Only one of the four tiers can be an external tier.

- Each tier must be unique, and there can be no overlapping disk groups/thin pools.
- The first tier added to a policy determines the type of tier the policy will contain.
- A policy cannot have an empty tier.
- You cannot create blank policies (that is, policies without at least one tier) in Unisphere; however, you can create such policies in Solutions Enabler. The Solutions Enabler Array Controls and Management CLI User Guide contains instructions on creating blank policies. Unisphere does allow you to manage blank policies.
- You cannot add a standard tier to a policy if it will result in a configuration where two tiers share a common disk group.

A FAST policy is a set of one to three DP tiers or one to four VP tiers, but not a combination of both DP and VP tiers. Policies define a limit for each tier in the policy. This limit determines how much data from a storage group associated with the policy is allowed to reside on the tier.

Storage groups are sets of volumes. Storage groups define the volumes used by specific applications. Storage groups are associated with FAST policies, and all of the volumes in the storage group come under FAST control. The FAST controller can move these volumes (or data from the volumes) between tiers in the associated policy. A storage group associated with a FAST policy may contain standard volumes and thin volumes, but the FAST controller will only act on the volumes that match the type of tier contained in the associated policy. For example, if the policy contains thin tiers, then the FAST controller will only act on the thin volumes in the associated storage group.

Procedure
1. Select the storage system.
2. Select STORAGE > FAST Policies.
3. Click Create.
4. Type a Policy Name. Policy names must be unique and cannot exceed 32 characters. Only alphanumeric characters, hyphens ( - ), and underscores ( _ ) are allowed; however, the name cannot start with a hyphen or an underscore.
5. Select the host type.
6. Select the volume Emulation.
7. Select a tier to add to the policy, and then specify a storage group capacity for the tier (% MAX of Storage Group). This value is the maximum amount (%) of the associated storage group's logical capacity that the FAST controller can allocate to the tier. This value must be from 1 to 100. The total capacities for a policy must be equal to or greater than 100.
8. Repeat the previous step for any additional tiers you want to add.
9. Click OK.

Modifying FAST policies
Before you begin
- This feature requires Enginuity 5876.
- Policy names must be unique and cannot exceed 32 characters. Only alphanumeric characters, hyphens ( - ), and underscores ( _ ) are allowed; however, the name cannot start with a hyphen or an underscore.

Procedure
1. Select the storage system.
2. Select STORAGE > FAST Policies.
3. Select a policy and click Modify.
4. Optional: Modify the Policy Name. Policy names must be unique and cannot exceed 32 characters. Only alphanumeric characters, hyphens ( - ), and underscores ( _ ) are allowed; however, the name cannot start with a hyphen or an underscore.
5. Optional: Change the host type.
6. Optional: Change the volume Emulation.
7. Optional: Select a tier to modify for the policy, and then specify a storage group capacity for the tier (% MAX of Storage Group). This value is the maximum amount (%) of the associated storage group's logical capacity that the FAST controller can allocate to the tier. This value must be from 1 to 100. The total capacities for a policy must be equal to or greater than 100.
8. Repeat the previous step for any additional tiers you want to modify.
9. Click OK.

Deleting FAST policies
Before you begin
- This feature requires Enginuity 5876.
- You cannot delete a policy that has one or more storage groups associated with it. To delete such a policy, you must first disassociate the policy from the storage groups.
To delete a FAST Policy:
Procedure
1. Select the storage system.
2. Select STORAGE > FAST Policies.
3. Select the policy and click Delete.
4. Click OK.

Associating FAST policies with storage groups
Before you begin
Storage groups and FAST policies can only be associated under the following conditions:
- The storage system is running Enginuity 5876.
- The target FAST policy needs to have at least one pool that is part of the source policy in a re-association activity.
- The volumes in the new storage group are not already in a storage group associated with a FAST policy.
- The policy has at least one tier.
- The storage group only contains meta heads; meta members are not allowed.
- The storage group does not contain non-moveable volumes.
When a storage group is associated with a policy, you cannot add non-moveable volumes to it. Non-moveable volumes include:

  - CKD EAV
  - DRV
  - SFS
  - iSeries, ICOS, ICL
  - SAVE volumes
  - VDEVs
  - Diskless volumes
- The storage group cannot contain a volume that is part of another storage group already associated with another policy.
- The storage system has fewer than the maximum number of allowed associations (8,192).

The procedure for associating FAST policies and storage groups depends on whether you are associating a storage group with a policy or a policy with a storage group.
To associate a FAST policy with a storage group:
Procedure
1. Select the storage system.
2. Select STORAGE > FAST Policies.
3. Select the policy and click Associate Storage Groups.
4. Select one or more storage groups to be associated with the FAST policy.
5. To have FAST factor the R1 volume statistics into move decisions made for the R2 volume, select Enable FAST VP RDF Coordination.
This attribute can be set on a storage group even when there are no SRDF volumes in the storage group. This feature is only available if the storage system is part of an SRDF setup. Both R1 volumes and R2 volumes need to be running Enginuity version 5876 or higher for the FAST VP system to coordinate the moves.
6. Click OK.

Associating storage groups with FAST policies
Before you begin
Storage groups and FAST policies can only be associated under the following conditions:
- The storage system is running Enginuity 5876.
- The target FAST policy needs to have at least one pool that is part of the source policy in a re-association activity.
- The volumes in the new storage group are not already in a storage group associated with a FAST policy.
- The policy has at least one tier.
- The storage group only contains meta heads; meta members are not allowed.
- The storage group does not contain non-moveable volumes.
When a storage group is associated with a policy, you cannot add non-moveable volumes to it. Non-moveable volumes include:
  - CKD EAV

  - DRV
  - SFS
  - iSeries, ICOS, ICL
  - SAVE volumes
  - VDEVs
  - Diskless volumes
- The storage group cannot contain a volume that is part of another storage group already associated with another policy.
- The storage system has fewer than the maximum number of allowed associations (8,192).

To associate a storage group with a FAST policy:
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.
3. Select the storage group, and select FAST > Associate.
4. Select a policy and click OK.

Disassociating FAST policies and storage groups
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups list view.
3. Select the storage group, and select FAST > Disassociate.
4. Click OK.

Reassociating FAST policies and storage groups
Before you begin
- This feature requires Enginuity 5876.
- The storage group name must be valid.
- The storage group and policy must already exist on the storage system.
- The storage group must be in an association before performing a reassociation.
- The new policy for the storage group must have the same emulation as the storage group. A mixed emulation association will result in an error.
- The storage group cannot be associated with an empty policy, and the reassociated policy must contain at least one tier.
- The total of the capacity percentages for the target FAST policy must add up to at least 100%.
- If the FAST policy contains VP tiers, all of the thin devices in the storage group must be bound to a VP pool in a tier in the policy. None of the thin devices can be bound to a pool outside of the policy.

This procedure explains how to reassociate a storage group with a new policy. When reassociating a storage group, all the current attributes set on the original association

automatically propagate to the new association. This feature eliminates the previous process of disassociating a storage group, then associating the group with a new policy, and re-entering the attributes, such as priority, on the association.
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups list view.
3. Select the storage group, and select FAST > Reassociate.
4. Select a policy and click OK.

Viewing FAST policies
Before you begin
This feature requires Enginuity 5876.
Procedure
1. Select the storage system.
2. Select STORAGE > FAST Policies.
Use the FAST Policies list view to view and manage FAST policies on a storage system.
The following properties display:
- Name — Name of the policy.
- Type — Type of the policy.
- Tier 1 — Storage tier associated with the policy.
- Tier 2 — Storage tier associated with the policy.
- Tier 3 — Storage tier associated with the policy.
- Tier 4 — Storage tier associated with the policy. Up to four tiers are supported only for FAST VP policies; FAST DP policies support up to three tiers.
- Storage Groups — Storage groups associated with the policy.
The following controls are available:
- Viewing FAST policy details on page 172
- Create — Creating FAST policies on page 166
- Modify
- Delete — Deleting FAST policies on page 168
- Associate Storage Group — Associating storage groups with FAST policies on page 169

Viewing storage groups for FAST policies
Procedure
1. Select the storage system.
2. Select STORAGE > FAST Policies.

3. Select the policy and open its Details panel.
4. Click the number next to Storage Groups.
The following properties display:
- Name — Name of the storage group.
- FAST Policy — Policy associated with the storage group.
- Capacity — Total capacity of the storage group in GB.
- Volumes — Number of volumes contained in the storage group.
- Masking Views — Number of masking views associated with the storage group.

Viewing storage groups
Refer to Viewing storage groups on page 135 for information on properties and controls for the storage group.

Viewing FAST policy details
Before you begin
This feature requires Enginuity 5876.
Procedure
1. Select the storage system.
2. Select STORAGE > FAST Policies.
3. Select the policy to open its Tier Demand Report panel or its Details panel.
The Tier Demand Report panel includes graphic representations of the used and free space available for each tier in the policy. In addition, each chart includes markers for the following metrics:
- Max SG Demand — The calculated upper limit for the storage group on the tier.
- Available to FAST — The amount of storage available for FAST operations on the tier.
In the Details panel, the following properties display:
- Name — Name of the policy. To rename the policy, type a new name over the existing one and click Apply. Policy names must be unique and cannot exceed 32 characters. Only alphanumeric characters, hyphens ( - ), and underscores ( _ ) are allowed; however, the name cannot start with a hyphen or an underscore.
- Tier 1 - 3 (for FAST DP) or Tier 1 - 4 (for FAST VP) — Symmetrix tier associated with the policy, followed by the maximum amount (%) of the associated storage group's logical capacity that the FAST controller can allocate to the tier. This value must be from 1 to 100. The total capacities for a policy must be greater than or equal to 100.
- Storage Groups — Number of storage groups. Clicking the number next to Storage Groups opens a view listing the associated storage groups.

- Number of Tiers — Number of tiers. Clicking the number next to Number of Tiers opens a view listing the tiers in the policy.

Viewing FAST storage groups
This feature is only supported on storage systems running Enginuity OS 5876. Up to four tiers are supported only for FAST VP policies; FAST DP policies support up to three tiers.
Procedure
1. Select the storage system.
2. Select SG COMPLIANCE on the dashboard.
3. Select VIEW FAST STORAGE GROUPS.
The following properties display:
- Name — Name of the storage group.
- FAST Policy — FAST policy associated with the storage group.
- Compliant — Icon indicating compliance.
- Tier 1 % — Storage tier percentage associated with the policy.
- Tier 2 % — Storage tier percentage associated with the policy.
- Tier 3 % — Storage tier percentage associated with the policy.
- Tier 4 % — Storage tier percentage associated with the policy.
- Out of Policy % — Out of Policy percentage.

Pinning and unpinning volumes
Before you begin
This feature requires Enginuity 5876.
Pinning volumes prevents any automated process, such as FAST, from moving them. However, you can still migrate a pinned volume with Virtual LUN Migration.
Note: The capacity of pinned volumes is counted for compliance purposes.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Select the volume type by selecting a tab.
4. Select one or more volumes, and select one of the following:
- FAST > Pin — To pin the volumes.
- FAST > Unpin — To unpin the volumes.
5. Click OK.
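The per-tier percentages reported in views like this one are bounded by the limits set when creating or modifying a policy: each tier's % MAX of Storage Group must be from 1 to 100, and the limits across the policy's tiers must total at least 100. A minimal sketch of that validation rule follows (illustrative only; the function name is hypothetical and not a Dell EMC API):

```python
def valid_policy_limits(tier_percentages):
    """Check FAST policy tier limits: each tier's '% MAX of Storage Group'
    must be from 1 to 100, and the total across all tiers in the policy
    must be equal to or greater than 100 (limits may overlap)."""
    if not tier_percentages:
        return False
    if any(p < 1 or p > 100 for p in tier_percentages):
        return False
    return sum(tier_percentages) >= 100

print(valid_policy_limits([50, 30, 40]))  # True  (total 120)
print(valid_policy_limits([50, 30]))      # False (total 80)
print(valid_policy_limits([100]))         # True
```

Because the limits may sum to more than 100, a compliant storage group can have headroom on several tiers at once; the FAST controller decides the actual placement within those bounds.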

Time windows
Understanding time windows
Time windows are used by FAST, FAST VP, and Symmetrix Optimizer to specify when data can be collected for performance analysis and when moves/swaps can execute. There are two types of time windows:
- Performance time windows — Specify when performance samples can be taken for analysis.
- Move time windows — Specify when moves/swaps are allowed to start or not start.
In addition, performance and move time windows can be further defined as open or closed:
- Open — When creating performance time windows, this specifies that the data collected in the time window should be included in the analysis. When creating move time windows, this specifies that moves can start within the time window. This type of time window is also referred to as inclusive.
- Closed — When creating performance time windows, this specifies that the data collected in the time window should be excluded from analysis. When creating move time windows, this specifies that moves cannot start within the time window. This type of time window is also referred to as exclusive.

Creating and modifying time windows
Before you begin
- This feature requires Enginuity OS 5876.
- Time windows are used by FAST and Optimizer. Changes made to FAST time windows may also affect Optimizer.
- The maximum number of time windows that can be defined on a storage system is 128.
Procedure
To create or modify time windows:
1. Select the storage system.
2. Navigate to DASHBOARD > SG COMPLIANCE.
3. From within the FAST Status Report panel, select FAST VP or FAST DP (if the storage system is licensed for both FAST DP and FAST VP) for which the time window will apply.
4. Click the icon next to the type of time window you want to create or modify. Depending on your selection, either the Performance Time Window or the Move Time Window dialog opens.
5. If you are creating or modifying an open time window, select the day(s) of the week in which to define the time window and click ADD.
6. Select one of the following options:
- Always open — Creates a single open time window for the entire week (Sunday to Saturday).

- All weekend (Fri 18:00 - Mon 00:00) — Creates a single open time window for the weekend (18:00 Friday to 00:00 Monday).
- 9:00-17:00, Monday-Friday — Creates five time windows, one for each day of the work week.
- 17:00-8:00, Monday-Friday — Creates five time windows, one for each night of the work week.
- Custom — Allows you to define your own time window.
7. Click OK.
8. If you are creating or modifying a closed time window, select the Start Time checkbox and click ADD.
9. Select the start date and time and the end date and time, and click OK.
10. Define the following parameters:
- Workload Analysis Period — Specifies the amount of workload sampling to maintain for sample analysis. Possible values are specified in units of time (hours, days, or weeks) and can range from 2 hours to 4 weeks, with the default being one week.
- Time to Sample before First Analysis — Specifies the minimum amount of workload sampling to complete before analyzing the samples for the first time. When setting this parameter, be sure to allow enough time (usually a week) to establish a good characterization of the typical workload. This parameter allows you to begin operations before the entire Workload Analysis Period has elapsed. Possible values range from 2 hours to the value specified for the Workload Analysis Period parameter, with the default being eight hours.
11. Click SAVE.

Deleting time windows
Before you begin
Time windows are used by FAST.
Procedure
To delete time windows:
1. Select the storage system.
2. Navigate to DASHBOARD > SG COMPLIANCE.
3. From within the FAST Status Report panel, select FAST VP or FAST DP (if the storage system is licensed for both FAST DP and FAST VP) for which the time window will apply.
4. Click the icon next to the type of time window you want to delete. Depending on your selection, either the Performance Time Window or the Move Time Window dialog opens.
5. If you are deleting an open time window, select the day(s) and click REMOVE.
6. If you are deleting a closed time window, select the Start Time checkbox and click REMOVE.
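The open/closed (inclusive/exclusive) semantics described under Understanding time windows can be modeled in a few lines. This is an illustrative sketch only, with hypothetical names, and is not how the FAST controller is implemented; it assumes a move may start only inside an open window and never inside a closed one, with closed windows taking precedence:

```python
from datetime import datetime

def move_may_start(start, open_windows, closed_windows):
    """Decide whether a move/swap may START at time 'start'.
    Open (inclusive) windows permit moves to start within them; closed
    (exclusive) windows forbid starts within them and take precedence.
    Each window is a (begin, end) pair of datetimes."""
    def inside(t, windows):
        return any(begin <= t < end for begin, end in windows)
    if inside(start, closed_windows):
        return False
    return inside(start, open_windows)

# Hypothetical example: an open weekend window with a closed slot inside it.
open_w = [(datetime(2018, 5, 4, 18), datetime(2018, 5, 7, 0))]
closed_w = [(datetime(2018, 5, 5, 9), datetime(2018, 5, 5, 12))]
print(move_may_start(datetime(2018, 5, 5, 8), open_w, closed_w))   # True
print(move_may_start(datetime(2018, 5, 5, 10), open_w, closed_w))  # False
```

Note that the windows only gate when a move may start; a move already in progress is not stopped when its window closes.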

FAST Movement Time Window dialog box
Use this dialog box to manage movement time windows, including the following tasks:
- Creating and modifying time windows on page 174
- Deleting time windows on page 175

FAST Performance Time Window dialog box
Use this dialog box to manage performance time windows, including the following tasks:
- Creating and modifying time windows on page 174
- Deleting time windows on page 175

Manage Closed Movement Time Windows dialog box
Use this dialog box to manage closed movement time windows, including the following tasks:
- Creating and modifying time windows on page 174
- Deleting time windows on page 175

Manage Closed Performance Time Windows dialog box
Use this dialog box to manage closed performance time windows, including the following tasks:
- Creating and modifying time windows on page 174
- Deleting time windows on page 175

Manage Open Movement Time Windows dialog box
Use this dialog box to manage open movement time windows, including the following tasks:
- Creating and modifying time windows on page 174
- Deleting time windows on page 175

Manage Open Performance Time Windows dialog box
Use this dialog box to manage open performance time windows, including the following tasks:
- Creating and modifying time windows on page 174
- Deleting time windows on page 175

Understanding Workload Planner
Workload Planner is a FAST component used to display performance metrics for applications and to model the impact of migrating the workload from one storage system to another. Workload Planner is supported on storage systems running Enginuity 5876 or HYPERMAX OS 5977.
For storage groups to be eligible for Workload Planning, they must meet the following criteria:

- On a locally attached storage system registered for performance. See Registering storage systems on page 592 for instructions on registering storage systems.
- Belong to only one masking view.
- Under FAST control:
  - For storage systems running HYPERMAX OS 5977, they must be associated with a service level.
  - For storage systems running Enginuity 5876, they must be associated with a FAST policy.
- Contain only FBA volumes.
In addition, the Unisphere server must be on an open systems host.

Deleting a reference workload
This dialog allows you to delete a reference workload. Click OK to confirm.

Resetting Workload Plan
Before you begin
To perform this operation, a StorageAdmin role is required. Resetting the workload plan requires one week of data.
This procedure explains how to set the performance baseline expectations of a storage group to the characteristics currently measured for the previous two weeks.
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.
3. Select the storage group and click the Compliance icon to open its details view.
4. If not already displayed, click the Compliance tab.
5. Click Reset Workload Plan.
6. Review the Current Scores and the projected New Baseline.
7. If satisfied, click OK.
Results
Once complete, the Workload Planning tab updates with the newly calculated performance metrics.

Managing volumes
For storage systems running HYPERMAX OS 5977 or higher, the Volumes view provides you with a single place from which to view and manage all the volume types on the system.
Note: For instructions on managing volumes on storage systems running Enginuity 5876, refer to Managing volumes on page 178. To view volumes associated with a host initiator, refer to Viewing volumes associated with host initiator on page 315.
To use the Volumes view:

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes to open the Volumes list view.
For field and control descriptions, refer to the following volume-specific help pages:
- TDEV — Viewing thin volumes on page 223
- DATA — Viewing DATA volumes on page 239
- CKD — Viewing CKD volumes on page 92

Managing volumes
For storage systems running Enginuity OS version 5876, the Volumes view provides you with a single place from which to view and manage all the volume types on the system.
Note: For instructions on managing volumes on storage systems running HYPERMAX OS 5977, refer to Managing volumes on page 177.
To use the Volumes view:
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes to open the Volumes view.
The Regular Volumes list view is displayed by default (see Viewing regular volumes on page 217).
Click the Virtual tab to see the Virtual Volumes list view (see Viewing virtual volumes on page 224).
Click the Meta tab to see the Meta Volumes list view (see Viewing meta volumes on page 208).
Click the Private tab to see the Private Volumes list view (see Viewing private volumes on page 215).

Creating volumes
This procedure explains how to create volumes.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click Create to open the Create Volume dialog box.
3. Do the following, depending on the storage operating environment and the type of volumes you are creating:
- HYPERMAX OS 5977 or higher:
  - TDEV — Creating thin volumes on page 185
  - CKD — Creating CKD volumes on page 330

  - Virtual Gatekeeper — Creating virtual gatekeeper volumes on page 186
Note: The maximum volume size supported on a storage system running HYPERMAX OS 5977 or higher is 64 TB.
- Enginuity 5876:
  - DATA — Creating DATA volumes on page 179
  - Diskless — Creating diskless volumes on page 180
  - DRV — Creating DRV volumes on page 181
  - Gatekeeper — Creating gatekeeper volumes on page 181
  - Regular — Creating regular volumes on page 182
  - SAVE — Creating SAVE volumes on page 183
  - TDEV — Creating thin volumes on page 184
  - VDEV — Creating VDEV volumes on page 187
In addition, you can also create volumes using a storage template.

Creating DATA volumes
This procedure explains how to create DATA volumes on storage systems running Enginuity version 5876.
Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the thin pool to open its Details view.
4. Click the number next to Number of Data Volumes.
5. Click Create Volumes.
6. Select DATA as the Configuration.
7. Select the Disk Technology. External disk technology is an option if the storage system has FTS (Federated Tiered Storage) enabled and available external storage.
8. Select the Emulation type.
9. Select the RAID Protection level.
10. Specify the capacity by typing the Number of Volumes and selecting a Volume Capacity. You can also manually enter a volume capacity.
11. To add the new volumes to a specific thin pool, select one from Add to Pool. Pools listed are filtered on technology, emulation, and protection type.
12. Click Advanced Options to continue setting the advanced options, as described next.
The advanced options presented depend on the value selected for Add to Pool. Complete any of the following steps that are appropriate:
a. Select the Disk Group (number and name) in which to create the volumes. The list of disk groups is already filtered based on the technology type selected above.

b. To enable the new volumes in the pool, select Enable volume in pool.
c. To rebalance allocated capacity across all the DATA volumes in the pool, select Start Write Balancing.
d. Click APPLY.
13. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Creating private volumes
The following private volumes can be created:
- Creating DATA volumes on page 179
- Creating diskless volumes on page 180
- Creating DRV volumes on page 181
- Creating gatekeeper volumes on page 181
- Creating SAVE volumes on page 183

Creating diskless volumes
This procedure explains how to create diskless volumes on storage systems running Enginuity version 5876.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Virtual tab.
3. Filter on Type and select DLDEV.
4. Click Create.
5. Select the Configuration type.
6. Select the Emulation type.
7. Specify the capacity by typing the Number of Volumes and selecting a Volume Capacity. You can also manually enter the volume capacity.
8. To add the new volumes, select one from Add to Pool.
9. Click Advanced Options to continue setting the advanced options, as described next.
Setting advanced options:
a. Modify the Volume Identifier.
b. To assign Dynamic Capability to the volumes, select one of the following; otherwise, leave this field set to None:
- RDF1_Capable — Creates a dynamic R1 RDF volume.
- RDF2_Capable — Creates a dynamic R2 RDF volume.
- RDF1_OR_RDF2_Capable — Creates a dynamic R1 or R2 RDF volume.
The Define Meta panel only displays when attempting to create a volume larger than the value specified in the Minimum Auto Meta Size.

181 Storage Management c. If Auto Meta is enabled on the system, and if you are attempting to create , specify values for the volumes larger than the Minimum Meta Capacity panel: following in the Define Meta l — Size of the meta members to use Member capacity (Cyl/MB/GB) when creating the meta volumes. l Configuration (Striped/Concatenated) — Whether to create striped or concatenated meta volumes. 10. Do one of the following: l to add this task to the job list, from which you can Click Add to Job List schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920. l Expand Run Now to perform the operation now. Add to Job List , and click Creating DRV volumes This procedure explains how to create DRV volumes on storage systems running Enginuity version 5876. Procedure 1. Select the storage system. > Volumes and click the Private tab. STORAGE 2. Select . 3. Filter on Type and select DRV 4. Select DRV as the Configuration . Create . 5. Click Configuration 6. Select the type. type. Emulation 7. Select the Number of Volumes , and selecting a 8. Specify the capacity by typing the Volume Capacity . You can also manually enter a volume capacity. Add to Pool . 9. To add the new volumes, select one from to continue setting the advanced options, as described 10. Click Advanced Options next. Setting Advanced options: To create the volumes from a specific disk group, select one (disk group number and name) from Disk Group . If Auto meta is enabled on the system then it displays as enabled with a green check mark. 11. Do one of the following: l to add this task to the job list, from which you can Add to Job List Click schedule or run the task at your convenience. For more information, refer to Scheduling jobs Previewing jobs on page 920. on page 920 and l Add to Job List , and click Run Now to perform the operation now. 
Creating gatekeeper volumes
This procedure explains how to create gatekeeper volumes on storage systems running Enginuity version 5876.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Regular panel.
3. Select the volume and click Create.
4. Select Gatekeeper as the Configuration.
5. Select the Emulation type.
6. Type the Number of Volumes to create.
7. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Creating regular volumes
This procedure explains how to create regular volumes on storage systems running Enginuity version 5876.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Regular panel.
3. Click Create.
4. Select the Configuration.
5. Select the Disk Technology. External disk technology is an option if the storage system has FTS (Federated Tiered Storage) enabled and available external storage.
6. Select the Emulation type.
7. Select the RAID Protection level.
8. Specify the capacity to create by typing the Number of Volumes, and selecting a Volume Capacity. You can also manually enter a volume capacity.
9. Click Advanced Options to continue setting the advanced options, as described next.
Setting Advanced options:
a. z/OS Only: Type the SSID for the new volume, or click Select... to open a dialog from which you can select an SSID. This is required for volumes on storage systems with ESCON or FICON directors (or mixed systems).
b. To create the volumes from a specific Disk Group, select one (disk group number and name).
c. To name the new volumes, select one of the following Volume Identifiers and type a Name:
- None — Allows the system to name the volumes (Default).
- Name Only — All volumes will have the same name.
- Name + VolumeID — All volumes will have the same name with a unique storage system volume ID appended to them. When using this option, the maximum number of characters allowed is 50.

- Name + Append Number — All volumes will have the same name with a unique decimal suffix appended to them. The suffix will start with the value specified for the Append Number and increment by 1 for each additional volume. Valid Append Numbers must be from 0 to 1000000. When using this option, the maximum number of characters allowed is 50.
For more information on naming volumes, refer to Setting volume names on page 196.
d. To assign Dynamic Capability to the volumes, select one of the following; otherwise, leave this field set to None.
- RDF1_Capable — Creates a dynamic R1 RDF volume.
- RDF2_Capable — Creates a dynamic R2 RDF volume.
- RDF1_OR_RDF2_Capable — Creates a dynamic R1 or R2 RDF volume.
e. If Auto Meta is enabled on the system, and if you are attempting to create volumes larger than the Minimum Meta Capacity, specify values for the following in the Define Meta panel:
- Member capacity (Cyl/MB/GB) — Size of the meta members to use when creating the meta volumes.
- Configuration (Striped/Concatenated) — Whether to create striped or concatenated meta volumes.
10. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Creating SAVE volumes
This procedure explains how to create SAVE volumes on storage systems running Enginuity version 5876.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Private tab.
3. Filter on Type and select SAVE.
4. Select SAVE as the Configuration.
5. Select the Disk Technology.
6. External disk technology is an option if the Symmetrix system has FTS (Federated Tiered Storage) enabled and available external storage.
7. Select the Emulation type.
8. Select the RAID Protection level.
9. Specify the capacity by typing the Number of Volumes, and selecting a Volume Capacity. You can also manually enter a volume capacity.
10. To add the new volumes to a specific pool, select one from Add to pool. SNAP and SRDF/A DSE pools listed are filtered on the technology, emulation, and protection type selected above.

11. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
- Click Advanced Options to continue setting the advanced options, as described next. If Auto Meta is enabled on the system, it displays as enabled with a green check mark.
Setting Advanced options:
a. Select the Disk Group (number and name) in which to create the volumes. The list of disk groups is already filtered based on the technology type selected above.
b. To enable the new volumes in the pool, select Enable volume in pool. If Auto Meta is enabled on the system, it displays as enabled with a green check mark.
c. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Creating thin volumes
This procedure explains how to create thin volumes on storage systems running Enginuity version 5876. For instructions on creating thin volumes on storage systems running HYPERMAX OS 5977 or higher, refer to Creating thin volumes on page 185.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes, click the Virtual tab, and select Create.
3. Select the Configuration (TDEV or BCV + TDEV or Virtual Gatekeeper).
4. Select the Emulation type.
5. Specify the capacity by typing the Number of Volumes, and selecting a Volume Capacity. You can also manually enter a volume capacity.
6. To bind the new volumes to a specific thin pool, select one from Bind to Pool. Only thin pools with enabled DATA volumes and matching emulation are available for binding (except AS/400, which will bind to an FBA pool).
7. Click Advanced Options to continue setting the advanced options.
Setting Advanced options:
a. To name the new volumes, select one of the following Volume Identifiers and type a Name:
- None — Allows the system to name the volumes (Default).
- Name Only — All volumes will have the same name.

- Name + VolumeID — All volumes will have the same name with a unique Symmetrix volume ID appended to them. When using this option, the maximum number of characters allowed is 50.
- Name + Append Number — All volumes will have the same name with a unique decimal suffix appended to them. The suffix will start with the value specified for the Append Number and increment by 1 for each additional volume. Valid Append Numbers must be from 0 to 1000000. When using this option, the maximum number of characters allowed is 50.
For more information on naming volumes, refer to Setting volume names on page 196.
b. To allocate the full volume capacity, select Allocate Full Volume Capacity.
c. If you selected to allocate capacity in the previous step, you can mark the allocation as persistent by selecting Persist preallocated capacity through reclaim or copy. Persistent allocations are unaffected by standard reclaim operations and any TimeFinder/Clone, TimeFinder/Snap, or SRDF copy operations.
d. To assign Dynamic Capability to the volumes, select one of the following; otherwise, leave this field set to None.
- RDF1_Capable — Creates a dynamic R1 RDF volume.
- RDF2_Capable — Creates a dynamic R2 RDF volume.
- RDF1_OR_RDF2_Capable — Creates a dynamic R1 or R2 RDF volume.
e. If Auto Meta is enabled on the system, and if you are attempting to create volumes larger than the Minimum Meta Capacity, specify values for the following in the Define Meta panel:
- Member capacity (Cyl/MB/GB) — Size of the meta members to use when creating the meta volumes.
- Configuration (Striped/Concatenated) — Whether to create striped or concatenated meta volumes.
8. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
- Click Advanced Options to continue setting the advanced options, as described next.

Creating thin volumes
This procedure explains how to create thin volumes on storage systems running HYPERMAX OS 5977. For instructions on creating thin volumes on storage systems running Enginuity 5876, refer to Creating thin volumes on page 184.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click Create to open the Create Volume dialog box.
3. Select TDEV as the Configuration.

4. Select the Emulation type.
5. Specify the capacity by typing the Number of Volumes, and selecting a Volume Capacity. You can also manually enter a volume capacity.
6. Optional: To add the volumes to a storage group, click Select, select the storage group, and then click OK.
7. Click Advanced Options to set the advanced options:
- Optional: Click the Enable Mobility ID checkbox to assign Mobility IDs to the volumes. If you leave the checkbox unchecked, a Compatibility ID will be assigned to the volume instead.
- If creating thin volumes or thin BCVs, you can specify to Allocate Full Volume Capacity. In addition, you can mark the preallocation on the thin volume as persistent by selecting Persist preallocated capacity through reclaim or copy. Persistent allocations are unaffected by standard reclaim operations.
- To name the new volumes, select one of the following Volume Identifiers and type a Name:
  - None — Allows the system to name the volumes (Default).
  - Name Only — All volumes will have the same name.
  - Name + VolumeID — All volumes will have the same name with a unique Symmetrix volume ID appended to them. When using this option, the maximum number of characters allowed is 50.
  - Name + Append Number — All volumes will have the same name with a unique decimal suffix appended to them. The suffix will start with the value specified for the Append Number and increment by 1 for each additional volume. Valid Append Numbers must be from 0 to 1000000. When using this option, the maximum number of characters allowed is 50.
For more information on naming volumes, refer to Setting volume names on page 196.
8. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
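The same create operation is also scriptable: Unisphere exposes a REST API alongside the dialog described above. The field names in the sketch below are assumptions for illustration only and are not taken from this help text; consult the Unisphere REST API documentation for the real schema. The sketch only shows the general shape of a TDEV creation request body:

```python
# Hypothetical sketch of a request body behind the Create Volume dialog.
# Field names (num_of_vols, volume_size, capacityUnit, ...) are assumptions.
def build_create_tdev_payload(num_of_vols, capacity, cap_unit="GB",
                              emulation="FBA", volume_name=None):
    if cap_unit not in ("CYL", "MB", "GB", "TB"):
        raise ValueError("unsupported capacity unit")
    if num_of_vols < 1:
        raise ValueError("must create at least one volume")
    payload = {
        "num_of_vols": num_of_vols,       # step "Number of Volumes"
        "volume_size": str(capacity),     # step "Volume Capacity"
        "capacityUnit": cap_unit,
        "emulation": emulation,           # step "Emulation type"
        "config": "TDEV",                 # step 3 above: TDEV Configuration
    }
    if volume_name:
        # mirrors the "Name Only" volume identifier option
        payload["volume_name"] = volume_name
    return payload

print(build_create_tdev_payload(2, 10)["config"])  # -> TDEV
```

Such a payload would then be POSTed to the array's provisioning endpoint with an authenticated HTTP client.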
Creating virtual gatekeeper volumes
Before you begin
The storage system must be running HYPERMAX OS 5977 or higher.
This procedure explains how to create virtual gatekeeper volumes.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click Create to open the Create Volume dialog box.
3. Select Virtual Gatekeeper as the Configuration.

4. Optional: Select the Emulation type.
5. Type the Number of Volumes.
6. Optional: To add the volumes to a storage group, select the storage group and then click OK.
7. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Creating VDEV volumes
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Virtual tab.
3. Filter on Type and select VDEV.
4. Select VDEV as the Configuration.
5. Select the Emulation type.
6. Specify the capacity by typing the Number of Volumes, and selecting a Volume Capacity. If Auto Meta is enabled on the system, it displays as enabled with a green check mark.
7. z/OS Only: Type the SSID for the new volume, or click Select... to open a dialog from which you can select an SSID. This is required for volumes on storage systems with ESCON or FICON directors (or mixed systems).
8. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
- Click Advanced Options to continue setting the advanced options, as described next.
Setting Advanced options:
a. View the Enable SCSI3 Persistent Reservation status — For Enginuity 5876 and higher this feature is pre-set by SYMAPI and cannot be changed. It is displayed as enabled for Enginuity 5876 and higher, except for CDK and AS/400 emulations.
b. If Auto Meta is enabled for the system, and if you are attempting to create volumes larger than the Minimum Meta Capacity, specify values for the following in the Define Meta panel:
- Member capacity (Cyl/MB/GB) — Size of the meta members to use when creating the meta volumes.
- Configuration (Striped/Concatenated) — Whether to create striped or concatenated meta volumes.
c. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List and click Run Now to create the volumes now.

Select Storage Group
Use this dialog box to select a storage group for the operation.

Deleting volumes
This procedure explains how to delete volumes.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Navigate to the volume that you wish to delete.
4. Select the volume and click Delete.

Duplicating volumes
Before you begin
You cannot duplicate RDF, SFS, or VAULT volumes. If you are duplicating a thin volume that is bound to a pool, the newly created volumes will be bound to the same pool. If you are duplicating a DATA volume that is part of a pool, the newly created DATA volumes will be part of the same pool. The initial state of the volume will be DISABLED.
The following explains how to duplicate volumes.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Navigate to the volume that you wish to duplicate.
4. Select the volume, click , and click Configuration > Duplicate Volume.
5. Type the Number of Volumes (duplicates) to make.
6. z/OS Only: You can optionally change the SSID number for the new volumes by typing a new value, or clicking Select... to open a dialog from which you can select an SSID. By default, this field displays the SSID of the volume you are copying.
7. Click Advanced Options to continue setting the advanced options. To name the new volumes, select one of the following Volume Identifiers and type a Name:
- None — Allows the system to name the volumes (Default).
- Name Only — All volumes will have the same name.

- Name + VolumeID — All volumes will have the same name with a unique Symmetrix volume ID appended to them. When using this option, the maximum number of characters allowed is 50.
- Name + Append Number — All volumes will have the same name with a unique decimal suffix appended to them. The suffix will start with the value specified for the Append Number and increment by 1 for each additional volume. Valid Append Numbers must be from 0 to 1000000. When using this option, the maximum number of characters allowed is 50.
For more information on naming volumes, refer to Setting volume names on page 196.
8. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List and click Run Now to create the volumes now.

Assigning array priority to individual volumes
Before you begin
This feature requires Enginuity 5876.
This procedure explains how to prioritize the service time of the host I/O to an individual volume. To prioritize the service time of the host I/O to groups of volumes (device groups or storage groups), refer to Assigning array priority to groups of volumes on page 189.
To assign host priority to individual volumes:
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Click on the appropriate volume panel.
4. Select the volume, click , and select Assign Symmetrix Priority.
5. Select an array priority from 1 (the fastest) to 16 (the slowest) and click OK.

Assigning array priority to groups of volumes
Before you begin
This feature requires Enginuity 5876.
This procedure explains how to prioritize the service time of the host I/O to groups of volumes (device groups or storage groups).
Procedure
1. Select the storage system.
2. Do one of the following:
- To assign priority to storage groups:
  - Select STORAGE > Storage Groups to open the Storage Groups list view.

  - Select the storage group, click , and select Assign Symmetrix Priority to open the Assign Symmetrix Priority dialog box.
- To assign priority to device groups:
  - Select DATA PROTECTION > Device Groups to open the Device Groups list view.
  - Select the device group, click , and select Assign Symmetrix Priority to open the Assign Symmetrix Priority dialog box.
3. Select an array priority from 1 (the fastest) to 16 (the slowest) and click OK.
4. Click OK.

Changing volume configuration
Before you begin
- On storage systems running Enginuity 5876 or higher, you cannot increase or decrease the mirror protection of a volume.
- When adding DRV attributes, volumes must be unmapped.
- Full swap operations require the R1 and R2 devices to be the same size.
- Only the head of a metavolume can have its type changed. The metamembers will automatically have the changes applied.
- You cannot convert one member of a RAID set to unprotected without converting all the members to unprotected.
- When adding/removing SRDF attributes, there are no restrictions on I/O. The SRDF pair must be split or failed over. If failed over, the R1 device must be unmapped.
- When adding/removing BCV attributes, there are no restrictions on I/O. The standard/BCV pair must be split.
This procedure explains how to change a volume's configuration.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Navigate to the volume.
4. Select the volume, click , and click Configuration > Change Volume Configuration.
5. Select a New Configuration for the selected volumes. Only valid configurations are listed. The remaining fields in the dialog box are active or inactive depending on the configuration type.
6. z/OS Only: Type the SSID for the new volume created by removing a mirror, or click Select... to open a dialog from which you can select an SSID. This is required for volumes on storage systems with ESCON or FICON directors (or mixed systems). This field is optional on storage systems running Enginuity 5876 or higher when reducing the number of mirrors.
7. Do one of the following:

- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List and click Run Now to run the job now.

Expanding existing volumes
Before you begin
- Requires HYPERMAX OS 5977 or later (HYPERMAX OS 5977.1125.1125 or later for CKD volumes).
- You must be logged in as an Administrator.
- You can expand a volume up to 64 TB (for FBA volumes) or 1,182,006 cylinders (for CKD volumes).
- When expanding a CKD volume above 565,250 cylinders, the new size must be a multiple of 1113 cylinders. If you specify an amount that is not a multiple, the system rounds it up.
- Consider consulting with your operating system vendor or cluster vendor for support of online LUN expansion.
- You cannot expand an FBA volume when any of the following operations are in progress:
  - Free all
  - Reclaim
  - Deallocation
- Restrictions apply when a volume:
  - is a gatekeeper
  - is an ACLX volume
  - is Celerra FBA
  - is AS400
  - is VP encapsulated
  - is part of a defined SnapVX session
  - is being replicated
  - is part of an SRDF pair
  - is part of an ORS session
  - is a TDAT
- For CKD volumes, you cannot expand a volume that is:
  - A CKD 3380 volume
  - Marked as Soft Fenced
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes, click , and click Expand Volume. The Expand Volume dialog box appears.
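The CKD sizing rules in the restrictions above are mechanical enough to check before typing a new capacity. A minimal sketch of the documented rounding, as a hypothetical helper that is not part of Unisphere:

```python
def ckd_expansion_size(requested_cyl):
    """Apply the documented CKD expansion rules: at most 1,182,006
    cylinders, and sizes above 565,250 cylinders are rounded up to
    the next multiple of 1113 cylinders (illustrative only)."""
    MAX_CYL, THRESHOLD, MULTIPLE = 1_182_006, 565_250, 1113
    if requested_cyl > MAX_CYL:
        raise ValueError("exceeds the 1,182,006-cylinder CKD maximum")
    if requested_cyl <= THRESHOLD:
        return requested_cyl
    # integer ceiling division, then back to a multiple of 1113
    return (requested_cyl + MULTIPLE - 1) // MULTIPLE * MULTIPLE

print(ckd_expansion_size(600_000))  # -> 601020 (540 x 1113, rounded up)
```

Note that the 1,182,006-cylinder maximum is itself a multiple of 1113 (1062 x 1113), so rounding up never pushes a legal request over the limit.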

3. In the Expand Volume dialog box, type or select the new capacity of the volume in the Volume Capacity field. The Additional Capacity and Total Capacity figures update automatically. SRDF storage group volume capacity can be expanded using the controls. In the case of SRDF storage group volumes, you need to specify an SRDF group number so that the dialog allowing you to expand remote volumes can also be displayed (see Expanding remote volumes on page 511).
4. To reserve the volume, select Reserve Volumes.
5. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Mapping volumes
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Navigate to the volume.
4. Select the volume, click , and click Configuration > Map.
5. Select one or more Ports.
Note: When performing this operation on storage systems running HYPERMAX OS 5977 or higher, only ACLX-disabled ports will be available for selection.
6. To reserve the volumes, select Reserve Volumes. In addition, you can also type reserve Comments and select an Expiration. The default values for Reserve Volumes and Comments are set in the Symmetrix preferences for volume reservations. If the volumes are not automatically reserved, you can optionally reserve them here.
7. Click Next.
8. To change an automatically generated LUN address, do the following; otherwise, click Next to accept the generated address.
a. Double-click the address to open the Set Dynamic LUN Address dialog box.
b. To use a new Starting LUN, double-click it and type a new address over it, or select an address and click Next Available LUN to increment the generated address to the next available address. When done, click Apply Starting LUN.
c. Click OK to return to the mapping wizard.
d. Click Next.

9. Verify your selections in the Summary page. To change any of your selections, click Back. Note that some changes may require you to make additional changes to your configuration.
10. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Unmapping volumes
This procedure explains how to unmap volumes.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Navigate to the volume.
4. Select the volume, click , and click Configuration > Unmap.
5. Select one or more ports.
6. To reserve the volumes, select Reserve Volumes. In addition, you can also type reserve Comments and select an Expiration. The default values for Reserve Volumes and Comments are set in Setting preferences on page 49 for volume reservations. If the volumes are not automatically reserved, you can optionally reserve them here.
7. Click Next.
8. Verify your selections in the Summary page. To change any of your selections, click Back. Note that some changes may require you to make additional changes to your configuration.
9. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Setting optimized read miss
Before you begin
The optimized read miss feature is supported only for EFD volumes with FBA or AS400 D910 emulation attached to an XtremSW Cache Adapter. However, starting with Enginuity 5876.280, you can use optimized read miss without an XtremSW Cache Adapter. To use optimized read miss without the adapter, you must set the Optimized Read Miss mode to On.
The optimized read miss feature reduces I/O processing overhead of read miss operations for both DA and DX emulations. The feature is supported on storage systems running Enginuity 5876.163.105 or higher. This feature is not supported on storage systems running HYPERMAX OS 5977 or higher.
This procedure explains how to set the optimized read miss feature at the volume level. You can also perform this operation at the storage group or the device group level.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Navigate to the volume.
4. Select the volume, click , and select Set Optimized Read Miss.
5. Select a Set Optimized Read Miss mode:
- System Default — Storage system determines whether to enable or disable optimized read miss mode for the specified volumes/group.
- Off — Disables optimized read miss mode, regardless of the configuration.
- On — Enables optimized read miss mode for both XtremCache and non-XtremCache EFD-only configurations.
6. Click OK.

Setting volume status
Before you begin
You cannot set the status of an unbound thin volume.
To set volume status for individual volumes:
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Navigate to the volume.
4. Select the volume, click , and click Set Volumes > Status.
5. Set the volume status. Possible values are:
- Read/Write Enable — Changes the write-protect status of the volumes to be read and write enabled on the specified director port(s) for any locally attached hosts.
- Write Disable — Changes the write-protect status of the volumes to be write disabled on the specified director ports for any locally attached hosts. This option will only work on volumes that are in a write enabled state.
- Device Ready — Changes the User Ready status of the volumes to Ready.
- Device Not Ready — Changes the User Ready status of the volumes to Not Ready.
- Hold — Causes the Hold bit to be placed on a volume. The Hold bit is automatically placed on the target volume of a Snap session.
- Unhold — Causes the Hold bit to be removed from a volume. The Hold bit is automatically removed from the target volume of a snap session when the snap session is removed.
6. Optional: For HYPERMAX OS 5977 or higher, select SRDF/Metro.
7. Optional: To force the operation when the operation would normally be rejected, select SymForce, if available.

8. If the selected volumes are mapped, you can select to change the status for a particular Director or all directors.
9. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Setting volume attributes
Before you begin
You cannot set attributes for DATA volumes. Setting attributes for CKD volumes is not supported. If attempting to set attributes for multiple volumes of type FBA and CKD, a warning is displayed stating that the action will be applied only to FBA volumes.
Setting the volume attribute for a volume restricts how it can be accessed.
To set volume attributes:
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Navigate to the volume.
4. Select a volume, click , and click Set Volumes > Attribute.
5. Set any number of the following attributes. Note that the attributes available depend on the type of selected volumes.
- Emulation — Sets the emulation type for the volumes. The default is No Change. This option will appear dimmed for masked/mapped volumes, as this feature is not supported on masked/mapped volumes. This feature only applies/appears for storage systems running Enginuity 5876.
- Dynamic RDF Capability — Sets the volume to perform dynamic RDF operations. This feature only applies/appears for storage systems running Enginuity 5876. Possible operations are:
  - No Change — Keeps the RDF capability the same.
  - RDF1 or RDF2 Capable — Allows the volume to be R1 or R2 (RDF swaps allowed). Select this attribute to create an R21 volume used in a Cascaded RDF operation.
  - RDF1 Capable — Allows the volume to be an R1 (no RDF swaps).
  - RDF2 Capable — Allows the volume to be an R2 (no RDF swaps).
- SCSI3 Persistent Reservation — This can be set to enabled or disabled. Maintains any reservations (flags) whether the system goes online or offline. This field will appear dimmed for diskless volumes.
6. Do one of the following:

196 Storage Management l to add this task to the job list, from which you can Click Add to Job List schedule or run the task at your convenience. For more information, refer to Previewing jobs Scheduling jobs on page 920. on page 920 and l Run Now to perform the operation now. Expand Add to Job List , and click Setting volume identifiers This operation can be invoked from multiple locations in the Unisphere user interface. Depending on where the operation is invoked, some of the steps below may not apply. Procedure 1. Select the storage system. > Volumes 2. Select STORAGE . 3. Navigate to the volume. 4. Set Volumes > Identifier . , and click Select the volume, click 5. Set the volume identifiers: l Type the Volume Identifier Name . Volume identifier names must be unique from other volumes on the Symmetrix system and cannot exceed 64 characters. Only alphanumeric characters and underscores ( _ ) are allowed. l Type the Volume HP Identifier Name . HP identifier names must be a user- defined volume name (not to exceed 64 alpha-numeric characters and underscores ( _ ) ) applicable to HP-mapped volumes. This value is mutually exclusive of the VMS ID. This attribute will appear grayed out for diskless volumes. l . VMS identifier names must be a Volume VMS Identifier Name Type the numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID. This attribute will appear grayed out for diskless volumes. 6. Do one of the following: l Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920. l Add to Job List , and click Run Now to perform the operation now. Expand Setting volume names When creating or duplicating volumes; or creating or expanding storage groups, you can optionally name the new volumes. 
When naming volumes, be aware of the following:
- Volume names cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and periods (.) are allowed.
- Volume names plus an optional suffix cannot exceed 64 characters. If using a numerical suffix, the volume name (prefix) cannot exceed 50 characters and the trailing numerical suffix cannot exceed 14 characters. If not using a numerical suffix, all 64 characters can be specified for the volume name. The maximum starting suffix is 1000000.
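The length rules above are easy to get wrong when scripting bulk volume creation. As an illustrative sketch only (this helper is not part of Unisphere or Solutions Enabler), they can be encoded like this:

```python
import re

# Illustrative sketch only -- this helper is not part of Unisphere or
# Solutions Enabler; it simply encodes the naming rules stated above.
def valid_volume_name(prefix, suffix=None):
    """Return True if the proposed name (plus optional numeric suffix) is legal."""
    if not re.fullmatch(r"[A-Za-z0-9_.]+", prefix):
        return False                          # alphanumerics, _ and . only
    if suffix is None:
        return len(prefix) <= 64              # all 64 characters available
    # With a numerical suffix: prefix <= 50 chars, suffix <= 14 digits,
    # and the combined name must still fit in 64 characters.
    return (str(suffix).isdigit()
            and len(prefix) <= 50
            and len(str(suffix)) <= 14
            and len(prefix) + len(str(suffix)) <= 64)

print(valid_volume_name("app_data.vol"))      # True
print(valid_volume_name("a" * 51, "001"))     # False: prefix exceeds 50 chars
```

A pre-check like this lets a provisioning script reject bad names before submitting a job, rather than waiting for the wizard to fail.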

- This feature is not supported for the following types of volumes: SFS, DRV, Meta members, SAVE, DATA, Vault, and diskless.

Setting copy pace (QoS) for device groups

Procedure
1. Select the storage system.
2. Select Data Protection > Device Groups.
3. Select the device group, click the more-actions icon, and select Replication QoS.
4. Select the Operation Type from the following valid values:
   - SRDF — Sets the copy pace priority during SRDF operations.
   - Mirror Copy — Sets the copy pace priority during mirror operations.
   - Clone — Sets the copy pace priority during clone operations.
   - VLUN — Sets the copy pace priority during virtual LUN migrations. This option is only available on arrays running Enginuity 5876 or higher.
5. Select the Copy Pace from the following valid values:
   - 0-16 — Sets the copy pace, with 0 (the default) as the fastest and 16 as the slowest.
   - STOP — Stops the copy. Not supported when the Operation Type is BCV, or the array is running an Enginuity version lower than 5876.
   - URGENT — Sets the copy pace to urgent, which may be faster than the default (0). Not supported when the Operation Type is BCV, or the array is running an Enginuity version lower than 5876.
6. If performing this operation on a group: Select the type of volumes on which to perform the operation.
7. Click OK.

QOS for replication

The QoS (Quality of Service) feature adjusts the data transfer (copy) pace on individual volumes or groups of volumes (DGs or SGs) for certain operations. By increasing the response time for specific copy operations, the overall performance of other storage volumes increases. The following tasks are supported:
- Setting copy pace (QoS) for storage groups on page 197
- Setting copy pace (QoS) for device groups on page 197
- Setting copy pace (QoS) for volumes on page 198

Setting copy pace (QoS) for storage groups

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups list view.
3. Perform one of the following actions:

   - For all volumes in the storage group: Select the storage group, click the more-actions icon, and select Replication QoS to open the Set Replication Priority QoS dialog box.
   - For some volumes in the storage group:
     a. Select a storage group.
     b. Click the number next to Volumes to open the Volumes list view.
     c. Select the volumes, click the more-actions icon, and select Set Volumes > Replication QoS.
4. Optional: Select the Show Selected checkbox.
5. Select the Operation Type from the following valid values:
   - SRDF — Sets the copy pace priority during RDF operations.
   - Mirror Copy — Sets the copy pace priority during mirror operations.
   - Clone — Sets the copy pace priority during clone operations.
   - VLUN — Sets the copy pace priority during virtual LUN migrations. This option is only available on arrays running Enginuity 5876 or higher.
6. Select the Copy Pace from the following valid values:
   - 0-16 — Sets the copy pace, with 0 (the default) as the fastest and 16 as the slowest.
   - STOP — Stops the copy. Not supported when the Operation Type is BCV, or the array is running an Enginuity version lower than 5876.
   - URGENT — Sets the copy pace to urgent, which may be faster than the default (0). Not supported when the Operation Type is BCV.
7. Click OK.

Setting copy pace (QoS) for volumes

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Navigate to the volume.
4. Select one or more volumes, click the more-actions icon, and select Replication QoS.
5. Select the Operation Type from the following valid values:
   - SRDF — Sets the copy pace priority during RDF operations.
   - Mirror Copy — Sets the copy pace priority during mirror operations.
   - Clone — Sets the copy pace priority during clone operations.
   - VLUN — Sets the copy pace priority during virtual LUN migrations. This option is only available on arrays running Enginuity 5876 or higher.
6. Select the Copy Pace from the following valid values:

   - 0-16 — Sets the copy pace, with 0 (the default) as the fastest and 16 as the slowest.
   - STOP — Stops the copy. Not supported when the Operation Type is BCV, or the array is running an Enginuity version lower than 5876.
   - URGENT — Sets the copy pace to urgent, which may be faster than the default (0). Not supported when the Operation Type is BCV, or the array is running an Enginuity version earlier than 5876.
7. Click OK.

Managing Meta Volumes

Creating meta volumes

Before you begin
- Meta volumes are supported on storage systems running Enginuity 5876.
- Bound thin volumes can be used as meta heads; however, bound thin volumes cannot be used as meta members.
- Unmapped thin volumes can be formed into striped meta volumes.
- Mapped or unmapped thin volumes can be formed into concatenated meta volumes.
- For a complete list of restrictions and recommendations on creating meta volumes, refer to the Solutions Enabler Array Controls and Management CLI User Guide.
- When creating meta volumes, Unisphere attempts to instill best practices in the creation process by setting the following defaults in the Create Meta Volume wizard:
  - Meta Volume Configuration = Striped
  - Meta Volume Member Count including Head = 8
  Note that these best practices do not apply to volumes created with the CKD-3390 emulation type.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Meta tab.
3. Click Create.
4. Select the Emulation type.
5. If creating FBA volumes, select whether to Create Volumes or Use Existing Volumes.
6. If creating FBA or AS/400 volumes, select the Meta Volume Configuration (Concatenated or Striped).
7. Select a method for forming the meta volumes.
8. Click Next.
9. Do the following, depending on the method you selected:
   - Using Existing Virtual Volumes:
     a. Type the Number of Meta Volumes to form.
     b. Specify the Meta Volume Capacity by typing the Meta Volume Member Count including Head, and selecting a Meta Volume Member Capacity.

     c. Select a Volume Configuration for the members.
     d. To reserve the volumes, select Reserve. In addition, you can also type reserve Comments and select an Expiration Date. The default values for Reserve and Comments are set in Setting preferences on page 49 for volume reservations. If the volumes are not automatically reserved, you can optionally reserve them here.
     e. If you are creating CKD meta volumes, type or select an SSID.
     f. If you are creating striped meta volumes, you can optionally select the size of the meta volumes by clicking Advanced Options and selecting a Striped Size. The stripe size can be expressed in blocks or cylinders. Possible sizes in 512-byte blocks are 1920, 3840, 7680, 15360, 30720, and 61440. The stripe size must be 1920, which is the default for all versions of Enginuity. If no stripe size is specified when creating a striped meta, all Enginuity codes will consider the default stripe size to be 1920 blocks of 512 bytes each.
     g. Click Next.
   - Using Existing Standard Provisioned Volumes:
     a. Type the Number of Meta Volumes to form.
     b. Specify the Meta Volume Capacity by typing the Meta Volume Member Count including Head, and selecting a Meta Volume Member Capacity.
     c. Select a Volume Configuration.
     d. Select the RAID Protection level for the meta volumes.
     e. Select the type of Disk Technology on which the meta volumes will reside.
     f. Select the Disk Group (Request/Available) containing the meta volumes.
     g. To reserve the volumes, select Reserve Volumes.
     h. Click Next.
   - By Manually Selecting Existing Volumes (Advanced):
     a. Select from the listed volumes.
     b. To reserve the volumes, select Reserve Volumes.
     c. Click Next.
   - Using New Standard Provisioned Volumes:
     a. Specify the Number of Meta Volumes.
     b. Specify the Meta Volume Capacity by typing the Meta Volume Member Count including Head, and selecting a Meta Volume Member Capacity.
     c. Select a Volume Configuration.
     d. Select the RAID Protection level for the meta volumes.
     e. Select the type of Disk Technology on which the meta volumes will reside.
     f. Select a Disk Group.
     g. If you are creating CKD meta volumes, type or select an SSID.
     h. Click Next.

   - Using New Virtual Volumes:
     a. Specify the Number of Meta Volumes.
     b. Specify the Meta Volume Capacity by typing the Meta Volume Member Count including Head, and selecting a Meta Volume Member Capacity.
     c. Select a Volume Configuration.
     d. Click Next.
10. Verify your selections in the Summary page. To change any of your selections, click Back. Note that some changes may require you to make additional changes to your configuration.
11. Do one of the following:
   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   - Expand Add to Job List, and click Run Now to perform the operation now.

Adding meta members

Before you begin
- Meta volumes are supported on storage systems running Enginuity 5876.
- To expand a bound striped thin meta volume on a storage system running Enginuity 5876 or higher without having to unbind the volume, you must select the Protect Data option.
- When expanding meta thin volumes with BCV meta protection, the volumes must be fully allocated to a pool and they must have the Persist preallocated capacity through reclaim or copy option set on them. This is because binding thin meta BCV volumes is done through the pool and not through the thin BCV volume selection. For more information on allocating thin pool capacity for thin volumes, refer to Managing thin pool allocations on page 244.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Meta tab.
3. Select the meta volume and click Add Member.
4. For striped metas only: To protect the original striped meta data, do the following:
   a. Select the Protect Data option.
   b. Type or select the name of the BCV meta head to use when protecting the data. By default, this field is filled in with the first available BCV.
5. Select one or more volumes to add to the meta volume.
6. Do one of the following:
   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.

   - Expand Add to Job List, and click Run Now to perform the operation now.

Removing meta members

Before you begin
- Meta volumes are supported on storage systems running Enginuity 5876.
- You can only remove members from concatenated meta volumes.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Meta tab.
3. Select the meta volume and click the view details icon to open its Details view.
4. Click the number next to META Members to open the Meta Members list view.
5. Select one or more members and click Remove Meta Member to open the Remove Meta Volume Member dialog box.
6. Do one of the following:
   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   - Expand Add to Job List, and click Run Now to perform the operation now.

Dissolving meta volumes

Before you begin
Meta volumes are supported on storage systems running Enginuity 5876.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Meta tab.
3. Select the meta volume and click Dissolve.
4. Optional: If required, select Delete Meta Members after dissolve. Note that selecting Delete Meta Members after dissolve requires the operation to be run immediately (it cannot be scheduled).
5. Do one of the following:
   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   - Expand Add to Job List, and click Run Now to perform the operation now.

Converting meta volumes

Before you begin
Meta volumes are supported on storage systems running Enginuity 5876.

This procedure explains how to change the configuration of a meta volume.
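Before creating or converting metas, it helps to see how the wizard's numbers combine. The following is an arithmetic sketch only (the names are our own, not a Unisphere API), using the defaults described under Creating meta volumes: a total capacity of member count times member capacity, and a 1920-block stripe of 512-byte blocks for striped metas.

```python
# Arithmetic sketch only -- these names are our own, not a Unisphere API.
# They combine the Create Meta Volume wizard inputs described above.
BLOCK_BYTES = 512
DEFAULT_STRIPE_BLOCKS = 1920        # default stripe size for striped metas

def meta_capacity_gb(member_count_including_head, member_capacity_gb):
    """Total meta capacity: member count (including the head) x member size."""
    return member_count_including_head * member_capacity_gb

print(DEFAULT_STRIPE_BLOCKS * BLOCK_BYTES // 1024)   # 960 (KB per stripe)
print(meta_capacity_gb(8, 16))                       # 128 (GB, default 8 members)
```

So with the wizard's default of 8 members including the head, each member must be one eighth of the total capacity you want to present to the host.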

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Meta tab.
3. Select the meta volume and click Convert.
4. If converting from concatenated to striped, you can optionally specify to protect the original striped data by selecting Protect Data and typing or selecting the BCV meta head to use when protecting the data. By default, the BCV field is filled in with the first available BCV.
5. Do one of the following:
   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   - Expand Add to Job List, and click Run Now to perform the operation now.

Viewing CKD volumes

See below for the procedure to view CKD volumes from the HOSTS > Mainframe dashboard. To see the CKD volumes in a CU image, see Viewing CKD volumes in CU image on page 93.

Procedure
1. Select the storage system.
2. Select HOSTS > Mainframe and click CKD Volumes in the Summary panel.
   The CKD Volumes list view is displayed. Use this list view to view and manage the volumes.
   The following properties display; however, not all properties may be available for every volume type:
   - Name — Assigned volume name.
   - Type — Type of volume.
   - Allocated % — Percentage of the volume that is allocated.
   - Capacity (GB) — Volume capacity in gigabytes.
   - Status — Volume status.
   - Emulation — Emulation type for the volume.
   - Host Paths — Number of masking records for the volume.
   - Reserved — Indicates whether the volume is reserved.
   - Split — The name of the associated split.
   - CU Image — The number of the associated CU image.
   - Base Address — Base address.
   The following controls are available; however, not all controls may be available for every volume type:
   - View details icon — Viewing CKD volume details on page 204
   - Create — Creating volumes on page 178
   - Expand — Expanding existing volumes on page 191

   - Delete — Deleting volumes on page 188
   - Create SG — Creating storage groups on page 112
   - Set Volumes > Emulation — Setting volume emulation on page 96
   - Set Volumes > Attribute — Setting volume attributes on page 195
   - Set Volumes > Identifier — Setting volume identifiers on page 196
   - Set Volumes > Status — Setting volume status on page 194
   - Set Volumes > Replication QoS — QOS for replication on page 197
   - Set Volumes > Set SRDF GCM — Setting the SRDF GCM flag on page 434
   - Set Volumes > Reset SRDF/Metro Identity — Resetting original device identity on page 432
   - Allocate/Free/Reclaim > Start — Managing thin pool allocations on page 244
   - Allocate/Free/Reclaim > Stop — Managing thin pool allocations on page 244
   - Configuration > Change Volume Configuration — Changing volume configuration on page 190
   - Configuration > Duplicate Volume — Duplicating volumes on page 188
   - Configuration > z/OS Map — z/OS map from the volume list view on page 333
   - Configuration > z/OS Unmap — z/OS unmap from the volume list view on page 334

Viewing CKD volume details

Procedure
1. Select the storage system.
2. Select HOSTS > Mainframe.
3. Select a CKD volume and click the view details icon to open the Details view.
   Note: Depending on the method you used to open this view, some of the following properties may not appear.
   The following properties are displayed:
   - Masking Info — Number of other pools.
   - Storage Groups — Number of storage groups.
   - SRP — Number of Storage Resource Pools (SRPs).
   - CKD Front End Paths — Number of CKD front end paths.
   - RDF Info — RDF info.
   - CU Image Number — CU image number.
   - Split — Split identifier.

   - Volume Name — Volume name.
   - Physical Name — Physical name.
   - Volume Identifier — Volume identifier.
   - Type — Volume configuration.
   - Encapsulated Volume — Whether the external volume is encapsulated. Relevant for external disks only.
   - Encapsulated WWN — World Wide Name for the encapsulated volume. Relevant for external disks only.
   - Encapsulated Device Flag — Encapsulated device flag.
   - Encapsulated Device Array — Encapsulated device array.
   - Encapsulated Device Name — Encapsulated device name.
   - Status — Volume status.
   - Reserved — Whether the volume is reserved.
   - Capacity (GB) — Volume capacity in GB.
   - Capacity (MB) — Volume capacity in MB.
   - Capacity (CYL) — Volume capacity in cylinders.
   - Compression Ratio — Compression ratio.
   - Emulation — Volume emulation.
   - AS400 Gatekeeper — AS400 gatekeeper indication.
   - Symmetrix ID — Symmetrix system on which the volume resides.
   - Symmetrix Vol ID — Symmetrix volume name/number.
   - HP Identifier Name — User-defined volume name (1-128 alphanumeric characters), applicable to HP-mapped devices. This value is mutually exclusive of the VMS ID.
   - VMS Identifier Name — Numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID.
   - Nice Name — Nice name generated by Symmetrix Enginuity.
   - WWN — World Wide Name of the volume.
   - External Identity WWN — External identity World Wide Name of the volume.
   - DG Name — Name of the device group in which the volume resides, if applicable.
   - CG Name — Name of the consistency group in which the volume resides, if applicable.
   - Attached BCV — Defines the attached BCV to be paired with the standard volume.
   - Attached VDEV TGT Volume — Volume to which this source volume would be paired.
   - RDF Type — RDF configuration.
   - Geometry - Type — Method used to define the volume's geometry.
   - Geometry - Number of Cylinders — Number of cylinders.
   - Geometry - Sectors per Track — Number of sectors per track, as defined by the volume's geometry.

   - Geometry - Tracks per Cylinder — Number of tracks per cylinder, as defined by the volume's geometry.
   - Geometry - 512 Block Bytes — Number of 512-byte blocks, as defined by the volume's geometry.
   - Geometry - Capacity (GB) — Geometry capacity in GB.
   - Geometry - Limited — Indicates whether the volume is geometry limited.
   - SSID — Subsystem ID.
   - Capacity (Tracks) — Capacity in tracks.
   - SA Status — Volume SA status.
   - Host Access Mode — Host access mode.
   - Pinned — Whether the volume is pinned.
   - RecoverPoint Tagged — Indicates whether the volume is tagged for RecoverPoint.
   - Service State — Service state.
   - Defined Label Type — Type of user-defined label.
   - Dynamic RDF Capability — RDF capability of the volume.
   - Mirror Set Type — Mirror set for the volume and the volume characteristic of the mirror.
   - Mirror Set DA Status — Volume status information for each member in the mirror set.
   - Mirror Set Invalid Tracks — Number of invalid tracks for each mirror in the mirror set.
   - Priority QoS — Priority value assigned to the volume. Valid values are 1 (highest) through 16 (lowest).
   - Copy Pace - RDF — Copy pace for RDF.
   - Copy Pace - Mirror Copy — Copy pace for mirror copy.
   - Copy Pace - Clone — Copy pace for clone.
   - Copy Pace - VLUN — Copy pace for VLUN.
   - Dynamic Cache Partition Name — Name of the cache partition.
   - Compressed Size (GB) — Compressed size (GB).
   - Compressed Percentage — Compressed ratio (%).
   - Compressed Size Per Pool (GB) — Compressed size per pool (GB).
   - XtremSW Cache Attached — Indicates whether XtremSW cache is attached to the volume.
   - Base Address — Base address.
   - AS400 Gatekeeper — AS400 gatekeeper indication.
   - Mobility ID Enabled — Indicates whether Mobility ID is enabled.
   - GCM — GCM indication.
   - Optimized Read Miss — Cacheless read miss status.
   - Persistent Allocation — Persistent allocation indication.
   - PowerPath Hosts — Number of PowerPath hosts.
   - Mounted — Mounted indication.

   - Process — Process.
   - Last time used — Last time used.
   The Details view links you to views displaying objects contained in and associated with the volume. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking Storage Group opens a view listing the storage groups associated with the volume.

Viewing CKD volume front end paths

This procedure explains how to view CKD volume front end paths.

Procedure
1. Select the storage system.
2. Select HOSTS > CU Images to open the CU Images list view.
3. Select the CU image and click the view details icon.
4. Click the number in the Number of Volumes field to open the CKD Volumes list view.
5. Select a CKD volume and click the view details icon to open its Details view.
6. Click the number in the CKD Front End Paths field to open the CKD Front End Path list view.
7. The following properties display:
   - Director Identifier — Director name.
   - Port — Port number.
   - Base Address — Assigned base address.
   - Alias Count — Number of aliases mapped to the port.
   - Director Port Status — Indicates port status.

Viewing DLDEV volumes

This procedure explains how to view DLDEV volumes.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Virtual or Meta tab.
3. Filter on DLDEV type.
4. To view the properties and controls, see Viewing virtual volumes on page 224 or Viewing meta volumes on page 208.

Viewing DLDEV volume details

This procedure explains how to view DLDEV volume details.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Virtual or Meta tab.

3. Filter on DLDEV type.
4. Select a DLDEV volume and click the view details icon to open its Details view.
5. To view the properties, see Viewing virtual volume details on page 225 or Viewing meta volume details on page 209.

Viewing masking information

This procedure explains how to view masking information.

Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.
3. Select a storage group and click the view details icon to open its Details view.
4. Click the number next to Volumes.
5. Select a volume and click the view details icon to open its Details view.
6. Click the number next to Masking Info to open the volume's Masking Info view.
   The following properties display:
   - Director Port — Storage system director and port.
   - Identifier — Volume identifier name.
   - Type — Director type.
   - User Generated Name — User-generated name.
   - Logged In — Indicates if the initiator is logged in to the host/target.
   - On Fabric — Indicates if the initiator is zoned in and on the fabric.
   - Port Flag Overrides — Flag indicating if any port flags are overridden by the initiator: Yes/No.
   - FCID LockDown — Flag indicating if port lockdown is in effect: Yes/No.
   - Heterogeneous Host — Whether the host is heterogeneous.
   - LUN Offset — Whether LUN offset is enabled. This feature allows you to skip over masked holes in an array of volumes.
   - Visibility — Whether the port is visible to the host.

Viewing meta volumes

This procedure explains how to view meta volumes. Meta volumes are supported on storage systems running Enginuity 5876.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Meta tab.
   Use this list view to display and manage the volumes. Filter on a volume type.
   The following properties display:

   - Name — Assigned volume name.
   - Type — Type of volume.
   - Meta Config — Type of meta volume addressing.
   - Status — Volume status.
   - Capacity (GB) — Volume capacity in gigabytes.
   - Emulation — Emulation type for the volume.
   The following controls are available:
   - View details icon — Viewing meta volume details on page 209
   - Create — Creating diskless volumes on page 180
   - Add Member — Adding meta members on page 201
   - Dissolve — Dissolving meta volumes on page 202
   - Convert — Converting meta volumes on page 202
   - Set Volumes > Attribute — Setting volume attributes on page 195
   - Set Volumes > Identifier — Setting volume identifiers on page 196
   - Set Volumes > Status — Setting volume status on page 194
   - Set Volumes > Replication QoS — QOS for replication on page 197
   - Configuration > Change Volume Configuration — Changing volume configuration on page 190
   - Configuration > Duplicate Volume — Duplicating volumes on page 188
   - Configuration > Map — Mapping volumes on page 192
   - Configuration > Unmap — Unmapping volumes on page 193
   - RecoverPoint > Tag — Tagging and untagging volumes for RecoverPoint (storage group level) on page 472
   - RecoverPoint > Untag — Tagging and untagging volumes for RecoverPoint (storage group level) on page 472
   - FAST > Pin — Pinning and unpinning volumes on page 173
   - FAST > Unpin — Pinning and unpinning volumes on page 173
   - Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945
   - Assign Symmetrix Priority — Assigning array priority to individual volumes on page 189
   - VLUN Migration — VLUN Migration dialog box on page 260
   - Set Optimized Read Miss — Setting optimized read miss on page 193

Viewing meta volume details

This procedure explains how to view meta volume details. Meta volumes are supported on storage systems running Enginuity 5876.

Procedure
1. Select the storage system.

2. Select STORAGE > Volumes and click the Meta tab.
3. Select a meta volume and click the view details icon to open the Details view.
   The following properties display:
   - META Members — Number of meta members.
   - Storage Groups — Number of storage groups.
   - FBA Front End Paths — Number of FBA front end paths.
   - Back End Paths — Number of back end paths.
   - Volume Name — Volume name.
   - RDF Info — RDF info.
   - Physical Name — Physical name.
   - Volume Identifier — Volume identifier.
   - Type — Volume configuration.
   - Encapsulated Volume — Whether the external volume is encapsulated. Relevant for external disks only.
   - Encapsulated WWN — World Wide Name for the encapsulated volume. Relevant for external disks only.
   - Encapsulated Device Flag — Encapsulated device flag.
   - Encapsulated Device Array — Encapsulated device array.
   - Encapsulated Device Name — Encapsulated device name.
   - Status — Volume status.
   - Reserved — Whether the volume is reserved.
   - Capacity (GB) — Volume capacity in GB.
   - Capacity (MB) — Volume capacity in MB.
   - Capacity (CYL) — Volume capacity in cylinders.
   - Emulation — Volume emulation.
   - Symmetrix ID — Symmetrix system on which the volume resides.
   - Symmetrix Vol ID — Symmetrix volume name/number.
   - HP Identifier Name — User-defined volume name (1-128 alphanumeric characters), applicable to HP-mapped devices. This value is mutually exclusive of the VMS ID.
   - VMS Identifier Name — Numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID.
   - Nice Name — Nice name generated by Symmetrix Enginuity.
   - WWN — World Wide Name of the volume.
   - External Identity WWN — External identity World Wide Name of the volume.
   - Mobility ID Enabled — Indicates whether Mobility ID is enabled.
   - DG Name — Name of the device group in which the volume resides, if applicable.
   - CG Name — Name of the consistency group in which the volume resides, if applicable.

   - Attached BCV — Defines the attached BCV to be paired with the standard volume.
   - Attached VDEV TGT Volume — Volume to which this source volume would be paired.
   - RDF Type — SRDF configuration.
   - Geometry - Type — Method used to define the volume's geometry.
   - Geometry - Number of Cylinders — Number of cylinders.
   - Geometry - Sectors per Track — Number of sectors per track, as defined by the volume's geometry.
   - Geometry - Tracks per Cylinder — Number of tracks per cylinder, as defined by the volume's geometry.
   - Geometry - 512 Block Bytes — Number of 512-byte blocks, as defined by the volume's geometry.
   - Geometry - Capacity (GB) — Geometry capacity in GB.
   - Geometry - Limited — Indicates whether the volume is geometry limited.
   - GCM — Indicates whether GCM is set.
   - SSID — Subsystem ID.
   - Capacity (Tracks) — Capacity in tracks.
   - SA Status — Volume SA status.
   - Host Access Mode — Host access mode.
   - Pinned — Whether the volume is pinned.
   - Service State — Service state.
   - Defined Label Type — Type of user-defined label.
   - Dynamic RDF Capability — RDF capability of the volume.
   - Mirror Set Type — Mirror set for the volume and the volume characteristic of the mirror.
   - Mirror Set DA Status — Volume status information for each member in the mirror set.
   - Mirror Set Invalid Tracks — Number of invalid tracks for each mirror in the mirror set.
   - Priority QoS — Priority value assigned to the volume. Valid values are 1 (highest) through 16 (lowest).
   - Dynamic Cache Partition Name — Name of the cache partition.
   - XtremSW Cache Attached — Indicates whether XtremSW cache is attached to the volume.
   - Optimized Read Miss — Cacheless read miss status.
   - Persistent Allocation — Persistent allocation.
   There are links to views displaying objects contained in and associated with the volume. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking the number next to META Members opens a view listing the members for the meta volume, excluding the meta head.
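The Geometry properties listed in the details view are related to one another. As a sketch only, assuming the usual disk-geometry relationship (the function and the sample numbers are our own illustration, not values from any particular array):

```python
# Sketch only: the Geometry fields above are assumed to relate in the usual
# disk-geometry way. The function and sample values are our own illustration.
def geometry_capacity_bytes(cylinders, tracks_per_cylinder,
                            sectors_per_track, block_bytes=512):
    """Capacity implied by a volume's geometry, in bytes."""
    return cylinders * tracks_per_cylinder * sectors_per_track * block_bytes

# Hypothetical geometry values, purely for illustration:
cap = geometry_capacity_bytes(2184, 15, 128)
print(round(cap / 2**30, 2))        # capacity expressed in GB
```

A cross-check like this can help confirm that Geometry - Capacity (GB) is consistent with the cylinder, track, and sector counts shown for a volume.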

Viewing meta volume meta members

This procedure explains how to view the members of a meta volume. Meta volumes are supported on storage systems running Enginuity 5876.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Meta tab.
3. Select a meta volume and click the view details icon to open its Details view.
4. Click the number next to META Members to open the Meta Members list view.
   Use the Meta Members list view to view and manage the members of a meta volume, excluding the meta head. This list view can be accessed from other volumes that contain meta volumes; that is, regular and virtual volumes can contain meta volumes.
   The following properties display:
   - Name — Meta volume name.
   - Type — Meta volume configuration.
   - Status — Volume status.
   - Capacity (GB) — Volume capacity (GB).
   The following controls are available:
   - View details icon — Viewing meta volume member details on page 212
   - Add Meta Member — Adding meta members on page 201
   - Remove Meta Member — Removing meta members on page 202

Viewing meta volume member details

This procedure explains how to view meta volume member details. Meta volumes are supported on storage systems running Enginuity 5876.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Meta tab.
3. Select a meta volume and click the view details icon to open its Details view.
4. Click the number next to META Members to open the Meta Members list view.
5. Select a meta member and click the view details icon to open its Details view.
   This list view can be accessed from other volumes that contain meta volumes; that is, regular and virtual volumes can contain meta volumes. Use this view to view meta volume member details.
   The following properties display:

- Physical Name — Volume's physical name.
- Volume Identifier — Volume ID.
- Type — Volume configuration.
- Encapsulated Volume — Whether the external volume is encapsulated. Relevant for external disks only.
- Encapsulated WWN — World Wide Name for the encapsulated volume. Relevant for external disks only.
- Encapsulated Device Flag — Encapsulated device flag.
- Encapsulated Device Array — Encapsulated device array.
- Encapsulated Device Name — Encapsulated device name.
- Status — Volume status.
- Reserved — Whether the volume is reserved.
- Capacity (GB) — Volume capacity in GB.
- Capacity (MB) — Volume capacity in MB.
- Capacity (CYL) — Volume capacity in cylinders.
- Emulation — Volume emulation.
- Stripe Size — Stripe size.
- Meta Index — Meta index.
- Symmetrix ID — Storage system on which the volume resides.
- Symmetrix Vol ID — Symmetrix volume name/number.
- HP Identifier Name — User-defined volume name (1-128 alphanumeric characters), applicable to HP-mapped devices. This value is mutually exclusive of the VMS ID.
- VMS Identifier Name — Numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID.
- Nice Name — Nice name generated by Symmetrix Enginuity.
- WWN — World Wide Name of the volume.
- External Identity WWN — External Identity World Wide Name of the volume.
- Mobility ID Enabled — Indicates whether the mobility ID is enabled.
- DG Name — Name of the device group in which the volume resides, if applicable.
- CG Name — Name of the consistency group in which the volume resides, if applicable.
- Attached BCV — Defines the attached BCV to be paired with the standard volume.
- Attached VDEV TGT Volume — Volume to which this source volume would be paired.
- RDF Type — RDF configuration.
- Geometry - Type — Method used to define the volume's geometry.
- Geometry - Number of Cylinders — Number of cylinders.

- Geometry - Sectors per Track — Number of sectors per track, as defined by the volume's geometry.
- Geometry - Tracks per Cylinder — Number of tracks per cylinder, as defined by the volume's geometry.
- Geometry - 512 Block Bytes — Number of 512-byte blocks, as defined by the volume's geometry.
- Geometry - Capacity (GB) — Geometry capacity in GB.
- Geometry - Limited — Indicates whether the volume is geometry limited.
- GCM — GCM indication.
- SSID — Subsystem ID.
- Capacity (Tracks) — Capacity in tracks.
- SA Status — Volume SA status.
- Host Access Mode — Host access mode.
- Pinned — Whether the volume is pinned.
- RecoverPoint Tagged — Whether the volume is tagged for RecoverPoint.
- Service State — Service state.
- Defined Label Type — Type of user-defined label.
- Dynamic RDF Capability — RDF capability of the volume.
- Mirror Set Type — Mirror set for the volume and the volume characteristic of the mirror.
- Mirror Set DA Status — Volume status information for each member in the mirror set.
- Mirror Set Invalid Tracks — Number of invalid tracks for each mirror in the mirror set.
- Priority QoS — Priority value assigned to the volume. Valid values are 1 (highest) through 16 (lowest).
- Dynamic Cache Partition Name — Name of the cache partition.
- XtremSW Cache Attached — Indicates whether XtremSW cache is attached to the volume.
- Optimized Read Miss — Cacheless read miss status.
- Persistent Allocation — Persistent allocation indication.

Viewing other pool information

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Virtual tab.
3. Filter on TDEV, or a volume type that includes TDEV, such as BCV+TDEV.
4. Select a thin volume and click to open its Details view.
5. Click the number next to Other Pool Info to open the Other Pool Info view.

Use this view to view other pool information. The following properties display:

- Name — Thin volume name.
- Pool Name — Name of the pool.
- Allocated % — Percentage of the pool allocated to the thin volume.
- Capacity (GB) — Amount of the pool allocated to the thin volume.

Viewing private volumes

This procedure explains how to view the properties of private volumes.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Private tab.

Use this list view to view and manage the volumes. Filter on a volume type.

The following properties display:
- Name — Assigned volume name.
- Type — Type of volume.
- Status — Volume status.
- Capacity (GB) — Volume capacity in gigabytes.
- Emulation — Emulation type for the volume.

The following controls are available:
- Viewing private volume details on page 215
- Create — Creating private volumes on page 180
- Deleting volumes on page 188
- Configuration > Change Volume Configuration — Changing volume configuration on page 190
- Configuration > Duplicate Volume — Duplicating volumes on page 188
- Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945

Viewing private volume details

This procedure explains how to view private volume details.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Private tab.
3. Select a private volume and click to open its Details view.

The following properties display:
- FBA Front End Paths — Number of FBA front end paths.
- RDF Info — RDF information.

- Volume Name — Volume name.
- Back End Paths — Number of back end paths.
- Physical Name — Physical name.
- Volume Identifier — Volume identifier.
- Type — Volume configuration.
- Encapsulated Volume — Whether the external volume is encapsulated. Relevant for external disks only.
- Encapsulated WWN — World Wide Name for the encapsulated volume. Relevant for external disks only.
- Encapsulated Device Flag — Encapsulated device flag.
- Encapsulated Device Array — Encapsulated device array.
- Encapsulated Device Name — Encapsulated device name.
- Status — Volume status.
- Reserved — Whether the volume is reserved.
- Capacity (GB) — Volume capacity in GB.
- Capacity (MB) — Volume capacity in MB.
- Capacity (CYL) — Volume capacity in cylinders.
- Emulation — Volume emulation.
- Symmetrix ID — Symmetrix system on which the volume resides.
- Symmetrix Vol ID — Symmetrix volume name/number.
- HP Identifier Name — User-defined volume name (1-128 alphanumeric characters), applicable to HP-mapped devices. This value is mutually exclusive of the VMS ID.
- VMS Identifier Name — Numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID.
- Nice Name — Nice name generated by Symmetrix Enginuity.
- WWN — World Wide Name of the volume.
- External Identity WWN — External Identity World Wide Name of the volume.
- DG Name — Name of the device group in which the volume resides, if applicable.
- CG Name — Name of the consistency group in which the volume resides, if applicable.
- Attached BCV — Defines the attached BCV to be paired with the standard volume.
- Attached VDEV TGT Volume — Volume to which this source volume would be paired.
- RDF Type — RDF configuration.
- Geometry - Type — Method used to define the volume's geometry.
- Geometry - Number of Cylinders — Number of cylinders.
- Geometry - Sectors per Track — Number of sectors per track, as defined by the volume's geometry.
- Geometry - Tracks per Cylinder — Number of tracks per cylinder, as defined by the volume's geometry.

- Geometry - 512 Block Bytes — Number of 512-byte blocks, as defined by the volume's geometry.
- Geometry - Capacity (GB) — Geometry capacity in GB.
- Geometry - Limited — Indicates whether the volume is geometry limited.
- GCM — GCM indication.
- SSID — Subsystem ID.
- Capacity (Tracks) — Capacity in tracks.
- SA Status — Volume SA status.
- Host Access Mode — Host access mode.
- Pinned — Whether the volume is pinned.
- RecoverPoint Tagged — Indicates whether the volume is tagged for RecoverPoint.
- Service State — Service state.
- Defined Label Type — Type of user-defined label.
- Dynamic RDF Capability — RDF capability of the volume.
- Mirror Set Type — Mirror set for the volume and the volume characteristic of the mirror.
- Mirror Set DA Status — Volume status information for each member in the mirror set.
- Mirror Set Invalid Tracks — Number of invalid tracks for each mirror in the mirror set.
- Priority QoS — Priority value assigned to the volume. Valid values are 1 (highest) through 16 (lowest).
- Dynamic Cache Partition Name — Name of the cache partition.

The Details view links you to views displaying objects contained in and associated with the volume. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking the number next to Back End Paths opens a view listing the back end paths associated with the volume.

Viewing regular volumes

This procedure explains how to view regular volumes.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Regular tab.

Use this list view to view and manage the volumes. Filter on a volume type.

The following properties display:
- Name — Assigned volume name.
- Type — Type of volume.
- Status — Volume status.
- Capacity (GB) — Volume capacity in gigabytes.

- Emulation — Emulation type for the volume.

The following controls are available:
- Viewing regular volume details on page 218
- Create — Creating diskless volumes on page 180
- Deleting volumes on page 188
- Set Volumes > Attribute — Setting volume attributes on page 195
- Set Volumes > Identifier — Setting volume identifiers on page 196
- Set Volumes > Status — Setting volume status on page 194
- Set Volumes > Replication QoS — QOS for replication on page 197
- Configuration > Change Volume Configuration — Changing volume configuration on page 190
- Configuration > Duplicate Volume — Duplicating volumes on page 188
- Configuration > Map — Mapping volumes on page 192
- Configuration > Unmap — Unmapping volumes on page 193
- Configuration > z/OS Map — z/OS map from the volume list view on page 333
- Configuration > z/OS Unmap — z/OS unmap from the volume list view on page 334
- RecoverPoint > Tag — Tagging and untagging volumes for RecoverPoint (storage group level) on page 472
- RecoverPoint > Untag — Tagging and untagging volumes for RecoverPoint (storage group level) on page 472
- FAST > Pin — Pinning and unpinning volumes on page 173
- FAST > Unpin — Pinning and unpinning volumes on page 173
- Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945
- Assign Symmetrix Priority — Assigning array priority to individual volumes on page 189
- VLUN Migration — VLUN Migration dialog box on page 260
- Set Optimized Read Miss — Setting optimized read miss on page 193

Viewing regular volume details

This procedure explains how to view regular volume details.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Regular tab.
3. Select a regular volume and click to open the Details view.

The Details view allows you to view and manage a volume.

Properties panel

The following properties display:
- Masking Info — Masking information.
- Storage Groups — Number of storage groups.
- FBA Front End Paths — Number of FBA front end paths.
- RDF Info — RDF information.
- Volume Name — Volume name.
- Back End Paths — Number of back end paths.
- Physical Name — Physical name.
- Volume Identifier — Volume identifier.
- Type — Volume configuration.
- Encapsulated Volume — Whether the external volume is encapsulated. Relevant for external disks only.
- Encapsulated WWN — World Wide Name for the encapsulated volume. Relevant for external disks only.
- Encapsulated Device Flag — Encapsulated device flag.
- Encapsulated Device Array — Encapsulated device array.
- Encapsulated Device Name — Encapsulated device name.
- Status — Volume status.
- Reserved — Whether the volume is reserved.
- Capacity (GB) — Volume capacity in GB.
- Capacity (MB) — Volume capacity in MB.
- Capacity (Cylinders) — Volume capacity in cylinders.
- Emulation — Volume emulation.
- AS400 Gatekeeper — AS400 gatekeeper indication.
- Symmetrix ID — Symmetrix system on which the volume resides.
- Symmetrix Volume ID — Symmetrix volume name/number.
- HP Identifier Name — User-defined volume name (1-128 alphanumeric characters), applicable to HP-mapped devices. This value is mutually exclusive of the VMS ID.
- VMS Identifier Name — Numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID.
- Nice Name — Nice name generated by Symmetrix Enginuity.
- WWN — World Wide Name of the volume.
- External Identity WWN — External Identity World Wide Name of the volume.
- DG Name — Name of the device group in which the volume resides, if applicable.
- CG Name — Name of the consistency group in which the volume resides, if applicable.
- Attached BCV — Defines the attached BCV to be paired with the standard volume.
- Attached VDEV TGT Volume — Volume to which this source volume would be paired.

- RDF Type — RDF configuration.
- Geometry - Type — Method used to define the volume's geometry.
- Geometry - Number of Cylinders — Number of cylinders.
- Geometry - Sectors per Track — Number of sectors per track, as defined by the volume's geometry.
- Geometry - Tracks per Cylinder — Number of tracks per cylinder, as defined by the volume's geometry.
- Geometry - 512 Block Bytes — Number of 512-byte blocks, as defined by the volume's geometry.
- Geometry - Capacity (GB) — Geometry capacity in GB.
- Geometry - Limited — Indicates whether the volume is geometry limited.
- GCM — GCM indication.
- SSID — Subsystem ID.
- Capacity (Tracks) — Capacity in tracks.
- SA Status — Volume SA status.
- Host Access Mode — Host access mode.
- Pinned — Whether the volume is pinned.
- RecoverPoint Tagged — Indicates whether the volume is tagged for RecoverPoint.
- Service State — Service state.
- Defined Label Type — Type of user-defined label.
- Dynamic RDF Capability — RDF capability of the volume.
- Mirror Set Type — Mirror set for the volume and the volume characteristic of the mirror.
- Mirror Set DA Status — Volume status information for each member in the mirror set.
- Mirror Set Invalid Tracks — Number of invalid tracks for each mirror in the mirror set.
- Priority QoS — Priority value assigned to the volume. Valid values are 1 (highest) through 16 (lowest).
- Dynamic Cache Partition Name — Name of the cache partition.
- Copy Pace - RDF — Copy pace priority during RDF operations.
- Copy Pace - Mirror Copy — Copy pace priority during mirror operations.
- Copy Pace - Clone — Copy pace priority during clone operations.
- Copy Pace - VLUN — Copy pace priority during virtual LUN operations.
- XtremSW Cache Attached — Indicates whether XtremSW cache is attached to the volume.
- Optimized Read Miss — Cacheless read miss status.
- Persistent Allocation — Persistent allocation indication.

The Details view links you to views displaying objects contained in and associated with the volume. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking the number

next to Storage Groups opens a view listing the storage groups associated with the volume.

Viewing reserved volumes

Procedure
1. Select the storage system.
2. In the dashboard, click the System Health tab.
3. In the Action panel, click View Reservations.
4. Select the reservation and click .
5. Click the number next to Reserved Volumes. The Reserved Volumes list view is displayed.

Use the Reserved Volumes list view to display and manage the volumes held in a reservation.

The following properties display:
- Name — Assigned volume name.
- Type — Type of volume.
- Capacity (GB) — Volume capacity in gigabytes.
- Status — Volume status.
- Reserved — Indicates whether the volume is reserved.
- Emulation — Emulation type for the volume.

The following controls are available:
- Viewing reserved volume details on page 221.

Viewing reserved volume details

Procedure
1. Select the storage system.
2. In the dashboard, click the System Health tab.
3. In the Action panel, click View Reservations.
4. Select the reservation and click to open its Details view.
5. Click the number next to Reserved Volumes to open the Reserved Volumes list view.

The following properties display:
- Name — Volume name.
- Volume Identifier — Volume identifier.
- Type — Volume configuration.

- Encapsulated Volume — Whether the external volume is encapsulated. Relevant for external disks only.
- Status — Volume status.
- Reserved — Whether the volume is reserved.
- Capacity (GB) — Volume capacity in GB.
- Capacity (MB) — Volume capacity in MB.
- Capacity (Cylinders) — Volume capacity in cylinders.
- Emulation — Volume emulation.
- Symmetrix ID — Storage system on which the volume resides.
- Symmetrix Volume ID — Symmetrix volume name/number.
- HP Identifier Name — User-defined volume name (1-128 alphanumeric characters), applicable to HP-mapped devices. This value is mutually exclusive of the VMS ID.
- VMS Identifier Name — Numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID.
- Nice Name — Nice name generated by Symmetrix Enginuity.
- WWN — World Wide Name of the volume.
- DG Name — Name of the device group in which the volume resides, if applicable.
- CG Name — Name of the consistency group in which the volume resides, if applicable.
- Attached BCV — Defines the attached BCV to be paired with the standard volume.
- Attached VDEV TGT Volume — Volume to which this source volume would be paired.
- RDF Type — RDF configuration.

Viewing SAVE volumes

This procedure explains how to view SAVE volumes.

Procedure
1. Select STORAGE > Volumes and click the Private tab.
2. Filter on the SAVE type.
3. To view the properties and controls, see Viewing private volumes on page 215.

Viewing SAVE volume details

This procedure explains how to view SAVE volume details.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Private tab.
3. Filter on the SAVE type.
4. Select a SAVE volume and click to open its Details view.

5. To view the properties, see Viewing private volume details on page 215.

Viewing storage resource pool information

This procedure explains how to view storage resource pool information.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes to open the Volumes list view.
3. Select the volume and click to open its Details view.
4. Click the number next to SRP to go to the Storage Resource Pool view for the volume.

The following properties display:
- Name — Volume name.
- SRP Name — Storage resource pool name.
- Allocated — Volume capacity allocated.
- Capacity — Total volume capacity.
- Allocated % — Percent of the volume used.

Viewing thin volumes

This procedure explains how to view thin volumes.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Virtual tab.
3. Filter on a thin volume type, such as TDEV.
4. To view the properties and controls, see Viewing virtual volumes on page 224.

Viewing thin volume details

This procedure explains how to view thin volume details.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Virtual tab.
3. Filter on a thin volume type, such as TDEV.
4. Select a thin volume and click to open its Details view.
5. To view the properties, see Viewing virtual volume details on page 225.

Viewing thin volume bound pool information

This procedure explains how to view thin volume bound pool information.

Procedure
1. Select the storage system.

2. Select STORAGE > Volumes and click one of the tabs.
3. Select the thin volume and click to open its Details view.
4. Click the number next to Bound Pool Info to open the Bound Pool Info view.

The following properties display:
- Name — Thin volume name.
- Pool Name — Name of the pool.
- Allocated % — Percentage of the pool allocated to the thin volume.
- Capacity (GB) — Capacity in GB.
- Allocated (GB) — Number of GB allocated from the pool for exclusive use by the thin volume.
- Subscription % — Ratio between the DATA volume pool's enabled capacity and the thin volume's subscribed capacity.
- Written (GB) — Number of allocated GB in the DATA volume pool that the thin volume has used.
- Shared Tracks — Whether tracks are shared between thin volumes.
- Persistent Allocation — Indicates persistent allocations: all, some, or none.

Viewing virtual volumes

This procedure explains how to view virtual volumes.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Virtual tab.

Use this list view to view and manage the volumes. Filter on a volume type.

The following properties display:
- Name — Assigned volume name.
- Type — Type of volume.
- Emulation — Emulation type for the volume.
- Capacity (GB) — Volume capacity in gigabytes.
- Status — Volume status.

The following controls are available:
- Viewing virtual volume details on page 225
- Create — Creating VDEV volumes on page 187
- Deleting volumes on page 188
- Set Volumes > Attribute — Setting volume attributes on page 195
- Set Volumes > Identifier — Setting volume identifiers on page 196
- Set Volumes > Status — Setting volume status on page 194
- Set Volumes > Replication QoS — QOS for replication on page 197

- Configuration > Change Volume Configuration — Changing volume configuration on page 190
- Configuration > Duplicate Volume — Duplicating volumes on page 188
- Configuration > Map — Mapping volumes on page 192
- Configuration > Unmap — Unmapping volumes on page 193
- RecoverPoint > Tag — Tagging and untagging volumes for RecoverPoint (storage group level) on page 472
- RecoverPoint > Untag — Tagging and untagging volumes for RecoverPoint (storage group level) on page 472
- Allocate/Free/Reclaim > Start — Managing thin pool allocations on page 244
- Allocate/Free/Reclaim > Stop — Managing thin pool allocations on page 244
- FAST > Bind — Binding/Unbinding/Rebinding thin volumes on page 257
- FAST > Unbind — Binding/Unbinding/Rebinding thin volumes on page 257
- FAST > Rebind — Binding/Unbinding/Rebinding thin volumes on page 257
- FAST > Pin — Pinning and unpinning volumes on page 173
- FAST > Unpin — Pinning and unpinning volumes on page 173
- Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945
- VLUN Migration — VLUN Migration dialog box on page 260
- Set Optimized Read Miss — Setting optimized read miss on page 193

Viewing virtual volume details

This procedure explains how to view virtual volume details.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Virtual tab.
3. Select a volume and click to open its Details view.

The Details view allows you to view and manage a volume.

The following properties display:
- Bound Pool Info — Number of bound pools.
- Other Pool Info — Number of other pools.
- Masking Info — Masking information.
- Storage Groups — Number of storage groups.
- FBA Front End Paths — Number of FBA front end paths.
- RDF Info — RDF information.
- Volume Name — Volume name.
- Physical Name — Physical name.
- Volume Identifier — Volume identifier.

- Type — Volume configuration.
- Encapsulated Volume — Whether the external volume is encapsulated. Relevant for external disks only.
- Encapsulated WWN — World Wide Name for the encapsulated volume. Relevant for external disks only.
- Encapsulated Device Flag — Encapsulated device flag.
- Encapsulated Device Array — Encapsulated device array.
- Encapsulated Device Name — Encapsulated device name.
- Status — Volume status.
- Reserved — Whether the volume is reserved.
- Capacity (GB) — Volume capacity in GB.
- Capacity (MB) — Volume capacity in MB.
- Capacity (CYL) — Volume capacity in cylinders.
- Emulation — Volume emulation.
- AS400 Gatekeeper — AS400 gatekeeper indication.
- Symmetrix ID — Symmetrix system on which the volume resides.
- Symmetrix Vol ID — Symmetrix volume name/number.
- HP Identifier Name — User-defined volume name (1-128 alphanumeric characters), applicable to HP-mapped devices. This value is mutually exclusive of the VMS ID.
- VMS Identifier Name — Numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID.
- Nice Name — Nice name generated by Symmetrix Enginuity.
- WWN — World Wide Name of the volume.
- External Identity WWN — External Identity World Wide Name of the volume.
- DG Name — Name of the device group in which the volume resides, if applicable.
- CG Name — Name of the consistency group in which the volume resides, if applicable.
- Attached BCV — Defines the attached BCV to be paired with the standard volume.
- Attached VDEV TGT Volume — Volume to which this source volume would be paired.
- RDF Type — RDF configuration.
- Geometry - Type — Method used to define the volume's geometry.
- Geometry - Number of Cylinders — Number of cylinders.
- Geometry - Sectors per Track — Number of sectors per track, as defined by the volume's geometry.
- Geometry - Tracks per Cylinder — Number of tracks per cylinder, as defined by the volume's geometry.
- Geometry - 512 Block Bytes — Number of 512-byte blocks, as defined by the volume's geometry.

- Geometry - Capacity (GB) — Geometry capacity in GB.
- Geometry - Limited — Indicates whether the volume is geometry limited.
- GCM — GCM indication.
- SSID — Subsystem ID.
- Capacity (Tracks) — Capacity in tracks.
- SA Status — Volume SA status.
- Host Access Mode — Host access mode.
- Pinned — Whether the volume is pinned.
- RecoverPoint Tagged — Indicates whether the volume is tagged for RecoverPoint.
- Service State — Service state.
- Defined Label Type — Type of user-defined label.
- Dynamic RDF Capability — RDF capability of the volume.
- Mirror Set Type — Mirror set for the volume and the volume characteristic of the mirror.
- Mirror Set DA Status — Volume status information for each member in the mirror set.
- Mirror Set Invalid Tracks — Number of invalid tracks for each mirror in the mirror set.
- Priority QoS — Priority value assigned to the volume. Valid values are 1 (highest) through 16 (lowest).
- Dynamic Cache Partition Name — Name of the cache partition.
- Compressed Size (GB) — Compressed size in GB.
- Compressed Ratio (%) — Compression ratio (%).
- Compressed Size Per Pool (GB) — Compressed size per pool in GB.
- XtremSW Cache Attached — Indicates whether XtremSW cache is attached to the volume.
- Optimized Read Miss — Cacheless read miss status.
- Persistent Allocation — Persistent allocation indication.

The Details view links you to views displaying objects contained in and associated with the virtual volume. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking the number next to Storage Groups opens a view listing the storage groups associated with the volume.

Viewing volume back end paths

This procedure explains how to view volume back end paths.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click one of the tabs.
3. Select a volume and click to open its Details view.
4. Click the number next to Back End Paths.

This view allows you to view the back end paths associated with the volume.
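Each row in this view describes one path/hyper on a back end director. When this data is exported for scripting, a common task is aggregating it, for example totaling hyper capacity per director. The following is a minimal sketch; the dictionary keys and sample director names are assumptions mirroring this view's columns, not a format Unisphere itself emits:

```python
from collections import defaultdict

def hyper_capacity_by_director(paths):
    """Sum hyper capacity (GB) per director identifier across exported
    back end path rows. Key names are illustrative assumptions."""
    totals = defaultdict(float)
    for row in paths:
        totals[row["director_identifier"]] += row["hyper_capacity_gb"]
    return dict(totals)

# Hypothetical exported rows for illustration only.
paths = [
    {"director_identifier": "DF-1C", "hyper_capacity_gb": 8.0},
    {"director_identifier": "DF-1C", "hyper_capacity_gb": 8.0},
    {"director_identifier": "DF-2C", "hyper_capacity_gb": 16.0},
]
print(hyper_capacity_by_director(paths))
# {'DF-1C': 16.0, 'DF-2C': 16.0}
```

The same pattern applies to any of the per-row columns, such as counting hypers per disk group.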

The following properties display:
- Name — Name.
- Director Identifier — Director identifier.
- DA Interface — DA interface ID.
- SCSI ID — Disk SCSI ID.
- DA Volume Number — DA volume ID.
- Hyper Number — Hyper ID.
- Hyper Capacity — Hyper capacity.
- Member Status — Hyper member status.
- Member Number — Hyper member number.
- Disk Group Pretty Name — Name of the disk group.
- Disk Capacity — Capacity of the disk.
- Spindle — Spindle ID.

Viewing volume FBA front end paths

This procedure explains how to view volume FBA front end paths.

Procedure
1. Select the storage system.
2. Select a volume and click to open its Details view.
3. Click the number next to FBA Front End Paths to open the FBA Front End Paths list view.

Use the FBA Front End Paths list view to view the FBA front end paths associated with a volume.

The following properties display:
- Director Identifier — Director name.
- Port — Port number.
- VBus — VBus number.
- TID — Disk SCSI ID.
- Symm LUN — Symmetrix LUN number.
- PDevName — Physical device name.
- Director Port Status — Director port status.

Viewing volume RDF information

This procedure explains how to view volume RDF information.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click one of the tabs.

3. Select a volume and click to open its Details view.
4. Click the number next to RDF Info to open the RDF Info list view.

The following properties display:
- Remote SymmID — Remote Symmetrix serial ID.
- RDev — Symmetrix volume name.
- RDev Config — Volume configuration.
- Capacity (GB) — Volume capacity.
- RDFG — RDF group containing the volume.
- Pair State — State of the pair of which the volume is part.
- RDF Feature — SRDF copy type.
- CSRMT — RDFA flags:
  (C)onsistency: X = Enabled, . = Disabled, - = N/A
  (S)tatus: A = Active, I = Inactive, - = N/A
  (R)DFA Mode: S = Single-session, M = MSC, - = N/A
  (M)sc Cleanup: C = MSC Cleanup required, - = N/A
  (T)ransmit Idle: X = Enabled, . = Disabled, - = N/A
  (D)SE Status: A = Active, I = Inactive, - = N/A
  DSE (A)utostart: X = Enabled, . = Disabled, - = N/A
- R1 Inv — Number of invalid tracks on the R1 volume.
- R2 Inv — Number of invalid tracks on the R2 volume.
- RA Status — Status of the RDF director.
- Link Status — Indicates the link state.
- RDF State — Volume RDF state.
- Remote RDF State — Remote volume RDF state.
- RDF Status — Volume RDF status.
- Device Config RDFA WPACE Exempt — Indicates whether write pacing exemption capability is enabled or disabled.
- Effective RDFA WPACE Exempt — Indicates whether effective write pacing exemption capability is enabled or disabled.

Select Volume Range dialog box

Use this dialog box to select the range of volumes for the operation.

The following properties display:
- Volume Range — Range of volumes.
- CU Image Number — CU image containing the volumes.
- SSID — Subsystem ID assigned to the volumes.
- Base Address — Base addresses assigned to the volumes.

- Aliases — Aliases assigned to the volumes.

Advanced Options dialog

Refer to the parent help topic for information on the Advanced Options dialog.

Viewing disk groups

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups to open the Disk Groups list view.

Use this list view to view and manage disk groups.

The following properties display:
- Name — Name of the disk group; the format is: number -- name.
- Technology — Technology type for the disk group.
- Disk Location — Indicates whether the disk is internal or external.
- Disks — Number of disks in the disk group.
- Used Capacity (%) — Percent total used capacity of the disk group, displayed in bar graph format and as the actual percent number.
- Total Capacity (GB) — Total capacity in GB of the disk group.

The following controls are available:
- Viewing disk group details on page 230
- Rename — Change the name of a disk group.
- Deleting disk groups on page 237

Viewing disk group details

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups to open the Disk Groups list view.
3. Select the disk group from the list, click , and do one of the following:
- Click the Details tab.
  The following properties display:
  - Name — Name of the disk group; the format is: number -- name.
  - Technology — Technology type for the disk group.
  - Used Capacity (GB) — Used capacity.
  - Free Capacity (GB) — Free capacity.
  - Total Capacity (GB) — Total capacity.
  - Speed (RPM) — Speed of the disks in the group.
  - Form Factor — Form factor.

     - Disk Location — Whether the disks in the group are internal or external.
     - Number of Disks — Number of disks.
     - Number of Spare Disks — Number of spare disks.
   - Click the Disk Group Usage Report tab.
     A visual representation of used capacity as a percentage of overall capacity is displayed.

Viewing disks in disk group

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups to open the Disk Groups list view.
3. Select the disk group from the list, click , and click the Details tab.
4. Click the number next to Number of Disks to open the Disks list view.
   Use the Disks list view to view and manage data disks in the disk group.
   The following properties display:
   - Spindle — Disk spindle ID.
   - Dir — Disk director ID.
   - Int — DA SCSI path.
   - TID — Disk SCSI ID.
   - Vendor ID — Disk vendor.
   - Product Revision — Product version number.
   - Hypers — Number of disk hypers.
   - Total Capacity (GB) — Disk capacity.
   - Used (%) — Percent of disk capacity used.
   The following controls are available:
   - Viewing disk details on page 231
   - Remove Disk — Removing disks from disk groups on page 236

Viewing disk details

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups to open the Disk Groups list view.
3. Select the disk group, click , and click the Details tab to open its Details view.
4. Click the number next to Number of Disks to open the Disks for Disk Group list view.
5. Select the disk from the list, click , and do one of the following:

   - Click the Details tab.
     The following properties display:
     - Spindle — Spindle ID.
     - Disk ID — Disk identification.
     - Int — DA SCSI path.
     - TID — Disk SCSI ID.
     - External WWN — World Wide Name of the external LUN.
     - Disk Group — Disk group number.
     - Disk Location — Location of the disk.
     - Disk Technology — Disk technology type.
     - Speed (RPM) — Physical disk revolutions per minute.
     - Form Factor — Form factor of the disk.
     - Vendor ID — Disk vendor ID.
     - Product ID — Product ID.
     - Product Revision — Product revision number.
     - Serial ID — Serial number.
     - Disk Blocks — Number of disk blocks.
     - Actual Disk Blocks — Actual number of disk blocks.
     - Block Size — Size of each block.
     - Total Capacity (GB) — Usable disk capacity in gigabytes.
     - Free Capacity (GB) — Free disk capacity in gigabytes.
     - Actual Capacity (GB) — Actual disk capacity in gigabytes.
     - Used (%) — Percentage of used disk capacity to the total disk capacity.
     - Rated Disk Capacity (GB) — Rated capacity of the disk.
     - Spare Disk — Indicates if the disk is a spare.
     - Encapsulated — If the disk is external, indicates if it is encapsulated (True) or not (False).
     - Disk Service State — Indicates the disk service state.
     The Details panel provides links to views for objects contained in or associated with the disk group. Each link is followed by the name of the group, or by a number indicating the number of objects in the corresponding view. For example, clicking Number of Hypers opens the view listing the hypers contained in the disk.
   - Click the Disk Group Usage Report tab.
     A visual representation of used capacity as a percentage of overall capacity is displayed.

Viewing disk hyper volumes

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups to open the Disk Groups list view.
3. Select the disk group, click , and click the Details tab.

4. Click the number next to Number of Disks to open the Disks for Disk Group list view.
5. Select a disk, click , and click the Details tab.
6. Click the number next to Number of Hypers to open the Hypers for Disk list view.
   Use the Hypers for Disk list view to view the hyper volumes in a disk group.
   The following properties display:
   - Hyper — Volume hyper number.
   - Volumes — Disk adapter logical volume number (1 - n).
   - Hyper Type — Hyper type.
   - Mirror — Mirror position of the hyper.
   - Capacity (GB/Cyl) — Disk capacity in GB/cylinders.
   - Symm Volume — Symmetrix volume number.
   - Hyper Status — Hyper status.
   - Emulation — Emulation of the hyper volume.
   The following control is available:
   - Viewing hyper volume details on page 233

Viewing hyper volume details

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups to open the Disk Groups list view.
3. Select the disk group, click , and click the Details tab.
4. Click the number next to Number of Disks to open the Disks for Disk Group list view.
5. Select a disk, click , and click the Details tab.
6. Click the number next to Number of Hypers to open the Hypers for Disk list view.
7. Select a hyper volume and click to open its Details view.
8. Use the hyper volume Details view to view the properties of a hyper volume.
   The following properties display:
   - Hyper Number — Volume hyper number.
   - DA Volume — Disk adapter logical volume number (1 - n).
   - Hyper Type — Hyper type.
   - Mirror — Mirror position of the hyper.
   - Capacity (GB/Cyl) — Disk capacity in GB/cylinders.
   - Symm Volume — Symmetrix volume number.

   - Raid Group — RAID-S group number.
   - Original Mirror — Mirror position of the hyper.
   - Hyper Status — Hyper status.
   - Emulation — Emulation of the hyper volume.

Viewing list for a hyper type

Depending on your selection, a list is displayed for one of the following:
- MetaHypers
- Raid5Hyper
- Raid5MetaHyper
- Raid6Hyper
- Raid6MetaHyper
- Hypers
- TWM
- MetaTWM
- MetaMembers

Viewing volumes for disk

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups to open the Disk Groups list view.
3. Select the disk group, click , and click the Details tab.
4. Click the number next to Number of Disks to open the Disks for Disk Group list view.
5. Select a disk and click to open the details view for the disk.
6. Click the number next to Number of Volumes to open the Volumes view.

Viewing paths for disks

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups to open the Disk Groups list view.
3. Select the disk group, click , and click the Details tab.
4. Click the number next to Disks to open the Disks for Disk Group list view.
5. Select a disk, click , and click the Details tab.
6. Click the number next to Number of Paths to open the Paths for Disk list view.
   Use the Paths for Disk list view to view the paths for a disk.
   The following properties display:

   - Dir — Director identifier. Possible values are a director number or the word "Multi," which indicates that the hyper can see multiple directors.
   - Port — Director port number.
   - Remote Port WWN — World Wide Name of the port.
   - Active Path — Whether the active path is being used (True/False).
   - Failover — Whether failover is being used (True/False).

Viewing spare disks in disk group

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups to open the Disk Groups list view.
3. Select the disk group and click .
4. In the Details panel, click the number next to Number of Spare Disks to open the Spare Disks for Disk Group view.
   Use the Spare Disks for Disk Group view to view the spare disks in a disk group.
   The following properties display:
   - Dir — Director ID.
   - Int — DA SCSI path.
   - TID — Disk SCSI ID.
   - Hypers — Number of hypers.
   - Disk Group — Number of the disk group that contains the disk.
   - Speed (RPM) — Physical disk revolutions per minute.
   - Total Capacity (GB) — Total disk capacity in GB.
   - Failed Dir — Failed disk director ID.
   - Failed DA Number — Failed disk DA number.
   - Failed DA Int — Failed disk DA SCSI path.
   - Failed Disk SCSI ID — Failed disk SCSI ID.
   - Failed Spindle ID — Failed disk spindle ID.
   The following control is available:
   - Viewing spare disk details on page 235

Viewing spare disk details

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups to open the Disk Groups list view.
3. Select the disk group and click .

4. In the Details panel, click the number next to Number of Spare Disks to open the Spare Disks for Disk Group view.
5. Select a disk and click to open its Details view.
   Use the disk Details view to view the properties of a spare disk.
   The following properties display:
   - Spindle — Spindle ID
   - Dir — Director ID
   - Int — DA SCSI path
   - TID — Disk SCSI ID
   - External WWN — External World Wide Name
   - Disk Group — Disk group number
   - Disk Location — Location of the disk
   - Disk Technology — Disk technology type
   - Speed (RPM) — Physical disk revolutions per minute
   - Form Factor — Form factor
   - Vendor ID — Disk vendor ID
   - Product ID — Product ID
   - Product Revision — Product revision number
   - Serial ID — Serial number
   - Disk Blocks — Number of disk blocks
   - Actual Disk Blocks — Actual number of disk blocks
   - Block Size — Size of each disk block
   - Total Capacity (GB) — Total disk capacity in gigabytes
   - Free Capacity (GB) — Free disk capacity in gigabytes
   - Actual Capacity (GB) — Actual disk capacity in gigabytes
   - Used Capacity (GB) — Used disk capacity in gigabytes
   - Used (%) — Percentage of used disk capacity to the total disk capacity
   - Rated Disk Capacity (GB) — Rated disk capacity in gigabytes
   - Spare Disk — Indication of whether the disk is a spare disk
   - Encapsulated — Indication of whether the disk is encapsulated
   - Disk Service State — Disk service state

Removing disks from disk groups

Note: Only empty external disk groups can be deleted.

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups.

3. Select the disk group from the list and click to open its Details view.
4. From the Details panel, click the number next to Number of Disks to open the Disks view.
5. Select a disk from the list and click Remove Disk.
6. Click OK.

Deleting disk groups

Before you begin
Only empty external disk groups can be deleted.

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups to open the Disk Groups list view.
3. Select a disk group and click Delete.
4. Click OK.

Renaming disk groups

Procedure
1. Select the storage system.
2. Select STORAGE > Disk Groups.
3. Select the disk group and click Rename.
4. Type the new disk group name and click OK.

Creating DATA volumes

This procedure explains how to create DATA volumes on storage systems running Enginuity version 5876.

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the thin pool and click to open its Details view.
4. Click the number next to Number of Data Volumes.
5. Click Create Volumes.
6. Select DATA as the Configuration.
7. Select the Disk Technology.
   External disk technology is an option if the storage system has FTS (Federated Tiered Storage) enabled and available external storage.
8. Select the Emulation type.
9. Select the RAID Protection level.
10. Specify the capacity by typing the Number of Volumes and selecting a Volume Capacity. You can also manually enter a volume capacity.

11. To add the new volumes to a specific thin pool, select one from Add to Pool. The pools listed are filtered on technology, emulation, and protection type.
12. Click Advanced Options to continue setting the advanced options, as described next.
    The advanced options presented depend on the value selected for Add to Pool. Complete any of the following steps that are appropriate:
    a. Select the Disk Group (number and name) in which to create the volumes. The list of disk groups is already filtered based on the technology type selected above.
    b. To enable the new volumes in the pool, select Enable volume in pool.
    c. To rebalance allocated capacity across all the DATA volumes in the pool, select Start Write Balancing.
    d. Click APPLY.
13. Do one of the following:
    - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
    - Expand Add to Job List, and click Run Now to perform the operation now.

Activating and deactivating DATA volumes

Before you begin
You can only activate deactivated DATA volumes with used tracks.

This procedure explains how to activate or deactivate DATA volumes in a thin pool. Activating volumes is essentially the same as enabling volumes; however, the activate operation is not allowed if draining is in progress. After activation, the volumes go into the Enabled state.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Click the Private tab.
4. Filter on DATA type.
5. Do one of the following:
   - Select one or more volumes, click , and select Set Volumes > Activate.
   - Select one or more volumes, click , and select Set Volumes > Deactivate.
6. Click OK.

Enabling and disabling DATA volumes

Before you begin
To disable a volume, all sessions must be terminated, and the volume must have no used tracks.

This procedure explains how to enable or disable DATA volumes for use in a pool. The volumes in a pool do not all have to be in the same state (enabled or disabled). If all

the volumes in a pool are disabled, then the pool is disabled. If at least one volume in a pool is enabled, then the pool is enabled.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Click the Private tab.
4. Filter on DATA type.
5. Do one of the following:
   - Select one or more volumes, click , and select Set Volumes > Enable.
   - Select one or more volumes, click , and select Set Volumes > Disable.
6. Click OK.

Start and stop draining DATA volumes

This procedure explains how to start or stop draining DATA volumes.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Click the Private tab.
4. Filter on DATA type.
5. Do one of the following:
   - Select one or more volumes, click , and select Draining > Start.
   - Select one or more volumes, click , and select Draining > Stop.
6. Click OK.

Viewing DATA volumes

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Private tab.
3. Filter on DATA type.
4. To view the properties and controls, see Viewing private volumes on page 215.

Viewing DATA volume details

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click the Private tab.

3. Filter on DATA type.
4. Select a DATA volume and click to open its Details view.
5. To view the properties, see Viewing private volume details on page 215.

Creating thin pools

When creating thin pools, Unisphere works on a best-effort basis, meaning that it attempts to satisfy as much as possible of the requested pool from existing DATA volumes, and then creates the volumes necessary to meet any shortfall.

Before you begin:
Thin pools contain DATA volumes of the same emulation and the same configuration. When creating thin pools, Unisphere attempts to instill best practices in the creation process by updating the default Protection level according to the selected Disk Technology:

Technology   Default protection level
EFD          RAID5(3+1)
FC           2-Way Mirror
SATA         RAID6(6+2)

To create a thin pool:

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Click Create to open the Create Thin Pool dialog box.
   When this dialog box first opens, the chart displays the configured and unconfigured space on the selected storage system. Once you select a disk technology later in this procedure, and therefore a disk group, the chart displays the configured and unconfigured space of the selected group.
4. Type the Thin Pool Name.
   Thin pool names can contain up to 12 alphanumeric characters. The only special characters allowed are the hyphen (-) and the underscore (_); however, the name cannot start or end with a hyphen or an underscore.
5. Select the Disk Technology on which the pool will reside.
6. Select the RAID Protection level for the DATA volumes to use in the pool.
7. Select an Emulation type for the pool.
8. Specify the number of volumes, volume capacity, and capacity unit.
9. Click Advanced Options — see Creating or Expanding or Modifying thin pools on page 255.
10. Verify your selections in the Create Thin Pool - Summary page, and do one of the following:
    - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
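The naming rule in step 4 (up to 12 alphanumeric characters plus hyphen and underscore, neither of which may begin or end the name) can be captured in a small validator. This is an illustrative sketch, not part of the product:

```python
import re

# Hypothetical validator for the thin pool naming rule described above:
# 1-12 characters, alphanumerics plus '-' and '_', and the name may not
# start or end with '-' or '_'.
_POOL_NAME = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9_-]{0,10}[A-Za-z0-9])?$")

def is_valid_pool_name(name: str) -> bool:
    return bool(_POOL_NAME.match(name))
```

For example, "my-pool_1" passes, while "_pool" and "pool-" are rejected because the name starts or ends with a special character.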

    - Expand Add to Job List and click Run Now to create the pool now.

Expanding thin pools

Before you begin
Unisphere supports best practices, which state that volumes from different drive technologies should not be mixed in the same thin pool. To this end, Unisphere will only expand a thin pool with volumes from the same disk group as the volumes already in the pool. This is an important distinction from Solutions Enabler, which does not impose this restriction.

Expanding thin pools refers to the process of increasing the amount of pool storage accessible to a thin volume by either adding a predefined capacity to the pool, or by increasing the pool's capacity by a percentage.

To expand a thin pool:

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the thin pool and click Expand to open the Expand Thin Pool dialog box.
   The chart on this dialog box displays the configured and unconfigured space of the disk group containing the pool's DATA volumes.
4. Select how to expand the pool, either by Capacity or Percentage:
   - Capacity — The Volume Capacity field defaults to the first data volume size in the pool. All volume sizes contained in the pool are available. Type the Extra Pool Capacity and select the unit of capacity.
   - Percentage — Type an amount in the Percentage Increase field.
5. Click Advanced Options — see Creating or Expanding or Modifying thin pools on page 255.
6. Do one of the following:
   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   - Expand Add to Job List and click Run Now to expand the pool now.

Draining thin pools

This procedure explains how to rebalance data across all the DATA volumes in a thin pool. This procedure is typically performed after expanding a thin pool.
Before you begin:
- The drain operation is not supported with any ongoing replication operation.
- You can only drain deactivated DATA volumes. For instructions, refer to Activating and deactivating DATA volumes on page 238.
- The drain must not cause the enabled volumes to end up with greater than 90% utilization in the pool. To calculate this, Unisphere adds the total used tracks on the enabled volumes and the total used tracks on the volumes that will be drained, and divides this sum by the total number of tracks on all the enabled volumes. If the result is greater than 90%, the drain request is blocked.
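The 90% check above is simple arithmetic; here is an illustrative sketch of the rule with hypothetical track counts (the function name is not a Unisphere API):

```python
# Illustrative sketch of the 90% drain-utilization rule described above.
# All arguments are track counts; the names are hypothetical.
def drain_allowed(used_on_enabled: int, used_on_draining: int,
                  total_on_enabled: int, limit: float = 0.90) -> bool:
    """Block the drain if (used enabled + used draining) / total enabled > limit."""
    return (used_on_enabled + used_on_draining) / total_on_enabled <= limit

# Example: 60,000 + 25,000 used tracks against 100,000 enabled tracks is 85%,
# so the drain would be allowed; 80,000 + 15,000 would be 95% and blocked.
```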

- The number of volumes draining at any time is limited to 20% of the total of the number of volumes to drain (or draining) plus the number of enabled volumes. This limits the impact on the system.
- This feature is only supported on storage systems running Enginuity 5876 or higher.

To drain thin pools:

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the thin pool and click to open its Details view.
4. Click the number next to Number of Data Volumes to open the DATA Volumes list view.
5. Select one or more volumes, click , and select Start Draining.
6. Click OK.
   This puts the volumes in a Draining state.
7. Monitor the draining until it reaches an acceptable percentage. This requires you to refresh the view. If you do not monitor the draining, eventually all data is drained from the volumes and they go into a Disabled state.
8. When a volume reaches an acceptable level, select it, click , and select Stop Draining.
9. Click OK in the confirmation dialog.
   This puts the volume in an Enabled state.
10. If you are draining multiple devices, repeat steps 5 to 9 until all the volumes are drained to an acceptable percentage.

Starting and stopping thin pool write balancing

Before you begin
- You can only perform this procedure on an enabled thin pool with at least one thin volume bound to it.
- While write balancing is going on, all pool operations can still occur.
- Write balancing requires Enginuity 5876 or higher.

Write balancing thin pools refers to the process of rebalancing allocated capacity across all the DATA volumes in the pool. This procedure is typically performed after expanding a thin pool.

To write balance a thin pool:

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the thin pool, click , and click Start Write Balancing.

4. Click OK.
   This puts the pool in a Balancing state.
5. Monitor the balancing until it reaches an acceptable percentage.
6. Select the thin pool, click , and select Stop Write Balancing.
7. Click OK.

Deleting thin pools

Before you begin:
You can only delete empty thin pools. For instructions, refer to Adding or removing thin pool members on page 243.

To delete a thin pool:

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the thin pool and click Delete.
4. Click OK.

Adding or removing thin pool members

This procedure explains how to add or remove members from a thin pool.

Before you begin:
- The storage system must be running Enginuity 5876.
- Before you can remove a thin pool member (data volume), you must first disable it.
- Unisphere supports best practices, which state that volumes from different drive technologies should not be mixed in the same thin pool. To this end, the Add Volumes to Thin Pool dialog box will only allow you to add volumes from the same disk group as the volumes already in the pool. This is an important distinction from Solutions Enabler, which does not impose this restriction.

To add or remove thin pool members:

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the thin pool and click to open the thin pool's details view.
4. Click the number next to Number of Data Volumes to open the Data Volumes view.
5. Click Add Volumes to Pool to open the Add Volumes to Thin Pool wizard.
   a. Locate the volumes by selecting/typing values for any number of the following criteria:
      - Capacity equal to — Filters the list for volumes with a specific capacity.

      - Volume ID — Filters the list for a volume with a specific ID.
      - Volume Identifier Name — Filters the list for the specified volume name.
      - Volume Configuration — Filters the list for the specified configuration.
      - Emulation — Filters the list for the specified emulation.
   b. Click NEXT.
   c. In the Available Volumes table, select the volumes.
   d. To remove a volume, deselect one or more of the previously selected volumes.
   e. Click OK.

Enabling and disabling thin pool members

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the thin pool and click to open the thin pool's details view.
4. Click the number next to Number of Data Volumes to open the DATA Volumes view.
5. Do one of the following:
   - To enable members, select them and click Enable.
   - To disable members, select them and click Disable.
6. Click OK.

Managing thin pool allocations

Before you begin
- You can only allocate thin pool capacity to bound thin volumes.
- This procedure explains how to perform this operation from the Volumes view. You can also perform this procedure from storage group views. Depending on where you are performing this procedure, some of the following steps may not apply.

The following describes how to start and stop allocating thin pool capacity from the Volumes view.

Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Select the volume type by selecting a tab.
4. Do one of the following:
   - To start thin pool allocation:
     - Select one or more volumes, click , and select Allocate/Free/Reclaim > Start.

     - Select Allocate Volumes, Free Volumes, or Reclaim Volumes.
       If you select Allocate Volumes, you can optionally specify to persist preallocated capacity on the thin volumes by selecting the Persist preallocated capacity through reclaim or copy option. Persistent allocations are unaffected by standard reclaim operations and any TimeFinder/Clone, TimeFinder/Snap, or SRDF copy operations.
       If you select Reclaim Volumes, you can optionally specify to reclaim persistent capacity by selecting the Reclaim persistent capacity option.
   - To stop thin pool allocation:
     - Select one or more volumes, click , and select Allocate/Free/Reclaim > Stop.
     - Select Stop Allocate.
5. Do one of the following:
   - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   - Expand Add to Job List, and click Run Now to perform the operation now.

Viewing thin pools

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. The Thin Pools list view allows you to view and manage thin pools on a storage system.
   The following properties display:
   - Name — Name of the thin pool.
   - Technology — Disk technology on which the pool resides.
   - Configuration — Configuration of the pool.
   - Emulation — Emulation of the pool.
   - Allocated Capacity — Percentage of the pool that is allocated.
   - Enabled Capacity (GB) — Capacity of the pool in GB.
   The following controls are available:
   - Viewing thin pool details on page 246
   - Create — Creating thin pools on page 240
   - Modify — Creating or Expanding or Modifying thin pools on page 255
   - Expand — Expanding thin pools on page 241
   - Delete — Deleting thin pools on page 243
   - Start Write Balancing — Starting and stopping thin pool write balancing on page 242
   - Stop Write Balancing — Starting and stopping thin pool write balancing on page 242

   - Bind — Binding/Unbinding/Rebinding thin volumes on page 257

Viewing thin pool details

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the pool and click to open its Details or Pool Usage Report panel.
   The Pool Usage Report panel provides a graphic representation of the thin pool's allocation as a percentage.
   The following properties display in the Details panel:
   - Name — Name of the pool. To rename a pool, type a new name over the existing one and click Apply. Thin pool names can contain up to 12 alphanumeric characters. The only special character allowed is the underscore (_); however, the name cannot start or end with an underscore.
   - RAID Protection — RAID protection level for the DATA volumes in the pool.
   - Type — The pool type.
   - Technology — Disk technology on which the pool resides.
   - Emulation — Emulation type for the pool.
   - Total Capacity (GB) — Total capacity of the pool.
   - Free Capacity (GB) — Free capacity in the pool.
   - Enabled Capacity (GB) — Sum of the capacity of all enabled DATA volumes in the pool.
   - Allocated Capacity (GB) — Pool capacity allocated to thin volumes.
   - Allocated % — Percent of the pool used.
   - Maximum Subscription Set — Enable oversubscription for the pool.
   - Maximum Subscription — Acceptable oversubscription ratio for the pool.
   - Subscription % — Current subscription percentage.
   - State — Pool state (Enabled, Disabled, Balancing).
   - Rebalance Variance — Target volume utilization variance for the rebalancing algorithm. The rebalancing algorithm attempts to level data distribution in a pool so that the percentage utilization of any volume in the pool is within the target variance of the percentage utilization of any other volume in the pool. Possible values range from 1 to 50%, with the default value being 1%. This field is only available when creating a thin pool on a Symmetrix system running Enginuity 5876 or higher.
   - Maximum Volumes per Rebalance Scan — Maximum number of volumes in the pool on which the rebalancing algorithm will concurrently operate. To change this number, type a new value over the existing one and click Apply. Possible values range from 1 to 1024, with the default value being 256. This field only applies to thin pools on a Symmetrix system running Enginuity 5876 or higher.
   - Pool Capacity Reserved — Whether a percentage of the capacity of the thin pool is reserved.

   - Pool Reserved Capacity — The percentage of the capacity of the thin pool that will be reserved for non-FAST activities.
   - Pool Egress Counter — Number of track groups freed from the thin pool as a result of a FAST related data movement.
   - Pool Ingress Counter — Number of track groups allocated in the thin pool as a result of a FAST related data movement.
   - Number of Bound Volumes — Number of thin volumes bound to the pool.
   - Number of Data Volumes — Number of data volumes bound to the pool.
   - Number of Enabled Volumes — Number of enabled DATA volumes in the pool.
   - Number of Disabled Volumes — Number of disabled DATA volumes in the pool.
   - Disk Location — Whether the disk group is internal to the storage system or an external storage system or storage device.
   You can view objects contained in and associated with the thin pool. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking the number next to Number of Data Volumes opens a view listing the DATA volumes in the pool.

Viewing bound volumes for a thin pool

Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the thin pool and click to open its Details view.
4. Click the number next to Number of Bound Volumes.
   The following properties display:
   - Name — Assigned volume name.
   - Emulation — Emulation type for the volume.
   - Configuration — Volume configuration.
   - Capacity (GB) — Volume capacity in gigabytes.
   - Allocated (GB) — Number of GBs from the pool allocated for exclusive use by the volume.
   - Written (GB) — Number of allocated GBs in the pool that the thin volume has actually used.
   - Shared Tracks — Whether the volume shares tracks with other thin volumes.
   The following controls display:
   - Create Volumes — Creating thin volumes on page 184
   - Bind — Binding/Unbinding/Rebinding thin volumes on page 257
   - Unbind — Binding/Unbinding/Rebinding thin volumes on page 257
   - Configuration > Change Volume Configuration — Changing volume configuration on page 190

   - Configuration > Map — Mapping volume operations on page 97
   - Configuration > Unmap — Mapping volume operations on page 97
   - Configuration > z/OS Map — Mapping volume operations on page 97
   - Configuration > z/OS Unmap — Mapping volume operations on page 97
   - Set Volume > Emulation — Setting volume emulation on page 96
   - Set Volume > Attributes — Setting volume attributes on page 195
   - Set Volume > Identifiers — Setting volume identifiers on page 196
   - Set Volume > Status — Setting volume status on page 194
   - FAST > Rebind — Binding/Unbinding/Rebinding thin volumes on page 257
   - FAST > Pin — Pinning and unpinning volumes on page 173
   - FAST > Unpin — Pinning and unpinning volumes on page 173
   - Allocate/Free/Reclaim > Start — Managing thin pool allocations on page 244
   - Allocate/Free/Reclaim > Stop — Managing thin pool allocations on page 244
   - RecoverPoint > Tag — Tagging and untagging volumes for RecoverPoint on page 472 (storage group level)
   - RecoverPoint > Untag — Tagging and untagging volumes for RecoverPoint on page 472 (storage group level)
   - Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945
   - VLUN Migration — VLUN Migration dialog box on page 260
5. Select a bound volume and click to open its Details view.
   The following properties display:
   - Name — Volume name.
   - Physical Name — Physical name.
   - Volume Identifier — Volume identifier.
   - Type — Volume configuration.
   - Encapsulated Volume — Indication of whether the volume is encapsulated.
   - Encapsulated WWN — Encapsulated World Wide Name.
   - Status — Volume status.
   - Reserved — Whether the volume is reserved.
   - Capacity (GB) — Volume capacity in GBs.
   - Capacity (MB) — Volume capacity in MBs.
   - Capacity (Cylinders) — Volume capacity in cylinders.
   - Emulation — Volume emulation.
   - Symmetrix ID — Symmetrix system on which the volume resides.
   - Symmetrix Volume ID — Symmetrix volume name/number.

- HP Identifier Name — User-defined volume name (1-128 alphanumeric characters), applicable to HP-mapped volumes. This value is mutually exclusive of the VMS ID.
- VMS Identifier Name — Numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID.
- Nice Name — Nice name generated by Symmetrix Enginuity.
- WWN — World Wide Name of the volume.
- DG Name — Name of the device group in which the volume resides, if applicable.
- CG Name — Name of the CG in which the volume resides, if applicable.
- Attached BCV — Defines the attached BCV to be paired with the standard volume.
- Attached VDEV TGT Volume — Volume to which this source volume would be paired.
- RDF Type — RDF configuration.
- Geometry - Type — Method used to define the volume's geometry.
- Geometry - Number of Cylinders — Number of cylinders.
- Geometry - Sectors per Track — Number of sectors per track, as defined by the volume's geometry.
- Geometry - Tracks per Cylinder — Number of tracks per cylinder, as defined by the volume's geometry.
- Geometry - 512 Block Bytes — Number of 512 blocks, as defined by the volume's geometry.
- Geometry - Capacity (GB) — Capacity.
- SSID — Subsystem ID.
- Capacity (Tracks) — Capacity in tracks.
- SA Status — Volume SA status.
- Host Access Mode — Host access mode.
- Pinned — Whether the volume is pinned.
- RecoverPoint Tagged — Indication whether the volume is tagged for RecoverPoint.
- Service State — Service state.
- Defined Label Type — Type of user-defined label.
- Dynamic RDF Capability — RDF capability of the volume.
- Mirror Set Type — Mirror set for the volume and the volume characteristic of the mirror.
- Mirror Set DA Status — Volume status information for each member in the mirror set.
- Mirror Set Invalid Tracks — Number of invalid tracks for each mirror in the mirror set.
- Priority QoS — Priority value assigned to the volume. Valid values are 1 (highest) through 16 (the lowest).
- Dynamic Cache Partition Name — Name of the cache partition.

- Optimized Read Miss — Optimized read miss.
- Compressed Size (GB) — Compressed size.
- Compressed Percentage — Compressed percentage.
- Compressed Size Per Pool — Compressed size per pool.

Viewing DATA volumes for a thin pool
Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.
3. Select the thin pool and click Details to open its view.
4. Click the number next to Number of Data Volumes.
The following properties display:
- Name — Name of the DATA volume.
- Emulation — Volume emulation.
- Configuration — Volume configuration.
- Used (%) — Percent of the volume used.
- Used (GB) — Space used.
- Free — Free space on the volume.
- Status — Volume status.
- Session Status — Session status (Active or Inactive).
The following controls are available:
- Viewing details on DATA volumes in thin pools on page 250
- Create Volumes — Creating DATA volumes on page 179
- Add Volumes to Pool — Adding or removing thin pool members on page 243
- Remove — Adding or removing thin pool members on page 243
- Enable — Enabling and disabling thin pool members on page 244
- Disable — Enabling and disabling thin pool members on page 244
- Activate — Activating and deactivating DATA volumes on page 238
- Deactivate — Activating and deactivating DATA volumes on page 238
- Start Draining — Draining thin pools on page 241
- Stop Draining — Draining thin pools on page 241

Viewing details on DATA volumes in thin pools
Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools to open the Thin Pools list view.

3. Select the thin pool and click Details to open its view.
4. Click the number next to Number of Data Volumes.
5. Select a DATA volume and click Details to open its view.
The following properties display:
- Name — Volume name.
- Type — Volume configuration.
- Encapsulated Volume — Indication whether the volume is encapsulated.
- Encapsulated WWN — Encapsulated World Wide Name.
- Status — Volume status.
- Reserved — Whether the volume is reserved.
- Capacity (GB) — Volume capacity in GBs.
- Capacity (MB) — Volume capacity in MBs.
- Capacity (Cylinder) — Volume capacity in cylinders.
- Emulation — Volume emulation.
- Symmetrix ID — Symmetrix system on which the volume resides.
- Symmetrix Volume ID — Symmetrix volume name/number.
- HP Identifier Name — User-defined volume name (1-128 alphanumeric characters), applicable to HP-mapped volumes. This value is mutually exclusive of the VMS ID.
- VMS Identifier Name — Numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID.
- Nice Name — Nice name generated by Symmetrix Enginuity.
- WWN — World Wide Name of the volume.
- DG Name — Name of the device group in which the volume resides, if applicable.
- CG Name — Name of the CG in which the volume resides, if applicable.
- Attached BCV — Defines the attached BCV to be paired with the standard volume.
- Attached VDEV TGT Volume — Volume to which this source volume would be paired.
- RDF Type — RDF configuration.
- Geometry - Type — Method used to define the volume's geometry.
- Geometry - Number of Cylinders — Number of cylinders.
- Geometry - Sectors per Track — Number of sectors per track, as defined by the volume's geometry.
- Geometry - Tracks per Cylinder — Number of tracks per cylinder, as defined by the volume's geometry.
- Geometry - 512 Block Bytes — Number of 512 blocks, as defined by the volume's geometry.
- Geometry - Capacity (GB) — Capacity.
- SSID — Subsystem ID.

- Capacity (Tracks) — Capacity in tracks.
- SA Status — Volume SA status.
- Host Access Mode — Host access mode.
- Pinned — Whether the volume is pinned.
- Service State — Service state.
- Defined Label Type — Type of user-defined label.
- Dynamic RDF Capability — RDF capability of the volume.
- Mirror Set Type — Mirror set for the volume and the volume characteristic of the mirror.
- Mirror Set DA Status — Volume status information for each member in the mirror set.
- Mirror Set Invalid Tracks — Number of invalid tracks for each mirror in the mirror set.
- Priority QoS — Priority value assigned to the volume. Valid values are 1 (highest) through 16 (the lowest).
- Dynamic Cache Partition Name — Name of the cache partition.
- XtremSWCache Attached — Whether the volume is attached to XtremSW cache.
- Compression Delta (GB) — Difference between volume allocation and uncompressed data.

Viewing other volumes for thin pools
Procedure
1. Select the storage system.
2. Select STORAGE > Thin Pools.
3. Select the pool and click Details.
4. Click the number next to Other Volumes.
5. Use the Other Volumes for Thin Pool list view to display and manage other volumes bound to a thin pool.
The following properties display:
- Name — Assigned volume name.
- Pool Name — Pool to which the volume is bound.
- % Allocated — Percentage of space allocated in the pool.
- Allocated Capacity — Amount of space allocated in the pool.
The following controls are available:
- Viewing thin volume details on page 223
- Create — Creating thin volumes on page 184
- Bind — Binding/Unbinding/Rebinding thin volumes on page 257
- Unbind — Binding/Unbinding/Rebinding thin volumes on page 257

- Untag for RecoverPoint — Tagging and untagging volumes for RecoverPoint (volume level) on page 472
- Tag for RecoverPoint — Tagging and untagging volumes for RecoverPoint (volume level) on page 472
- Unpin — Pinning and unpinning volumes on page 173
- Pin — Pinning and unpinning volumes on page 173
- Assign Symmetrix Priority — Assigning array priority to individual volumes on page 189
- Unmap — Unmapping volumes on page 193
- Map — Mapping volumes on page 192
- Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945
- Stop Allocate/Free/Reclaim — Managing thin pool allocations on page 244
- Set Volume Status — Setting volume status on page 194
- Set Volume Identifiers — Setting volume identifiers on page 196
- Set Volume Attributes — Setting volume attributes on page 195
- Change Volume Configuration — Changing volume configuration on page 190
- Rebind — Binding/Unbinding/Rebinding thin volumes on page 257

Managing thin pool capacity
Before you begin
- You can only reclaim thin pool capacity from bound thin volumes.
- Thin pool reclamation for individual thin volumes requires Enginuity 5876 or HYPERMAX OS 5977 or higher.
- This procedure explains how to perform this operation from the Volumes view. You can also perform this operation from storage group views. Depending on where you are performing this procedure, some of the following steps may not apply.
The following describes how to start and stop the process of freeing allocated thin pool capacity from the Volumes view. In addition, you can also perform this operation from the following views:
- Storage Groups (HYPERMAX OS 5977 or higher): STORAGE > Storage Groups
- Storage Groups (Enginuity 5876): STORAGE > Storage Groups
- Device Groups: DATA PROTECTION > Device Groups
- File Storage Groups: SYSTEM > eNAS > File Dashboard > File Storage Groups
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes.
3. Select the volume type by selecting a tab.
4. Do one of the following:
- To start freeing unused capacity:

  - Select one or more volumes and select Start Allocate/Free/Reclaim to open the Start Allocate/Free/Reclaim dialog box.
  - Select Free Volumes.
  - Optional: To free all allocations associated with the volumes, regardless of whether the data is written, select Free all allocations (written and unwritten). This option is only available on storage systems running HYPERMAX OS 5977 or higher.
  - To reserve the volumes, select Reserve. In addition, you can also type Comments and select an Expiration Date. The default values for Reserve and Comments are set in Symmetrix Preferences for volume reservations. If the volumes are not automatically reserved, you can optionally reserve them here.
- To stop freeing unused capacity:
  - Select one or more volumes and select Stop Allocate/Free/Reclaim to open the Stop Allocate/Free/Reclaim dialog box.
  - Select Free Volumes. In addition, on storage systems running Enginuity 5876, you can optionally specify to free tracks that are unwritten or zero-based, even if they are marked persistent. This option is only available on storage systems running Enginuity 5876.
  - To reserve the volumes, select Reserve. In addition, you can also type Comments and select an Expiration Date. The default values for Reserve and Comments are set in Symmetrix Preferences for volume reservations. If the volumes are not automatically reserved, you can optionally reserve them here.
5. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
For more information about thin pools and thin provisioning concepts, refer to the Solutions Enabler Symmetrix Array Management CLI Product Guide.

Allocate/Free/Reclaim dialogs
Use the dialogs to perform the following operations:
- Start allocating thin pool capacity for thin volumes, as described in Managing thin pool allocations on page 244.
- Start freeing unused allocated thin pool capacity, as described in Managing thin pool capacity on page 253.
- Start reclaiming unwritten tracks from thin volumes, as described in Managing space reclamation on page 514.
- Stop allocating thin pool capacity for thin volumes, as described in Managing thin pool allocations on page 244.

- Stop freeing unused allocated thin pool capacity, as described in Managing thin pool capacity on page 253.
- Stop reclaiming unwritten tracks from thin volumes, as described in Managing space reclamation on page 514.

Creating or Expanding or Modifying thin pools
Advanced Options when creating thin pools
- Select the Disk Group containing the DATA volumes to use in the pool.
- Type the Rebalancing Variance (1-50). This is the target volume utilization variance for the rebalancing algorithm. The rebalancing algorithm attempts to level data distribution in a pool so that the percentage utilization of any volume in the pool is within the target variance of the percentage utilization of any other volume in the pool. Possible values range from 1 to 50%, with the default value being 1%. This field is only available when creating a thin pool on a Symmetrix system running Enginuity 5876 or higher.
- Type the Maximum Rebalancing Scan Device Range (2-1024). This is the maximum number of volumes in the pool on which the rebalancing algorithm will concurrently operate. Possible values range from 2 to 1024, with the default value being 256. This field is only available when creating a thin pool on a Symmetrix system running Enginuity 5876 or higher.
- To specify the percentage of the pool's capacity to enable, select Enable Max Subscription (0-65534) and type a percentage.
- To specify the percentage of the capacity of the thin pool that will be reserved for non-FAST activities, select Enable Pool Reserved Capacity (1-80) and type a value. If the free space in the pool (as a percentage of pool-enabled capacity) falls below this value, the FAST controller does not move any more chunks into the pool. Specifying a value here will override the system-wide PRC value. Possible values range from 1 to 80.
- To enable the DATA volumes in the pool for use, select Enable DATA Volume for Use.
- To enable FAST VP compression for the volumes in a thin pool, select Enable VP Compression. This feature maximizes the storage capacity usage within the pool by compressing its volumes.
- Click APPLY.

Advanced Options when expanding thin pools
- Select Start Write Balancing.
- Click APPLY.

Modifying thin pools
- Select Volume Capacity, in GB.
- Type the Rebalancing Variance (1-50). This is the target volume utilization variance for the rebalancing algorithm. The rebalancing algorithm attempts to level data distribution in a pool so that the percentage utilization of any volume in the pool is within the target variance of the percentage utilization of any other volume in the pool. Possible values range from 1 to 50%, with the default value being 1%. This field is only available when creating a thin pool on a Symmetrix system running Enginuity 5876 or higher.
- Type the Maximum Rebalancing Scan Device Range (2-1024). This is the maximum number of volumes in the pool on which the rebalancing algorithm will

concurrently operate. Possible values range from 2 to 1024, with the default value being 256. This field is only available when creating a thin pool on a Symmetrix system running Enginuity 5876 or higher.
- To specify the percentage of the pool's capacity to enable, select Enable Max Subscription (0-65534) and type a percentage.
- To specify the percentage of the capacity of the thin pool that will be reserved for non-FAST activities, select Enable Pool Reserved Capacity (1-80) and type a value. If the free space in the pool (as a percentage of pool-enabled capacity) falls below this value, the FAST controller does not move any more chunks into the pool. Specifying a value here will override the system-wide PRC value. Possible values range from 1 to 80.
- To enable the DATA volumes in the pool for use, select Enable DATA Volume for Use.
- To enable FAST VP compression for the volumes in a thin pool, select Enable VP Compression. This feature maximizes the storage capacity usage within the pool by compressing its volumes.
- Click OK.

Creating thin volumes
This procedure explains how to create thin volumes on storage systems running Enginuity version 5876. For instructions on creating thin volumes on storage systems running HYPERMAX OS 5977 or higher, refer to Creating thin volumes on page 185.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes, click on the Virtual tab, and select Create.
3. Select the Configuration (TDEV or BCV + TDEV or Virtual Gatekeeper).
4. Select the Emulation type.
5. Specify the capacity by typing the Number of Volumes and selecting a Volume Capacity. You can also manually enter a volume capacity.
6. To bind the new volumes to a specific thin pool, select one from Bind to Pool. Only thin pools with enabled DATA volumes and matching emulation are available for binding (except AS/400, which will bind to an FBA pool).
7. Click Advanced Options to continue setting the advanced options.
Setting Advanced options:
a. To name the new volumes, select one of the following Volume Identifiers and type a Name:
- None — Allows the system to name the volumes (Default).
- Name Only — All volumes will have the same name.
- Name + VolumeID — All volumes will have the same name with a unique Symmetrix volume ID appended to them. When using this option, the maximum number of characters allowed is 50.
- Name + Append Number — All volumes will have the same name with a unique decimal suffix appended to them. The suffix will start with the value specified for the Append Number and increment by 1 for each additional volume. Valid Append Numbers must be from 0 to 1000000. When using this option, the maximum number of characters allowed is 50.
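As a rough illustration of the naming schemes above, the following sketch (hypothetical helper code, not part of Unisphere) generates the names that Name Only and Name + Append Number would produce; the field names and limits are taken from the dialog description, but the checks themselves are assumptions.

```python
def volume_names(base, count, scheme="name_only", append_number=0):
    """Illustrative sketch of the Volume Identifiers naming schemes.

    Hypothetical helper, not Unisphere code; base, count, and
    append_number model the dialog fields described above.
    """
    if scheme == "name_only":
        # Name Only: every volume gets the same name.
        return [base] * count
    if scheme == "name_append_number":
        # Name + Append Number: same name plus a decimal suffix that
        # starts at append_number and increments by 1 per volume.
        if not 0 <= append_number <= 1000000:
            raise ValueError("Append Number must be from 0 to 1000000")
        if len(base) > 50:
            # Assumed reading of the 50-character limit for this option.
            raise ValueError("name limited to 50 characters with a suffix")
        return [f"{base}{append_number + i}" for i in range(count)]
    raise ValueError(f"unknown scheme: {scheme}")
```

For example, `volume_names("Payroll_", 3, "name_append_number", 7)` yields `Payroll_7`, `Payroll_8`, `Payroll_9`.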

For more information on naming volumes, refer to Setting volume names on page 196.
b. To allocate full volume capacity, select the Allocate Full Volume Capacity option.
c. If you selected to allocate capacity in the previous step, you can mark the allocation as persistent by selecting Persist preallocated capacity through reclaim or copy. Persistent allocations are unaffected by standard reclaim operations and any TimeFinder/Clone, TimeFinder/Snap, or SRDF copy operations.
d. To assign Dynamic Capability to the volumes, select one of the following; otherwise, leave this field set to None:
- RDF1_Capable — Creates a dynamic R1 RDF volume.
- RDF2_Capable — Creates a dynamic R2 RDF volume.
- RDF1_OR_RDF2_Capable — Creates a dynamic R1 or R2 RDF volume.
e. If Auto Meta is enabled on the system, and if you are attempting to create volumes larger than the Minimum Meta Capacity, specify values for the following in the Define Meta panel:
- Member capacity (Cyl/MB/GB) — Size of the meta members to use when creating the meta volumes.
- Configuration (Striped/Concatenated) — Whether to create striped or concatenated meta volumes.
8. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
- Click Advanced Options to continue setting the advanced options, as described next.

Binding/Unbinding/Rebinding thin volumes
Before you begin
This procedure applies to storage systems running Enginuity OS 5876.
- Only one bind, unbind, or rebind operation can be performed on the same volume in any one config session.
- As an alternative to unmapping/unmasking a volume prior to unbinding, you can make the volume Not Ready.
- A thin volume cannot be unbound from a pool if any of the following are true:
  - Volume is mapped to a front-end port or is in the Ready state
  - Volume is masked by VCM
  - Volume has active snap sessions
  - Volume is held
  - Volume is a source or target of a clone (src or tgt) session
  - Volume is a metamember
  - Volume is part of an enabled RDF CG group

  - Volume is an RDF volume
- The following apply just to the rebind operation:
  - The thin volume has to be in the Bound state.
  - The new binding has to comply with the oversubscription ratio of the new pool. The entire size of the volume being rebound will be considered when calculating the oversubscription.
  - If volumes in a range, device group, or storage group are bound to different pools, then all the volumes will be rebound to the specified pool.
  - If a thin volume is part of a storage group that is under FAST management, the thin volume can only be bound to a pool in a tier that is part of the FAST policy associated with the storage group. Therefore, the volume can only be rebound to a pool that is within the policy.
  - If all the volumes that are being rebound are already bound to the destination pool, an error returns. If some volumes get bound to a pool different than what they are currently bound to, the operation will return a success status.
- For more information about thin pools and thin provisioning concepts, refer to the Solutions Enabler Array Management CLI Product Guide.
This procedure explains how to bind/unbind/rebind thin volumes to a thin pool of DATA volumes. You can bind/unbind/rebind thin volumes at the volume, pool, or storage group level.
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes and click on the Virtual tab.
3. Select the volume and do one of the following:
- Click FAST > Bind:
  a. Select the thin pool with which to bind the volume.
  b. Optional: Select the Allocate Full Volume Capacity option.
  c. To view additional information on the selected volumes, click Show selected volumes.
  d. If you selected to allocate capacity in the previous step, you can mark the allocation as persistent by selecting the Persist preallocated capacity through reclaim or copy option. Persistent allocations are unaffected by standard reclaim operations and any TimeFinder/Clone, TimeFinder/Snap, or SRDF copy operations.
  e. Click OK.
- Click FAST > Unbind and click OK.
- Click FAST > Rebind, specify the pool name, and click OK.

Understanding Virtual LUN Migration
Virtual LUN Migration (VLUN Migration) enables transparent, nondisruptive data mobility for both disk group provisioned and virtually provisioned storage system volumes between storage tiers and between RAID protection schemes. Virtual LUN can be used to populate newly added drives or move volumes between high performance and high capacity drives, thereby delivering tiered storage capabilities within a single storage system. Migrations are performed while providing constant data availability and protection.

Note: Virtual LUN migration requires Enginuity 5876.
Virtual LUN Migration performs tiered storage migration by moving data from one RAID group to another, or from one thin pool to another. It is also fully interoperable with all other storage system replication technologies such as SRDF, TimeFinder/Clone, TimeFinder/Snap, and Open Replicator.
RAID Virtual Architecture allows, for the purposes of migration, two distinct RAID groups, of different types or on different storage tiers, to be associated with a logical volume. In this way, Virtual LUN allows for the migration of data from one protection scheme to another, for example RAID 1 to RAID 5, without interruption to the host or application accessing data on the Symmetrix system volume.
Virtual LUN Migration can be used to migrate regular storage system volumes and metavolumes of any emulation — FBA, CKD, and IBM i series. Migrations can be performed between all drive types including high-performance enterprise Flash drives, Fibre Channel drives, and large capacity SATA drives. Migration sessions can be volume migrations to configured and unconfigured space, or migration of thin volumes to another thin pool.

Viewing VLUN migration sessions
Procedure
1. Select the storage system.
2. Select STORAGE > Vlun Migration to open the Virtual LUN Migration list view.
Use this view to display and manage migration sessions.
The following properties display:
- Name — Migration session name.
- Status — Migration session status.
- Invalid Tracks — Number of invalid tracks for the volume pair.
- Percentage — Percentage of the session completed.
The following controls are available:
- Viewing VLUN migration session details on page 259
- Terminate — Terminating a VLUN migration session on page 260

Viewing VLUN migration session details
Procedure
1. Select the storage system.
2. Select STORAGE > Vlun Migration to open the Virtual LUN Migration list view.
3. Select a session and click Details to open its view.
Use this view to display details on a migration session. This view contains two panels: Details and Source and Target Info.
The following properties display in the Details panel:

- Name — Migration session name.
- Status — Migration session status.
- Invalid Tracks — Number of invalid tracks for the volume pair.
- Percentage — Percentage of the session completed.
- Target Type — Type of target volume.
- Thin Pool — If the target type is thin, this is the name of the pool containing the thin volume.
The following properties display in the Source and Target Info panel:
- Source — Source volumes in the migration session.
- Target — Target volumes in the migration session.
- Target Volumes — Number of target volumes in the session.
- Invalid Tracks — Number of invalid tracks for the volume pairs in the session.
- Status — Migration session status for the pair.

Terminating a VLUN migration session
Procedure
1. Select the storage system.
2. Select STORAGE > Vlun Migration to open the Virtual LUN Migration list view.
3. Select the migration session and click Terminate.
4. Click OK.

VLUN Migration dialog box
From this dialog box you can perform volume migrations for regular or thin volumes. Thin volumes migrate from a source pool to a target pool, and regular volumes migrate to configured (existing) volumes or unconfigured (new) volumes. Some of the options in the dialog box will differ depending on whether you are migrating regular or thin volumes.
For volume-specific migration procedures, refer to the following:
- Migrating regular volumes on page 261
- Migrating thin volumes on page 262
- Migrating regular storage group volumes on page 261
- Migrating thin storage group volumes on page 262

Select VLUN Migration Session Target dialog box
Use this dialog box to select the target disk group (standard migration) or target thin pool (thin migration).
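When the target of a thin migration is another pool, the bind/rebind rules earlier in this section require the new binding to comply with the destination pool's oversubscription ratio, counting the entire size of each volume being moved. The following sketch illustrates that check; it is a hypothetical helper with assumed inputs, not Unisphere or Solutions Enabler code.

```python
def target_pool_accepts(enabled_gb, already_bound_gb, incoming_gb,
                        max_subscription_pct):
    """Assumed form of the oversubscription rule: capacity bound to the
    pool, including the full size of the incoming volumes, must stay
    within the pool's Max Subscription percentage of enabled capacity."""
    subscribed_pct = 100.0 * (already_bound_gb + incoming_gb) / enabled_gb
    return subscribed_pct <= max_subscription_pct
```

With a 1000 GB pool whose Max Subscription is 200%, moving a 200 GB volume into a pool with 1400 GB already bound (160% subscribed) would pass, while the same move against 1900 GB already bound (210% subscribed) would be rejected.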

Migrating regular storage group volumes
Before you begin
- Virtual LUN migration requires Enginuity 5876.
This procedure explains how to migrate all the regular volumes in a storage group.
To migrate regular storage group volumes:
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups view.
3. Select a storage group and select VLUN Migration.
4. Type a Migration Session Name. Migration session names must be less than 32 characters long and are case sensitive.
5. Select the RAID Protection type.
6. Select the Target type. Choose Create new volumes to migrate to unconfigured volumes or Use existing volumes to migrate to configured volumes.
7. Select whether to Pin Volumes so that they cannot be moved by any FAST automated process.
8. Click OK to create the migration session.

Migrating regular volumes
Before you begin
- Virtual LUN migration requires Enginuity 5876.
This procedure explains how to migrate individual regular volumes.
To migrate regular volumes:
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes to open the Volumes view.
3. Select the volume type by selecting a tab.
4. Select one or more volumes and select VLUN Migration.
5. Type a Migration session name. Migration session names must be less than 32 characters and are case sensitive.
6. Select the RAID Protection type.
7. Select the Target type. Choose Create new volumes to migrate to unconfigured volumes or Use existing volumes to migrate to configured volumes.
8. Select whether to Pin Volumes so that they cannot be moved by any FAST automated process.

9. Click OK.

Migrating thin storage group volumes
Before you begin
- Virtual LUN migration requires Enginuity 5876.
This procedure explains how to migrate all the thin volumes in a storage group.
To migrate thin storage group volumes:
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups to open the Storage Groups view.
3. Select a storage group and select VLUN Migration.
4. Type a Migration Session Name. The session name must be less than 32 characters long and is case sensitive.
5. Select a Target.
6. From the Migrate allocations from pool menu, select a pool from which to migrate allocations.
7. Select whether to Pin volumes so that they cannot be moved by any FAST automated process.
8. Click OK.

Migrating thin volumes
Before you begin
- Virtual LUN migration requires Enginuity 5876.
This procedure explains how to migrate individual thin volumes.
To migrate selected thin volumes:
Procedure
1. Select the storage system.
2. Select STORAGE > Volumes to open the Volumes view.
3. Select the volume type by selecting a tab.
4. Select one or more thin volumes and select VLUN Migration.
5. Type a Migration Session Name. The session name must be less than 32 characters long and is case sensitive.
6. Select a Target.
7. From the Migrate allocations from pool menu, select a pool from which to migrate allocations.
8. Select whether to Pin volumes so that they cannot be moved by any FAST automated process.
9. Click OK.
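All of the migration procedures above share the same session-name rule: names must be less than 32 characters and are case sensitive. A minimal sketch of that validation follows; this is a hypothetical helper for illustration, not Unisphere code.

```python
def valid_session_name(name):
    """Return True if a VLUN migration session name satisfies the stated
    rule: non-empty and less than 32 characters. Names are case
    sensitive, so 'ProdMove' and 'prodmove' would name different
    sessions."""
    return 0 < len(name) < 32
```

A 31-character name is the longest that passes; a 32-character name is rejected.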

Understanding Federated Tiered Storage
Federated Tiered Storage (FTS) allows you to attach external storage to a storage system. Attaching external storage allows you to use physical disk space on existing storage systems while gaining access to features such as local replication, remote replication, storage tiering, data management, and data migration.
For additional information on FTS, refer to the following documents:
- Symmetrix Federated Tiered Storage (FTS) Technical Notes
- Solutions Enabler Array Management CLI Product Guide
- Solutions Enabler TimeFinder Family CLI User Guide

Viewing external storage
The External Storage page allows you to view and manage external storage as well as validate paths and zoning. The first time you visit the External Storage page, Unisphere scans all of the volumes that are visible from the DX directors. At least four paths to external volumes are required, meaning that at least four ports belonging to a single DX dual-initiator pair must be configured. The best practice for maximum redundancy is achieved by using single initiator/multiple target zoning. This is accomplished by creating individual zones that contain each DX port and all external ports that the external volumes are available on.
To view external storage and validate paths and zoning:
Procedure
1. Select the storage system.
2. Select STORAGE > External Storage.
Use the tree view lists to filter the list of external LUNs by selecting various combinations of members within a tree list view (control ports, external ports, and external LUNs). You can select a single item, multiple items in consecutive rows, or multiple items in non-consecutive rows. As each selection is made, the filtered results table is updated to reflect the current combination of filter criteria.
Control Ports tree view list
The following properties display:
- Director — Storage system DX director.
- Port — Port number on the director.
External Ports tree view list

The following properties display:

• Port WWN —World Wide Name of the external port.
• Array ID —External storage ID.
• Dir:Port —Director: Port ID.
• Vendor —External storage system vendor.

External LUNs tree view list

The following properties display:

• LUN WWN —World Wide Name of the external LUN.

• Capacity (GB) —Capacity in GB of the external LUN.

Filtered LUNs table

The following properties display:

• External LUN WWN —World Wide Name of the external LUN.
• Vendor —Vendor name of the external LUN.
• Capacity (GB) —Capacity in GB of the external LUN.
• Volume —Volume ID on the external storage system.
• LUN —Displays 0 for storage systems.
• Virtualizing Status —The mode of operation that the eDisk is using. Possible values are External, Encapsulated, and None.
• Emulation —Emulation type of the external LUN.
• Disk Group —Disk group that contains the virtualized LUN.
• Spindle —Spindle ID of the external spindle.
• Service State —Availability of the external LUN. Possible values are Normal, Degraded, and Failed. Failed means that there are no network paths available to the external LUN. Degraded means that there are paths from only one of the supporting DX directors. Normal means that there are network paths available from both supporting DX directors.

The following controls are available:

• Virtualize — Virtualizing external LUNs on page 264 (Only displays for Enginuity 5876)
• Remove — Removing external LUNs on page 266 (Only displays for HYPERMAX OS 5977 or higher)

Virtualizing external LUNs

See Virtualizing external LUNs on page 265 for background information.

To virtualize external LUNs:

Procedure

1. Select the storage system.
2. Select STORAGE > External Storage.
3. (Optional) Click the Not Virtualized check box above the filtered LUNs list view to see a list of external LUNs that have not been virtualized.
4. Select the external LUNs that you want to virtualize.
5. Click Virtualize to open the Virtualize External LUNs dialog.
6. Select an import method from the Import Method drop-down menu. This determines the mode of operation for the eDisk.

   WARNING Selecting Raw Space - External Provisioning deletes any data that is currently on the external volume.

7. Select an external disk group from the Disk Group drop-down menu, or type a disk group name to create a new external disk group.

   Enginuity adds the virtualized external LUNs to the specified external disk group.
8. If you are using Virtual Provisioning, select an empty pool or an existing pool composed of externally provisioned data volumes from the Thin Pool drop-down menu. Type a pool name if you want to create a new pool.
9. Optional: Click Advanced Options to continue setting the advanced options, as described next.

   Setting Advanced options:

   a. To override the auto meta member capacity configured on the storage system, specify the unit of measurement by selecting GB, MB, or CYL from the drop-down menu, and then select a capacity from the Meta Member Capacity drop-down menu. The Total Enabled Pool Capacity in GB is displayed.
   b. If you want all of the created storage volumes to be the same capacity, click the Create Equal Meta Member Capacity check box. If you do not select this check box, the meta tail is smaller than the other volumes in the meta.
   c. If you want to specify a DX director for the path to the eDisk, select a director from the DX Director drop-down menu.
   d. Click OK.
10. Do one of the following:

   • Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   • Expand Add to Job List, and click Run Now to perform the operation now.

Virtualizing external LUNs

When you attach external storage to a storage system, FAST.X virtualizes an external storage system's SCSI logical units as disks called eDisks. eDisks have two modes of operation:

Encapsulation
   Allows you to preserve existing data on external Symmetrix systems and access it through storage volumes. These volumes are called encapsulated volumes.
External Provisioning
   Allows you to use external storage as raw capacity for new storage volumes. These volumes are called externally provisioned volumes. Existing data on the external volumes is deleted when they are externally provisioned.

The following restrictions apply to eDisks:

• Can only be unprotected volumes. The RAID protection scheme of eDisks is dependent on the external storage system.
• Cannot be AS400, CKD, or gatekeeper volumes.
• Cannot be used as VAULT, SFS, or ACLX volumes.

Encapsulation

Encapsulation has two modes of operation:

Encapsulation for disk group provisioning (DP encapsulation)
   The eDisk is encapsulated and exported from the storage system as disk group provisioned volumes.

Encapsulation for virtual provisioning (VP encapsulation)
   The eDisk is encapsulated and exported from the storage system as thin volumes.

In either case, Enginuity automatically creates the necessary volumes. If the eDisk is larger than the maximum volume capacity or the configured minimum auto meta capacity, Enginuity creates multiple volumes to account for the full capacity of the eDisk. These volumes are concatenated into a single concatenated meta volume to allow access to the complete volume of data available from the eDisk.

External provisioning

After you virtualize an eDisk for external provisioning, you can create volumes from the external disk group and present the storage to users. You can also use this storage to create a new FAST VP tier.

Note
If you use external provisioning, any data that is currently on the external volume is deleted.

Geometry of encapsulated volumes

Enginuity builds storage volumes based on the storage system cylinder size (fifteen 64 K tracks), so the capacity of storage volumes does not always match the raw capacity of the eDisk. If the capacity does not match, Enginuity sets a custom geometry on the encapsulated volume. For created meta volumes, Enginuity defines the geometry on the meta head, and only the last member can have a capacity that spans beyond the raw capacity of the eDisk. Encapsulated volumes that have a cylinder size larger than the reported user-defined geometry are considered geometry limited. For additional details and a list of Solutions Enabler restrictions that apply to geometry-limited volumes, refer to the Array Controls CLI Guide.

Removing external LUNs

Before you begin

• This feature requires HYPERMAX OS 5977 or higher.
• LUNs must be virtualized.
This procedure explains how to remove external LUNs from storage groups protected with ProtectPoint. Encapsulated LUNs whose volumes are in a storage group cannot be removed.

Procedure

1. Select the storage system.
2. Select STORAGE > External Storage.

   Opens the External Storage page.
3. Optional: Use the tree view lists to filter the list of external LUNs by selecting various combinations of members within a tree list view (control ports, external ports, and external LUNs). Select either a single item, multiple items in consecutive rows, or multiple items in non-consecutive rows. As each selection is made, the filtered results table is updated to reflect the current combination of filter criteria.

4. From the filtered results table, select one or more LUNs and click Remove.

   Opens the Remove External LUNs dialog box.
5. (Optional) To view details on the selected LUNs, click Show selected external LUNs.
6. Do one of the following:

   • Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   • Expand Add to Job List, and click Run Now.

Understanding storage templates

Storage templates are a reusable set of storage requirements that simplify storage management for virtual data centers by eliminating many of the repetitive tasks required to create and make storage available to hosts/applications. With this feature, Administrators and Storage Administrators create templates for their common provisioning tasks and then invoke them later when performing such things as:

• Creating or provisioning storage groups.

The templates created on a particular Unisphere server can be used across all the arrays on that particular server. Storage templates require a storage system running HYPERMAX OS 5977 or greater and storage groups.

A provisioning template contains configuration information and a performance reservation. A Workload Plan/Performance Reservation is the I/O profile (IOPS/MBPS, Skew Mixture) for a particular SL-WL type combination. By default, the reservation is used for suitability checks and for comparison to the current running load. The reservation expires after 14 days.

Creating storage templates

Before you begin

• Storage templates require HYPERMAX OS 5977 or greater.
• This feature is only available to a user with Admin or StorageAdmin permission.
Using the configuration and performance characteristics of an existing storage group as a starting point, you can create templates that will pre-populate fields in the provisioning wizard and create a more realistic performance reservation in your future provisioning requests.

To create a storage template:

Procedure

1. Select the storage system.
2. Select STORAGE > Templates to open the Provisioning Templates list view.
3. For second and subsequent templates, click Provision. Go to step 7.
4. To create the first template, click Select a Storage Group (this is part of the text under Get Started!) and select a storage group that has a service level (SL) assigned (FBA only).

5. Click the storage group's action menu and click the Compliance icon.
6. Click Save as a Template.
7. Review the default values and update as appropriate.

   Configuration information includes the Service Level, Workload Type, and Number and Size of Volumes to be saved as part of the template. By default the information will be populated based on the selected storage group, which can be modified as required before saving.

   Service Level: The drop-down will be populated with all the available service levels on the selected storage system and Array Default. By default, the service level of the selected storage group will be selected.

   Workload Type: The drop-down will be populated with the workload types available to the selected SL (including None). By default the workload type of the selected storage group will be selected.

   Volumes: By default the number of volumes will be the number of volumes in the selected storage group. This field can be left empty. If the Scale Limits switch is on (default state), a change made to volume size will scale the IOPS and MBPS chart and an appropriate host I/O limit will be calculated as a recommended value.

   Volume Size: By default the size of the volumes will be the size of the volumes in the selected storage group. If there are multiple volume sizes, the size of the first volume size encountered will be used. This field can be left empty. Volume capacity units available will be GB and TB. If the Scale Limits switch is on (default state), a change made to volume size will scale the IOPS and MBPS chart and an appropriate host I/O limit will be calculated as a recommended value. Read Interaction Between Charts and Data for more information.

   Expected RT: The expected average response time for the selected service level.

   The Host I/O Limit section will pull current host I/O limit information from the source storage group (standalone or child limit only; the parent limit is ignored).
If no host I/O limit is set, a host I/O limit in IOPS is recommended and the value is pre-populated.

Host I/O Limit combobox:

Options: IOPS, MBPS, Both, and None.

Initial values:

• If the source Storage Group has an IOPS limit set, IOPS is selected.
• If the source Storage Group has an MBPS limit set, MBPS will be selected.
• If the source Storage Group has an IOPS and an MBPS limit set, Both is selected.
• If the source Storage Group has no limit set, IOPS is selected.

Host I/O Limit input field(s) and associated recommendation label(s):

• IOPS selected in combobox:
   – Text input initial value:
      – If the source storage group has an IOPS limit set, the initial value is that limit.
      – If the source Storage Group has no IOPS limit set, the initial value is the recommended limit.

   – Restrictions:
      – Value must fall between 100-2,000,000.
      – Value must be a multiple of 100.

• MBPS selected in combobox:
   – Text input initial value:
      – If the source storage group has an MBPS limit set, the initial value is that limit.
      – If the source Storage Group has no MBPS limit set, the initial value is the recommended limit.
   – Restrictions:
      – Value must fall between 1-100,000.

• Both selected in combobox - the IOPS and MBPS information is displayed.

• None selected in combobox - no text field or recommendation is displayed.

Scale limits switch: The scale limits switch is enabled by default. If the switch is 'on', provisioning requests using this template will scale the host I/O limit recommendation(s) if the template's default capacity is overridden. If the switch is 'off', provisioning requests will use the exact Host I/O Limit value(s) that were saved with the template.

Dynamic Distribution:

• Options: Never, OnFailure, and Always.
• Initial value:
   – If the source Storage Group has a Dynamic Distribution value, the initial value is that value.
   – If the source Storage Group has no Dynamic Distribution set, the initial value is Never.

Performance Reservation

This is a 2 week expiring performance reservation or plan that will be used for comparison on the storage group details page and for suitability checks.

IOPS and MBPS: Similar to the IOPS and MBPS chart on the storage group details page with just the actual values and no plan (more details in the Workload Compliance Details spec). A Host I/O Limit line will be seen on the graph if a corresponding value has been set in the Host I/O Limit section.

Workload Skew: Similar to the workload skew chart on the storage group details page with just the actual values and no plan.

I/O Mixture: Similar to the workload mixture chart on the storage group details page with just the actual values and no plan.
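The limit restrictions and the scale-limits behavior above can be captured in a short helper. This is an illustrative sketch only — the function names are assumptions for the example and are not part of any Unisphere or Solutions Enabler API:

```python
def validate_host_io_limit(kind, value):
    """Check a host I/O limit against the documented restrictions."""
    if kind == "IOPS":
        # IOPS limits must fall between 100 and 2,000,000 and be a multiple of 100.
        return 100 <= value <= 2_000_000 and value % 100 == 0
    if kind == "MBPS":
        # MBPS limits must fall between 1 and 100,000.
        return 1 <= value <= 100_000
    raise ValueError("kind must be 'IOPS' or 'MBPS'")

def scaled_recommendation(saved_limit, template_gb, requested_gb, scale_limits_on):
    """With the scale-limits switch on, the recommended limit scales with the
    ratio of the requested capacity to the template's saved capacity."""
    if not scale_limits_on or requested_gb is None:
        return saved_limit
    return saved_limit * (requested_gb / template_gb)
```

For example, scaling a saved 1000 IOPS recommendation from a 500 GB template to a 750 GB request yields 1500 IOPS, matching the proportional behavior described for the scale-limits switch.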
Interaction Between Charts and Data

When Capacity is Modified (both Volume Count and Size are Populated)

Performance Reservation Section:

IOPS/MBPS: The IOPS and MBPS values are scaled to the new capacity to preserve the I/O density. For example, if the source storage group's total capacity was 10 x 50 GB volumes = 500 GB, and the Volumes field was changed from 10 to 15, the total capacity would be 750 GB. The 42 IOPS values and the 42 MBPS values in the charts would be multiplied by 1.5 to reflect the 50% increase in capacity.

I/O Mixture: No change.

Skew: No change.

If the Scale limits switch is 'on', the recommended limit will be recalculated according to the new capacity. So it would be two times the maximum 42-bucket value of IOPS and/or MBPS as calculated for the Performance Reservation section. The textbox will be auto-populated with the recommended limit. This new value will also be drawn in the appropriate IOPS and/or MBPS chart in the Performance Reservation section. If the Scale limits switch is 'off', the recommended limit will be recalculated the same way. The value in the textbox will NOT be overwritten in this case.

When Capacity is Modified (Volume Count and/or Size are Empty)

Performance Reservation Section:

IOPS/MBPS: Total capacity is required to calculate IOPS/MBPS. If we are missing count, size, or both, the IOPS and MBPS values will be calculated according to the current total capacity of the source storage group. So if (for example) the storage group is 5 x 100 GB devices, and Volume Count is nulled out, and device size is changed to 75 GB, IOPS and MBPS will be calculated assuming 500 GB. The assumed capacity will be displayed in the upper right hand corner of the charts.

I/O Mixture: No change.

Skew: No change.

Host I/O Limit: Total capacity is required to calculate the Host I/O Limit recommendation. If we are missing count, size, or both, the recommendation will be calculated according to the current total capacity of the source storage group. An information icon will be shown next to the recommendation. Hovering will give more information.

When Host I/O Limit Combobox and Textbox Values are Modified

The specified value will be drawn in the appropriate charts. If the value is updated, the chart is updated. If the combobox value is Both, you should have a red line corresponding to the specified value of IOPS on the IOPS chart and MBPS on the MBPS chart.
If None is selected, no host I/O limit line should show up on either the IOPS or MBPS chart. If IOPS is selected, there should be a red line on IOPS and none on MBPS. If MBPS is selected, there should be a red line on MBPS and none on IOPS.

8. Click SAVE.

   If there has not been at least one week of data collected for the selected storage group, a dialog is displayed (see Dialog displayed when there is less than one week's data collected on page 96).

Viewing storage templates

Before you begin

• Storage templates require HYPERMAX OS 5977 or greater.

The Provisioning Template list view allows you to view and manage provisioning templates.

Procedure

1. Select the storage system.
2. Select STORAGE > Templates to open the Provisioning Template list view.
3. Select a template card.

   The following properties are displayed: template service level, workload type, response time, capacity information (number of volumes, size, and headroom), as well as workload characteristics (I/O density, I/O size, writes, and skew).
4. Hover near the workload writes % to view a popup chart of the I/O mixture that the workload is running. Hovering over the sections of the pie chart reveals the percentages associated with each I/O type. To dismiss the popup chart, simply click anywhere off of the chart.
5. Hover near the workload skew % to view a popup chart of the actual workload skew. The actual workload skew is a load percentage over the percentage of capacity used in the workload. Hovering over the line on the chart will display the percentages for actual capacity and load score. To dismiss the popup chart, simply click anywhere off of the chart.
6. Click the icon on the top-right hand corner of the template card to view the back of the template card.

   The back of the template card displays the name of the template at the very top along with two charts underneath it. The top chart displays the set workload host I/O limit in IOPS along with the actual workload IOPS statistics. The bottom chart displays the set workload host I/O limit in MBPS along with the actual workload MBPS statistics.
7. Click the icon on the top-right hand corner of the template card to view the front side of the card again.

The following controls are available:

• Provision — Creating storage templates on page 267
• Modify — Modifying storage templates on page 271
• Delete — Deleting storage templates on page 272

Modifying storage templates

Before you begin

• Storage templates require HYPERMAX OS 5977 or greater.
• The user must have Administrator or StorageAdmin permission.

To modify a storage template:

Procedure
1. Select the storage system.
2. Select STORAGE > Templates to open the Provisioning Templates list view.
3. Select the template and click Modify to open the Modify Template wizard.
4. Modify the template as you step through the wizard.

272 Storage Management All of the fields are exactly like the Save as a Template dialog. The only difference is how the scale limits work when volume size or volume count field is left empty. If the selected template has both volume size and count when it was created and the user removes one or either of them during the modification operation with scale limits switch on, the original capacity of the template will be used for the display purposes and it will be shown in the tooltip next to the IOPS/MBPS field and top right corner of the chart. If the selected template did not have either volume size and count when it was created and the user leaves one or either of the fields empty modification operation with scale limits switch on, the capacity of 200 GB will be used for the display purposes and it will be shown in the tooltip next to the IOPS/MBPS field and top right corner of the chart. 5. Click Finish . Deleting storage templates Before you begin l Storage templates require HYPERMAX OS 5977 or greater. l This feature is only available for a user with Administrator or StorageAdmin permission. To delete a storage template: Procedure STORAGE > Storage Templates to open the Storage Template list 1. Select view. 2. Select the template and click . 3. Click . OK Understanding FAST.X FAST.X allows the seamless integration of storage systems running HYPERMAX OS 5977 or higher and heterogeneous arrays. It enables LUNs on external storage to be used as raw capacity. Data services such as SRDF, TimeFinder, and Open Replicator are supported on the external device. FAST.X requires HYPERMAX OS 5977 or higher. For additional information on FAST.X, refer to the following documents: l Solutions Enabler Array Management CLI Guide l Solutions Enabler TimeFinder CLI User Guide Viewing external disks Before you begin The external disk list is available only for HYPERMAX OS 5977 or higher. Note You must refresh the external disks list to view the latest status. 

Procedure

1. Select the storage system.
2. Select STORAGE > Storage Resource Pools.
3. Select the SRP and click the details icon to view its details.
4. Click the number next to Disk Groups.
5. Select an external disk group and click the details icon to view its details.
6. Click the number next to Number of disks.

The following properties are displayed:

• Name —World Wide Name of the external disk.
• Spindle —Spindle ID of the external spindle.
• Vendor —Vendor name of the external disk.
• Capacity (GB) —Capacity in GB of the external disk.
• Array ID —ID of the storage system.
• Service State —Availability of the external disk. Possible values are Normal, Degraded, and Failed. Failed means that there are no network paths available to the external LUN. Degraded means that there are paths from only one of the supporting DX directors. Normal means that there are network paths available from both supporting DX directors.
• Disk State —The state of the disk. Valid values are Active, Drained, Draining, and Disabled.
• Drained —Drain information about the disk if it is in a Draining or Drained state. Otherwise it displays "-".

The following controls are available:

• Add eDisks — Adding external disks on page 274
• Remove — Removing external disks or External LUNs on page 274
• Start Draining — Start draining external disks on page 275
• Stop Draining — Stop draining external disks on page 276
• Activate — Activating external disks on page 276
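The Disk State values above, together with the drain and activate preconditions given later in this section, can be summarized as a small state check. This is an illustrative sketch only — the names are assumptions for the example, not a Unisphere API:

```python
# Valid external disk states, from the Disk State property.
STATES = {"Active", "Drained", "Draining", "Disabled"}

def can_start_draining(state):
    # A disk can be drained only if it is not currently draining or already drained.
    return state in STATES - {"Draining", "Drained"}

def can_stop_draining(state):
    # The drain operation can be stopped only while the disk is draining.
    return state == "Draining"

def can_activate(state):
    # A disk can be activated from a draining, drained, or disabled state.
    return state in {"Draining", "Drained", "Disabled"}
```

For example, a disk in the Draining state cannot start another drain but can have its drain stopped or be activated.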

Adding external disks

Before you begin

This action can be performed only for HYPERMAX OS 5977 or higher.

You can add an external disk to the external disk group of a storage resource pool (SRP). When adding an external disk for storage systems running HYPERMAX OS 5977 or higher, if there is no pre-existing external disk group, it is created automatically when the external disk is added to the selected SRP. If an external disk group exists for the external array's external LUN WWN, the external LUN WWN is added to it.

To add an external disk:

Procedure

1. Select the storage system.
2. Select STORAGE > Storage Resource Pools.
3. Select the SRP.
4. Click Add eDisks.

   The Add eDisks dialog box shows the available external LUN WWNs from multiple external arrays.
5. Select the external disk to be added.
6. If you want to preserve the existing data on the external LUN, select Incorporate eDisk data. If you leave Incorporate eDisk data cleared, the existing data on the external LUN is cleared.
7. (Optional) In the Add Storage Group list, select a storage group to add. You can filter the list by searching for a storage group by name.

   This option is available only on storage systems running HYPERMAX OS 5977 Q1 2016.
8. Do one of the following:

   • Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   • Expand Add to Job List, and click Run Now to perform the operation now.

Removing external disks or External LUNs

Before you begin

See Removing external LUNs on page 266 for information on removing external LUNs. See below for information on removing external disks.

This action can be performed only for HYPERMAX OS 5977 or higher.

You can remove an external disk from a storage resource pool (SRP) if it is in a Drained state.

To remove an external disk:

Procedure

1. Select the storage system.
2. Select STORAGE > Storage Resource Pools.
3. Select the SRP and click the details icon to view its details.
4. Click the number next to Disk Groups.
5. Select an external disk group and click the details icon to view its details.
6. Select the external disk that you want to remove, click the action menu, and click Remove eDisks.

   The Remove External LUNs dialog appears and prompts for confirmation that you want to remove the external disk.
7. Do one of the following:

   • Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   • Expand Add to Job List, and click Run Now to perform the operation now.

Working with external disks

You can perform the following operations:

• Start draining an external disk. For more information, refer to Start draining external disks on page 275.
• Stop draining an external disk. For more information, refer to Stop draining external disks on page 276.
• Activate an external disk. For more information, refer to Activating external disks on page 276.

Start draining external disks

Before you begin

The storage resource pool (SRP) containing the external disk you want to drain must have sufficient free space to absorb the allocated tracks from the external disk that is being drained.

You can drain a disk only if it is not currently draining or already drained.

To start a drain operation on an external disk:

Procedure

1. Select the storage system.
2. Select STORAGE > Storage Resource Pools.
3. Select the SRP and click the details icon to view its details.
4. Click the number next to Disk Groups.
5. Select an external disk group and click the details icon to view its details.

6. Select the external disk that you want to drain, click the action menu, and click Start Draining.
7. Do one of the following:

   • Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   • Expand Add to Job List, and click Run Now to perform the operation now.

Stop draining external disks

Before you begin

You can stop the drain operation on an external disk only if it is currently draining.

To stop a draining operation on an external disk:

Procedure

1. Select the storage system.
2. Select STORAGE > Storage Resource Pools.
3. Select the SRP and click the details icon to view its details.
4. Click the number next to Disk Groups.
5. Select an external disk group and click the details icon to view its details.
6. Select the external disk that you want to stop draining, click the action menu, and click Stop Draining.
7. Do one of the following:

   • Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   • Expand Add to Job List, and click Run Now to perform the operation now.

Activating external disks

Before you begin

This action can be performed only for HYPERMAX OS 5977 or higher.

You can activate an external disk if it is in a draining, drained, or disabled state.

To activate an external disk:

Procedure

1. Select the storage system.
2. Select STORAGE > Storage Resource Pools.
3. Select the SRP and click the details icon to view its details.
4. Click the number next to Disk Groups.
5. Select an external disk group and click the details icon to view its details.

6. Select the external disk that you want to activate, click the action menu, and click Activate.
7. Do one of the following:

   • Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
   • Expand Add to Job List, and click Run Now to perform the operation now.

Viewing reservations

Procedure

1. Select the storage system.
2. In the Dashboard, click the System Health tab.
3. In the Actions panel, click View Reservations.

The following properties display:

• Reservation —Reservation ID.
• Owner —User that created the reservation.
• Application —Application used to create the reservation.
• Host —Host from which the reservation was created.
• Reserved Volumes —Number of reserved volumes.
• Creation —Date/time the reservation was created.
• Expiration —Date/time the reservation will expire. The default value is Never.
• User Comment —User-supplied comments.

The following control is available:

Release — Releasing reservations on page 278

Viewing reservation details

Procedure

1. Select the storage system.
2. In the dashboard, click the System Health tab.
3. In the Action panel, click View Reservations.
4. Select the reservation and click the details icon.

The Properties panel displays the following:

• Reservation —Reservation ID.
• Owner —User that created the reservation.
• Application —Application used to create the reservation.
• Host —Host from which the reservation was created.
• Reserved Volumes —Number of reserved volumes.

• Creation —Date/time the reservation was created.
• Expiration —Date/time the reservation will expire. Never is the default.
• User Comment —User-supplied comments.

There are links to views for objects contained in and associated with the reservation. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking Reserved Volumes will open a view listing the volumes held in the reservation.

Releasing reservations

Procedure

1. Select the storage system.
2. In the dashboard, click the System Health tab.
3. In the Action panel, click View Reservations.
4. Select one or more reservations and click Release.
5. Click OK.

Managing VVols

Before you begin

The storage system must be running HYPERMAX OS 5977 or higher.

The VVol Dashboard provides you with a single place to monitor and manage VVols.

To access the VVol Dashboard:

Procedure

1. Select the storage system.
2. Select STORAGE > VVol Dashboard.

The VVol Dashboard is organized into the following panels:

Summary panel

Displays the following VVol summary information:

• Storage Containers — The number of storage containers on the selected storage system. Click Storage Containers to display the Storage Containers list view. For more information about viewing storage containers, refer to Viewing storage containers on page 279.
• Protocol Endpoints — The number of protocol endpoints on the selected storage system. Click Protocol Endpoints to display the Protocol Endpoints list view. For more information about protocol endpoints, refer to Viewing protocol endpoints on page 285.
• PE Masking Views — The number of masking views that contain protocol endpoints. Click PE Masking Views to display the PE Masking Views list view. For more information about PE masking views, refer to Viewing masking views on page 308.

To view additional information on a particular item, click it to open the corresponding list view.

Actions panel
Displays links to the following common tasks:
- CREATE STORAGE CONTAINER — Creating storage containers on page 281
- PROVISION PROTOCOL ENDPOINT TO HOST — Provisioning protocol endpoints to hosts on page 286
- STORAGE CONTAINER ALERTS — Viewing alerts on page 52

Symmetrix Consumed Capacity - Subscribed panel
Displays a bar graph representing how much subscribed space all storage containers consume on the storage system.

VASA Provider Status panel
Displays one of the following icons representing the status of the VASA provider:
- The VASA provider is online.
- The VASA provider is offline.
- A connection to the VASA provider has not been configured.
- There was an error connecting to the VASA provider.
To refresh the status of the VASA provider, click the refresh icon.
To create a connection to the VASA provider, click Create Connection. To edit an existing connection, click Edit Connection. For more information about configuring a connection to the VASA provider, see Configuring the VASA provider connection on page 287.

Storage Resources panel
Displays a list of storage resources within all containers on the storage system, showing the current usage of each storage resource, ascending by usage.
- Name — The name of the capability profile.
- Subscribed Used (%) — The current percentage of subscribed tracks within the storage resource in relation to the limit imposed on the capability profile.
- Limit (GB) — The subscribed limit imposed on the storage resource.
- Container — The name of the storage container with which the storage resource is associated.
- Compression — A tick appears if compression is enabled on this storage resource; a horizontal dash appears if it is disabled.
Click VIEW ALL STORAGE RESOURCES to view the Storage Resources list view.
Viewing storage containers
To view the storage container list:

Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Storage Containers to display the Storage Containers list view.
The following properties display:
- Name — The name of the storage container.
- Storage Resources — The number of associated storage resources.
- Subscribed Used (%) — The current percentage of subscribed tracks within the storage container, in relation to the limit imposed on all of the storage resources within the storage container.
- Subscribed Limit (GB) — The current total limit of all storage resources in GB.
The following controls are available:
- Viewing storage container details on page 280
- Create — Creating storage containers on page 281
- Modify — Modifying storage containers on page 282
- Delete — Deleting storage containers on page 282

Viewing storage container details
To view storage container details:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Storage Containers to display the Storage Containers list view.
4. Select the storage container and click the details icon.
The following properties display:
- Name — The name of the storage container.
- Description — The description of the storage container. This field is editable.
- Subscribed Limit (GB) — The total combined limit of all storage resources within the storage container.
- Subscribed Used (GB) — The current subscribed usage on the storage container of all of the storage resources within the storage container.
- Subscribed Free (GB) — The total free subscribed capacity, based on the capacity used and the limit of all of the storage resources in the storage container.
- Number of Storage Resources — The total number of storage resources within the storage container.
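The three capacity fields are arithmetically related: free subscribed capacity is the combined limit of the container's storage resources minus their combined usage. A minimal sketch with hypothetical values (this is illustrative arithmetic only, not a Unisphere API):

```python
# Hypothetical storage resources in one container: (subscribed limit GB, used GB)
resources = [(100.0, 42.5), (250.0, 10.0)]

subscribed_limit = sum(limit for limit, _ in resources)  # total combined limit
subscribed_used = sum(used for _, used in resources)     # total subscribed usage
subscribed_free = subscribed_limit - subscribed_used     # remaining capacity

print(subscribed_limit, subscribed_used, subscribed_free)  # 350.0 52.5 297.5
```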

Creating storage containers
This procedure allows you to create a storage container. To add a storage resource to a storage container, refer to Adding storage resources to storage containers on page 284.
To create a storage container:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Storage Containers to display the Storage Containers list view.
4. Click Create. The Create Storage Container wizard displays.
5. Complete the following steps:
a. Type a name for the storage container.
b. Optional: Type a description of the storage container.
6. Click NEXT.
7. On the Storage Resources page, specify at least one storage resource. Default values for a new storage resource are populated. To remove a storage resource from the list of associated storage resources, hover the mouse over the storage resource and click the remove icon. To add a storage resource, click the add icon and complete the following steps (the same steps apply when modifying an existing resource):
a. In the Name field, type a name for the storage resource, or accept the default name.
b. From the SRP menu, select the SRP to apply to the storage resource.
c. From the Service Level menu, select the service level to apply to the storage resource. For all-flash storage systems, the only service level available is Diamond and it is selected by default.
d. From the Workload menu, select the workload to apply to the storage resource.
e. In the Limit (GB) field, type the subscribed limit imposed on the storage resource. 0.1 GB is the minimum value allowed.
8. Compression is enabled by default on All Flash systems running the HYPERMAX OS 5977 Q3 2016 Service Release or higher. To disable the feature on this storage container, uncheck the Compression check box. For more information, refer to Understanding compression.
9. Click NEXT.
10. On the Summary page, review the details and do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Modifying storage containers
To modify a storage container:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Storage Containers to display the Storage Containers list view.
4. Select the storage container and click Modify.
5. Modify the description.
6. Click OK.

Deleting storage containers
Before you begin
- The storage system must be running HYPERMAX OS 5977 or higher.
- You cannot delete containers with used capacity.
To delete a storage container:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Storage Containers to display the Storage Containers list view.
4. Select the storage container you want to delete and click Delete.
5. Click OK.

Viewing storage resources
To view the storage resource list:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Storage Containers to display the Storage Containers list view.
4. Select the storage container and click the details icon.
5. Click the number next to Storage Resources to display the Storage Resources list view.
The following properties display:
- Name — The name of the capability profile.
- SRP — The name of the SRP.
- Service Level — The name of the service level.
- Workload — The name of the workload.
- Subscribed Used (%) — The current percentage of subscribed tracks within the storage resource in relation to the limit imposed on the storage resource.

- Subscribed Limit (GB) — The subscribed capacity limit within the storage resource.
The following controls are available:
- Viewing storage resource details on page 283
- Add — Adding storage resources to storage containers on page 284
- Modify — Modifying storage resources on page 285
- Remove — Removing storage resources from storage containers on page 285

Viewing storage resource details
To view storage resource details:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Storage Containers.
4. Select the storage container and click the details icon.
5. Click the number next to Storage Resources to display the Storage Resources list view.
6. Select the storage resource and click the details icon.
The following properties display:
- Name — The name of the storage resource.
- Storage Container — The name of the associated storage container.
- SRP — The name of the associated SRP.
- Service Level — The name of the associated service level.
- Workload — The name of the associated workload.
- Compression — Indicates whether compression is enabled or disabled.
- Compression Ratio — The current compression ratio on this storage resource.
- Subscribed Capacity Limit (GB) — The subscribed capacity limit imposed. This field is editable.
- Subscribed Capacity Used (GB) — The current subscribed usage on the storage resource.
- Subscribed Capacity Free (GB) — The subscribed free space on the storage resource.

Viewing storage resource related SRPs
To view the related SRPs of a storage resource:

Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Storage Containers to display the Storage Containers list view.
4. Select the storage container and click the details icon.
5. Click the number next to Number of Storage Resources to display the Storage Resources list view.
6. Select the storage resource and click the details icon.
7. Click the entry next to SRP to display the Storage Resource Pools list view.
For more information about the Storage Resource Pools list view, refer to Viewing Storage Resource Pools on page 154.

Adding storage resources to storage containers
To add a storage resource to a storage container:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Storage Containers to display the Storage Containers list view.
4. Select the storage container and click the details icon.
5. Click the number next to Number of Storage Resources.
6. Click Add.
The Add Storage Resource To Storage Container dialog box displays. The details of any existing storage resource are populated automatically.
7. To add an additional resource, click the add icon and specify the following details:
- Name — The name of the storage resource.
- SRP — The name of the SRP.
- Service Level — The name of the service level.
- Workload — The name of the workload. For more information about the current workload, click the information icon.
- Limit (GB) — The subscribed capacity limit imposed.
- Compression — The Compression check box is checked if you enabled compression when creating the storage group. Uncheck it to disable compression on this particular storage resource. For more information, refer to Understanding compression.
8. (Optional) If required, edit the details of the new storage resource, click the remove icon to remove it completely, or click the add icon to add another new storage resource.
9. After you have added all of the required storage resources, do one of the following:

- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Modifying storage resources
To modify a storage resource:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Storage Containers to display the Storage Containers list view.
4. Select the storage container and click the details icon.
5. Click the number next to Number of Storage Resources.
6. Select the storage resource and click Modify.
7. Modify the subscribed limit.
8. Click OK.

Removing storage resources from storage containers
To remove a storage resource from a storage container:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Storage Containers to display the Storage Containers list view.
4. Select the storage container and click the details icon.
5. Click the number next to Number of Storage Resources.
6. Select the storage resource you want to remove and click Remove.
7. Click OK.

Viewing protocol endpoints
To view the protocol endpoints list:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Protocol Endpoints to display the Protocol Endpoints list view.
The following properties display:
- Name — The volume ID of the protocol endpoint.
- Masking view — Indicates, using a tick or dash symbol, whether the protocol endpoint is in a masking view.

- Storage Groups — The number of associated storage groups.
- Reserved — Indicates whether the protocol endpoint is reserved.
The following controls are available:
- Viewing protocol endpoint details on page 286
- Deleting protocol endpoints on page 287
- Set Volume Identifier — Setting volume identifiers on page 196

Viewing protocol endpoint details
To view protocol endpoint details:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Protocol Endpoints.
4. Select the protocol endpoint and click the details icon.
The following properties display:
- Name — The name of the protocol endpoint.
- Volume Identifier — The volume identifier of the protocol endpoint.
- Status — The status of the protocol endpoint.
- Reserved — The reserved status of the protocol endpoint. Valid values are Yes and No.
- Number of Storage Groups — The total number of storage groups associated with the protocol endpoint.
- Number of Masking Views — The total number of masking views associated with the protocol endpoint.

Provisioning protocol endpoints to hosts
To provision a protocol endpoint to a host:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. In the Actions panel, click Provision Protocol Endpoint to Host.
4. Specify a host or host group. Do one of the following:
- Select an existing host or host group from the list.
- To create a new host, click Create Host. The Create Host dialog displays. For more information, refer to Creating hosts on page 292.
- To create a host group, click Create Host Group. The Create Host Group dialog displays. For more information, refer to Creating host groups on page 302.

5. Click NEXT.
6. On the Select Port Group pane, specify a port group. Do one of the following:
- To create a new port group, select New. For more information about creating port groups, refer to Creating port groups on page 316.
- To use an existing port group, select Existing, and select a port group from the Port Group list.
7. Click NEXT.
8. On the Summary page, review the details and do one of the following:
- Optional: Modify the auto-generated Masking View name.
- Optional: Modify the auto-generated Storage Group name.
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Deleting protocol endpoints
Before you begin
The storage system must be running HYPERMAX OS 5977 or higher.
To delete a protocol endpoint:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. Click Protocol Endpoints to display the Protocol Endpoints list view.
4. Select the protocol endpoint you want to delete and click Delete.
5. Click OK.

Configuring the VASA provider connection
To configure the VASA provider connection:
Procedure
1. Select the storage system.
2. Select Storage > VVol Dashboard.
3. In the VASA Provider Status panel, do one of the following:
- To create a new connection, click Create Connection.
- To edit an existing connection, click Edit Connection.
4. Specify the IP address of the VASA provider.
5. Click OK.

Understanding compression
Compression allows users to compress user data on storage groups and storage resources. The feature is enabled by default and can be turned on and off at the storage group and storage resource level.

If a storage group is cascaded, enabling compression at this level enables compression for each of the child storage groups. The user has the option to disable compression on one or more of the child storage groups if desired.
To turn the feature off on a particular storage group or storage resource, uncheck the Compression check box in the Create Storage Group, Modify Storage Group, or Add Storage Resource To Storage Container dialogs, or when using the Provision Storage or Create Storage Container wizards.
The following are the prerequisites for using compression:
- Compression is only allowed on All Flash systems running the HYPERMAX OS 5977 Q3 2016 Service Release or higher.
- Compression is allowed for FBA devices only.
- The user must have at least StorageAdmin rights.
- The storage group needs to be FAST managed.
- The associated SRP cannot be composed, either fully or partially, of external storage.

Reporting
Users are able to see the current compression ratio on the device, the storage group, and the SRP. Efficiency ratios are reported in units of 1/10th:1.
Note: External storage is not included in efficiency reports. For mixed SRPs with internal and external storage, only the internal storage is used in the efficiency ratio calculations.

Viewing the SRP efficiency details
Before you begin
Users need to have at least Monitor rights.
This procedure explains one way to view the overall efficiency details of an SRP. The Overall Efficiency Ratio field can also be viewed from the Storage Resource Pools Details view.
Procedure
1. Select the storage system.
2. Select CAPACITY to open the CAPACITY dashboard.
The following fields are displayed in the Efficiency panel:
- Overall Efficiency Ratio — The ratio of the sum of all TDEV and snapshot sizes to the physical used storage (calculated based on the compressed pool track size).
- Virtual Provisioning Savings — The ratio of the sum of all TDEV and snapshot sizes to the sum of all TDEV allocated space plus RDP allocated space.
- Snapshot Savings — The ratio of the RDP logical backend storage (calculated based on the 128K track size) to the RDP physical used storage of the RDP space (calculated based on the compressed pool track size).

Viewing compressibility reports
This procedure shows how to view the maximum data compressibility of storage groups on an All Flash storage system. Compression must be enabled on the storage system.

Before you begin:
- This feature requires HYPERMAX OS 5977.1125.1125 running on an All Flash storage system.
- The account you use on Unisphere must have at least Monitor privileges.
Procedure
1. Select the storage system.
2. Select CAPACITY to open the CAPACITY dashboard.
3. Select an SRP instance from the drop-down menu and, in the Actions panel, click COMPRESSIBILITY.
The report lists the following details for each storage group:
- Storage Group — The name of the storage group.
- # of Volumes — The number of volumes in the group.
- Allocated (GB) — The amount of space allocated to the storage group.
- Used (GB) — The amount of allocated space that the group is using.
- Target Ratio — The expected compression ratio based on the last 24 hours of samples.
If all storage groups are compressed, the compressibility report will be empty except for an entry named NOT_IN_SG (assuming that not all of the configured volumes are in storage groups).

Viewing a storage group's compression ratio
Before you begin
Users need to have at least Monitor rights to view the compression ratio.
Procedure
1. Select a storage system.
2. Select STORAGE > Storage Groups.
3. Select a storage group and click the details icon.
The Compression, Compression Ratio, and VP Saved fields for the selected storage group are displayed. If compression is enabled on the storage group, a tick appears in the Compression field. If compression is disabled, a horizontal dash is shown.

Viewing a volume's compression details
Before you begin
Users need to have at least Monitor rights to view the compression ratio.
This procedure explains how to view a storage group volume's compression ratio.
Procedure
1. Select a storage system.
2. Select STORAGE > Storage Groups.
3. Select a storage group and click the number next to Volumes.

4. Select a volume and click the details icon.
The Compression Ratio field for the selected volume is displayed. If the compression ratio is not applicable to the volume, the field reads "N/A."
5. Alternatively, select a storage system and then select Storage > Volumes.
6. Select a volume and click the details icon.
The Compression Ratio field for the selected volume is displayed.

Viewing compression status using the VVol Dashboard
Before you begin
Users need to have at least Monitor rights.
This procedure explains how to view the compression status and compression ratio of storage resources using the VVol Dashboard.
Procedure
1. Select a storage system.
2. Select Storage > VVol Dashboard.
The compression state column for each storage resource is displayed in the Symmetrix Consumed Capacity - Subscribed panel. If compression is enabled for that resource, a tick appears in the column. If compression is disabled, a horizontal dash is shown.
3. To view the compression ratio on a storage resource, click Storage Containers.
4. Select a storage container and click the details icon.
5. Click the number next to Storage Resources.
6. Select a storage resource and click the details icon.
The Compression Ratio field is displayed.

Viewing the compression efficiency dashboard
This procedure explains how to view the compression efficiency of a storage system running HYPERMAX OS 5977.
Procedure
1. Select a storage system.
2. Select PERFORMANCE > Dashboards.
3. Choose Array as the category.
4. Click the Array Efficiency tab.
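The efficiency and compression ratios shown in these views are all logical-to-physical quotients reported as N:1 in tenths (the "1/10th:1" units noted under Reporting). As a rough illustration of the arithmetic the reported figures follow, with hypothetical sizes (this is not how Unisphere computes them internally):

```python
def ratio(logical_gb: float, physical_gb: float) -> str:
    """Express an efficiency ratio as N:1, rounded to a tenth (1/10th:1)."""
    return f"{logical_gb / physical_gb:.1f}:1"

# Hypothetical figures: 1000 GB of TDEV plus snapshot capacity against
# 250 GB physically used after compression, and 400 GB allocated.
print(ratio(1000.0, 250.0))  # 4.0:1  (overall efficiency)
print(ratio(1000.0, 400.0))  # 2.5:1  (virtual provisioning savings)
```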

CHAPTER 5
Host Management

- Understanding Host Management ... 292
- Creating hosts ... 292
- Creating host groups ... 302
- Creating masking views ... 307
- Setting initiator port flags ... 311
- Setting initiator attributes ... 312
- Renaming initiator aliases ... 312
- Replacing initiators ... 313
- Removing masking entries ... 313
- Viewing initiators ... 313
- Viewing initiator details ... 314
- Viewing volumes associated with host initiator ... 315
- Viewing details of a volume associated with initiator ... 316
- Creating port groups ... 316
- Managing storage for Mainframe ... 322
- Mapping CKD volumes ... 340
- Creating PowerPath hosts ... 343
- Viewing PowerPath hosts ... 344
- Viewing PowerPath hosts details ... 344
- Viewing PowerPath Host Virtual Machines ... 345
- Viewing host cache adapters ... 346

Understanding Host Management
Host Management covers the following areas:
- Hosts — Management of hosts and host groups.
- Masking Views — Management of masking views. A masking view is a container of a storage group, a port group, and an initiator group, and makes the storage group visible to the host. Devices are masked and mapped automatically. The groups must contain some device entries.
- Port Groups — Management of port groups. Port groups contain director and port identification and belong to a masking view. Ports can be added to and removed from the port group. Port groups no longer associated with a masking view can be deleted.
- Initiators — Management of initiators and initiator groups. An initiator group is a container of one or more host initiators (Fibre or iSCSI). Each initiator group can contain up to 64 initiator addresses or 64 child IG names. Initiator groups cannot contain a mixture of host initiators and child IG names.
- Xtrem SW Cache Adapters — Monitoring of host cache adapters.
- PowerPath Hosts — Management of PowerPath hosts.
- Mainframe — Management of configured splits, CU images, and CKD volumes.
- CU Images — Management of CU images.

Creating hosts
Before you begin
- To perform this operation, you must be a StorageAdmin.
- The storage system must be running Enginuity version 5876, or HYPERMAX OS 5977 or higher.
- The maximum number of initiators allowed in a host depends on the storage operating environment:
  - For Enginuity 5876, the maximum allowed is 32.
  - For HYPERMAX OS 5977 or higher, the maximum allowed is 64.
To create hosts:
Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Click Create > Create Host.
The Create Host dialog displays.
4. Type a Host Name. Host names must be unique from other hosts/host groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Host names are case-insensitive.
5. Select the Fibre radio button to filter the available initiators table to display Fibre Channel initiators only, or select the iSCSI radio button to filter the table to display iSCSI initiators only. The Fibre radio button is selected by default.

6. Select a host, click the more actions icon, and then click Set Flags to open the Set Host/Host Group Flags dialog.
7. Optional: To set the host port attributes:
a. Click Set Host Flags.
b. Optional: Select a host whose flag settings you want to copy.
c. Modify any of the attributes by selecting the corresponding Override option (thereby activating the Enable option) and enabling (select) or disabling (clear) the flag.
d. Optional: Select Consistent LUNs to specify that LUN values for the host must be kept consistent for all volumes within each masking view of which this host is part. When set, any masking operation involving this host that would result in inconsistent LUN values is rejected. When not set, the storage system attempts to keep LUN values consistent, but deviates from consistency if LUN conflicts occur during masking operations.
e. Click OK.
8. Do either of the following:
- Click Run Now to start the task now.
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.

Adding initiators to hosts
Before you begin
To perform this operation, you must be a StorageAdmin. The storage system must be running Enginuity version 5876 or higher. On storage systems running HYPERMAX OS 5977 or higher, iSCSI and Fibre initiators cannot be mixed in the same host.
To add initiators to hosts:
Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select the host and click Modify to open the Modify Host dialog.
4. Select an initiator from the Available Initiators list and click the add icon.
5. To add a user-defined initiator to the host, click the add icon, fill in the name, and click OK.
6. Specify the initiator by typing its name or by selecting it from the list. The Initiators table is a filtered list based on whether the initiator is Fibre Channel or iSCSI. To filter the list, type part of the initiator name. Click Add. Repeat this step for each additional initiator.

7. Click Run Now or Add To Job List.

Adding initiator to host
To add an initiator to a host:
Procedure
1. Select HOSTS > Hosts.
2. Click Create and then click Create Host.
3. Click the add button to the right of Initiators in Host.
4. Type the initiator name.
5. Click OK.

Removing initiators from hosts
Before you begin
To perform this operation, you must be a StorageAdmin. The storage system must be running Enginuity version 5876 or higher.
To remove initiators from hosts:
Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select the host and click Modify to open the Modify Host dialog.
4. Select an initiator from the Available Initiators list and click the remove icon.
5. Select the initiator and click Replace Initiator.
6. Click Run Now or Add To Job List.

Modifying hosts
Before you begin
To perform this operation, you must be a StorageAdmin. The storage system must be running Enginuity version 5876 or higher.
Procedure
1. Select the storage system.
2. Select Hosts > Hosts to open the Hosts list view.
3. Do one of the following:
- Modifying hosts:
  - Select the host and click Modify to open the Modify Host dialog box.
  - To change the Host Name, highlight it and type a new name over it. Host names must be unique from other hosts on the Symmetrix system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Host names are case-insensitive.

- Adding initiators:
  - In the Select Initiators list box, type the initiator name or select it from the list. To filter the list, type part of an initiator name.
  Note: Initiators can only belong to one host at a time; therefore, any initiators that do not appear in the list already belong to another host.
  - The Add Initiators table is a filtered list based on whether the host is Fibre Channel or iSCSI.
  - Select an initiator from the Available Initiators list and click the add icon.
  - Repeat these steps for each additional initiator.
  - To add a user-defined initiator to the host, click the add icon, fill in the name, and click OK.
- Removing initiators:
  - In the Initiators in Host list, select the initiator and click the remove icon.
  - Repeat these steps for each additional initiator.
4. Do either of the following:
- Click Run Now to start the task now.
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.

Renaming hosts/host groups
Before you begin
To perform this operation, you must be a StorageAdmin. The storage system must be running Enginuity version 5876 or higher.
To rename hosts/host groups:
Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select the host/host group and click Modify.
4. In the Properties panel, type a new name for the host/host group and click Apply. Host/host group names must be unique from other hosts/host groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Host/host group names are case-insensitive.
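The naming rules above (1 to 64 characters; alphanumerics, underscores, and hyphens only; compared case-insensitively) can be checked client-side before attempting a rename. A hypothetical helper sketch; Unisphere performs its own validation regardless:

```python
import re

# 1-64 characters; letters, digits, underscores, and hyphens only.
HOST_NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def is_valid_host_name(name: str) -> bool:
    """Return True if the name satisfies the stated host-naming rules."""
    return HOST_NAME_RE.fullmatch(name) is not None

print(is_valid_host_name("prod_host-01"))  # True
print(is_valid_host_name("bad name!"))     # False
```

Because names are case-insensitive, uniqueness checks should compare names in a canonical case (for example, lowercased) rather than byte-for-byte.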

Setting host or host group port flags
To set host or host group port flags:
Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select a host, click the more actions icon, and then click Set Flags to open the Set Host/Host Group Flags dialog.
4. Optional: Select a host/host group whose flag settings you want to copy from the Copy Flags from Other Host/Host Group drop-down menu.
5. Modify any of the flags by selecting the corresponding Override option (thereby activating the Enable option) and enabling (select) or disabling (clear) the flag.
6. Optional: Select Consistent LUNs to specify that LUN values for the host must be kept consistent for all volumes within each masking view of which this host is part. When set, any masking operation involving this host that would result in inconsistent LUN values is rejected. When not set, the storage system attempts to keep LUN values consistent, but deviates from consistency if LUN conflicts occur during masking operations.
7. Click OK.

Deleting hosts/host groups
Before you begin
To perform this operation, you must be a StorageAdmin. The storage system must be running Enginuity version 5876 or higher.
To delete hosts/host groups:
Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select the host/host group from the list and click Delete.
4. Click OK to confirm.

Viewing hosts/host groups
Procedure
1. Select the storage system.
2. Select Hosts > Hosts to open the Hosts list view.
3. Use the Hosts list view to view and manage hosts.
The following properties display:
- Name — Host/host group name. An arrow icon at the beginning of the name indicates that the host is a host group. Click the icon to view the hosts contained in the group.

297 Host Management

Masking Views —Number of masking views associated with the host.
Initiators —Number of initiators in the host.
Consistent LUNs —Flag indicating if the Consistent LUNs flag is set. When set, any masking operation involving this host/host group that would result in inconsistent LUN values is rejected. When not set, the storage system attempts to keep LUN values consistent, but deviates from consistency if LUN conflicts occur during masking operations. A check mark indicates that the feature is set.
Port Flag Overrides —Flag indicating if any port flags are overridden for the host. A check mark indicates that there are overridden port flags.
Last Update —Timestamp of the most recent changes to the host.
Click the details icon to view the host/host group details.

The following controls are available:
Create Host — Creating hosts on page 292
Create Host Group — Creating host groups on page 302
Provision Storage to Host — Using the Provision Storage wizard on page 100 or Using the Provision Storage wizard on page 108
Modify — Modifying hosts on page 294 or Modifying host groups on page 304
Set Flags — Setting host or host group port flags on page 296
Delete — Deleting hosts/host groups on page 296

Viewing host/host group details

Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select the host/host group.
4. Click the details icon to view the host/host group details.

Note: The properties and controls available in this panel depend on whether you are viewing details of an individual host or of a host group, and on the storage operating environment.

The following properties display:
Name —Host/host group name. To rename the host/host group, type a new name over the existing one and click Apply. Host/host group names must be unique from other hosts/host groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Host names are case-insensitive.
Hosts —Number of hosts in the group. This field only displays for host groups.
Masking Views —Number of masking views with which the host/host group is associated.

298 Host Management

Initiators —Number of initiators in the host/host group. For host groups, the value includes initiators in any child host groups.
Host Groups —Number of host groups in which this host is a member. This field only displays for individual hosts.
Consistent LUNs —Flag indicating if the Consistent LUNs flag is set. When set, any masking operation involving this host/host group that would result in inconsistent LUN values is rejected. When not set, the storage system attempts to keep LUN values consistent, but deviates from consistency if LUN conflicts occur during masking operations. A check mark indicates that the feature is set.
Port Flag Overrides —Flag indicating if any port flags are overridden for the host. A check mark indicates that there are overridden port flags.
Enabled Port Flags —List of any enabled port flags overridden by the host/host group.
Disabled Port Flags —List of any disabled port flags overridden by the host/host group.
Last Update —Timestamp of the most recent changes to the host/host group.
PowerPath Hosts —Number of PowerPath hosts.

Viewing host initiators

Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select the host and click the details icon to open the host details view, then click the link in the Initiators field to open the initiators list view.

The following properties display:
Initiator —WWN or IQN (iSCSI Qualified Name) ID of the initiator.
Dir:Port —Storage system director and port associated with the initiator, for example: FA-7E:1.
Alias —User-defined initiator name.
Logged In —Flag indicating if the initiator is logged into the fabric: Yes/No.
On Fabric —Flag indicating if the initiator is on the fabric: Yes/No.
Port Flag Overrides —Flag indicating if any port flags are overridden by the initiator: Yes/No.
Hosts —Number of hosts the initiator is associated with.
Masking Views —Number of associated masking views.

The following controls are available:
Set Attributes — Setting initiator attributes on page 312
Set Host Flags — Setting initiator port flags on page 311
Rename Alias — Renaming initiator aliases on page 312
Replace Initiator — Replacing initiators on page 313

299 Host Management

Remove Masking Entry — Removing masking entries on page 313

Host/Host group flags

Table 4 Host/Host group flags

Common Serial Number —Enables a unique serial number. This attribute is only available on storage systems running Enginuity 5876.
Volume Set Addressing** —Enables the volume set addressing mode. When using volume set addressing, you must specify a 4-digit address in the following range: (0)000-(0)007, (0)010-(0)017, ... to a maximum of (0)FF0-(0)FF7, where the first digit must always be set to 0 (the storage system does not currently support the upper range of volume set addressing), the second digit is the VBus number, the third digit is the target, and the fourth digit is the LUN.
Avoid Reset Broadcast* —Enables a SCSI bus reset to only occur to the port that received the reset (not broadcast to all channels).
Environ Set* —Enables environmental error reporting by the Symmetrix to the host on the specific port.
Disable Q Reset on UA —When enabled, a Unit Attention (UA) that is propagated from another director does not flush the queue for this volume on this director. Used for hosts that do not expect the queue to be flushed on a 0629 sense (only on a hard reset).
SCSI 3* —Alters the inquiry data (when returned by any volume on the port) to report that the Symmetrix supports the SCSI-3 protocol. When disabled, the SCSI-2 protocol is supported.
SCSI Support1 (OS2007)* —Provides stricter compliance with SCSI standards for managing volume identifiers, multi-port targets, unit attention reports, and the absence of a volume at LUN 0. To enable the SCSI Support1 attribute, you must also enable the SPC2 Protocol Version attribute.
SPC2 Protocol Version* —This flag should be enabled (default) in a Windows 2003 environment running Microsoft HCT test version 12.1. When setting this flag, the port must be offline.
AS400 —Indicates whether AS/400 is enabled. This attribute is only available on storage systems running Enginuity 5876.
Open VMS*,** —Enables an Open VMS fiber connection.

* To enable/disable this flag when it is already overridden (that is, the Override option is already selected), you must:
1. Clear the Override option and click OK to close the dialog.
2. Open the dialog again, select Override, and then the desired state (Enable/Disable).
3. Click OK.
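The Volume Set Addressing format in the table above can be checked with a short routine. This is an illustrative sketch of the stated rules only (4 hex digits, leading digit 0, LUN digit 0-7); `is_valid_vsa_address` is a hypothetical helper, not a Unisphere or Solutions Enabler API.

```python
import re

# Hypothetical validator for the 4-digit volume set address 0VTL, where
# the first digit must be 0, V is the VBus number, T is the target, and
# L (the LUN) must be 0-7 -- i.e. (0)000-(0)007 up to (0)FF0-(0)FF7.
VSA_RE = re.compile(r"^0[0-9A-Fa-f]{2}[0-7]$")

def is_valid_vsa_address(address):
    return VSA_RE.fullmatch(address) is not None
```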

300 Host Management

** For storage systems running HYPERMAX OS 5977 or higher, if Volume Set Addressing is overridden and enabled, the Open VMS flag must be disabled. However, if you do not actually select the Open VMS override option, Solutions Enabler will automatically override and disable it. If the Open VMS flag is overridden and enabled, the Volume Set Addressing flag must be disabled. However, if you do not actually select the Volume Set Addressing override option, Solutions Enabler will automatically override and disable it.

Host I/O limits dialog box

Use this dialog box to set the host I/O limits for the storage group you are provisioning:

Procedure
1. Type values for one or both of the following:
- MB/Sec —Maximum bandwidth (in MB per second). Valid values range from 1 MB/sec to 100,000 MB/sec.
- IO/Sec —Maximum IOPS (in I/Os per second). Valid values range from 100 IO/sec to 2,000,000 IO/sec, in increments of 100.
2. To configure a dynamic distribution of host I/O limits, set Dynamic Distribution to one of the following; otherwise, leave this field set to Never (default). This feature requires Enginuity 5876.163.105 or higher.
- Always —Enables full dynamic distribution mode. When enabled, the configured host I/O limits will be dynamically distributed across the configured ports, thereby allowing the limits on each individual port to adjust to fluctuating demand.
- Failure —Enables port failure capability. When enabled, the fraction of configured host I/O limits available to a configured port will adjust based on the number of ports currently online.
3. Click OK.

Note: For more information on host I/O limits, refer to Setting host I/O limits on page 132.

Host Group filtering rules

The host and host group list follows these guidelines for display:
- Initiators with the same name, but seen from different storage system login history tables, are filtered to show only once. New host groups can be set on both storage systems.
- Initiators logged into one storage system but not another display in the list, but show up as logged out on the other storage system if they are added to the host.
- If an initiator is already in a host group on all of the storage systems where that initiator is logged in, then this initiator is filtered out of the Available list.
- Host groups with the same name and the same contents are filtered to show only once.
- If an initiator is not in a host group on one storage system, but it is in a host group on another storage system, both the initiator and the host group are shown in the list.
- Host groups with the same name but different contents are shown individually with "Sym" and the last three digits of the storage system ID appended to the name.
- Host groups with different names but the same contents across different storage systems display individually.
- If an initiator that is not in a host group has the same name as a host group on a different storage system, then the host group is appended with (Group).
- Cascaded host groups are filtered out.

Select Storage Resource Pool

Use this dialog box to select a storage resource pool for the operation. Selecting None will remove the storage group from FAST control.
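The valid ranges in the Host I/O limits dialog described earlier (1-100,000 MB/sec; 100-2,000,000 IO/sec in increments of 100; at least one limit supplied) can be sketched as a client-side check. This is a hypothetical illustration of the documented rules, not Unisphere code.

```python
def is_valid_host_io_limits(mb_per_sec=None, io_per_sec=None):
    """Validate host I/O limit values per the documented dialog rules."""
    # At least one of the two limits must be supplied.
    if mb_per_sec is None and io_per_sec is None:
        return False
    # Bandwidth limit: 1 MB/sec to 100,000 MB/sec.
    if mb_per_sec is not None and not (1 <= mb_per_sec <= 100_000):
        return False
    # IOPS limit: 100 to 2,000,000 IO/sec, in increments of 100.
    if io_per_sec is not None:
        if not (100 <= io_per_sec <= 2_000_000) or io_per_sec % 100 != 0:
            return False
    return True
```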

302 Host Management

Provisioning storage

This section describes how to make storage available to hosts:

Creating host groups

Before you begin
To perform this operation, you must be a StorageAdmin. The storage system must be running Enginuity version 5876, or HYPERMAX OS 5977 or higher. The maximum number of hosts allowed in a host group depends on the operating environment: for Enginuity 5876, the maximum allowed is 32; for HYPERMAX OS 5977 or higher, the maximum allowed is 64.

This procedure explains how to create a host group (collection of hosts). For instructions on creating a host, refer to Creating hosts on page 292.

To create host groups:

Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Click Create > Create Host Group.
4. Type a Host Group Name.
   Host group names must be unique from other hosts/host groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Host group names are case-insensitive.
5. Select the Fibre radio button to filter the available hosts table to display Fibre Channel hosts only, or select the iSCSI radio button to filter the table to display iSCSI hosts only. The Fibre radio button is selected by default.
6. Optional: Do one of the following:
- To create new hosts to add to the group, click Create New Host. For instructions on creating hosts, refer to Creating hosts on page 292.
- To add existing hosts to the group:
  - Specify the host by typing its name or by selecting it from the list. To filter the list, type part of the host name.
  - Click Add.
  - Repeat these steps for each additional host.
- To set the host port attributes:
  - Click Set Host Group Flags to open the Set Host/Host Group Flags dialog box.
  - Optional: Select a host whose flag settings you want to copy.

303 Host Management

  - Modify any of the attributes by selecting the corresponding Override option (thereby activating the Enable option) and enabling (select) or disabling (clear) the flag.
- Optional: Select Consistent LUNs to specify that LUN values for the host must be kept consistent for all volumes within each masking view of which this host is part. When set, any masking operation involving this host that would result in inconsistent LUN values is rejected. When not set, the storage system attempts to keep LUN values consistent, but deviates from consistency if LUN conflicts occur during masking operations.
7. Click OK.
8. Do either of the following:
- Click Run Now to start the task now.
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.

Adding hosts to host groups

Before you begin
To perform this operation, you must be a StorageAdmin.

To add hosts to host groups:

Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select the host group (or empty host) and click the details icon.
4. Click on the link in the Hosts field.
5. Click Add Hosts.
6. Specify the host by typing its name or by selecting it from the list. The hosts table is a filtered list based on whether the host selected is Fibre Channel or iSCSI. To filter the list, type part of the host name. Click Add. Repeat this step for each additional host.
7. Click Run Now or Add To Job List.

Removing hosts from host groups

Before you begin
To perform this operation, you must be a StorageAdmin. The storage system must be running Enginuity version 5876 or higher.

To remove hosts from host groups:

Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select the host group (or empty host) and click the details icon.

304 Host Management

4. Click on the link in the Hosts field.
5. Select the host and click Remove.
6. Click OK.

Modifying host groups

Before you begin
To perform this operation, you must be a StorageAdmin. The storage system must be running Enginuity version 5876 or higher.

To modify host groups:

Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Do one of the following:
- To modify a host group:
  - Select the host group and click Modify.
  - To change the host group Name, highlight it and type a new name over it. Host names must be unique from other hosts on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Host names are case-insensitive.
- To add a host:
  - Optional: To create a new host to add to the group, click Create. For help, refer to Creating hosts on page 292.
  - Select the host and click Add. To filter the list, type part of the host name. Repeat this step for each additional host.
- To remove a host:
  - In the list of hosts, select the host and click Delete.
  - Click OK.
4. Repeat these steps for each additional host.
5. Do either of the following:
- Click Run Now to start the task now.
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.

Renaming hosts/host groups

Before you begin
To perform this operation, you must be a StorageAdmin. The storage system must be running Enginuity version 5876 or higher.

To rename hosts/host groups:

305 Host Management

Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select the host/host group and click Modify.
4. In the Properties panel, type a new name for the host/host group and click Apply.
   Host/host group names must be unique from other hosts/host groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Host/host group names are case-insensitive.

Setting host or host group port flags

To set host or host group port flags:

Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select a host, click the more-actions icon, and then click Set Flags to open the Set Host/Host Group Flags dialog.
4. Optional: Select a host/host group whose flag settings you want to copy from the Copy Flags from Other Host/Host Group drop-down menu.
5. Modify any of the flags by selecting the corresponding Override option (thereby activating the Enable option) and enabling (select) or disabling (clear) the flag.
6. Optional: Select Consistent LUNs to specify that LUN values for the host must be kept consistent for all volumes within each masking view of which this host is part. When set, any masking operation involving this host that would result in inconsistent LUN values is rejected. When not set, the storage system attempts to keep LUN values consistent, but deviates from consistency if LUN conflicts occur during masking operations.
7. Click OK.

Deleting hosts/host groups

Before you begin
To perform this operation, you must be a StorageAdmin. The storage system must be running Enginuity version 5876 or higher.

To delete hosts/host groups:

Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select the host/host group from the list and click Delete.

306 Host Management

4. Click OK to confirm.

Viewing hosts/host groups

Procedure
1. Select the storage system.
2. Select Hosts > Hosts to open the Hosts list view.
3. Use the Hosts list view to view and manage hosts.

The following properties display:
Name —Host/host group name. An arrow icon at the beginning of the name indicates that the host is a host group. Click the icon to view hosts contained in the group.
Masking Views —Number of masking views associated with the host.
Initiators —Number of initiators in the host.
Consistent LUNs —Flag indicating if the Consistent LUNs flag is set. When set, any masking operation involving this host/host group that would result in inconsistent LUN values is rejected. When not set, the storage system attempts to keep LUN values consistent, but deviates from consistency if LUN conflicts occur during masking operations. A check mark indicates that the feature is set.
Port Flag Overrides —Flag indicating if any port flags are overridden for the host. A check mark indicates that there are overridden port flags.
Last Update —Timestamp of the most recent changes to the host.
Click the details icon to view the host/host group details.

The following controls are available:
Create Host — Creating hosts on page 292
Create Host Group — Creating host groups on page 302
Provision Storage to Host — Using the Provision Storage wizard on page 100 or Using the Provision Storage wizard on page 108
Modify — Modifying hosts on page 294 or Modifying host groups on page 304
Set Flags — Setting host or host group port flags on page 296
Delete — Deleting hosts/host groups on page 296

Viewing host/host group details

Procedure
1. Select the storage system.
2. Select Hosts > Hosts.
3. Select the host/host group.
4. Click the details icon to view the host/host group details.

307 Host Management

Note: The properties and controls available in this panel depend on whether you are viewing details of an individual host or of a host group, and on the storage operating environment.

The following properties display:
Name —Host/host group name. To rename the host/host group, type a new name over the existing one and click Apply. Host/host group names must be unique from other hosts/host groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Host names are case-insensitive.
Hosts —Number of hosts in the group. This field only displays for host groups.
Masking Views —Number of masking views with which the host/host group is associated.
Initiators —Number of initiators in the host/host group. For host groups, the value includes initiators in any child host groups.
Host Groups —Number of host groups in which this host is a member. This field only displays for individual hosts.
Consistent LUNs —Flag indicating if the Consistent LUNs flag is set. When set, any masking operation involving this host/host group that would result in inconsistent LUN values is rejected. When not set, the storage system attempts to keep LUN values consistent, but deviates from consistency if LUN conflicts occur during masking operations. A check mark indicates that the feature is set.
Port Flag Overrides —Flag indicating if any port flags are overridden for the host. A check mark indicates that there are overridden port flags.
Enabled Port Flags —List of any enabled port flags overridden by the host/host group.
Disabled Port Flags —List of any disabled port flags overridden by the host/host group.
Last Update —Timestamp of the most recent changes to the host/host group.
PowerPath Hosts —Number of PowerPath hosts.

Creating masking views

Before you begin
The following explains how to mask volumes on storage systems running Enginuity 5876 or higher. To create a masking view, you need to have created initiator groups, port groups, and storage groups. For instructions, refer to Creating port groups on page 316.

Procedure
1. Select the storage system.
2. Select Hosts > Masking View to open the Masking Views list view.
3. Click Create to open the Create Masking View dialog box.

308 Host Management

4. Type the Masking View Name.
   Masking view names must be unique from other masking views on the array and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Masking view names are case-insensitive.
5. Select the Host.
6. Select the Port Group.
7. Select the Storage Group.
8. Optional: Manually set the host LUN addresses:
   a. Click Set Dynamic LUNs to open the Set Dynamic LUNs dialog box.
   b. Select a volume, and notice the address displayed in the Starting LUN field. To accept this automatically generated address, click Apply Starting LUN. To move to the next available address, click Next Available LUN.
   c. Click OK to close the Set Dynamic LUNs dialog box.
9. Click OK.

Renaming masking views

Procedure
1. Select the storage system.
2. Select Hosts > Masking View to open the Masking Views list view.
3. Select the masking view from the list and click Rename.
4. Type the new Name, and click OK.
   Masking view names must be unique from other masking views on the array and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and hyphens (-) are allowed. Masking view names are case-insensitive.

Deleting masking views

This procedure explains how to delete masking views from the Masking Views list view. In eNAS operating environments, you can also perform this operation from the File Masking Views page (System > System Dashboard > File Dashboard > File Masking Views).

Procedure
1. Select the storage system.
2. Select Hosts > Masking View to open the Masking Views list view.
3. Select the masking view from the list and click Delete to open the Delete Masking View confirmation dialog box.
4. To unmap volumes in the masking view from their mapped ports, select Storage Group(s).
5. Click OK.

Viewing masking views

309 Host Management

Procedure
1. Select the storage system.
2. Do one of the following:
- Select Hosts > Masking View to open the Masking Views list view.
- Select Storage > VVols Dashboard > PE Masking Views to open the PE Masking Views list view.

Use the Masking Views list view to view and manage masking views.

The following properties display:
- Name — User-defined masking view name.
- Host — Name of the associated host.
- Port Group — Name of the associated port group.
- Storage Group — Name of the associated storage group.

To view a masking view's details, select it and click the details icon. The following properties are displayed:
- Name — User-defined masking view name.
- Capacity (GB) — Total capacity, in GB, of all volumes in the masking view.
- Host — Name of the associated host.
- Port Group — Name of the associated port group.
- Storage Group — Name of the associated storage group.
- Initiators — Number of initiators in the masking view. This is the number of primary initiators contained in the masking view and does not include any initiators included in cascaded initiator groups that may be part of the masking view.
- Ports — Number of ports contained in the masking view.
- Volumes — Number of volumes in the storage group contained in the masking view.

Depending on the options chosen, some of the following controls are available:
- Create — Creating masking views on page 307
- Rename — Renaming masking views on page 308
- View Path Details — Viewing masking view connections on page 309
- Delete — Deleting masking views on page 308

Viewing masking view connections

This procedure explains how to perform the operation from the Masking Views list view. In eNAS operating environments, you can also perform this operation from the File Masking Views page (System > System Dashboard > File Dashboard > File Masking Views).

Procedure
1. Select the storage system.
2. Select Hosts > Masking Views to open the Masking Views list view.

310 Host Management

3. Select the masking view from the list and click View Path Details to open the masking view connections view.
4. Use the Masking View view to filter a masking view by selecting various combinations of members within a group (initiators, ports, volumes) and display the masking view details from the group level to the object level.

Filtering a masking view
The Masking View view contains three tree view lists, one for each of the component groups in the masking view: initiator groups, port groups, and storage groups. The parent group is the default top-level group in each expandable tree view and contains a list of all components in the masking group, including child entries, which are also expandable. To filter the masking view, single-select or multi-select (hold the Shift key and select) the items in the list view. As each selection is made, the filtered results table is updated to reflect the current combination of filter criteria.

Filtered results table
The following properties display:
LUN Address —LUN address number.
Volume —Symmetrix system volume number.
Capacity (GB) —Capacity, in GB, of the volume.
Initiator —WWN or IQN (iSCSI Qualified Name) ID of the initiator.
Alias —Alias of the initiator.
Director:Port —Symmetrix system director and port in the port group.
Logged In —Indicates if the initiator is logged into the host/target.
On Fabric —Indicates if the initiator is zoned in and on the fabric.

The following additional filters are available to filter the results table:
Show Logged In —Shows only the entries for LUNs where the associated initiator is logged in.
Show On Fabric —Shows only the entries for LUNs where the associated initiator is zoned in and on the fabric.
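The Show Logged In and Show On Fabric filters above amount to simple predicates over the rows of the filtered results table. A minimal sketch follows; the sample rows and field names are made up for illustration and are not a Unisphere data structure.

```python
# Hypothetical rows shaped like the filtered-results table columns above.
paths = [
    {"lun": "000", "initiator": "10:00:00:00:c9:aa:bb:01",
     "logged_in": True,  "on_fabric": True},
    {"lun": "001", "initiator": "10:00:00:00:c9:aa:bb:02",
     "logged_in": False, "on_fabric": True},
    {"lun": "002", "initiator": "10:00:00:00:c9:aa:bb:03",
     "logged_in": False, "on_fabric": False},
]

def filter_paths(paths, show_logged_in=False, show_on_fabric=False):
    """Apply the Show Logged In / Show On Fabric filters cumulatively."""
    result = paths
    if show_logged_in:
        result = [p for p in result if p["logged_in"]]
    if show_on_fabric:
        result = [p for p in result if p["on_fabric"]]
    return result
```

With both filters off, all rows display; enabling a filter narrows the table, matching the cumulative behavior of the check boxes described above.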

311 Host Management

Viewing masking view details

Procedure
1. Select the storage system.
2. Select Hosts > Masking View to open the Masking Views list view.
3. Select the masking view from the list and click the details icon.

The following properties display:
- Name —Name of the masking view.
- Host —Name of the host.
- Port Group —Name of the port group.
- Storage Group —Name of the storage group.
- Initiators —Number of initiators in the masking view. This is the number of primary initiators contained in the masking view and does not include any initiators included in cascaded initiator groups that may be part of the masking view.
- Ports —Number of ports contained in the masking view.
- Volumes —Number of volumes in the storage group contained in the masking view.
- Capacity (GB) —Total capacity, in GB, of all volumes in the masking view.

Set Dynamic LUN Addresses

Use this dialog box to manually assign host LUN addresses for a masking operation.

Procedure
1. Select the storage system.
2. Select Hosts > Masking Views.
3. Select a masking view and click Create to open the Create Masking View dialog box.
4. Click Set Dynamic LUNs to open the Set Dynamic LUNs dialog box.

This dialog box contains the following elements:
Starting LUN —LUN address assigned to the first volume.
Apply Starting LUN —Sets the address for the volume and keeps the dialog box open for additional operations.
Next Available LUN —Increments the Starting LUN address to the next available address.
Volumes to be masked —Select the volumes you want to mask from the volumes list.

Enter the necessary values and click OK.

Setting initiator port flags

312 Host Management

Procedure
1. Select the storage system.
2. Select Hosts > Initiators.
3. Select an initiator and click Set Initiator Flags to open the Set Host Flags dialog.
4. Optional: Copy the attributes of an existing flag setting by selecting a flag under the Copy Flags drop-down menu.
5. Modify the attributes by selecting the corresponding Override option (thereby activating the Enable option), and enable (select) or disable (clear) the flag.
6. Click OK.

Setting initiator attributes

Before you begin
Any changes made to an initiator's attributes affect the initiator and all its ports.

To set initiator attributes:

Procedure
1. Select the storage system.
2. Select Hosts > Initiators.
3. Select an initiator and click Set Attributes.
   The initiator director:port, initiator, and optional alias names display.
4. Type the FCID (Fibre Channel ID) Value.
5. Click OK.

Renaming initiator aliases

When the system discovers the attached HBAs, a two-part record is created for the name. The format is NodeName/PortName. For Fibre adapters, the HBA name is the WWN or iSCSI name. For native iSCSI adapters, the HBA name is the IP address. You can rename the HBA identifier by creating a shorter, easier to remember ASCII alias name.

To rename an initiator alias:

Procedure
1. Select the storage system.
2. Select Hosts > Initiators.
3. Select an initiator, click the more-actions icon, and then click Rename Alias.
4. Type a Node Name and Port Name.
   On storage systems running Enginuity 5876, node and port names cannot exceed 16 characters. On storage systems running HYPERMAX OS 5977 or higher, node and port names cannot exceed 32 characters.

313 Host Management

5. Click OK.
   This overwrites any existing alias name.

Replacing initiators

If a host adapter fails, or needs replacement for any reason, you can replace the adapter and assign its set of volumes to a new adapter.

To replace an initiator:

Procedure
1. Select the storage system.
2. Select Hosts > Initiators.
3. Select the initiator, click the more-actions icon, and then click Replace Initiator.
   The existing initiator and optional alias names display.
4. Type the full WWN or iSCSI identifier of the New Initiator. For native iSCSI, type the IP address.
5. Click OK.
   This substitutes all occurrences of the old WWN/iSCSI/IP address with the new one.

Removing masking entries

Procedure
1. Select the storage system.
2. Select Hosts > Initiators.
3. Select the initiator, click the more-actions icon, and select Remove Masking Entry to open the Remove Masking Entry dialog box.
4. Select the director and port.
5. Click OK.

Viewing initiators

Procedure
1. Select the storage system.
2. Select Hosts > Initiators.
3. Use the Initiators list view to view and manage initiators.
   The properties and controls displayed in the view vary depending on the Enginuity version running on the storage system and on how you arrived at this view.

Initiator — WWN or IQN (iSCSI Qualified Name) ID of the initiator.
Dir:Port — Storage system director and port associated with the initiator, for example: FA-7E:1.

314 Host Management

Alias — User-defined initiator name.
Logged In — Flag indicating if the initiator is logged into the fabric: Yes/No.
On Fabric — Flag indicating if the initiator is on the fabric: Yes/No.
Port Flag Overrides — Flag indicating if any port flags are overridden by the initiator: Yes/No.
Hosts — Number of hosts the initiator is associated with.
Masking Views — Number of masking views the initiator is associated with, including the masking views that are associated with any cascaded relationships. This field only applies/appears for storage systems running Enginuity 5876 or higher.

To view an initiator's details, click the details icon.

The following controls are available:
Set Host Flags — Setting initiator port flags on page 311
Set Attributes — Setting initiator attributes on page 312
Rename Alias — Renaming initiator aliases on page 312
Replace Initiator — Replacing initiators on page 313
Remove Masking Entry — Removing masking entries on page 313

Viewing initiator details

Procedure
1. Select the storage system.
2. Select Hosts > Initiators.
3. Select the initiator from the list and click the details icon.
4. The following properties are displayed:

Note: The properties and controls displayed in the view vary depending on the Enginuity version running on the storage system and on how you arrived at this view.

Initiator —WWN or IQN (iSCSI Qualified Name) ID of the initiator.
Dir:Port —Storage system director and port associated with the initiator, for example: FA-7E:1.
Alias —The user-defined initiator name.
Hosts —Number of hosts.
Initiator Groups —Number of associated initiator groups, including the immediate initiator group and any parent initiator groups that include this initiator group. This field only applies/appears for Symmetrix systems running Enginuity 5876 or higher.
Masking Views —Number of associated masking views, including the masking views that are associated with any cascaded relationships. This field only applies/appears for storage systems running Enginuity 5876 or higher.

Volumes — Number of volumes.
Logged In — Flag indicating if the initiator is logged into the fabric: Yes/No.
On Fabric — Flag indicating if the initiator is on the fabric: Yes/No.
Port Flag Overrides — Flag indicating if any port flags are overridden by the initiator: Yes/No.
Enabled Flags — List of any enabled port flags overridden by the initiator.
Disabled Flags — List of any disabled port flags overridden by the initiator.
Flags in Effect — Flags that are in effect for the initiator.
Last Login — Timestamp for the last time this initiator was logged into the system.
FCID — Fibre Channel ID for the initiator.
FCID Value — Value that is enabled for FCID lockdown.
FCID Lockdown — Flag indicating if port lockdown is in effect: Yes/No.
IP Address — IP address for the initiator.
The following controls are available:
Set Attributes — Setting initiator attributes on page 312
Set Host Flags — Setting initiator port flags on page 311
Rename Alias — Renaming initiator aliases on page 312
Replace Initiator — Replacing initiators on page 313
Remove Masking Entry — Removing masking entries on page 313

Viewing volumes associated with host initiator
Procedure
1. Select the storage system.
2. Select Hosts > Initiators.
3. Select the initiator from the list and click the details icon.
4. Click on the number in the Volumes field.
5. Use this view to view and manage volumes associated with the initiator.
The following properties display:
Name — Volume name.
Type — Type of volume.
Allocated % — Percentage of space allocated.
Capacity (GB) — Volume capacity in GB.
Status — Volume status.
Emulation — Volume emulation.
SRDF Group — SRDF group the volume belongs to.
Host Paths — Host paths for the volume.
To see more volume properties, select the volume and click the details icon.

The following controls are available, depending on the Enginuity version running on the storage system:
Create — Creating volumes on page 178
Expand — Expanding existing volumes on page 191
Delete — Deleting volumes on page 188
Create SG — HYPERMAX OS 5977 or later: Creating storage groups on page 112
Set Volume Attributes — Setting volume attributes on page 195
Set Volume Identifiers — Setting volume identifiers on page 196
Set Volume Status — Setting volume status on page 194
Change Volume Configuration — Changing volume configuration on page 190
QOS for Replication — Replication QoS on page 197
Duplicate Volume — Duplicating volumes on page 188
Expand Volume — Expanding existing volumes on page 191
Start Allocate/Free/Reclaim — Managing thin pool allocations on page 244
Stop Allocate/Free/Reclaim — Managing thin pool allocations on page 244
Map — Mapping volumes on page 192
Unmap — Unmapping volumes on page 193
Set SRDF GCM — Setting the SRDF GCM flag on page 434
Reset SRDF/Metro Identity — Resetting original device identity on page 432

Viewing details of a volume associated with initiator
Procedure
1. Select the storage system.
2. Select Hosts > Initiators.
3. Select the initiator from the list and click the details icon.
4. Click on the number in the Volumes field.
5. Select the volume from the list and click the details icon to see its details.
The following controls are available:
Create — To select the type of volume to create, refer to Creating volumes on page 178.

Creating port groups
Before you begin
Note the following recommendations:
Port groups should contain four or more ports.

Each port in a port group should be on a different director.
A port can belong to more than one port group. However, for storage systems running HYPERMAX OS 5977 or higher, you cannot mix different types of ports (physical FC ports, virtual ports, and iSCSI virtual ports) within a single port group.
To create a port group:
Procedure
1. Select the storage system.
2. Select Hosts > Port Groups.
3. Click Create.
4. Type a Port Group Name. Port group names must be unique from other port groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and dashes (-) are allowed. Port group names are case-insensitive.
5. Select the appropriate filter to filter the port list by iSCSI or FC.
6. Select the available ports from the Ports list, and click Add Ports to add them to the Ports to add list.
The following properties display:
Dir:Port — Storage system director and port in the port group.
Identifier — Port identifier.
Port Groups — Number of port groups where the port is a member.
Masking Views — Number of masking views where the port is associated.
Volumes — Number of volumes in the port group.
VSA Flag — An indicator to show if the Volume Set Addressing flag is set for the port.
7. Select Run Now or Add To Job List.

Deleting port groups
Procedure
1. Select the storage system.
2. Select Hosts > Port Groups.
3. Select the port group and click Delete to open the Delete Port Group confirmation message.
4. For mapped ports only: Select Unmap.
5. Click OK.

Adding ports to port groups
Before you begin
Note the following recommendations:
Port groups should contain four or more ports.
Each port in a port group should be on a different director.

A port can belong to more than one port group. However, for storage systems running HYPERMAX OS 5977 or higher, you cannot mix different types of ports (physical FC ports, virtual ports, and iSCSI virtual ports) within a single port group.
To add ports to a port group:
Procedure
1. Select the storage system.
2. Select Hosts > Port Groups.
3. Select the port group and click the details icon.
4. Click on the number in the Ports field.
5. Click Add Ports.
If the port group already contains FC ports, the dialog is populated with all available FC ports. If the port group already contains iSCSI ports, the dialog is populated with all available iSCSI ports. If there are no ports in the port group, select the appropriate filter to filter the port list by iSCSI or FC.
6. Select the available ports from the Ports to add list, and click Add Ports to add them to the Ports to Add list.
The following properties display:
Dir:Port — Storage system director and port in the port group.
Identifier — IQN of an iSCSI target or WWN of an FC port.
Port Groups — Number of port groups where the port is a member.
Masking Views — Number of associated masking views.
Mapped Volumes — Number of associated mapped volumes.
7. Click OK.

Removing ports from port groups
Before you begin
Note the following recommendations:
Port groups should contain four or more ports.
Each port in a port group should be on a different director.
To remove ports from port groups:
Procedure
1. Select the storage system.
2. Select Hosts > Port Groups.
3. Select the port group and click the details icon.
4. Click on the number in the Ports field.
5. Select the port to remove, or hold down the Shift key to multi-select the ports to be removed from the port group.
6. Click Remove to open the Remove Ports confirmation message.

7. For mapped ports only: You can optionally select Unmap to unmap any affected volumes from their respective ports.
8. Click OK.

Renaming port groups
To rename port groups:
Procedure
1. Select the storage system.
2. Select Hosts > Port Groups.
3. Select the port group and click Modify.
4. Type the new port group Name and click Apply.

Viewing port groups
Procedure
1. Select the storage system.
2. Select Hosts > Port Groups to open the Port Groups list view.
The Port Groups list view allows you to view and manage port groups on a storage system. There are multiple ways to open this view. Depending on the one you used, some of the following properties and controls may not appear.
The following properties display (click a column heading to sort the list by that value):
Name — User-defined port group name.
Ports — Number of ports in the group.
Masking Views — Number of masking views where the port group is associated.
Last Update — Timestamp of the most recent changes to the port group.
To view more details of a port group, select it and click the details icon.
The following controls are available:
Create — Creating port groups on page 316
Modify — Renaming port groups on page 319
Delete — Deleting port groups on page 317

Viewing port group details
Procedure
1. Select the storage system.
2. Select Hosts > Port Groups.
3. Select the port group and click the details icon.
4. Use the port group Details view to view and manage a port group.
The following properties display:

Name — User-defined port group name.
Ports — Number of ports in the group. Click on the number for more details.
Masking Views — Number of masking views where the port group is associated. Click on the number for more details.
Last Update — Timestamp of the most recent changes to the port group.
Host I/O (IO/Sec) — Total host I/O limit on the specified port group in IO/Sec. Zero indicates that there is no limit set.
Host I/O (MB/Sec) — Total host I/O limit on the specified port group in MB/Sec. Zero indicates that there is no limit set.
Port Speed (MB/Sec) — Bandwidth in MB/sec for that port group (that is, the aggregated port negotiated speed for the ports in the group).
Percent Capacity (%) — Percentage of the bandwidth demand over the port group negotiated speed.
Excess (MB/Sec) — Amount of bandwidth in MB/sec that is left available on the port group after the host I/O limits have been accounted for.
The following controls are available:
Create — Creating port groups on page 316
Modify — Renaming port groups on page 319
Delete — Deleting port groups on page 317

Viewing ports in a port group
Procedure
1. Select the storage system.
2. Select Hosts > Port Groups.
3. Select the port group and click the details icon.
4. Click on the number in the Number of Ports field.
5. Use the Ports list view to view and manage ports.
The following properties are displayed:
Dir:Port — Storage system director and port in the port group.
Identifier — IQN of an iSCSI target or WWN of an FC port.
Port Groups — Number of port groups where the port is a member.
Masking Views — Number of masking views where the port is associated.
Mapped Volumes — Number of volumes mapped to the port.
The following controls are available:
Add Ports — Adding ports to port groups on page 317
Remove — Removing ports from port groups on page 318
Viewing port details on page 321
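The port-group bandwidth fields above relate to one another in a simple way. The sketch below is an illustration of those relationships only, not Unisphere code, and the port speeds and limit it uses are hypothetical numbers:

```python
# Illustration only (not Unisphere code): how the port-group bandwidth
# fields described above relate to one another.

def port_group_bandwidth(port_speeds_mb, host_io_limit_mb):
    """Derive the Details-view bandwidth fields for a port group.

    port_speeds_mb   -- negotiated speed (MB/sec) of each port in the group
    host_io_limit_mb -- Host I/O (MB/Sec) limit set on the group (0 = no limit)
    """
    port_speed = sum(port_speeds_mb)            # aggregated negotiated speed
    percent_capacity = 100.0 * host_io_limit_mb / port_speed
    excess = port_speed - host_io_limit_mb      # bandwidth left after the limit
    # Per-port figures are the group limit divided across its ports:
    per_port_limit = host_io_limit_mb / len(port_speeds_mb)
    return port_speed, percent_capacity, excess, per_port_limit

# A hypothetical four-port group of 800 MB/sec ports with a 1600 MB/sec limit:
print(port_group_bandwidth([800, 800, 800, 800], 1600))
# (3200, 50.0, 1600, 400.0)
```

For example, the group above reports a Port Speed of 3200 MB/sec, a Percent Capacity of 50%, and an Excess of 1600 MB/sec.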

Viewing port details
Procedure
1. Select the storage system.
2. Select Hosts > Port Groups.
3. Select the port group and click the details icon.
4. Click on the number in the Ports field.
5. Select a port and click the details icon.
6. Use the port Details view to view and manage a port.
The following properties display:
Dir:Port — Storage system director and port in the port group.
Identifier — IQN of an iSCSI target or WWN of an FC port.
Number of Port Groups — Number of port groups where the port is a member.
Number of Masking Views — Number of masking views where the port is associated.
Number of Masked Volumes — Number of volumes visible through the port.
Number of Mapped Volumes — Number of volumes mapped to the port, including meta members.
Volume Set Addressing — Whether volume set addressing is on or off.
Port Status — Whether the port is online or offline.
Number of IP Interfaces — Number of IP interfaces associated with the iSCSI target.
Number of iSCSI Ports — Number of physical iSCSI ports associated with IP interfaces which are in turn attached to the iSCSI target.

Volume Set Addressing
An addressing scheme that uses virtual busses, targets, and LUNs to greatly increase the number of LUNs that can be addressed on a target port. Volume Set Addressing is supported for HP-UX.

Viewing host I/O limits
Procedure
1. Select the storage system.
2. Select Hosts > Port Groups.
3. Select the port group and click the details icon.
4. Click on the link in the Host I/O (IO/Sec) or Host I/O (MB/Sec) field.
The following properties display:
Storage Group — Storage group on which the limit is set.

Quota State — Whether the limit is set directly on the storage group (Defined) or through a cascaded relationship (Shared).
Dir:Port — Storage system director and port in the port group.
Host I/O Limit (MB/Sec) — Total host I/O limit on the listed port in MB/Sec. This value is the associated port group's I/O limit divided across its ports.
Host I/O Limit (IO/Sec) — Total host I/O limit on the listed port in IO/Sec. This value is the associated port group's I/O limit divided across its ports.
Child Host I/O Limit (MB/Sec) — Total child host I/O limit on the listed port in MB/Sec. This value is the associated port group's I/O limit divided across its ports.
Child Host I/O Limit (IO/Sec) — Total child host I/O limit on the listed port in IO/Sec. This value is the associated port group's I/O limit divided across its ports.

Managing storage for Mainframe
The Mainframe Dashboard provides you with a single place to monitor and manage configured splits, CU images, and CKD volumes.
To access the Mainframe Dashboard:
Procedure
1. Select the storage system.
2. Select Hosts > Mainframe to open the Mainframe Dashboard.
The Mainframe Dashboard is organized into the following panels:
- CKD Compliance
- CKD Storage Groups
- Actions
- Summary

CKD Compliance panel
Displays how well CKD storage groups are complying with their respective service level policies, if applicable. All of the storage groups on the Mainframe Dashboard are organized into the following categories:
Total
All Mainframe storage groups on the array.
Stable
Number of storage groups performing within the service level targets. An indicator is displayed when there are no storage groups performing within the service level targets.

Marginal
Number of storage groups performing below service level targets. An indicator is displayed when there are no storage groups performing below service level targets.
Critical
Number of storage groups performing well below service level targets. An indicator is displayed when there are no storage groups performing well below service level targets.
No Service Level
No service level compliance information.

CKD Storage Groups panel
Displays all of the Mainframe storage groups on the array. Double-click on a storage group to see more details as well as information on its compliance and volumes.

Actions panel
Displays the following links:
Provision Storage
Opens the Mainframe Provision wizard, which guides you through the process of provisioning storage for a mainframe. For more information, see Using the Provision Storage wizard for mainframe on page 104.
Create CKD Volumes
Opens the Create Volume dialog, from where you can create a CKD volume. For more information, see Creating CKD volumes on page 330.

Summary panel
Displays the following mainframe summary information:
Splits
The number of splits on the selected array. To view the list of splits, click Splits. For more information about viewing splits, see Viewing splits on page 327.
CU Images
The number of CU images on the selected array. To view the list of CU images, click CU Images. For more information about viewing CU images, see Viewing CU images on page 328.

CKD Volumes
The number of CKD volumes on the selected array. To view the list of CKD volumes, click CKD Volumes. For more information about viewing CKD volumes, see Managing volumes on page 177.

Provisioning storage for mainframe
With the release of HYPERMAX OS 5977 Q1 2016, Unisphere introduces support for service level provisioning for mainframe. Service level provisioning simplifies storage system management by automating many of the tasks associated with provisioning storage. It eliminates the need for storage administrators to manually assign physical resources to their applications. Instead, storage administrators specify the service level and capacity required for the application, and the system provisions the storage group appropriately.
You can provision CKD storage to a mainframe host using the Provision Storage wizard. For specific instructions about how to provision storage for mainframe, refer to Using the Provision Storage wizard for mainframe on page 104. The storage system must be running HYPERMAX OS 5977 Q1 2016, or higher, and have at least one FICON director configured.
To provision storage for Open Systems, refer to Using the Provision Storage wizard on page 100.

Mapping CKD devices to CU images
You can map CKD devices to front-end EA/EF directors. Addressing on EA and EF directors is divided into Logical Control Unit images, referred to as CU images. Each CU image has its own unique SSID and contains a maximum of 256 devices (numbered 0x00 through 0xFF). When mapped to an EA or EF port, a group of devices becomes part of a CU image.
For more information about how to map CKD devices to CU images, see the following tasks:
- z/OS map from the CU image list view on page 332
- z/OS map from the volume list view on page 333

Using the Provision Storage wizard for mainframe
Before you begin
- The storage system must be running HYPERMAX OS 5977 Q1 2016, or higher, and have at least one FICON director configured.
- Depending on the type of configuration selected, not all of the steps listed below might be required.
To provision storage to mainframe:
Procedure
1. Select the storage system.
2. Select Hosts > Mainframe to open the Mainframe Dashboard.
3. In the Actions panel, click Provision Storage. The Provision Storage wizard for mainframe is displayed.
4. In the Create Storage Group page, type a Storage Group Name.

Storage group names must be unique from other storage groups on the storage system and cannot exceed 64 characters. Only alphanumeric characters, underscores (_), and dashes (-) are allowed. Storage group names are case-insensitive. If you want to create an empty storage group, proceed to the final step after typing the storage group name.
5. Select a Storage Resource Pool. To create the storage group outside of FAST control, select None. External storage resource pools are listed below the External heading.
6. Select an Emulation type. Available values are CKD-3390 and CKD-3380.
7. Select the Service Level to set on the storage group.
Service levels specify the characteristics of the provisioned storage, including average response time, workload type, and priority. This field defaults to None if you set the Storage Resource Pool to None.
Available values are:

Service level        Performance level   Use case
Diamond              Ultra high          HPC, latency sensitive
Bronze               Cost optimized      Backup, archive, file
Optimized (Default)                      Places the most active data on the highest performing storage and the least active on the most cost-effective storage.

For all-flash storage systems, the only service level available is Diamond and it is selected by default.
8. Type the number of Volumes and select either a Model or Volume Capacity. Selecting a Model type automatically updates the Volume Capacity value. Alternatively, you can type the Volume Capacity.
Note: The maximum CKD volume size supported is 1,182,006 cylinders or 935.66 GB. It is possible to create an empty storage group with no volumes.
9. (Optional) Configure volume options:
Note: When using this option, Unisphere uses only new volumes when creating the storage group; it will not use any existing volumes in the group.
a. Hover the cursor on the service level and click the edit icon.
b. Edit the Volume Identifier.
The following options are available:
None
Do not set a volume identifier.

Name Only
All volumes will have the same name. Type the name in the Name field.
Name and VolumeID
All volumes will have the same name with a unique volume ID appended to them. When using this option, the maximum number of characters allowed is 50. Type the name in the Name field.
Name and Append Number
All volumes will have the same name with a unique decimal suffix appended to them. The suffix will start with the value specified for the Append Number and increment by 1 for each additional volume. Valid Append Numbers must be from 0 to 1000000. When using this option, the maximum number of characters allowed is 50. Type the name in the Name field.
c. To Allocate capacity for each volume you are adding to the storage group, select this option. You can use this option only for newly created volumes, not existing volumes.
d. If you selected to allocate capacity in the previous step, you can mark the allocation as persistent by selecting Persist preallocated capacity through reclaim or copy. Persistent allocations are unaffected by standard reclaim operations and any TimeFinder/Clone, TimeFinder/Snap, or SRDF copy operations.
e. Click OK.
10. (Optional) To add a child storage group, do one of the following:
- On all-flash storage systems, click Add Storage Group.
- On all other storage systems, click Add Service Level.
Specify a Name, Service Level, Volumes, and Model/Volume Capacity. Repeat this step for each additional child storage group. The maximum number of child storage groups allowed is 64.
11. To create a storage group without actually provisioning it, click one of the following; otherwise, click Next and continue with the remaining steps in this procedure:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
12. On the CU Image page, select whether to use a New or an Existing CU image, and then do the following depending on your selection:
- New:
a. Specify the following information for the new CU image:
  - CU Image Number
  - SSID
  - Base Address
b. Select a Split with which to associate the CU image.
- Existing:

a. Select a CU image.
b. To specify a new value for the base address, click Set Base Address. For more information about setting the base address, refer to Setting the base address on page 337.
13. Click Next.
14. On the Review page, review the summary information displayed.
If the storage system is registered for performance, you can subscribe for compliance alerts for the storage group and run a suitability check to ensure that the load being created is appropriate for the storage system. To enable compliance alerts, select Enable Compliance Alerts. To run a suitability check, click Run Suitability Check.
15. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Viewing splits
Before you begin
- The storage system must be running HYPERMAX OS 5977 Q1 2016, or higher, and have at least one FICON director configured.
To view the splits list view:
Procedure
1. Select the storage system.
2. Select Hosts > Mainframe.
3. Click Splits to display the Splits list view.
The following properties are displayed:
Split Name
The user-defined name for the split.
Alpha Serial #
The alpha serial number of the split.
PAV State
Indicates what type of PAV is enabled on the split. The types are: HyperPAV, DynamicPAV, or SuperPAV (5978 only).
CU Images
The number of CU images associated with the split.
Ports
The number of FICON ports assigned to the split.
4. Select the split and click the details icon.
The following properties display:

Split Name
The user-defined name for the split.
Alpha Serial #
The alpha serial number of the split.
PAV State
Indicates if PAV is enabled on the split.
Number of CU Images
The number of CU images associated with the split.
Number of Ports
The number of FICON ports assigned to the split.

Viewing CU images
To view the CU images list view:
Procedure
1. Select the storage system.
2. Click Hosts > CU Images to display the CU Images list view.
3. The following properties display:
CU Image Number
The CU image number.
SSID
The CU SSID.
Split
The name of the split containing the CU image.
Number of Volumes
The number of volumes mapped to the CU image.
Storage Groups
The number of storage groups containing volumes mapped to the CU image.
Total Number of Base Addresses
The total number of base addresses configured on the CU image. The total includes used plus unused base addresses.
Number of Aliases
The number of aliases in use on the CU image.
Status
The status of volumes in the CU image.
To view more details, select the CU image and click the details icon.
The following controls are available:

- z/OS Map — z/OS map from the CU image list view on page 332
- z/OS Unmap — z/OS unmap from the CU image list view on page 333
- Assign Alias Range — Adding an alias range to a CU image on page 336
- Remove Alias Range — Removing an alias range from a CU image on page 337

Viewing CU image details
To view the CU images detailed view:
Procedure
1. Select the storage system.
2. Select Hosts > Mainframe to open the Mainframe Dashboard.
3. Click CU Images to display the CU Images list view.
4. Select the CU image and click the details icon.
The following properties display:
CU Image Number
The CU image number.
SSID
The CU SSID.
Split
The name of the containing split.
Number of Volumes
The number of volumes.
Storage Groups
The number of storage groups.
Status
The current status of the CU image.
Total Number of Base Addresses
The total number of base addresses configured on the CU image. The total includes used plus unused base addresses.
Number of Available Base Addresses
The number of available base addresses, in hexadecimal.
Available Base Addresses
The available base address ranges on the CU image.
Next Available Base Address
The next available base address, in hexadecimal.
Number of Aliases
The number of alias addresses.
Alias Address Range
The assigned alias address range, if applicable.

PAV Aliasing
The type of PAV aliasing: HyperPAV, DynamicPAV, or SuperPAV (5978 only).

Creating CKD volumes
Before you begin
- The storage system must be running HYPERMAX OS 5977.810.784, or later, and have at least one FICON director configured.
- Depending on the type of configuration selected, not all of the steps listed below might be required.
Procedure
1. Select the storage system.
2. Do one of the following:
- Select Storage > Volumes. In the Volumes list view, click Create.
- Select Hosts > Mainframe. In the Actions panel, click Create CKD Volumes.
3. Select the Configuration type.
4. From the Emulation list, select one of the following values:
- CKD-3390
- CKD-3380
5. Specify the capacity by typing the Number of Volumes and selecting a Volume Capacity. If the Model menu is available, selecting a model automatically updates the volume capacity to the correct capacity. Alternatively, you can manually enter a volume capacity.
6. (Optional) To add the volumes to a CKD storage group, click in the Add to Storage Group field to reveal a drop-down menu of available CKD storage groups. Click Clear to clear the selection.
7. Click Advanced Options.
The advanced options that are presented depend on the configuration details. Complete any of the following steps that are appropriate:
a. If required, type an SSID or click Select to choose one.
b. To name the new volumes, select one of the following Volume Identifiers:
None
Allows the system to name the volumes (default).
Name Only
All volumes will have the same name.
Name + VolumeID
All volumes will have the same name with a unique volume ID appended to them. When using this option, the maximum number of characters allowed is 50.

Name + Append Number
All volumes will have the same name with a unique decimal suffix appended to them. The suffix will start with the value specified for the Append Number and increment by 1 for each additional volume. Valid Append Numbers must be from 0 to 1000000. When using this option, the maximum number of characters allowed is 50.
For more information on naming volumes, refer to Setting volume names on page 196.
c. Depending on the value selected for Volume Identifier, type a Name, or a Name and Append Number.
d. If creating thin volumes or thin BCVs, you can specify to Allocate Full Volume Capacity. In addition, you can mark the preallocation on the thin volume as persistent by selecting Persist preallocated capacity through reclaim or copy. Persistent allocations are unaffected by standard reclaim operations.
e. Click OK.
8. Do one of the following:
a. Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
b. Expand Add to Job List and click Run Now to perform the operation now.

Editing CKD volume capacities
Procedure
1. Select the storage system.
2. Select Hosts > Mainframe > Create.
3. Click Edit Volume Capacities to open the dialog.
4. Use the drop-down menus to choose the number of volumes, the model, the capacity, and the unit to be used to measure capacity (TB, GB, MB, or cylinders). Click the add icon to add another volume size.
5. Click Apply to apply your changes or Cancel to reject them.

Expanding CKD volumes
Before you begin
Expanding CKD volumes requires HYPERMAX OS 5977.1125.1125 or later. In addition, you must be logged in as an Administrator.
You can expand a volume up to 1,182,006 cylinders (1 TB). When expanding a device above 565,250 cylinders, the new size must be a multiple of 1113 cylinders. If you specify a size that isn't that multiple, the system rounds the size up to the next multiple of 1113.
You cannot expand a volume when it is:
- A CKD 3380 device
- A TDAT

- Marked as Soft Fenced
- Part of an RDF session
- Part of a SnapVx session
The procedure below shows one way to expand a CKD volume. You can also carry out this task via Storage > Storage Groups or Storage > Volumes.
Procedure
1. Select the storage system.
2. Click Hosts > Mainframe, click CKD Volumes, select a volume and click Expand to open the Expand Volume dialog.
3. In the Volume Capacity field of the Expand Volume dialog box, type or select the new capacity of the volume. The Total Capacity and Additional Capacity figures update automatically.
4. To reserve the volume, select Reserve Volumes.
5. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

z/OS map from the CU image list view
Before you begin
The storage system must be running HYPERMAX OS 5977 Q1 2016.
Note: Before making any mapping changes to an existing CU image, please ensure that all of the devices in the CU are offline (the status of the CU should be offline).
To map to a CU image from the CU image list view:
Procedure
1. Select the storage system.
2. Select Hosts > Mainframe to open the Mainframe Dashboard.
3. Click CU Images.
4. Select a CU image that has not already been mapped, and click z/OS Map. The CU Image Map wizard displays.
5. In the Find Volumes page, search for a volume to which you can map the CU image:
a. (Optional) Specify one or more Additional Criteria by which you can filter volumes. A filter for volumes with emulation CKD-3390 is applied by default.
b. (Optional) Click Add Another to configure further additional criteria.
c. Click Find Volumes.
6. In the Select Volumes page, select one or more volumes to map to the CU image.

7. Click Summary.
8. Review the summary information.
9. (Optional) To reset the base address, click Set Base Address and specify the new base address.
10. Do one of the following:
a. Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
b. Expand Add to Job List and click Run Now to perform the operation now.

z/OS unmap from the CU image list view

Before you begin
The storage system must be running HYPERMAX OS 5977 Q1 2016.

Note: Before making any mapping changes to an existing CU image, please ensure that all of the devices in the CU are offline (the status of the CU should be offline).

To unmap a CU image from the CU image list view:

Procedure
1. Select the storage system.
2. Select Hosts > Mainframe to open the Mainframe Dashboard.
3. Click CU Images.
4. Select the CU image you want to unmap and click z/OS Unmap. The CU Image Unmap dialog box displays.
5. Select one or more volumes to unmap from the CU image.
6. Do one of the following:
a. Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
b. Expand Add to Job List, and click Run Now to perform the operation now.

z/OS map from the volume list view

Before you begin
The storage system must be running HYPERMAX OS 5977 Q1 2016 or higher.

Note: Before making any mapping changes to an existing CU image, please ensure that all of the devices in the CU are offline (the status of the CU should be offline).

To map to a CU image from the volume list view:

Procedure
1. Select the storage system.

2. Select Hosts > Mainframe to open the Mainframe Dashboard.
3. Click CKD Volumes.
4. Select one or more volumes to map, and then click z/OS Map. The Mainframe Volumes Mapping dialog box displays.
5. Select whether you want to map the volume(s) to a New or an Existing CU image.
- New:
a. Specify values for CU Image Number, SSID, and Base Address.
b. (Optional) Select a Split.
- Existing:
a. Select the CU image to which you want to map the selected volume(s).
b. (Optional) Click Set Base Address to reset the next available base address.
6. Do one of the following:
a. Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
b. Expand Add to Job List, and click Run Now to perform the operation now.

z/OS unmap from the volume list view

Before you begin
The storage system must be running HYPERMAX OS 5977 Q1 2016 or higher.

Note: Before making any mapping changes to an existing CU image, please ensure that all of the devices in the CU are offline (the status of the CU should be offline).

To unmap a CU image (from the volume list view):

Procedure
1. Select the storage system.
2. Select Hosts > Mainframe to open the Mainframe Dashboard.
3. Click CKD Volumes.
4. Select one or more volumes to unmap, and then click z/OS Unmap. The Mainframe Volumes Unmapping dialog box displays a summary of the unmap operation.
5. Do one of the following:
a. Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
b. Expand Add to Job List and click Run Now to perform the operation now.
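Each of the mapping dialogs above assigns every mapped volume a base address; elsewhere in this chapter the help notes that base addresses increase by one for each volume in the mapped range, and that addresses must fall in the two-hex-digit range 00-FF. A minimal sketch of that allocation rule follows. The function name and error handling are purely illustrative, not part of Unisphere or Solutions Enabler:

```python
def base_addresses(start: str, volume_count: int) -> list:
    """Illustrative only: list the two-hex-digit base addresses that a
    mapping of volume_count volumes would consume, starting at `start`.
    Base addresses increment by one per volume and must stay in 00-FF."""
    first = int(start, 16)
    last = first + volume_count - 1
    if not (0x00 <= first <= 0xFF) or last > 0xFF:
        raise ValueError("base address range must fall within 00-FF")
    return [format(a, "02X") for a in range(first, last + 1)]

# A 4-volume mapping starting at base address "10" consumes 10, 11, 12, 13.
print(base_addresses("10", 4))  # → ['10', '11', '12', '13']
```

This also shows why the Base Addresses in Use dialog matters: a mapping request whose range would run past FF, or collide with addresses already in use, cannot be satisfied.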

z/OS map from the Volumes (Storage Groups) list view

Before you begin
The storage system must be running HYPERMAX OS 5977 Q1 2016 or higher.

Note: Before making any mapping changes to an existing CU image, please ensure that all of the devices in the CU are offline (the status of the CU should be offline).

To map to a CU image from the Volumes (Storage Groups) list view:

Procedure
1. Select the storage system.
2. Select Hosts > Mainframe.
3. In the CKD Storage Groups panel, click View All Storage Groups to open the Storage Groups list view.
4. Select the storage group and click the information icon to see its details. Click the number in the Number of Volumes field to open the Volumes (Storage Groups) list view.
5. Select one or more volumes to map, and then click z/OS Map. The Mainframe Volumes Mapping dialog box displays.
6. Select whether you want to map the volume(s) to a New or an Existing CU image.
- New:
a. Specify values for CU Image Number, SSID, and Base Address.
b. (Optional) Select a Split.
- Existing:
a. Select the CU image to which you want to map the selected volume(s).
b. (Optional) Click Set Base Address to reset the next available base address.
7. Do one of the following:
a. Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
b. Expand Add to Job List, and click Run Now to perform the operation now.

z/OS unmap from the Volumes (Storage Groups) list view

Before you begin
The storage system must be running HYPERMAX OS 5977 Q1 2016 or higher.

Note: Before making any mapping changes to an existing CU image, please ensure that all of the devices in the CU are offline (the status of the CU should be offline).

To unmap a CU image (from the Volumes (Storage Groups) list view):

Procedure
1. Select the storage system.
2. Select Hosts > Mainframe.
3. In the CKD Storage Groups panel, click View All Storage Groups to open the Storage Groups list view.
4. Select the storage group and click the information icon to see its details. Click the number in the Number of Volumes field to open the Volumes (Storage Groups) list view.
5. Select one or more volumes to unmap, and then click z/OS Unmap.
6. Click Yes in the warning dialog box. The Mainframe Volumes Unmapping dialog box displays a summary of the unmap operation.
7. Do one of the following:
a. Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
b. Expand Add to Job List and click Run Now to perform the operation now.

Adding an alias range to a CU image

Before you begin
The storage system must be running HYPERMAX OS 5977 Q1 2016.

To add an alias range to a CU image:

Procedure
1. Select the storage system.
2. Select Hosts > Mainframe to open the Mainframe Dashboard.
3. Click CU Images, select the CU image to which you want to add an alias range and click Assign Alias Range.
4. Type the Start Alias (next available address). The minimum value allowed is 00.
5. Type the End Alias. The maximum value allowed is FF.
6. If required, select Reserve Volumes.

7. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List and click Run Now to perform the operation now.

Removing an alias range from a CU image

Before you begin
- The storage system must be running HYPERMAX OS 5977 Q1 2016.
- This operation removes all of the aliases for the selected CU image.

To remove an alias range from a CU image:

Procedure
1. Select the storage system.
2. Select Hosts > Mainframe to open the Mainframe Dashboard.
3. Click CU Images, select the CU image from which you want to remove an alias range and click Remove Alias Range.
4. Review the information displayed in the Remove Alias Range dialog box and do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List and click Run Now to perform the operation now.

Setting the base address

The Set Base Address dialog box is launched from the following locations:
- Provision Storage wizard — Using the Provision Storage wizard on page 100
- CU Image Map wizard — z/OS map from the CU image list view on page 332
- Mainframe Volumes Mapping dialog — z/OS map from the volume list view on page 333

To set the base address:

Procedure
1. In the Base Address field, specify a new value for the base address. Addresses in the range 00-FF are allowed.
2. Click OK.

Understanding All Flash Mixed FBA/CKD support

With the release of HYPERMAX OS 5977 Q2 2017, Unisphere introduces support for All Flash Mixed FBA/CKD arrays.

Note: This feature is only available for All Flash 450F/850F/950F arrays that are:
- Purchased as a mixed All Flash system
- Installed at HYPERMAX OS 5977 Q2 2017 or later
- Configured with 2 Storage Resource Pools - 1 FBA Storage Resource Pool and 1 CKD Storage Resource Pool

You can provision FBA/CKD storage to a mainframe host using the Provision Storage wizard. For specific instructions about how to provision storage for mainframe, refer to Using the Provision Storage wizard for mainframe on page 104; by default, only the CKD SRP is available in the Storage Resource Pool drop-down list.

To provision storage for Open Systems, refer to Using the Provision Storage wizard on page 100; by default, only the FBA SRP is available in the Storage Resource Pool drop-down list.

For specific instructions about how to modify a storage group, refer to Modifying storage groups on page 119; depending on the storage group selection, the Storage Resource Pool drop-down list is filtered to display the CKD or FBA SRP.

Note:
1. A CKD SG can only provision from a CKD SRP
2. An FBA SG can only provision from an FBA SRP
3. FBA volumes cannot reside in a CKD SRP
4. CKD volumes cannot reside in an FBA SRP
5. Compression is only for FBA volumes

Mapping FBA devices to CU images

You can map FBA devices to front-end EA/EF directors. Addressing on EA and EF directors is divided into Logical Control Unit images, referred to as CU images. Each CU image has its own unique SSID and contains a maximum of 256 devices (numbered 0x00 through 0xFF). When mapped to an EA or EF port, a group of devices becomes part of a CU image.
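The 256-device limit per CU image implies some simple arithmetic when planning a mapping: how many CU images (and therefore unique SSIDs) a device population needs, and which two-hex-digit number a device gets within its image. A rough sketch, assuming a simple sequential fill; the function names are ours, not a Unisphere or Solutions Enabler API:

```python
import math

CU_IMAGE_CAPACITY = 256  # devices 0x00 through 0xFF per CU image

def cu_images_needed(device_count: int) -> int:
    """Illustrative: minimum number of CU images (each with its own
    unique SSID) required to hold device_count devices."""
    return math.ceil(device_count / CU_IMAGE_CAPACITY)

def device_number(index: int) -> str:
    """Illustrative: two-hex-digit device number within a CU image,
    assuming devices are packed sequentially, 256 per image."""
    return format(index % CU_IMAGE_CAPACITY, "02X")

# 600 devices span three CU images (256 + 256 + 88); device numbers
# wrap from FF back to 00 at each image boundary.
print(cu_images_needed(600))                                    # → 3
print(device_number(0), device_number(255), device_number(256))  # → 00 FF 00
```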
For more information about how to map FBA devices to CU images, see the following tasks:
- z/OS map FBA volumes from the Volumes (Storage Groups) list view (HYPERMAX OS 5977 or higher) on page 338
- z/OS unmap FBA volumes from the Volumes (Storage Groups) list view on page 339

z/OS map FBA volumes from the Volumes (Storage Groups) list view (HYPERMAX OS 5977 or higher)

Before you begin
This feature is only available for All Flash 450F/850F/950F arrays that are:
- Purchased as a mixed all flash system

- Installed at HYPERMAX OS 5977 Q2 2017 or later
- Configured with 2 Storage Resource Pools - 1 FBA Storage Resource Pool and 1 CKD Storage Resource Pool

See Understanding All Flash Mixed FBA/CKD support on page 337 for additional information.

Note: Before making any mapping changes to an existing CU image, please ensure that all of the devices in the CU are offline (the status of the CU should be offline).

To map to a CU image from the Volumes (Storage Groups) list view:

Procedure
1. Select the storage system.
2. Select Storage > Storage Groups.
3. Select a storage group and click the information icon to see its details.
4. Click the number in the Volumes field to open the Volumes (Storage Groups) list view.
5. Select one or more volumes to map, and then click z/OS Map. The Mainframe Volumes Mapping dialog box displays.
6. Select whether you want to map the volume(s) to a New or an Existing CU image.
- New:
a. Specify values for CU Image Number, SSID, and Base Address.
b. (Optional) Select a Split.
- Existing:
a. Select the CU image to which you want to map the selected volume(s).
b. (Optional) Click Set Base Address to reset the next available base address.
7. Do one of the following:
a. Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
b. Expand Add to Job List, and click Run Now to perform the operation now.

z/OS unmap FBA volumes from the Volumes (Storage Groups) list view

Before you begin
This feature is only available for All Flash 450F/850F/950F arrays that are:
- Purchased as a mixed all flash system
- Installed at HYPERMAX OS 5977 Q2 2017 or later
- Configured with 2 Storage Resource Pools - 1 FBA Storage Resource Pool and 1 CKD Storage Resource Pool

See Understanding All Flash Mixed FBA/CKD support on page 337 for additional information.

Note: Before making any mapping changes to an existing CU image, please ensure that all of the devices in the CU are offline (the status of the CU should be offline).

To unmap a CU image (from the Volumes (Storage Groups) list view):

Procedure
1. Select the storage system.
2. Select Storage > Storage Groups.
3. Select a storage group and click the information icon to see its details.
4. Click the number in the Volumes field to open the Volumes (Storage Groups) list view.
5. Select one or more volumes to unmap, and then click z/OS Unmap. The Mainframe Volumes Unmapping dialog box displays a summary of the unmap operation.
6. Do one of the following:
a. Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
b. Expand Add to Job List and click Run Now to perform the operation now.

Mapping CKD volumes

The following explains how to map CKD volumes to ESCON/FICON ports. You can perform this operation at the volume level or the CU image level.

Procedure
1. Select the storage system.
2. To map at the volume level:
a. Select Storage > Volumes.
b. To display only CKD volumes, click in the Emulation field and select CKD from the drop-down menu.
c. Select a CKD volume, and then click z/OS Map to open the z/OS Map Volumes dialog box.

Note: To create a new CU image, enter a base address in 4-digit hexadecimal format, e.g. "3210", where:
- "32" = the CU image ID
- "10" = the base address (the first base address must end with 0)
To create a new SSID, enter an SSID in 4-digit hexadecimal format, e.g. "1234" (must be unique).

To map at the CU image level:
a. Select Hosts > CU Images.
b. Select an image and click z/OS Map to open the z/OS Map dialog box.
3. Type or select a Volume Range.
4. Type the Base Address to be assigned to the first volume in the mapping request. Base addresses increase incrementally by one for each volume in the range of volumes being mapped. To view base addresses already in use, click Show.
5. Type or select an SSID. Valid SSIDs must only have unmapped volumes using them, and the number of volumes cannot exceed 256.
6. Select the Port to which you want to map the volumes.
7. Click one of the following:
- Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Unmapping CKD volumes

The following explains how to unmap CKD volumes from ESCON/FICON ports. You can perform this operation at the volume level or the CU image level.

Procedure
1. Select the storage system.
2. To unmap at the volume level:
a. Select Storage > Volumes.
b. To display only CKD volumes, click in the Emulation field and select CKD from the drop-down menu.
c. Select a CKD volume, and then click z/OS Unmap to open the z/OS Unmap Volumes dialog box.
To unmap at the CU image level:
a. Select Hosts > CU Images to open the CU Images list view.
b. Select an image and click z/OS Unmap to open the Unmap CU Image dialog box.

3. Type or select the Volume Range to be unmapped.
4. Type or select the Base Address.
5. Type or select an SSID. Valid SSIDs must only have unmapped volumes using them, and the number of volumes cannot exceed 256.
6. Select the Port from which you want to unmap the volumes.
7. Click one of the following:
- Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Copying CU image mapping

Before you begin
All volumes in a specified range must be mapped to the same CU image, or not mapped at all. Volumes within the specified range that are not mapped will be ignored as long as they are not mappable (SAVE devices, DRVs, and so on). If a volume in the specified range is mappable, the request will be rejected.

The following explains how to copy the front-end mapping addresses of a set of volumes from one port to another, providing multi-path access from the storage system to the mainframe.

To copy CU image mapping:

Procedure
1. Select Hosts > CU Images to open the CU Images list view.
2. Select an image, and click Copy Mapping to open the z/OS Map dialog box.

Available Volume for EA/EF Mapping dialog box
Use this dialog box to select one or more volumes for the mapping operation. To select a range of volumes, select the first volume in the range, press and hold the Shift key, and then click the last volume in the range.

Base Addresses in Use dialog box
Use this dialog box to view base addresses already in use.

Select SSID dialog box
Use this dialog box to select an SSID for the operation.

Viewing CKD volumes in CU image

Procedure
1. Select the storage system.

2. Select Hosts > CU Images.
3. Select the CU image and click the information icon.
4. In the details panel, click the number in the Number of Volumes field to open the CKD Volumes list view.
5. Use the CKD Volumes list view to display and manage CKD volumes in a CU image.

Results
Name — Symmetrix volume name.
Type — Volume configuration.
Status — Volume status.
Capacity (GB) — Volume capacity in GBs.
Emulation — Emulation type.
UCB Address — Unit control block (address used by z/OS to access this volume).
Volser — Volume serial number (disk label (VOL1) used when the volume was initialized).

The following controls are available:
- Viewing CU image details on page 329
- z/OS Map — z/OS map from the volume list view on page 333
- z/OS Unmap — z/OS unmap from the volume list view on page 334

Creating PowerPath hosts

Before you begin
The following are the minimum requirements to perform this task:
- A storage system running PowerMaxOS 5978 or higher.
- Unisphere for PowerMax 9.0.
- Solutions Enabler 9.0.
- PowerPath 6.3.

Procedure
1. Select the storage system.
2. Select Hosts > PowerPath Hosts to open the PowerPath Hosts list view.
3. Click Create Host to open the Create Host for PowerPath Host dialog.
4. Use the host name that appears in the dialog, or type a new one. Host names must be unique from other host/host group names on the storage system.
5. Select either Add To Job List or Run Now. All initiators associated with the selected PowerPath Host will be added to the new host.

Viewing PowerPath hosts

Before you begin
The following are the minimum requirements to perform this task:
- A storage system running PowerMaxOS 5978 or higher.
- Unisphere for PowerMax 9.0.
- Solutions Enabler 9.0.
- PowerPath 6.3.

Procedure
1. Select the storage system.
2. Select Hosts > PowerPath Hosts to open the PowerPath Hosts list view.

The following properties display:
- Name — The PowerPath host name.
- Version — The PowerPath host version.
- OS Version — The PowerPath host OS version.
- Vendor — The PowerPath host hardware vendor.
- Initiators — The number of PowerPath host initiators.
- Hosts — The number of PowerPath hosts.
- VMs — The number of PowerPath host virtual machines.

The following control is available:
- Create Host — Creating PowerPath hosts on page 343

Viewing PowerPath host details

Before you begin
The following are the minimum requirements to perform this task:
- A storage system running PowerMaxOS 5978 or higher.
- Unisphere for PowerMax 9.0.
- Solutions Enabler 9.0.
- PowerPath 6.3.

Procedure
1. Select the storage system.
2. Select Hosts > PowerPath Hosts to open the PowerPath Hosts list view.
3. To view the details of a PowerPath host, select it and click the information icon.

The following properties display:
- Name — The PowerPath host name.
- Version — The PowerPath host version.

- Patch Level — The PowerPath host patch level.
- License Info — The PowerPath host license info.
- Vendor — The PowerPath host hardware vendor.
- OS Version — The PowerPath host OS version.
- OS Revision — The PowerPath host OS revision.
- Host Registration Time — The time the host registered with the PowerMax array.
- Connectivity Type — Indicates whether the PowerPath host is connected to the PowerMax array by iSCSI or Fibre.
- Cluster Name — The PowerPath host cluster name.
- Cluster Node Name — The PowerPath host node name in the cluster.
- Initiators — The number of PowerPath host initiators. Click the number to see the initiators list view.
- Hosts — The number of PowerPath hosts.
- Masking Views — The number of PowerPath host masking views. Click the number to see the masking views list view.
- VMs — The number of PowerPath host virtual machines. Click the number to see the VMs list view.
- Storage Groups — The number of PowerPath host storage groups. Click the number to see the storage groups list view.
- Volumes — The number of PowerPath host volumes. Click the number to see the volumes list view.

Viewing PowerPath Host Virtual Machines

Procedure
1. Select the storage system.
2. Select Hosts > PowerPath Hosts to open the PowerPath Hosts list view.
3. Select a PowerPath Host and click the information icon.
4. In the details panel, click the number in the VMs field to open the VMs list view.

The following properties display:
- Name — The VM name.
- OS Name — The VM operating system.

A link allows you to add ESX or vCenter viClient credentials so as to retrieve more information on the Virtual Machine. In that case, the following additional properties are displayed:
- Power State — The VM power status.
- CPU Count — The number of CPUs assigned to the VM.
- Total Memory — The total RAM assigned to the VM.
- State — The current state of the VM.
- Address — The IP address of the VM.

Viewing host cache adapters

Procedure
1. Select the storage system.
2. Select Hosts > XtremSW Cache Adapters to open the XtremSW Cache Adapters list view.

The following properties display:
- Card S/N — Adapter serial number.
- Card Version — Adapter version.
- Vendor — Adapter vendor.
- Card Size (GB) — Adapter size.
- Card Used (GB) — Amount of card used.
- Volumes — Number of accessible volumes.
- Host — Host name.
- IP Address — Host IP address.
- Host OS — Host operating system.
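The list reports both Card Size (GB) and Card Used (GB) but no utilization figure. A percentage can be derived directly from those two columns; the function below is our own illustrative helper, not a Unisphere field:

```python
def card_used_percent(card_size_gb: float, card_used_gb: float) -> float:
    """Illustrative: percentage of an XtremSW cache adapter in use,
    derived from the Card Size (GB) and Card Used (GB) columns."""
    if card_size_gb <= 0:
        raise ValueError("card size must be positive")
    return round(100.0 * card_used_gb / card_size_gb, 1)

# A 700 GB adapter with 175 GB used is 25% utilized.
print(card_used_percent(700, 175))  # → 25.0
```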

CHAPTER 6
Data Protection

- Understanding Data Protection Management ... 348
- Creating device groups ... 348
- Understanding TimeFinder/Clone operations ... 354
- Understanding TimeFinder/Snap operations ... 367
- Managing TimeFinder/Mirror sessions ... 378
- Managing TimeFinder SnapVX ... 385
- Managing remote replication sessions ... 402
- Understanding Virtual Witness ... 444
- Creating SRDF/A DSE pools ... 448
- Creating TimeFinder/Snap pools ... 451
- Viewing SRDF group volumes ... 454
- Viewing SRDF protected storage groups ... 455
- Viewing related SRDF groups ... 457
- Creating SRDF groups ... 457
- Understanding RecoverPoint ... 471
- Creating Open Replicator copy sessions ... 490
- Understanding non-disruptive migration (NDM) ... 500
- Viewing the authorized users and groups details ... 510
- Expanding remote volumes ... 511
- Setting a device identity ... 511
- Editing storage group volume details ... 512
- Editing storage group details ... 513
- Replication state severities ... 513
- Managing space reclamation ... 514
- Advanced Options dialog ... 515

Understanding Data Protection Management

Data Protection Management covers the following areas:
- Storage Groups - Management of SRDF protected storage groups.
- Device Groups - Management of device groups. A device group is a user-defined group comprised of devices that belong to a locally attached array. Control operations can be performed on the group as a whole, or on the individual device pairs in the group. By default, a device can belong to more than one device group.
- SRDF Groups - Management of SRDF groups. SRDF groups provide a collective data transfer path linking volumes of two separate storage systems. These communication and transfer paths are used to synchronize data between the R1 and R2 volume pairs associated with the SRDF group. At least one physical connection must exist between the two storage systems within the fabric topology.
- Migrations - Non-disruptive migration (NDM) management. NDM allows you to migrate storage group (application) data in a non-disruptive manner, with no downtime, from NDM-capable source arrays to NDM-capable target arrays.
- SRDF/A DSE Pools - Management of SRDF/A DSE pools.
- TimeFinder Snap Pools - Management of TimeFinder Snap pools.
- Open Replicator - Management of Open Replicator. Open Replicator (ORS) provides a method for copying data to or from various types of arrays within a storage area network (SAN) infrastructure.
- RecoverPoint Systems - Management of RecoverPoint systems.
- Virtual Witness - Management of Virtual Witness. The Witness feature supports a third party that the two storage systems consult if they lose connectivity with each other, that is, their SRDF links go out of service. When this happens, the Witness helps to determine, for each SRDF/Metro session, which of the storage systems should remain active (volumes continue to be read/write to hosts) and which goes inactive (volumes not accessible).
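The Witness outcome described above can be modeled as a small decision table: when the SRDF links are down, exactly one side of each SRDF/Metro session stays active. The sketch below is purely a conceptual illustration of that outcome; the inputs and the "preferred side" tie-break are our assumptions, not the actual Witness algorithm:

```python
def witness_decision(side_a_reachable: bool,
                     side_b_reachable: bool,
                     preferred: str = "A") -> str:
    """Toy model only: pick which side of an SRDF/Metro session stays
    active when the SRDF links between the two arrays are down.
    Exactly one side wins; 'preferred' is an assumed tie-break."""
    if side_a_reachable and not side_b_reachable:
        return "A"
    if side_b_reachable and not side_a_reachable:
        return "B"
    if side_a_reachable and side_b_reachable:
        return preferred  # both reachable: assumed tie-break
    raise RuntimeError("witness cannot reach either array")

# If the Witness can reach only side B, side B stays active.
print(witness_decision(False, True))  # → B
```

The point of the model is the invariant, not the rule: whatever the real selection criteria, each session ends with one active side and one inactive side.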
Creating device groups

Target volumes are automatically created by the wizard when the source Storage Group contains CKD volumes.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the Device Groups tab to open the Device Group list view.
4. Click Create to open the Create Device Group wizard.
5. Type a Device Group Name.
6. Select a Device Group Type. Possible values are:
- REGULAR — Group can only contain REGULAR volumes.
- RDF1 — Group can only contain R1 volumes.

- RDF2 — Group can only contain R2 volumes.
- RDF21 — Group can only contain R21 volumes.
- ANY — Group can contain any volume type.
7. Click NEXT.
8. Select the Source of the volumes to use when creating the group; either manual selection, or all the volumes in a storage group.
9. Do the following, depending on the source of the volumes:
- Manual selection:
a. Select the Source Volume Type.
b. Select one or more volumes and click Add.
- Storage group: Type or select the name of the Storage Group.
10. Click NEXT.
11. Select how to specify the Target Volumes, either manually or automatically.
12. Do the following, depending on how you are specifying the target volumes:
- Automatically:
a. Optional: Select to replicate the source volumes using TimeFinder/Snap, TimeFinder Mirror, or TimeFinder/Clone. The required devices (if they are not found to be already existing and unused) will be created automatically: BCV devices for a TimeFinder Mirror device group, VDEV devices for a TimeFinder/Snap device group, and the required devices for a TimeFinder/Clone device group.
b. If you are replicating the source volumes with TimeFinder/Clone, select whether to add BCV or STD volumes to the device group. The volumes will be added with the TGT flag.
- Manually:
a. Click NEXT.
b. Select the Target Volume Type.
c. Select one or more volumes and click Add.
13. Click NEXT.
14. Verify your selections in the Summary page. To change any of your selections, click Back. Note that some changes may require you to make additional changes to your configuration.
15. Click FINISH.

Results
A window appears that displays the progress of the wizard's tasks.

Adding volumes to device groups

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.

3. Click the Device Groups tab to open the Device Group list view.
4. Select the device group and click Add Volumes.
5. From the list of available volumes, select the volume(s) and click Add to Group.
6. Optional: Remove a previously added volume by selecting it and clicking Remove.
7. Click OK.

Removing volumes from device groups

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the Device Groups tab to open the Device Group list view.
4. Select the device group and click the information icon to open the Details view.
5. Click the number next to Number of Volumes to view all volumes in the device group.
6. Select one or more volumes and click Remove Volumes.
7. Click OK.

Setting consistency protection

To set consistency protection:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Select a group, click more, and select Asynchronous > Set Consistency.
4. Select Enable or Disable.
5. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (only applicable for device groups).
6. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
7. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Renaming device groups

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the Device Groups tab to open the Device Group list view.
4. Select the device group from the list and click Rename.
5. In the Name field, enter the new device group name.
6. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Deleting device groups

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the Device Groups tab to open the Device Group list view.
4. Select the device group and click Delete.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Viewing device groups

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the Device Groups tab to open the Device Group list view. Use the Device Group list view to view and manage device groups.

The following properties display, depending on the operating environment:
- Name — User-defined device group name.
- Group Type — Device configuration of the devices in the group. Possible values are: Regular, R1, R2, R21, or Any.
- Standards — Number of standard devices in the device group.
- BCVs — Number of BCV devices in the device group.
- VDEVs — Number of virtual devices in the device group.

- Targets —Number of target devices in the device group.
- Gatekeepers —Number of gatekeeper devices in the device group (does not apply/display with HYPERMAX OS 5977).
- Group Valid —Indicates whether the device group is valid.
The following controls are available, depending on the operating environment:
- (details icon) — Viewing device group details on page 352
- Create — Creating device groups on page 348
- Rename — Renaming device groups on page 351
- (delete icon) — Deleting disk groups on page 237
- Add Volumes — Adding volumes to device groups on page 349
- Replication QOS — QOS for replication on page 197
- Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945 (does not apply/display with HYPERMAX OS 5977 or higher)
- Assign Symmetrix Priority — Assigning array priority to groups of volumes on page 189 (does not apply/display with HYPERMAX OS 5977 or higher)
- Set Optimized Read Miss — Setting optimized read miss on page 193 (does not apply/display with HYPERMAX OS 5977 or higher)
Viewing device group details
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the Device Groups tab to open the Device Group list view.
4. Select the device group and click the icon to open the Details view.
The following properties display, depending on the operating environment:
- Name —User-defined device group name.
- Application ID —Indicates which application created the device group.
- Group Valid —Indicates whether the device group is valid.
- Device Group Create Time —Time the device group was created.
- Device Group Modify Time —Time the device group was modified.
- Symmetrix ID —Storage system serial number ID.
- Number of Volumes —Number of volumes.
- Number of Associate Gatekeepers —Number of gatekeeper devices in the device group.
- Number of STD Volumes in Group —Number of standard devices in the device group.
l —Number of local BCV devices Number of Locally-Associated BCVs associated with the device group. l —Number of virtual devices Number of Locally-Associated VDEVs associated with the device group. Online Help (PDF version) Dell EMC Unisphere for PowerMax 9.0.0 352

- Number of Locally-Associated TGTs —Number of local target volumes associated with the device group.
- Number of Remotely-Associated BCVs (STD SRDF) —Number of remote BCV devices associated with the device group.
- Number of Remotely-Associated BCVs (BCV SRDF) —Number of BCV devices, associated with the device group, to be paired with remotely-attached BCV devices.
- Number of Remotely-Associated RBCVs (RBCV SRDF) —Number of remote BCV devices associated with the device group.
- Number of Remotely-Associated VDEVs —Number of remote VDEV devices associated with the device group.
- Number of Remotely-Associated TGTs —Number of remote target devices associated with the device group.
- Number of Hop2 BCVs (Remotely-associated Hop2 BCV) —Number of BCVs on the second hop of the Cascaded SRDF configuration associated with the device group.
- Number of Hop2 VDEVs (Remotely-associated Hop2 VDEV) —Number of virtual devices on the second hop of the Cascaded SRDF configuration associated with the device group.
- Number of Hop2 TGTs (Remotely-associated Hop2 TGT) —Number of target devices on the second hop of the Cascaded SRDF configuration associated with the device group.
- Number of Composite Groups —Number of composite groups.
- Pacing Capable —Indicates if the device group allows write pacing capability.
- Group-level Pacing State —Indicates if the device group is write pacing enabled or disabled.
- Volume-level Pacing State —Indicates if the volumes in the device group are write pacing enabled or disabled.
- Configured Group-level Exempt State —Indicates if group-level write pacing exemption capability is enabled or disabled.
- Effective Group-level Exempt State —Indicates if effective group-level write pacing exemption capability is enabled or disabled.
- Group Write Pacing Exempt Volumes —Indicates if the volumes in the device group have write pacing exemption capability enabled or disabled.
Links are provided to views for objects contained in or associated with the device group. Each group link is followed by the name of the group, or by a number indicating the number of objects in the corresponding view. For example, clicking the number next to Number of Volumes opens the view listing the volumes contained in the device group.
Viewing volumes in device group
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the Device Groups tab to open the Device Group list view.

4. Select the device group and click the icon to open the Details view.
5. Click the number next to Number of Volumes to view all volumes in the device group.
The following properties display:
- Name —Volume name.
- LDev —Logical device name.
- Volume Config —Device configuration.
- Capacity (GB) —Device capacity in GB.
- Status —Device status.
The following controls are available:
- Add Volumes — Adding volumes to device groups on page 349
- Remove Volumes — Removing volumes from device groups on page 350
Understanding TimeFinder/Clone operations
Clone copy sessions allow you to create clone copies of a source volume on multiple target volumes. The source and target volumes can be either standard volumes or BCVs, as long as they are the same size and emulation type (FBA/CKD). Once you have activated the session, the target host can instantly access the copy, even before the data is fully copied to the target volume.
Note
TimeFinder operations are not supported directly on storage systems running HYPERMAX OS 5977 or higher. Instead, they are mapped to their TimeFinder/SnapVX equivalents.
An overview of a typical clone session is:
1. Create a device group, or add volumes to an existing device group.
2. Create the session; restore the session.
3. Activate the session.
4. View the session's progress.
5. Terminate the session.
For more information on TimeFinder/Clone concepts, refer to the Solutions Enabler TimeFinder Family CLI Product Guide and the TimeFinder Family Product Guide.
Managing TimeFinder/Clone sessions
Before you begin
TimeFinder/Clone requires Enginuity version 5876, or HYPERMAX OS 5977 or higher. On HYPERMAX OS 5977 or higher, TimeFinder/Clone operations are mapped to their TimeFinder/SnapVX equivalents using Clone emulation.
The TimeFinder/Clone dashboard provides you with a single place to monitor and manage TimeFinder/Clone sessions on a storage system.
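The create, activate, recreate, restore, and terminate operations described in this section move a session through a small set of states. The sketch below is an illustrative model only; the class and transition table are assumptions for clarity, not part of Unisphere or any Dell EMC API. It encodes the rule, stated under Activating clone copy sessions, that only sessions in the Created or Recreated state can be activated:

```python
# Illustrative model of the TimeFinder/Clone session lifecycle described
# in this section. State names follow the help text; the class itself is
# hypothetical, not Unisphere or Solutions Enabler code.
class CloneSession:
    # action -> {current state: next state}
    TRANSITIONS = {
        "create":    {"None": "Created"},
        "activate":  {"Created": "Activated", "Recreated": "Activated"},
        "recreate":  {"Activated": "Recreated"},
        "restore":   {"Activated": "Restored"},
        "terminate": {s: "None" for s in
                      ("Created", "Recreated", "Activated", "Restored")},
    }

    def __init__(self):
        self.state = "None"

    def do(self, action):
        next_state = self.TRANSITIONS[action].get(self.state)
        if next_state is None:
            raise ValueError(f"cannot {action} from state {self.state}")
        self.state = next_state
        return self.state
```

For example, attempting to activate before creating raises an error, matching the Created/Recreated precondition. The real product allows further transitions (such as terminating from any pair state, or restore preconditions tied to the Differential option) that this simplified sketch does not model.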
To manage TimeFinder/Clone sessions:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Clone tab to open the TimeFinder Clone list view.
The following properties display:
- Device Group —Lists the groups containing volumes using TimeFinder/Clone. Information in this column is organized in a tree format, with groups organized into folders according to their type. To view information on a specific group, expand the appropriate folder.
- Standard —The number of standard volumes in the group.
- BCV —The number of BCVs in the group.
- Target —The number of target volumes in the group.
- State —The combined state of the sessions in the group. If all the sessions are in the same state, then that state appears; otherwise, Mixed appears.
- Group Type —The type of group. Property values: RDF1, RDF2, RDF21, and Regular.
- Group Valid —Indicates whether the group is valid. Property values: Yes or No.
Click the icon and click the number next to Clone Pairs to view the associated clone pairs (see Viewing clone pairs on page 364). Click the icon and click the number next to Storage Groups to view the associated storage groups.
The following controls are available:
- Create Pairs — Creating clone copy sessions on page 355
- Activate — Activating clone copy sessions on page 357
- Recreate — Recreating clone copy sessions on page 358
- Split — Splitting clone volume pairs on page 362
- Restore — Restoring data from target volumes on page 361
- Establish — Creating Snapshots on page 379
- Terminate — Terminating clone copy sessions on page 363
- Set Mode — Modifying clone copy sessions on page 360
Creating clone copy sessions
This procedure explains how to create clone copy sessions.

Note the following:
- TimeFinder/Clone requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Clone operations are mapped to their TimeFinder/SnapVX equivalents.
- You can only perform this operation on a group containing source and target volumes.
- You can use the target volume of a clone session as the source volume for other clone sessions. To use this feature, you must first enable the SYMAPI_ALLOW_DEV_IN_MULT_GRPS option in the SYMAPI options file. For more information on enabling SYMAPI options, refer to the Solutions Enabler CLI Command Reference.
- Data Domain volumes are not supported.
- The clone copy does not become available to the host until the session is activated.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Clone tab to open the TimeFinder Clone list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:
Group level:
a. Select a group and click Create Pairs.
b. Select a source type and a target type.
Pair level:
a. Select a group and click the icon to open its Details view.
b. Click on the number next to Clone Pairs.
c. Select one or more pairs and click Create Pairs.
d. Click Set Pairs to open the Set Pairs dialog box.
e. Select a source volume and a target volume, and click Add to make them a pair. Repeat this step as required.
f. Click OK to return to the Create Sessions dialog box.
5. Click Advanced Options to set the advanced options as described next.
Setting Advanced Options:
If performing this operation at the group level, you can optionally select a Pairing Type and select one of the following. If you are not using the Pairing Type option, leave this field set to None.
- Use Exact Pairs —Allows the system to pair up the volumes in the exact order that they were added to the group.

- Use Optimized Pairs —Optimizes volume pairings across the local Symmetrix system without regard for whether the volumes belong to different RDF (RA) groups.
For Copy Mode, select one of the following:
- Use Background Copy —Specifies to start copying tracks in the background at the same time as target I/Os are occurring.
- Use No Copy —Specifies to change the session to CopyOnAccess once the session is activated and no full-volume copy will initiate.
- Use PreCopy —Specifies to start copying tracks in the background before you activate the clone session.
By default, when creating a clone session, the system creates an SDDF session for maintaining changed track information. To change this default behavior, expand the Differential Mode menu, and select Use No Differential. Otherwise, leave this field set to Use Differential.
To attach session options to the operation, select any number of options. Click OK.
6. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
Activating clone copy sessions
This procedure explains how to activate the copy operation from the source volume to the target volume. Activating a copy session places the target volume in the Read/Write state. The target host can access the cloned data and has access to data on the source host until you terminate the copy session.
Note
- TimeFinder/Clone requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Clone operations are mapped to their TimeFinder/SnapVX equivalents.
- You can only activate clone sessions that are in the Created or Recreated state.
- This procedure explains how to perform this operation from the TimeFinder/Clone dashboard.
You can also perform this operation from other locations in the interface. Depending on the location, some of the steps may not apply.
To activate the copy operation from the source volume to the target volume:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Clone tab to open the TimeFinder Clone list view.
4. Do one of the following, depending on whether you want to perform the operation at the group level or pair level:

Group level:
a. Select a group and click Activate.
b. Select a source type and a target type.
Pair level:
a. Select a group and click the icon to open its Details view.
b. Click on the number next to Clone Pairs.
c. Select one or more pairs, and click Activate.
5. Optional: To attach session options to the operation, click Advanced Options and select any number of options.
6. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
Recreating clone copy sessions
Before you begin
- TimeFinder/Clone requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Clone operations are mapped to their TimeFinder/SnapVX equivalents.
- The copy session must not have been created with the No Copy or No Differential option.
- The session must have been activated to establish the new point-in-time copy.
- With Enginuity 5876.159.102 or higher, you can recreate a clone copy without terminating TimeFinder/Snap or VP Snap sessions that are cascading off of the clone target.
This procedure explains how to incrementally copy all subsequent changes made to the source volume (made after the point-in-time copy initiated) to the target volume. While in the Recreated state, the target volume remains Not Ready to the host.
To recreate clone copy sessions:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Clone tab to open the TimeFinder Clone list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:
Group level:
a. Select a group and click Recreate.
b. Select a source type and a target type.
Pair level:

a. Select a group, and click the icon to open its Details view.
b. Click on the number next to Clone Pairs.
c. Select one or more pairs and click Recreate.
5. Optional: To attach session options to the operation, click Advanced Options, and select any number of options.
6. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
Creating clone snapshots
Before you begin
- TimeFinder/Clone requires Enginuity OS 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Clone operations are mapped to their TimeFinder/SnapVX equivalents.
- The create operation sets the target volume to Not Ready for a short time. If you are using a file system, unmount it from the target host before performing the create operation.
This procedure explains how to create and immediately activate clone snapshots.
To create clone snapshots:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Clone tab to open the TimeFinder Clone list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:
Group level:
a. Select a group, click the icon, and select Create Snapshot.
b. Select the source type and the target type.
Pair level:
a. Select a group and click the icon to open its Details view.
b. Click on the number next to Clone Pairs.
c. Select one or more pairs, click the icon, and select Create Snapshot.
d. Select the source type and the target type.
5. Specify whether to perform an Incremental or Full create.

6. Optional: To attach session options to the operation, click Advanced Options, and select any number of options.
7. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
Modifying clone copy sessions
Before you begin
- TimeFinder/Clone requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Clone operations are mapped to their TimeFinder/SnapVX equivalents.
- You can modify the mode between Copy, NoCopy, and Precopy on clone pairs that are in a Created, Recreated, or Activated state.
- Do not change a session created with the Differential option to the No Copy mode, as the session will fail.
This procedure explains how to modify the mode in which a clone copy session is operating.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Clone tab to open the TimeFinder Clone list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:
Group level:
Select a group, click the icon, and select Set Mode.
Pair level:
a. Select a group, and click the icon to open its Details view.
b. Click on the number next to Clone Pairs.
c. Select one or more pairs, click the icon, and select Set Mode.
5. Select a Copy Mode:
- Use Copy —If the session was created without the Copy option, it can be changed now to Copy mode. A copy initiates once the session is activated.
- Use No Copy —If the session was created with Copy mode, you can change the session to NoCopy mode. The session becomes CopyOnAccess once the session is activated and no full-volume copy will initiate.
—If the session was created without Precopy, you can change Use Precopy the session to Precopy mode, which implies a copy. You cannot change to 360 Dell EMC Unisphere for PowerMax 9.0.0 Online Help (PDF version)

NoCopy mode. Once the session is activated, the session changes to Copy mode.
6. If performing the operation at the group level, select the type of source volumes (Source Type) and the type of target volumes (Target Type).
7. Optional: To set session options, click Advanced Options, and select any number of options.
8. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
Restoring data from target volumes
Before you begin
- TimeFinder/Clone requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Clone operations are mapped to their TimeFinder/SnapVX equivalents.
- With Enginuity 5876 or higher, you can:
  - Use ORS control volumes as clone restore targets when the volumes are in PUSH sessions and in the ORS Copied state.
  - Perform an incremental restore to a cascaded clone target. For example, in the relationship A->B->C, you can copy data from volume C to volume A.
- With Enginuity 5876, you can perform an incremental restore on volume pairs in a NoCopy/NoDiff clone session.
- With Enginuity 5876.159.102 or higher, you can perform an incremental restore of clone targets to source volumes with active snap and VP snap sessions.
- For a clone session in the Created state, the target volume must be in a fully copied state.
This procedure explains how to copy target data to another volume (full restore), or back to the original source volume (incremental restore). In the case of a full restore, the original session terminates and a copy session to the target of the restore starts. In the case of an incremental restore, the original session copy direction is reversed and changed data is copied from the target volume to the source volume.
To support this operation, the session must have been created with the Differential option and the volume must be in a fully Copied state.
To restore data from a target volume:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Clone tab to open the TimeFinder Clone list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:
Group level:

Select a group, click the icon, and select Restore.
Pair level:
a. Select a group, and click the icon to open its Details view.
b. Click on the number next to Clone Pairs.
c. Select one or more pairs, click the icon, and select Restore.
5. Select a Restore Type:
- Incremental —Terminates the original session and starts an incremental copy session back to the original source volume. The session must have been created with the Differential option.
- Full —Terminates the original session and starts a copy session to the target of the restore.
6. If performing the operation at the group level, select the type of source volumes (Source Type) and the type of target volumes (Target Type).
7. To attach Session Options to the operation, click Advanced Options, and select any number of session options (see Clone copy session options on page 365).
8. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
Splitting clone volume pairs
Before you begin
- TimeFinder/Clone requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Clone operations are mapped to their TimeFinder/SnapVX equivalents.
- The clone session must be in the Restored state.
This procedure explains how to split clone volume pairs. Splitting volume pairs changes the direction of the clone relationship (that is, the original source volume becomes the source volume for a future copy), which enables you to use either the establish or recreate command.
To split clone volume pairs:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Clone tab to open the TimeFinder Clone list view.
4.
Do one of the following, depending on whether you want to perform the operation at the group level or pair level:
Group level:

Select a group, click the icon, and select Split.
Pair level:
a. Select a group and click the icon to open its Details view.
b. Click on the number next to Clone Pairs.
c. Select one or more pairs, click the icon, and select Split.
5. If performing the operation at the group level, select the type of source volumes (Source Type) and the type of target volumes (Target Type).
6. Optional: To attach session options to the operation, click Advanced Options, and select any number of options.
7. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
Terminating clone copy sessions
Before you begin
- TimeFinder/Clone requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Clone operations are mapped to their TimeFinder/SnapVX equivalents.
- You need a clone copy session in any pair state.
- Terminating a session while the pairs are in the CopyOnAccess, CopyOnWrite, or CopyInProg state causes the session to end. If the application has not finished accessing all of the data, the target copy is not a full copy.
This procedure explains how to terminate a clone copy session, thereby deleting the pairing information from the storage system, and removing any hold on the target volume.
To terminate clone copy sessions:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Clone tab to open the TimeFinder Clone list view.
4. Do one of the following, depending on whether you want to perform the operation at the group level or pair level:
Group level:
Select a group, click the icon, and select Terminate.
Pair level:

a. Select a group and click the icon to open its Details view.
b. Click on the number next to Clone Pairs.
c. Select one or more pairs, click the icon, and select Terminate.
5. If performing the operation at the group level, select the type of source volumes (Source Type) and the type of target volumes (Target Type).
6. To attach Session Options to the operation, click Advanced Options, and select any number of session options (see Clone copy session options on page 365).
7. Do one of the following:
- Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
Viewing clone pairs
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Clone tab to open the TimeFinder Clone list view.
4. Select a group and click the icon to open its Details view.
5. Click on the number next to Clone Pairs.
The following properties display:
- Source Volume —The name of the source volume.
- Source LDev —The logical name of the source volume.
- Target Volume —The name of the target volume.
- Target LDev —The logical name of the target volume.
- State —The session state of the pair.
The following controls are available:
- (details icon) — Viewing clone pair details on page 365
- Create Pairs — Creating clone copy sessions on page 355
- Activate — Activating clone copy sessions on page 357
- Recreate — Recreating clone copy sessions on page 358
- Split — Splitting clone volume pairs on page 362
- Restore — Restoring data from target volumes on page 361
- Set Mode — Modifying clone copy sessions on page 360
- Create Snapshot — Creating clone snapshots on page 359
- Terminate — Terminating clone copy sessions on page 363
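At group level, the TimeFinder Clone list view (described under Managing TimeFinder/Clone sessions) rolls the per-pair session states shown here into a single State value: the common state if all of a group's sessions agree, otherwise Mixed. A minimal sketch of that display rule, with a hypothetical function name:

```python
def combined_state(pair_states):
    """Roll individual pair session states up to the group-level State
    column: the shared state if all pairs agree, otherwise 'Mixed'."""
    unique = set(pair_states)
    return unique.pop() if len(unique) == 1 else "Mixed"
```

For example, `combined_state(["Copied", "Copied"])` gives `"Copied"`, while `combined_state(["Copied", "CopyInProg"])` gives `"Mixed"`.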

Viewing clone pair details
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Clone tab to open the TimeFinder Clone list view.
4. Select a group, and click the icon to open its Details view.
5. Click on the number next to Clone Pairs.
6. Select a pair and click the icon to open its Details view.
The following properties display:
- Source Volume —The name of the source volume.
- Source LDev —The logical name of the source volume.
- Target Volume —The name of the target volume.
- Target LDev —The logical name of the target volume.
- State —The session state of the pair.
- CDGP —Flags specific to the pair session (this property is displayed by clicking the icon), in the form:
  (C): X = The background copy setting is active for this pair. . = The background copy setting is not active for this pair.
  (G): X = The Target volume is associated with a group. . = The Target volume is not associated with a group.
  (D): X = The Clone session is a differential copy session. . = The Clone session is not a differential copy session.
  (P): X = The precopy operation has completed one cycle. . = The precopy operation has not completed one cycle.
- Percent Copied —The percentage of copying that is complete (this property is displayed by clicking the icon).
- Timestamp —Date and time the pair was created (this property is displayed by clicking the icon).
Clone copy session options
The following table describes the TimeFinder/Clone session options:

Table 5 TimeFinder/Clone session options
- Both Sides: Activates all locally and remotely associated clone pairs in an SRDF group. (Available with: Activate, Establish)
- Concurrent: Performs the action for an additional clone pair in a group. (Available with: Create, Recreate, Establish, Activate, Verify)
- Consistent: Creates clone copies that are consistent with the database up to the point in time that the activation occurs. It suspends writes to the source volumes during the activation. (Available with: Activate)
- Copy: Creates a full data copy. By omitting this option (default), the volume pair state will be in the CopyOnAccess state when activated. Actual copying of the data is deferred until either tracks on the source volume are written to, or tracks on the target volume are read or written. This option is only applicable when the target volume is a regular volume (not a virtual volume). (Available with: Create, Establish)
- Differential: Used with either the Copy or Precopy option to create an SDDF session for maintaining changed track information. It must be used when creating copy sessions on which you plan on issuing a Restore action. (Available with: Create, Establish)
- Force: Overrides any restrictions and forces the operation, even though one or more paired volumes may not be in the expected state. Use caution when checking this option because improper use may result in data loss. (Available with: Create, Establish, Activate, Restore, Split, Terminate)
- Not Ready: Sets the target volumes as Not Ready. (Available with: Establish, Activate, Restore)
- Optimize: Optimizes volume pairings across the local storage system without regard for whether the volumes belong to different RDF (RA) groups. For remote volumes, use the Optimize Rag option. (Available with: Create, Establish)
- Optimize Rag: Uses optimization rules to create remote BCV pairs from volumes within the same RDF (RA) group on a storage system. (Available with: Create, Establish)
- Precopy: Copies tracks in the background before the clone session is activated. Used with the create and recreate actions. (Available with: Create, Recreate)

- Restored: With the verify command, verifies that the copy sessions are in the Restored state. With the terminate command, terminates a restored VP Snap session. (Available with: Verify, Terminate)
- Star: Targets the action at volumes in SRDF/Star mode. (Available with: Create, Recreate, Establish, Activate, Restore, Split, Terminate)
- Symforce: Forces an operation on the volume pair including pairs that would be rejected. Use caution when checking this option because improper use may result in data loss. (Available with: Terminate)
Understanding TimeFinder/Snap operations
TimeFinder/Snap operations enable you to create and manage copy sessions between a source volume and multiple virtual target volumes. When you activate a virtual copy session, a point-in-time copy of the source volume is immediately available to its host through the corresponding virtual volume. Virtual volumes consume minimal physical disk storage because they contain only the address pointers to the data that is stored on the source volume or in a pool of SAVE volumes. SAVE volumes are storage volumes that are not host-accessible and can only be accessed through the virtual volumes that point to them. SAVE volumes provide pooled physical storage for virtual volumes.
Snapping data to a virtual volume uses a copy-on-first-write technique. Upon a first write to the source volume during the copy session, Enginuity copies the pre-updated image of the changed track to a SAVE volume and updates the track pointer on the virtual volume to point to the data on the SAVE volume. The attached host views the point-in-time copy through virtual volume pointers to both the source volume and SAVE volume, for as long as the session remains active. If you terminate the copy session, the copy is lost, and the space associated with the session is freed and returned to the SAVE volume pool for future use.
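The copy-on-first-write technique just described can be sketched as follows. This is a simplified teaching model only: plain dicts stand in for the source volume's tracks, the virtual volume's pointers, and the SAVE pool, and the class name is hypothetical, not Enginuity code:

```python
# Simplified copy-on-first-write sketch: on the first write to a source
# track during an active snap session, the pre-update image is preserved
# in SAVE storage and the virtual volume's pointer is redirected there.
class SnapSession:
    def __init__(self, source):
        self.source = source                      # track -> current data
        self.save_pool = {}                       # track -> preserved image
        # The virtual volume initially points every track at the source.
        self.pointers = {t: "source" for t in source}

    def write_source(self, track, data):
        if self.pointers[track] == "source":      # first write to this track
            self.save_pool[track] = self.source[track]
            self.pointers[track] = "save"
        self.source[track] = data

    def read_virtual(self, track):
        """Point-in-time view seen through the virtual volume."""
        if self.pointers[track] == "save":
            return self.save_pool[track]
        return self.source[track]
```

After activating a session on a source holding `{0: "a", 1: "b"}`, writing new data to track 0 changes the source while `read_virtual(0)` still returns the original "a". A second write to the same track does not copy again; only the first write during the session preserves the pre-update image, which is why SAVE-pool consumption is proportional to changed tracks rather than volume size.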
Note: TimeFinder operations are not supported directly on storage systems running HYPERMAX OS 5977 or higher. Instead, they are mapped to their TimeFinder/SnapVX equivalents.

The following are the basic actions performed in a TimeFinder/Snap operation:

- Create—Creates the relationship between the source volume and the virtual target volume.

- Activate—Makes the virtual target volume available for read/write access and starts the copy-on-first-write mechanism.
- Recreate—Creates a new point-in-time copy.
- Restore—Copies tracks from the virtual volume to the source volume or another volume.
- Terminate—Causes the target host to lose access to data pointed to by the virtual volume.

For more information about TimeFinder concepts, refer to the Solutions Enabler TimeFinder Family CLI Product Guide and the TimeFinder Family Product Guide.

Managing TimeFinder/Snap sessions

Before you begin: TimeFinder/Snap requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Snap operations are mapped to their TimeFinder/SnapVX equivalents.

The TimeFinder/Snap dashboard provides you with a single place to monitor and manage TimeFinder/Snap sessions on a storage system.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Snap tab to open the TimeFinder Snap list view.

The following properties display:

- Device Group—Groups containing volumes using TimeFinder/Snap.
- Standard—The number of standard volumes in the group.
- BCV—The number of BCVs in the group.
- VDEV—The number of virtual volumes in the group.
- Target—The number of target volumes in the group.
- State—The session state of the pair.
- Group Type—The type of group. Property values: Regular, R1, R2, or R21.
- Group Valid—Whether the group is valid or invalid.

Click the number next to Snap Pairs to view the associated snap pairs (see Viewing snap pairs on page 376). Click the number next to Storage Groups to view the associated storage groups.
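The basic actions above (Create, Activate, Recreate, Restore, Terminate) move a session through a small set of states. The sketch below models that lifecycle; only CreateInProg and Created are state names taken from this help, and the remaining names and the exact transition set are simplified assumptions for illustration, not the actual Enginuity state machine:

```python
# Hypothetical sketch of a TimeFinder/Snap session lifecycle.
# "CreateInProg" and "Created" appear in this help; the other state names
# and the transition table are illustrative simplifications.

TRANSITIONS = {
    ("CreateInProg", "complete"): "Created",
    ("Created", "activate"): "CopyOnWrite",      # point-in-time copy goes live
    ("CopyOnWrite", "recreate"): "Created",      # prepare a new point-in-time image
    ("CopyOnWrite", "restore"): "Restored",
    ("CopyOnWrite", "terminate"): "Terminated",  # copy lost, SAVE space freed
    ("Restored", "terminate"): "Terminated",
}

def apply(state, action):
    """Return the next session state, or raise if the action is not allowed."""
    key = (state, action)
    if key not in TRANSITIONS:
        raise ValueError(f"{action!r} is not valid from state {state!r}")
    return TRANSITIONS[key]

state = "CreateInProg"
for action in ("complete", "activate", "terminate"):
    state = apply(state, action)
print(state)  # prints Terminated
```

This mirrors the rule stated later in this help that some options and actions are valid only in specific session states.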
The following controls are available:

- Create Pairs — Creating virtual copy sessions on page 369
- Activate — Activating virtual copy sessions on page 370
- Terminate — Terminating virtual copy sessions on page 375
- Restore — Restoring virtual copy sessions on page 374
- Recreate — Recreating virtual copy sessions on page 373

- Establish — Creating snapshots on page 371
- Duplicate — Duplicating virtual copy sessions on page 372

Creating virtual copy sessions

Virtual copy sessions define and set up the volumes for snap operations. The Create action defines the copy session requirements and sets the track protection bitmap on the source volume to protect all tracks and detect which tracks are being accessed by the target host or written to by the source host. The target virtual volume remains Not Ready to its host and is placed on hold status for copy session usage, which prevents other control operations from using the volume. The volume pair state transitions from CreateInProg to Created when complete. The virtual data becomes accessible to its host when the copy session is activated.

Note:
- TimeFinder/Snap requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Snap operations are mapped to their TimeFinder/SnapVX equivalents.
- You can create up to 128 copies of a source volume to various virtual target volumes. To do this, enable the following SYMCLI environment variable: SYMCLI_MULTI_VIRTUAL_SNAP = ENABLED.
- A source volume can concurrently copy data to as many as 15 target volumes at one time. Each target requires a separate copy session.
- For storage systems running Enginuity 5876, you can:
  - Use this feature to create multivirtual snap sessions from thin volumes.
  - Use RDF2 async volumes as source volumes.
  - Create a snap pair from a clone target in the Split state.
- To create a snap session of an R2 volume that is in an SRDF/A session, volume-level pacing must be enabled on the R1 side.
- Data Domain volumes are not supported.

To create virtual copy sessions:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Snap tab to open the TimeFinder Snap list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level.

Group level:
a. Select a group, and click Create Pairs.
b. Select a source type and a target type.

Pair level:

a. Select a group, and click to open its Details view.
b. Click on the number next to Snap Pairs.
c. Select one or more pairs and click Create Pairs.
d. Click Set Pairs to open the Set Pairs dialog box.
e. Select a source volume and a target volume, and click Add to make them a pair. Repeat this step as required.
f. Click OK to return to the Create Sessions dialog box.
5. Click Advanced Options to set the advanced options:
- Select a Pairing Type. If you are not using the Pairing Type option, leave this field set to None.
- To attach Session Options to the operation, select any number of options.
6. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Activating virtual copy sessions

Activating the copy session starts the copy-on-first-write mechanism and places the target volume in the Read/Write state. The target host can access the copy and has access to data on the source host until the copy session is terminated.

Note: TimeFinder/Snap requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Snap operations are mapped to their TimeFinder/SnapVX equivalents.

To activate virtual copy sessions:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Snap tab to open the TimeFinder Snap list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:

Group level:
- Select a group, and click Activate.
- Select a source type and a target type.

Pair level:
a. Select a group, and click to open its Details view.
b. Click on the number next to Snap Pairs.

c. Select one or more pairs, and click Activate.
5. Click Advanced Options to set the advanced options. To attach session options to the operation, select any number of options.
6. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Creating snapshots

Before you begin: TimeFinder/Snap requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Snap operations are mapped to their TimeFinder/SnapVX equivalents.

This procedure explains how to create and immediately activate virtual copy sessions.

To create a snapshot:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Snap tab to open the TimeFinder Snap list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:

Group level:
- Select a group, click, and select Create Snapshot.
- Select a source type and a target type.

Pair level:
a. Select a group, and click to open its Details view.
b. Select one or more pairs, click, and select Create Snapshot.
5. Click Advanced Options to set the advanced options:
a. Select one of the following for Pairing Type. If you are not using the Pairing Type option, leave this field set to None.
- Use Exact Pairs—Allows the system to pair up the volumes in the exact order that they were added to the group.
- Use Optimized Pairs—Optimizes volume pairings across the local storage system without regard for whether the volumes belong to different RDF (RA) groups.
b. To attach Session Options to the operation, select Advanced Options and select any number of options.

c. Click OK.
6. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Duplicating virtual copy sessions

The duplicate TimeFinder/Snap feature allows you to duplicate a point-in-time copy of a virtual volume that is paired in a previously activated snap session to another virtual volume. This second point-in-time copy session actually resides with the source volume of the original snap session and is charged as part of the maximum number of sessions for that source volume. The duplicate snap is an actual copy of the virtual volume to another virtual volume.

Before you begin:
- TimeFinder/Snap requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Snap operations are mapped to their TimeFinder/SnapVX equivalents.
- Snap create and activate operations cannot be mixed between normal snap sessions and duplicate snap sessions within the same operation.
- The maximum number of duplicated sessions in the Created state is two.
- When a duplicate session is in the Created state, the original session cannot be terminated or recreated until the duplicate session is activated.

To duplicate virtual copy sessions:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Snap tab to open the TimeFinder Snap list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:

Group level:
- Select a group, click, and select Duplicate.
- Select a source type and a target type.

Pair level:
a. Select a group, and click to open its Details view.
b. Select one or more pairs, click, and select Duplicate.
5. To attach Session Options to the operation, select Advanced Options and select any number of options.

6. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Recreating virtual copy sessions

Before you begin:
- TimeFinder/Snap requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Snap operations are mapped to their TimeFinder/SnapVX equivalents.
- For storage systems running Enginuity 5876 or higher, you can use this feature to recreate multivirtual snap sessions from thin and standard volumes.
- This feature can only be used on sessions that have been previously activated.

The snap recreate action allows you to recreate a snap session on an existing VDEV in order to prepare to activate a new point-in-time image.

To recreate virtual copy sessions:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Snap tab to open the TimeFinder Snap list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:

Group level:
- Select a group, click, and select Recreate.
- Select a source type and a target type.

Pair level:
a. Select a group, and click to open its Details view.
b. Select one or more pairs, click, and select Recreate to open the Recreate dialog box.
5. To attach Session Options to the operation, select Advanced Options and select any number of options.
6. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Restoring virtual copy sessions

Before you begin:
- TimeFinder/Snap requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Snap operations are mapped to their TimeFinder/SnapVX equivalents.
- With Enginuity 5876 or higher, you can use ORS control volumes as snap restore volumes when the volumes are in Push sessions and in the ORS Copied state.
- With Enginuity 5876.159.102 and higher, you can perform a TimeFinder/Snap restore to a TimeFinder/Clone target. For example, volumes in an A > B > C cascaded session (where A > B is TimeFinder/Clone and B > C is TimeFinder/Snap) can copy data from volume C to volume A (via volume B). You can complete this operation without terminating the TimeFinder/Clone session, or any existing TimeFinder/Snap sessions off of the TimeFinder/Clone target. This feature is known as Persistent Restore to Target (PTT).

The following types of restore operations can be performed for virtual copy sessions:
- Incremental restore back to the original source volume.
- Incremental restore to a BCV, which has been split from its original standard source volume but maintains the incremental relationship with the source.
- Full restore to any standard or split BCV outside of the existing copy session. The target volume of the restore must be of the same size and emulation type as the source volume.

To restore virtual copy sessions:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Snap tab to open the TimeFinder Snap list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:

Group level:
- Select a group, click, and select Restore.
- Select a source type and a target type.

Pair level:
a. Select a group, and click to open its Details view.
b. Select one or more pairs, click, and select Restore to open the Restore dialog.
5. Select the Restore Type.
Restore operations can be used to copy target data to another device (full restore), or back to the original source device (incremental restore). In the case of a full restore, the original session terminates and a copy session to the target of the restore starts. In the case of an incremental restore, the original session

copy direction is reversed and changed data is copied from the target device to the source device. Restore operations require that the original session is differential and the source device is fully copied.
6. If performing a Full restore, click Set Pairs to open the Set TimeFinder Snap Pairs dialog, from which you can select the volumes to use in the operation.
7. To attach Session Options to the operation, select Advanced Options and select any number of options.
8. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Terminating virtual copy sessions

Before you begin: TimeFinder/Snap requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Snap operations are mapped to their TimeFinder/SnapVX equivalents.

This procedure explains how to terminate an active virtual copy session at any time.

To terminate virtual copy sessions:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Snap tab to open the TimeFinder Snap list view.
4. Do one of the following, depending on whether you want to perform the operation at the group level or pair level:

Group level:
- Select a group and select Terminate.
- Select a source type and a target type.

Pair level:
a. Select a group, and click to open its Details view.
b. Select one or more pairs and select Terminate.
5. To attach Session Options to the operation, select Advanced Options and select any number of options.
6. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience.
For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Viewing snap pair details

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Snap tab to open the TimeFinder Snap list view.
4. Select a group, and click to open its Details view.
5. Click on the number next to Snap Pairs.
6. Select a pair and click to open its Details view.

The following properties display:
- Source Volume—Name of the source volume.
- Source LDev—Logical name of the source volume.
- Target Volume—Name of the target volume.
- Target LDev—Logical name of the target volume.
- State—Session state of the pair.
- Snap Pool—The name of the snap pool.
- Percent Copied—The percentage of copying that is complete.
- Timestamp—Date and time the snapshot was created.

Viewing snap pairs

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Snap tab to open the TimeFinder Snap list view.
4. Select a group and click to open its Details view.
5. Click on the number next to Snap Pairs.

The following properties display:
- Source Volume—The name of the source volume.
- Target Volume—The name of the target volume.
- Source LDev—The logical name of the source volume.
- Target LDev—The logical name of the target volume.
- State—The session state of the pair.

The following controls are available:
- Viewing snap pair details on page 376
- Create Pairs — Creating virtual copy sessions on page 369
- Activate — Activating virtual copy sessions on page 370

- Terminate — Terminating virtual copy sessions on page 375
- Detach — Viewing clone pairs on page 364
- Attach — Viewing clone pairs on page 364
- Duplicate — Duplicating virtual copy sessions on page 372
- Create Snapshot — Creating snapshots on page 371
- Recreate — Recreating virtual copy sessions on page 373
- Restore — Restoring virtual copy sessions on page 374

Snap session options

The following table describes the TimeFinder/Snap session options:

Table 6 TimeFinder/Snap session options

- Consistent (available with: Activate) — Causes the source and VDEV pairs to be consistently activated.
- Duplicate (available with: Create, Activate, Terminate) — Indicates that the action is being performed on a duplicate virtual copy session (that is, on a VDEV to a VDEV pair).
- Force (available with: Create, Activate, Terminate, Restore, Incremental Restore) — Overrides any restrictions and forces the operation, even though one or more paired volumes may not be in the expected state. Use caution when checking this option because improper use may result in data loss.
- Not Ready (available with: Activate, Restore, Incremental Restore) — Sets the VDEVs as Not Ready.
- Restore (available with: Terminate) — Must be used with the terminate action when terminating a restore session.
- Star (available with: Create, Activate, Recreate, Terminate, Restore) — Indicates that the action is being performed on a volume that is in SRDF/Star mode.
- SymForce (available with: Terminate) — Forces an operation on the volume pair including pairs that would be rejected. Use caution when checking this option because improper use may result in data loss.
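The option-to-action compatibility in Table 6 is essentially a lookup table, and can be expressed as one. The sketch below simply transcribes the table into a dictionary for illustration; the helper function is hypothetical and not part of any Unisphere API:

```python
# Session option -> actions it may accompany, transcribed from Table 6.
SNAP_OPTIONS = {
    "Consistent": {"Activate"},
    "Duplicate": {"Create", "Activate", "Terminate"},
    "Force": {"Create", "Activate", "Terminate", "Restore", "Incremental Restore"},
    "Not Ready": {"Activate", "Restore", "Incremental Restore"},
    "Restore": {"Terminate"},
    "Star": {"Create", "Activate", "Recreate", "Terminate", "Restore"},
    "SymForce": {"Terminate"},
}

def valid_options(action):
    """Return the session options that may be attached to a given action."""
    return sorted(opt for opt, acts in SNAP_OPTIONS.items() if action in acts)

print(valid_options("Terminate"))
# prints ['Duplicate', 'Force', 'Restore', 'Star', 'SymForce']
```

A dialog that filters its Session Options checkboxes by the selected action would behave like this lookup.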

Set TimeFinder Snap Pairs dialog box

When creating, activating, restoring, or establishing TimeFinder/Snap pairs, this dialog box allows you to define the pairs used in the operation.

To define the pairs:
1. Select the Source Volumes and Target Volumes and click Add to move them to the Selected Pairs table.
2. Click OK.

Managing TimeFinder/Mirror sessions

Before you begin:
- TimeFinder/Mirror requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Mirror operations are mapped to their TimeFinder/SnapVX equivalents.
- TimeFinder operations are not supported on ORS control volumes on storage systems running HYPERMAX OS 5977 or higher.

The TimeFinder/Mirror dashboard provides you with a single place to monitor and manage TimeFinder/Mirror sessions on a storage system.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Mirror tab to open the TimeFinder Mirror list view.

The following properties display:
- Device Group—Groups containing volumes using TimeFinder/Mirror.
- Standard—The number of standard volumes in the group.
- BCVs—The number of BCVs in the group.
- State—The combined state of the sessions in the group. If all the sessions are in the same state, then that state appears; otherwise, Mixed appears.
- Group Type—The type of group. Property values are: RDF1, RDF2, RDF21, and Regular.
- Group Valid—Indicates whether the group is valid. Property values are: Yes and No.

Click the number next to Mirror Pairs to view the associated mirror pairs (see Viewing mirror pairs on page 382). Click the number next to Storage Groups to view the associated storage groups.

The following controls are available:
- Create Snapshot — Creating Snapshots on page 379
- Restore — Restoring BCV pairs on page 380
- Split — Splitting BCV pairs on page 381

- Cancel — Cancelling BCV pairs on page 381

Creating Snapshots

Before you begin:
- TimeFinder/Mirror requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Mirror operations are mapped to their TimeFinder/SnapVX equivalents.
- Data Domain volumes are not supported.

To create snapshots:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Mirror tab to open the TimeFinder Mirror list view.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:

Group level:
- Select a device group, and click Create Snapshot.

Pair level:
a. Select a device group, and click to open its Mirror Sessions List view.
b. Select one or more pairs, click Create Snapshot to open the Create Snapshot - Mirror Pair dialog.
5. Select a Snapshot Type:
- Incremental—Copies to the BCV volume only the new data that was updated on the standard volume while the BCV pair was split.
- Full—Copies the entire contents of the standard volume to the BCV volume.
6. If performing a full establish at the pair level, do the following:
a. Click Set Pairs to open the Set TimeFinder Mirror Pairs dialog.
b. Select a Source Volume and a Target Volume, and click Add to make them a pair. Repeat this step as required.
c. Click OK to return to the Create Snapshot - Mirror Pair dialog.
7. To attach session options to the operation, select Advanced Options and select any number of options.
8. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Restoring BCV pairs

Before you begin: TimeFinder/Mirror requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Mirror operations are mapped to their TimeFinder/SnapVX equivalents.

This procedure explains how to copy data from the BCV volumes to the standard volumes.

To restore BCV pairs:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Mirror tab to open the TimeFinder Mirror list view.
4. Do the following, depending on whether you want to perform this operation at the group level or the pair level:

Group level:
- Select a device group, and click Restore.

Pair level:
a. Select a device group, and click to open the Mirror Sessions List view.
b. Select one or more pairs, and click Restore.
5. Select a Restore Type:
- Incremental—Copies to the standard volume only the new data that was updated on the BCV volume while the BCV pair was split.
- Full—Copies the entire contents of the BCV volume to the standard volume.
6. If performing a full restore at the pair level, do the following:
a. Click Set Pairs to open the Set TimeFinder Mirror Pairs dialog.
b. Select a Source Volume and a Target Volume, and click Add to make them a pair. Repeat this step as required.
c. Click OK to return to the Restore - Mirror Pair dialog.
7. To attach session options to the operation, select Advanced Options and select any number of options.
8. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
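The two Restore Types above differ only in how much data moves: a full restore copies every track from the BCV, while an incremental restore copies only the tracks updated while the pair was split. The sketch below models that difference; it is an illustrative simplification (the set of changed tracks stands in for the SDDF-style changed-track bitmap), not Enginuity code:

```python
# Hypothetical contrast of Full vs. Incremental restore on a BCV pair.
# Volumes are lists of track data; "changed" stands in for the bitmap of
# tracks updated on the BCV while the pair was split.

def full_restore(standard, bcv):
    # Copies the entire contents of the BCV volume to the standard volume.
    standard[:] = bcv

def incremental_restore(standard, bcv, changed):
    # Copies only the tracks that were updated while the pair was split.
    for track in changed:
        standard[track] = bcv[track]


std = ["a", "b", "c", "d"]
bcv = ["a", "B", "c", "D"]          # tracks 1 and 3 changed while split
incremental_restore(std, bcv, {1, 3})
print(std)  # prints ['a', 'B', 'c', 'D'] - only two tracks were copied
```

This is why an incremental restore of a briefly split pair completes much faster than a full restore of the same volumes.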

Splitting BCV pairs

Before you begin: TimeFinder/Mirror requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Mirror operations are mapped to their TimeFinder/SnapVX equivalents.

This procedure explains how to split paired volumes so that each holds separate valid copies of the data.

To split BCV pairs:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Mirror tab to open the TimeFinder Mirror list view.
4. Do the following, depending on whether you want to perform this operation at the group level or the pair level:

Group level:
- Select a device group, and click Split.

Pair level:
a. Select a device group, and click to open the Mirror Sessions List view.
b. Select one or more pairs, and click Split.
5. To attach session options to the operation, select Advanced Options and select any number of options.
6. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Cancelling BCV pairs

Before you begin: TimeFinder/Mirror requires Enginuity version 5876. On storage systems running HYPERMAX OS 5977 or higher, TimeFinder/Mirror operations are mapped to their TimeFinder/SnapVX equivalents.

To cancel the relationship between volumes in a BCV pair:

Procedure
1. Select the storage system.
2. Select Data Protection > TimeFinder > TimeFinder/Mirror to open the TimeFinder/Mirror dashboard.
3. Do the following, depending on whether you want to perform this operation at the group level or the pair level.

Group level:

- Select a device group and click Cancel.

Pair level:
a. Select a device group and click to open the Mirror Sessions List view.
b. Select one or more pairs and click Cancel.
4. To attach session options to the operation, select Advanced Options and select any number of options.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Viewing mirror pairs

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Device Groups.
3. Click the TimeFinder Mirror tab to open the TimeFinder Mirror list view.
4. Select a group and click to open its Details view.
5. Click on the number next to Mirror Pairs.

The following properties display:
- Source Volume—The hexadecimal ID of the source volume.
- Source LDev—The logical name of the source volume.
- Target Volume—The hexadecimal ID of the target volume.
- Target LDev—The logical name of the target volume.
- Pair State—The session state of the pair.
- Timestamp—Date and time the snapshot was created.

The following controls are available:
- Viewing mirror pair details on page 382
- Create Snapshot — Creating Snapshots on page 379
- Restore — Restoring BCV pairs on page 380
- Split — Splitting BCV pairs on page 381
- Cancel — Cancelling BCV pairs on page 381

Viewing mirror pair details

Procedure
1. Select the storage system.

2. Select Data Protection > TimeFinder > TimeFinder/Mirror to open the TimeFinder/Mirror view.
3. Select a device group, and click to open its Mirror Pairs List view.
4. Click on the number next to Mirror Pairs.
5. Select a pair and click to open its Details view.

The following properties display:
- Group—Group name.
- Source Volume—Hexadecimal ID of the source volume.
- Source LDev—Logical name of the source volume.
- Target Volume—Hexadecimal ID of the target volume.
- Target LDev—Logical name of the target volume.
- State—Session state of the pair.
- Percent Copied—Percentage of copying complete.

TimeFinder/Mirror session options

The following table describes the TimeFinder/Mirror session options:

Table 7 TimeFinder/Mirror session options

- Bypass (available with: Split, Full Restore, Incremental Restore) — Bypasses the storage system's exclusive locks for the local or remote array during mirror operations.
- Consistent (available with: Split) — Causes the standard volumes being managed to be consistently split. Cannot be combined with the Instant option.
- Differential (available with: Split) — Indicates that the split operation should initiate a differential data copy from the first mirror set member to the rest of the BCV mirror set members when the BCV pair split is done.
- Force (available with: Full Establish, Incremental Establish, Split, Full Restore, Incremental Restore) — Overrides any restrictions and forces the operation, even though one or more paired volumes may not be in the expected state. Use caution when checking this option because improper use may result in data loss.

Table 7 TimeFinder/Mirror session options (continued)

- Differential (available with: Create, Establish) — Used with either the Copy or Precopy option to create an SDDF session for maintaining changed track information. This must be used when creating copy sessions on which you plan on issuing a Restore action.
- Force (available with: Create, Establish, Activate, Restore, Split, Terminate) — Overrides any restrictions and forces the operation, even though one or more paired volumes may not be in the expected state. Use caution when checking this option because improper use may result in data loss.
- Not Ready (available with: Split, Full Restore, Incremental Restore) — Sets the target volumes as Not Ready. Upon completion of a split action, the target volumes are set as Not Ready. When a restore is initiated, the standard volumes are set as Not Ready.
- Optimize (available with: Full Establish) — Optimizes volume pairings across the local storage system without regard for whether the volumes belong to different RDF (RA) groups. For remote volumes, use the Optimize Rag option.
- Optimize Rag (available with: Full Establish) — Uses optimization rules to create remote BCV pairs from volumes within the same RDF (RA) group on a Symmetrix system.
- Protbcvest (available with: Full Establish, Incremental Establish) — Applies to two-way mirrored BCV volumes. Moves all mirrors of the BCV volume to join the mirrors of the standard volume.
- Protect (available with: Split, Full Restore, Incremental Restore) — Indicates that the BCV should be write-protected before initiating a restore operation.
- Remote (available with: Split, Full Restore, Incremental Restore) — Applicable only for split operations on a BCV RDF1 volume, or a restore operation from a BCV to a STD RDF2 volume. If this option is not specified, then the mode

Table 7 TimeFinder/Mirror session options (continued)

defaults to not propagate the data to the remote mirror of the RDF volume.
• Reverse (Available with: Full Establish, Incremental Establish, Split, Full Restore, Incremental Restore) — With a split operation, initiates a reverse data copy from one or more fixed BCV mirrors to the first (moving) mirror of the BCV upon the completion of the split operation. With an establish or restore operation, requests a verification check that the BCV's fixed mirror has valid data. If at establish or restore time you anticipate a need to perform future BCV reverse split operations, you must apply a reverse establish or restore so that no invalid tracks on the fixed BCV mirror become used.
• Star (Available with: Full Establish, Restore, Split, Cancel) — Targets the action at volumes in SRDF/Star mode.
• SymForce (Available with: Full Establish, Incremental Establish, Split, Full Restore, Incremental Restore) — Forces an operation on the volume pair, including pairs that would be rejected. Use caution when checking this option because improper use may result in data loss.

Setting TimeFinder/Mirror pairs
When establishing or restoring TimeFinder/Mirror pairs, this dialog box allows you to define the pairs used in the operation.
Procedure
To define the pairs:
1. Select the Source Volumes and Target Volumes and click Add to move them to the Selected Pairs table.
2. Click OK.

Managing TimeFinder SnapVX
TimeFinder SnapVX is a local replication solution designed to non-disruptively create point-in-time copies (snapshots) of critical data. TimeFinder SnapVX creates

snapshots by storing changed tracks (deltas) directly in the Storage Resource Pool of the source volume. With TimeFinder SnapVX, you do not need to specify a target volume and source/target pairs when you create a snapshot. If there is ever a need for the application to use the point-in-time data, you can create links from the snapshot to one or more target volumes. If there are multiple snapshots and the application needs to find a particular point-in-time copy for host access, you can link and relink until the correct snapshot is located.
The TimeFinder/SnapVX view provides a single place from which you can manage TimeFinder SnapVX snapshots and their associated storage groups.
Secure snaps — These are SnapVX snapshots that cannot be deleted before the expiry time set by the StorageAdmin. Users can create a secure snapshot or set Secure status on an existing snapshot. Once the retention time has expired, the secure snapshot is automatically terminated unless there is a linked device or an active restore session. The expiry time on a secure snapshot can be changed, but the time can only be moved forward from the expiry time originally set. This feature requires an array running the HYPERMAX OS 5977 Q1 2017 Service Release or higher.
Note: Secure snapshots may only be terminated after they expire or by customer-authorized support. Refer to Knowledge Base article 498316 for additional information.
Time to Live — From Unisphere 8.4 onwards, users can specify a SnapVX snapshot's time to live in hours as well as days. Previously, only days could be specified.
Before you begin
• The storage system must be running HYPERMAX OS 5977 or higher.
• TimeFinder/SnapVX operations are not supported on working ProtectPoint snapshots. TimeFinder/SnapVX operations are, however, supported to help repair failing ProtectPoint snapshots.
To access the TimeFinder/SnapVX view:
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.

TimeFinder/SnapVX view
The following properties display:
• Storage Groups — Storage group associated with the snapshot.
• Capacity — Total capacity of the storage group.
• Snapshots — Number of snapshots associated with the storage group.
• Last Creation Time — Date/time the most recent snapshot was created.
The following controls are available:
• — Displays a properties panel listing the following properties: Storage Group, Capacity (GB), Number of Snapshots, and SRP.
• Create — Creating snapshots on page 387
• Modify — Modifying TimeFinder SnapVX snapshots on page 389
• Restore — Restoring snapshots on page 393
• Link — Linking to snapshots on page 390

• Unlink — Unlinking from snapshots on page 392
• Relink — Relinking to snapshots on page 391
• Set Mode — Setting copy mode for snapshots on page 396
• Set Time to Live — Setting snapshots to automatically terminate on page 393
• Set Secure — Setting "Secure" status on an existing snapshot on page 394
• Terminate — Terminating snapshots on page 395

Creating snapshots
Before you begin
• To perform this operation, you must be a StorageAdmin.
• The storage system must be running HYPERMAX OS 5977 or higher.
• The maximum number of snapshots per source volume is 256.
• Snapshots off of linked targets are permitted only after the volume is fully defined.
• The secure snapshot feature requires the HYPERMAX OS 5977 Q1 2017 Service Release or higher.
• You can perform this operation from the following locations: TimeFinder/SnapVX view, Storage Groups view, or Data Protection dashboard. Depending on the location from which you are performing this operation, some of the following steps may not apply.
This procedure explains how to create TimeFinder SnapVX snapshots.
Note: Secure snapshots may only be terminated after they expire or by customer-authorized support. Refer to Knowledge Base article 498316 for additional information.
To create snapshots:
Procedure
1. Select the storage system.
2. Do the following, depending on the location from which you want to perform the procedure:
TimeFinder/SnapVX view:
a. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
b. Select a storage group and click Create to open the Create Snapshot dialog.
Storage Groups view:
a. Select STORAGE > Storage Groups to open the Storage Groups view.
b. Select the storage group and click Protect to open the Protect Storage Group wizard.
c. If not already selected, select Point In Time Protection Using SnapVX.
d. Click Next.
Data Protection dashboard:

a. Select Replication to open the Data Protection dashboard.
b. Click CREATE SNAPSHOT.
3. Select whether to create a new snapshot or reuse an existing snapshot.
4. If reusing an existing snapshot, select it from the list. When using this method, generation numbers are assigned to the snapshots in the order in which they were created (latest = generation 0, previous incrementing by one). This naming convention allows you to differentiate point-in-time copies of the same volumes.
CAUTION: It is the user's responsibility to manage the snapshot names being used. If snapshots are being applied to parent and child storage groups individually, care should be taken to never use the same snapshot name at different levels of the storage group construct. The same applies if some of the volumes are in multiple storage groups being snapshotted; the same snapshot names should also be avoided across the different storage groups.
5. Choose an expiry type from the drop-down menu. The options are:
• None — If no automatic expiry time is set, the snapshot will need to be manually deleted.
• Time to live — Once the time you set has expired, the snapshot is automatically terminated, provided that it is not linked to any target volumes. If an expired snapshot is linked, the system waits until the last link has been removed before terminating the snapshot. To override this behavior and terminate the snapshot, select the Force option under the Advanced Options link.
6. If you chose Time to live as the protection type, use the Days and Hours drop-down menus to set the snapshot's expiry time.
7. Click Advanced Options to see the advanced options. They are:
• Enable Secure Snaps — Select this option to set a secure snapshot that cannot be deleted before the expiry time you set. Once you tick the Secure checkbox, the Days and Hours drop-down menus appear, and you can use these to set the snapshot's expiry time.
Once the retention time has expired, the secure snapshot is automatically terminated unless there is a linked device or an active restore session. StorageAdmins can choose to move the retention time forward.
• Both Sides — Select this option to create a snapshot at both sides of an SRDF pairing simultaneously. The following limitations apply:
– A consistent snapshot on both sides is only allowed when the SRDF pairs exist on the source storage group volumes in Synchronous RDF mode and the SRDF pair state is Synchronous.
– A consistent snapshot on both sides is only allowed when the SRDF pairs are in Active SRDF mode and the SRDF pair state is ActiveActive or ActiveBias.
– A mixture of R1 and R2 devices is not allowed.
– All the RDF devices in the SG must be in the same RDF group.
– Concurrent RDF is not supported.

– For cascaded SRDF setups, the Both Sides option is supported for the selected storage group and the next immediate hop, but not the subsequent hops.
• Enable Force Flag — Select this option to force the operation even though one or more volumes may not be in the normal, expected states.
8. Click Next.
9. Choose one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. This option can be used to create a recurring daily SnapVX snapshot for a given time. In the event of a failed recurring snapshot, a warning-level alert is raised to notify the user. The schedule continues to run in the event of a failed snapshot, issuing alerts to the user. The alerts list view retains a record of the failed snapshots (unless the alert is deleted). There is no end date for the schedule specified when setting it up, so you will need to cancel the schedule manually, if desired. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List and click Run Now to perform the operation now.

Modifying TimeFinder SnapVX snapshots
Before you begin
• To perform this operation, you must be a StorageAdmin.
• The secure snapshot feature requires the HYPERMAX OS 5977 Q1 2017 Service Release or higher.
To modify TimeFinder SnapVX snapshots:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select a snapshot and click Modify to open the Edit Snapshot dialog.
4. Enter the new name for the snapshot.
5. Choose one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. This option can be used to create a recurring daily SnapVX snapshot for a given time.
In the event of a failed recurring snapshot, a warning-level alert is raised to notify the user. The schedule continues to run in the event of a failed snapshot, issuing alerts to the user. The alerts list view retains a record of the failed snapshots (unless the alert is deleted). There is no end date for the schedule specified when setting it up, so you will need to cancel the schedule manually, if desired. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List and click Run Now to perform the operation now.
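The create and modify dialogs above collect a small set of choices (snapshot name, expiry type, days and hours, secure flag, Both Sides, force). As an illustrative sketch only, the dialog's validation rules can be expressed in Python; the function name and dictionary keys are assumptions for illustration, not the Unisphere REST API schema:

```python
def build_snapshot_request(name, ttl_days=0, ttl_hours=0, secure=False,
                           both_sides=False, force=False):
    """Collect Create Snapshot dialog choices into one request dict.

    Mirrors two rules from the dialog text: a secure snapshot must have
    an expiry time, and (since Unisphere 8.4) time-to-live may combine
    days and hours. The dict keys are illustrative only.
    """
    if secure and ttl_days == 0 and ttl_hours == 0:
        raise ValueError("a secure snapshot requires an expiry time")
    request = {"snapshotName": name, "bothSides": both_sides, "force": force}
    if ttl_days or ttl_hours:
        # Time to live is expressed as days plus hours.
        request["timeToLive"] = {"days": ttl_days, "hours": ttl_hours}
        request["secure"] = secure
    return request
```

For example, a secure snapshot with a two-and-a-half-day retention would be built with `build_snapshot_request("daily", ttl_days=2, ttl_hours=12, secure=True)`, while requesting `secure=True` with no expiry raises an error, matching the dialog's requirement that Enable Secure Snaps always carries an expiry time.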

Linking to snapshots
Before you begin
• To perform this operation, you must be a StorageAdmin.
• The storage system must be running HYPERMAX OS 5977 or higher.
• The targets must not be linked to any other snapshots.
• The target volume must be of equal or greater size than the source volume.
• Any pre-existing data that was exclusive to the target will be lost during a link or relink.
• This procedure explains how to perform this operation from the TimeFinder/SnapVX dashboard. You can also perform this operation from other locations in the interface. Depending on the location, some of the steps may not apply.
• The SnapVX link storage group dialog is updated to always create CKD devices when the New storage group target name radio button is selected.
This procedure explains how to link one or more host-mapped target volumes to a snapshot, thereby making the snapshot's point-in-time data available to applications running on the host.
Snapshots can be linked to target volumes in the following modes:
• NoCopy mode — Creates a temporary, space-saving snapshot of only the changed data on the snapshot's Storage Resource Pool (SRP). Target volumes linked in this mode will not retain data after the links are removed. This is the default mode. This mode cannot be used when either the source or link target volume is a Data Domain volume.
• Copy mode — Creates a permanent, full-volume copy of the data on the target volume's SRP. Target volumes linked in this mode will retain data after the links are removed.
Linking a storage group snapshot after the SG volumes have been subsequently expanded will pick volumes to link to by using the volume size at the time the snapshot was taken.
Procedure
To link to snapshots:
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select the storage group and click Link.
4. Select the Snapshot Name.
5. Specify whether to link to a new target storage group (one not already linked to a snapshot) or an existing target storage group.
6. Optional: Modify the default name for the new storage group.
7. Click Advanced Options to continue setting the advanced options, as described next.
Setting advanced options:
• To force the operation even though one or more volumes may not be in the normal, expected state(s), select Force.

• To create a permanent, full-volume copy of the data on the target volume's SRP, select Copy. Selecting Copy enables the Remote option.
• To force the operation even though one or more volumes may not be in the normal, expected state(s), select Force.
• To specify that the operation is for devices in STAR mode, select Star.
• Optional: Uncheck the Compression check box to turn off compression. Compression is only allowed on All Flash systems running the HYPERMAX OS 5977 Q3 2016 Service Release or higher.
8. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Relinking to snapshots
Before you begin
• To perform this operation, you must be a StorageAdmin.
• The storage system must be running HYPERMAX OS 5977 or higher.
• To relink in Copy mode:
– The original link must be fully copied prior to the relink.
– The copy will be differential between the original linked snapshot and the newly linked snapshot.
• Any pre-existing data that was exclusive to the target will be lost during a link or relink.
• This procedure explains how to perform this operation from the TimeFinder/SnapVX dashboard. You can also perform this operation from other locations in the interface. Depending on the location, some of the steps may not apply.
This procedure explains how to unlink a target storage group from a snapshot, and then automatically link it to another snapshot. After a relink operation, the copy between the original linked snapshot and the newly linked snapshot is differential. You can also relink a storage group to the same snapshot, thereby refreshing the point-in-time copy on the target storage group when it has been modified by host writes.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select the storage group, click , and select Relink to open the Relink dialog box.
4. Select the link target storage group and the Snapshot Name.
5. Click Advanced Options to continue setting the advanced options, as described next.
Setting advanced options:

• To create a permanent, full-volume copy of the data on the target volume's SRP, select Copy. Selecting Copy enables the Remote option.
• To specify that the operation is for devices in STAR mode, select Star.
• To force the operation even though one or more volumes may not be in the normal, expected state(s), select Force.
6. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Unlinking from snapshots
Before you begin
• To perform this operation, you must be a StorageAdmin.
• The storage system must be running HYPERMAX OS 5977 or higher.
• This procedure explains how to perform this operation from the TimeFinder/SnapVX dashboard. You can also perform this operation from other locations in the interface. Depending on the location, some of the steps may not apply.
This procedure explains how to unlink target volumes from their snapshots. For instructions on unlinking target volumes, and then automatically linking to other snapshots, refer to Relinking to snapshots on page 391.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select the storage group, click , and select Unlink to open the Unlink dialog box.
4. Select the Snapshot Name.
5. Click Advanced Options to continue setting the advanced options, as described next.
Setting advanced options:
• To force the operation even though one or more volumes may not be in the normal, expected state(s), select Force.
• To specify that the operation is for devices in STAR mode, select Star.
• To force the operation when the operation would normally be rejected, select SymForce.
6. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.
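The link preconditions listed earlier (target of equal or greater size than the source, NoCopy as the default mode, no NoCopy when a Data Domain volume is involved) can be checked up front before attempting the operation. A minimal sketch with assumed argument shapes, not any real Unisphere interface:

```python
def validate_link(source_gb, target_gb, mode="nocopy", data_domain=False):
    """Check link preconditions from the documentation above.

    Sizes are in GB; mode is 'nocopy' (the default) or 'copy';
    data_domain marks a Data Domain source or link target, for which
    NoCopy mode is not permitted. Returns the mode if the link is valid.
    """
    if mode not in ("nocopy", "copy"):
        raise ValueError("mode must be 'nocopy' or 'copy'")
    if target_gb < source_gb:
        raise ValueError("target must be of equal or greater size than the source")
    if mode == "nocopy" and data_domain:
        raise ValueError("NoCopy mode cannot be used with Data Domain volumes")
    return mode
```

For instance, `validate_link(10, 20, "copy")` passes, while a 5 GB target for a 10 GB source is rejected before any link is attempted.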

Restoring snapshots
Before you begin
• To perform this operation, you must be a StorageAdmin.
• The storage system must be running HYPERMAX OS 5977 or higher.
• This procedure explains how to perform this operation from the TimeFinder/SnapVX view. You can also perform this operation from other locations in the interface. Depending on the location, some of the steps may not apply.
This procedure explains how to restore snapshot data back to the original source volumes. TimeFinder SnapVX restore operations are inherently differential, meaning that only tracks that have changed since the snapshot was created are copied back to the source volumes.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select the storage group and click Restore.
4. Select the Snapshot Name and Creation Date (0 is the latest).
5. Click Advanced Options to continue setting the advanced options, as described next.
Setting advanced options:
• To force the operation even though one or more volumes may not be in the normal, expected state(s), select Force.
• To specify that the operation is for devices in STAR mode, select Star.
6. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Setting snapshots to automatically terminate
Before you begin
• To perform this operation, you must be a StorageAdmin.
• The storage system must be running HYPERMAX OS 5977 or higher.
• This procedure explains how to perform this operation from the TimeFinder/SnapVX dashboard. You can also perform this operation from other locations in the interface. Depending on the location, some of the steps may not apply.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.

3. Select the storage group, click , and select Set Time to Live to open the Set Time to Live dialog box.
4. Select the Snapshot Name and Creation Date.
5. Select the number of days and hours you want the snapshot to exist for. Once the time has expired, the snapshot is automatically terminated, provided that it is not linked to any target volumes. If an expired snapshot is linked, the system waits until the last link has been removed before terminating the snapshot. To override this behavior, select the Force option, which allows the system to terminate the snapshot regardless of whether it is linked. To remove the Time to Live attribute, select None.
6. Click Advanced Options to continue setting the advanced options, as described next.
Setting advanced options:
• To force the operation even though one or more volumes may not be in the normal, expected state(s), select Force.
• To specify that the operation is for devices in STAR mode, select Star.
7. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Setting "Secure" status on an existing snapshot
Before you begin
To perform this operation, you must be a StorageAdmin.
The secure snapshot feature requires the HYPERMAX OS 5977 Q1 2017 Service Release or higher.
This procedure explains how to set "Secure" status on an existing snapshot. It can also be performed by clicking on a storage group in the TimeFinder SnapVX view to open the Snapshots view.
Note: Secure snapshots may only be terminated after they expire or by customer-authorized support. Refer to Knowledge Base article 498316 for additional information.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select the storage group, click , and select Set Secure to open the Set Secure dialog box.
4. Select the name of an existing snapshot and then use the Days and Hours drop-down menus to set the expiry time.

5. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Terminating snapshots
Before you begin
• To perform this operation, you must be a StorageAdmin.
• The storage system must be running HYPERMAX OS 5977 or higher.
• The snapshot must not have any links.
• This procedure explains how to perform this operation from the TimeFinder/SnapVX dashboard. You can also perform this operation from other locations in the interface. Depending on the location, some of the steps may not apply.
• If the snapshot is restored, then this action terminates the restore session. If you want to terminate the snapshot, the dialog and action have to be executed again.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select the storage group, click , and select Terminate to open the Terminate dialog box.
4. Select the Snapshot Name.
5. Click Advanced Options to continue setting the advanced options, as described next.
Setting advanced options:
• To force the operation even though one or more volumes may not be in the normal, expected state(s), select Force.
• To specify that the operation is for devices in STAR mode, select Star.
• To force the operation when the operation would normally be rejected, select SymForce.
CAUTION: Use extreme caution with this option. If used when a link copy is in progress or when a restore is in progress, this will cause an incomplete copy, and data on the copy target would not be usable.
6. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience.
For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.
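Manual termination interacts with the automatic expiry behavior described for Time to Live and secure snapshots: an expired snapshot is removed automatically only once it has no linked targets and no active restore session. A small sketch of that decision logic, with illustrative function and argument names rather than any Unisphere API:

```python
from datetime import datetime, timedelta

def expiry_time(created, days, hours):
    """Expiry timestamp for a snapshot whose time to live was set in
    days plus hours (hours granularity is available from Unisphere 8.4)."""
    return created + timedelta(days=days, hours=hours)

def auto_terminates(now, expiry, linked, restore_active):
    # An expired snapshot is terminated automatically only once it has
    # neither linked targets nor an active restore session.
    return now >= expiry and not linked and not restore_active
```

For example, a snapshot created on May 1 at 12:00 with a time to live of 2 days and 12 hours expires at midnight on May 4; if it still has a linked target at that point, automatic termination is deferred until the last link is removed.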

Setting copy mode for snapshots
Before you begin
• To perform this operation, you must be a StorageAdmin.
• The storage system must be running HYPERMAX OS 5977 or higher.
• This procedure explains how to perform this operation from the TimeFinder/SnapVX dashboard. You can also perform this operation from other locations in the interface. Depending on the location, some of the steps may not apply.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select the storage group, click , and select Set Mode to open the Set Mode dialog box.
4. Select the Snapshot Name.
5. Select a new mode:
• Copy — Creates a permanent, full-volume copy of the data on the target volume's SRP. Target volumes linked in this mode will retain data after the links are removed.
• No Copy — Creates a temporary, space-saving snapshot of only the changed data on the snapshot's Storage Resource Pool (SRP). Target volumes linked in this mode will not retain data after the links are removed. This is the default mode.
6. Click Advanced Options to continue setting the advanced options, as described next.
Setting advanced options:
• To force the operation even though one or more volumes may not be in the normal, expected state(s), select Force.
• To specify that the operation is for devices in STAR mode, select Star.
7. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Viewing snapshots
Before you begin
The storage system must be running HYPERMAX OS 5977 or higher.
This procedure explains how to view and manage snapshots of a storage group.
Procedure
1. Select the storage system.

2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select a storage group, click , and click on the number next to Number of Snapshots.
The storage group Snapshots list view allows you to view and manage the snapshots associated with a storage group.
The following properties display:
• Snapshot — Name of the snapshot.
• Creation Time — Date, time, and generation number for the snapshot.
• Linked — Indication whether the snapshot is linked to another storage group. A checkmark indicates that the snapshot is linked.
• Restored — Indication whether the snapshot is restored to the source. A checkmark indicates that the snapshot is restored.
• Time To Live — Time the snapshot has to live.
• Secured — Whether the snapshot is secured or not. A checkmark indicates that the snapshot is secured, a dash indicates that it isn't. "Expired" indicates that the snapshot was secured but is now expired.
The following controls are available:
• — Viewing snapshot details on page 397
• Create — Creating snapshots on page 387
• Modify — Modifying TimeFinder SnapVX snapshots on page 389
• Restore — Restoring snapshots on page 393
• Link — Linking to snapshots on page 390
• Unlink — Unlinking from snapshots on page 392
• Relink — Relinking to snapshots on page 391
• Set Mode — Setting copy mode for snapshots on page 396
• Set Time to Live — Setting snapshots to automatically terminate on page 393
• Set Secure — Setting "Secure" status on an existing snapshot on page 394
• Terminate — Terminating snapshots on page 395

Viewing snapshot details
Before you begin
The storage system must be running HYPERMAX OS 5977 or higher.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select a storage group, click , and click on the number next to Number of Snapshots.
4. Select a snapshot and select to open the snapshot Details view.
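The Secured column described in the Snapshots list view above has three display states: a checkmark, a dash, and the word "Expired". A short sketch of that display logic (purely illustrative, not part of any Unisphere API):

```python
def secured_cell(secure, expired):
    """Render the 'Secured' column as described above: a checkmark for a
    secure snapshot, a dash for a non-secure one, and 'Expired' for a
    snapshot that was secured but has passed its expiry time."""
    if secure and expired:
        return "Expired"
    return "\u2713" if secure else "-"
```

So an active secure snapshot renders as a checkmark, and one whose retention time has passed (but which may still be awaiting termination, for example because it has links) renders as "Expired".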

The Snapshot Details view allows you to view and manage a snapshot.
Properties panel
The following properties display:
• Name — Name of the snapshot.
• Storage Group Name — Name of the storage group.
• Generation — Generation number assigned to the snapshot. This number is used to differentiate between point-in-time copies of the same name and same volumes. Generation numbers are assigned to the snapshots in the order in which they were created (latest = generation 0, previous incrementing by one).
• Creation Time — Date and time the snapshot was created.
• Expiry Date — Date and time the snapshot is set to automatically terminate if either "Secure" or "Time to Live" has been set. If the snapshot is not set to automatically terminate, this field displays N/A.
• State — Snapshot state.
• Secured — Indicates whether the snapshot is secured or not. A checkmark indicates that the snapshot is secured, a dash indicates that it isn't. "Expired" indicates that the snapshot was secured but is now expired.
There are also links to views displaying objects (Source Volumes, Links and SRP) contained in and associated with the snapshot. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking Links opens a view listing the links associated with the snapshot.

Viewing snapshot links
Before you begin
The storage system must be running HYPERMAX OS 5977 or higher.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select a storage group, click , and click on the number next to Number of Snapshots.
4. Select a snapshot and select to open the snapshot Details view.
5. Click on the number next to Links to open the snapshot Links list view.
The snapshot Links list view allows you to view and manage the storage groups containing the linked volumes.
The following properties display:
• Storage Group — Name of the storage group.
• State — Snapshot state.
• Snapshot Timestamp — Date and time the snapshot was created.
• Link Timestamp — Date and time the link was created.
The following controls are available:

• — Displays a properties panel listing the following properties: Source Storage Group and Linked Volumes.
• Unlink — Unlinking from snapshots on page 392
• Relink — Relinking to snapshots on page 391

Viewing snapshot link details
Before you begin
The storage system must be running HYPERMAX OS 5977 or higher.
Procedure
To view snapshot link details:
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select a storage group, click , and click on the number next to Number of Snapshots.
4. Select a snapshot and select to open the snapshot Details view.
5. Click on the number next to Links to open the snapshot Links list view.
6. Select a snapshot and select to open the snapshot links Details view.
The snapshot link Details view allows you to view and manage the linked volume pairs.
The following properties display:
• Source Volume — Name of the source volume.
• Linked Volumes — Name of the linked volume(s).
• State — Snapshot state.
• Flags (FCMD) — Snapshot flags. Possible values are: Failed, Copied, Modified, Defined (FCMD).

Viewing snapshot source volumes
Before you begin
The storage system must be running HYPERMAX OS 5977 or higher.
This view displays SnapVX ICDP snapshots created from the Mainframe product. Management of these snapshots is not supported.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select a storage group, click , and click on the number next to Number of Snapshots.

4. Select a snapshot and open the snapshot Details view.
5. Click on the number next to Source Volumes to open the snapshot Source Volumes list view.
The snapshot Source Volumes view allows you to view and manage the source volumes in a snapshot.
The following properties are displayed:
l Name —Name of volume.
l State —Snapshot state.
l Creation Date —Date and time the snapshot was created.
l Failed —Indication of failure.
l Linked —Indication of link status.
l Restored —Indication of restoration status.
The following controls are available:
l Details — Viewing snapshot source volume details on page 400
l Restore — Restoring snapshots on page 393
l Link — Linking to snapshots on page 390
l Relink — Relinking to snapshots on page 391
l Unlink — Unlinking from snapshots on page 392
l Set Mode — Setting copy mode for snapshots on page 396
l Set Time to Live — Setting snapshots to automatically terminate on page 393
l Set Secure — Setting "Secure" status on an existing snapshot on page 394
l Terminate — Terminating snapshots on page 395
Viewing snapshot source volume details
Before you begin
The storage system must be running HYPERMAX OS 5977 or higher.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select a storage group, and click on the number next to Number of Snapshots.
4. Select a snapshot and open the snapshot Details view.
5. Click on the number next to Source Volumes to open the snapshot Source Volumes list view.
6. Select the volume and click Details to open the snapshot source volume view.

The snapshot source volume Details view allows you to view and manage the source volume in a snapshot.
The following properties display:
l Name —Name of the volume.
l State —Snapshot state.
l Secured —Snapshot secured indication.
l Flags (FLRGFT) —Snapshot flags. Possible values are: Failed, Link, Restore, GCM, Type (FLRGFT).
l Capacity (GB) —Capacity of the volume.
l Tracks —Number of source tracks that the host has not yet overwritten.
l Track Size —Track size in bytes.
l Linked Volumes —Linked volumes.
Viewing snapshot source volume linked volumes
Before you begin
The storage system must be running HYPERMAX OS 5977 or higher.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click on the SnapVX tab to open the TimeFinder/SnapVX view.
3. Select a storage group, and click on the number next to Number of Snapshots.
4. Select a snapshot and open the snapshot Details view.
5. Click on the number next to Source Volumes to open the snapshot Source Volumes list view.
6. Select the volume and click Details to open the snapshot source volume view.
7. Click the number next to Linked Volumes to open the snapshot source volume Link Volumes list view.
The snapshot source volume Link Volumes list view allows you to view and manage the linked volumes for a snapshot source volume.
The following properties display:
l Name —Name of the volume.
l Storage Group —Storage group that contains the target volume.
l State —Snapshot state.
l Snapshot Timestamp —Date and time the snapshot was created.
l Link Timestamp —Date and time the link was created.
The following controls are available:
l Unlink — Unlinking from snapshots on page 392

l Relink — Relinking to snapshots on page 391
RBAC roles for performing local and remote replication actions
The table below details the roles needed to perform TimeFinder SnapVX local and remote replication actions.
Note
Unisphere for PowerMax does not support RBAC Device Group management.

Action | Local Replication | Remote Replication | Device Manager
Protection Wizard - Create SnapVX Snapshot | Yes (a) | - | -
Create Snapshot | Yes (a) | - | -
Edit Snapshot | Yes (b) | - | -
Link Snapshot | Yes (b) (c) (d) | Yes | -
Relink Snapshot | Yes (b) (c) (d) | Yes | -
Restore Snapshot | Yes (b) | Yes | Yes
Set Time To Live | Yes (b) | Yes | -
Set Mode | Yes (b) (d) | Yes | -
Terminate Snapshot | Yes (b) | Yes | -
Unlink Snapshot | Yes (b) (d) | Yes | -

(a) - Set Secure will be blocked for users who only have Local_REP rights.
(b) - The user must have the specified rights on the source volumes.
(c) - The user may only choose existing storage groups to link to. Creating a new storage group requires Storage Admin rights.
(d) - The user must have the specified rights on the link volumes.
Managing remote replication sessions
The SRDF dashboard provides a single place to monitor and manage SRDF sessions on a storage system. This includes device group types R1, R2, and R21.
Unisphere provides the ability to monitor and manage SRDF replication on storage groups directly, without the need to map to a device group.
Unisphere provides the ability to monitor and manage SRDF/Metro from the SRDF dashboard. SRDF/Metro delivers active-active high availability for non-stop data access and workload mobility, within a data center and across metro distance. It provides array clustering for storage systems running HYPERMAX OS 5977 or higher, enabling even more resiliency, agility, and data mobility. SRDF/Metro enables hosts and host clusters to directly access a LUN or storage group on the primary SRDF array and secondary SRDF array (sites A and B). This level of flexibility delivers the highest availability and best agility for rapidly changing business environments.
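The role requirements in the RBAC table above can be thought of as a simple lookup from action to permitted roles. The sketch below is illustrative only; the action keys, role strings, and the mapping itself are assumptions for demonstration, not a Unisphere API. Consult the table for the authoritative rules.

```python
# Illustrative RBAC-style check for SnapVX actions.
# The action names, role strings, and mapping are hypothetical;
# the table in this section is the authoritative source.

REQUIRED_ROLES = {
    "create_snapshot": {"Local Replication"},
    "link_snapshot": {"Local Replication", "Remote Replication"},
    "restore_snapshot": {"Local Replication", "Remote Replication"},
    "terminate_snapshot": {"Local Replication"},
}

def is_allowed(user_roles, action):
    """Return True if the user holds at least one role permitted for the action."""
    allowed = REQUIRED_ROLES.get(action, set())
    return bool(allowed & set(user_roles))

print(is_allowed({"Local Replication"}, "create_snapshot"))   # True
print(is_allowed({"Device Manager"}, "terminate_snapshot"))   # False
```

A real deployment would also have to model the footnoted qualifications (for example, rights on the source or link volumes), which a flat role set cannot express.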

In an SRDF/Metro configuration, SRDF/Metro utilizes the SRDF link between the two sides of the SRDF device pair to ensure consistency of the data on the two sides. If the SRDF device pair becomes Not Ready (NR) on the SRDF link, SRDF/Metro must respond by choosing one side of the SRDF device pair to remain accessible to the hosts, while making the other side of the SRDF device pair inaccessible. There are two options which enable this: Bias and Witness.
The first option, Bias, is a function of the two storage systems running HYPERMAX OS 5977 taking part in the SRDF/Metro configuration and is a required and integral component of the configuration. The second option, Witness, is an optional component of SRDF/Metro which allows a third storage system running Enginuity 5876 or HYPERMAX OS 5977 to act as an external arbitrator to avoid an inconsistent result in cases where the bias functionality alone may not result in continued host availability of a surviving non-biased array.
Creating SRDF connections
This task provides a mechanism to make a connection to a storage array that is currently not visible to the Unisphere server and to bring the connected array into Unisphere as remote.
Before you begin:
The physical connectivity and zoning must be in place before undertaking this task.
To create SRDF connections:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups.
3. Select an SRDF group and select Create SRDF Connection to open the Create SRDF Connection wizard.
4. On the Local page, specify the following information:
l Type a value for the SRDF group label.
l Select a SRDF Group Number from the list of unused RDFG numbers for the local array.
l From the list, select a local port to be used by the new SRDF group.
5. (Optional) Click NEXT.
6. On the Remote page, specify the following information:
l Select Scan to scan the SRDF SAN for the port selected on the Local page.
l Select an Array ID from the list.
l Type a value for the SRDF Group Number. This is not selectable as there is no knowledge of the remote candidate array's used RDFG numbers at this point.
l From the list, select a remote port to be used by the new SRDF group.
7. (Optional) Click NEXT.
8. On the Summary page, verify your selections. To change any of them, click BACK. Note that some changes may require you to make additional changes to your configuration.

9. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
Results
An SRDF group has been created with a single port on each side. After creation, further SRDF group changes can be performed using Unisphere functionality.
Creating SRDF pairs
Before you begin
Creation of an SRDF pair can be blocked when the R2 is larger than the R1. This feature requires that you disable the SYMAPI_RDF_CREATEPAIR_LARGER_R2 option in the SYMAPI options file (enabled by default). For more information on disabling SYMAPI options, refer to the Solutions Enabler Installation Guide.
You can create SRDF pairs containing standard and thin volumes, or thin and diskless volumes. To use this feature, the thin and diskless volumes must be on a storage system running Enginuity OS 5876 or higher, and the standard volume must be on a storage system running Enginuity OS 5876. Meta volumes are supported on storage systems running Enginuity OS 5876.
On storage systems running HYPERMAX OS 5977 or higher, you can specify a RecoverPoint volume as the R1 volume.
The cascaded R1 -> R21 -> R2 configuration of which an SRDF pair can be part depends on the Enginuity/HYPERMAX OS version of each of the devices. The supported combinations are listed in the table below.
Unisphere provides support for creating RDF pairs in a concurrent RDF in a SRDF/Metro configuration, resulting in one Metro RDF mirror and one Async or Adaptive Copy RDF mirror.
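The capacity rule described above (a pair with a larger R2 is only permitted while the SYMAPI_RDF_CREATEPAIR_LARGER_R2 option is enabled) can be sketched as a simple check. The function and parameter names are illustrative assumptions, not part of any Unisphere or Solutions Enabler API:

```python
# Sketch of the R1/R2 size rule above: a larger R2 is allowed only
# while SYMAPI_RDF_CREATEPAIR_LARGER_R2 is enabled (the default).
# Function and parameter names are illustrative only.

def can_create_pair(r1_capacity_gb, r2_capacity_gb, larger_r2_enabled=True):
    """Return True if the candidate pair passes the size check."""
    if r2_capacity_gb > r1_capacity_gb:
        # Blocked when the SYMAPI option has been disabled.
        return larger_r2_enabled
    return True

print(can_create_pair(100, 120))                           # True
print(can_create_pair(100, 120, larger_r2_enabled=False))  # False
```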

Note
The following restrictions apply:
l Adding a Metro RDF mirror when the device is already part of an SRDF/Metro configuration.
l Adding a Metro RDF mirror when the device is already an R2 device.
l Adding a non-Metro RDF R2 mirror to a device that has a Metro RDF mirror.
l Adding a Metro RDF mirror when the non-Metro RDF mirror is in Synchronous mode.
l Adding a non-Metro RDF mirror in Synchronous mode when the device is already part of an SRDF/Metro configuration.
l Operations that make the Metro RDF mirror RW on the RDF link are not allowed if the Metro device is the target of the data copy from the non-Metro RDF mirror.
l Operations that make the non-Metro RDF mirror RW on the RDF link and result in the data copy to the Metro device are not allowed if the Metro RDF mirror is RW on the RDF link.
l The Create Pair - Invalidate R1 operation is not allowed on the non-Metro RDF mirror if it results in a Metro device becoming write-disabled (WD).

R1 | R2 | R21
5977 | 5977 | 5977
5977 | 5876 | 5977
5876 | 5977 | 5876
5977 | 5977 | 5876
5876 | 5977 | 5876
5977 | 5876 | 5977
5876 | 5977 | 5876

If the RDF interaction includes a storage system running HYPERMAX OS 5977 or higher, then the other storage system must be running Enginuity OS 5876 or higher.
It is possible to create an SRDF/Metro device pair when SRDF/Metro pairs exist in the current group or an empty SRDF group exists on the storage device. CKD devices are not supported by SRDF/Metro.
Only CKD storage groups are selectable if the volumes chosen are of that emulation. If the Local or Remote storage system is running Enginuity OS 5876, only Bound TDEVs are supported, and this requires the selection of a thin pool.
Adding to Storage Groups will list SGs which are either empty or not a parent (i.e. child or standalone). SGs which already contain devices must have those devices in the SRDF group which the wizard is being run against, and have the devices of the same SRDF polarity (R1s or R2s).
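The version rule stated above (when one array in the RDF interaction runs HYPERMAX OS 5977 or higher, the other must run Enginuity OS 5876 or higher) can be expressed as a small symmetric check. This is a sketch under the assumption that microcode families are compared numerically; the function name is illustrative only:

```python
# Sketch of the microcode compatibility rule above. Versions are the
# numeric microcode families mentioned in this section (5876, 5977, ...).
# Illustrative only; not a Unisphere or Solutions Enabler API.

def versions_compatible(local_os, remote_os):
    """Return True if the two arrays may take part in the same RDF interaction."""
    for a, b in ((local_os, remote_os), (remote_os, local_os)):
        if a >= 5977 and b < 5876:
            # A 5977+ array cannot pair with anything older than 5876.
            return False
    return True

print(versions_compatible(5977, 5876))  # True
print(versions_compatible(5977, 5773))  # False
```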
This procedure supports adding SRDF pairs to a SRDF/Metro group.
To create an SRDF pair:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups to open the SRDF Groups list view.
3. Select the SRDF group and click Create SRDF Pairs to open the Create Pairs dialog box. This selection will determine the remote storage system.
4. Select the Mirror Type to apply to the local devices.
5. Select the SRDF Mode.
6. Select the Adaptive Copy Mode option (Disk / Write Pending) (storage systems running Enginuity OS 5876 only).
7. Select one of the following options:
l Invalidate R1 - Invalidates the source R1 device(s) so that a full copy can be initiated from the remote mirror.
l Invalidate R2 - Invalidates the target R2 device(s) so that a full copy can be initiated from the remote mirror.
l Establish - Begins a full copy from the source to the target, synchronizing the dynamic SRDF pairs in the device file.
l Restore - Begins a full copy from the target to the source, synchronizing the dynamic SRDF pairs in the device file.
l Format - No data resynchronization is done between source and target dynamic SRDF pairs in the device file after all tracks are cleared on what will become the R1 and R2 side.
8. Optional: Select No WD - Bypasses the check that ensures that the target of the operation is not writable by the host.
9. Click NEXT to go to the Local Volumes page.
10. If you wish to do manual selection for local devices, turn Automatic Selection off.
11. Select the thin pool name.
12. Specify criteria to find the volumes of interest, and choose volumes.
13. Click the Add to Storage Group checkbox and select a storage group.
14. Click NEXT to go to the Remote Volumes page.
15. If you wish to do manual selection for remote devices, turn Automatic Selection off.
16. Select the thin pool name.
17. Click the Add to Storage Group checkbox and select a storage group.
18. Click NEXT to go to the Summary page.
19. Review the changes.
20.
Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
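The create-pair options in step 7 above differ mainly in the direction of the initial full copy, if any. A minimal lookup summarizing that behavior (the dictionary and its wording are illustrative, paraphrasing the option descriptions above):

```python
# Summary of the initial-copy behavior of the create-pair options above.
# Illustrative paraphrase of the option descriptions; not an API.

INITIAL_COPY = {
    "Invalidate R1": "R1 invalidated; full copy from the remote mirror can then be initiated",
    "Invalidate R2": "R2 invalidated; full copy from the remote mirror can then be initiated",
    "Establish": "full copy source -> target",
    "Restore": "full copy target -> source",
    "Format": "no resynchronization; tracks cleared on both sides",
}

for option, behavior in INITIAL_COPY.items():
    print(f"{option}: {behavior}")
```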

Deleting SRDF pairs
Deleting SRDF pairs cancels the dynamic SRDF pairing by removing the pairing information from the storage system and converting the volumes from SRDF to regular volumes. This operation can be performed on a storage group, an SRDF/Metro group, or a device group. To delete SRDF pairs from the SRDF List Volumes view, refer to Deleting SRDF pairs from the SRDF List Volumes View on page 421.
Half deleting SRDF pairs cancels the dynamic SRDF pairing information for one side (R1s or R2s) of the specified volume pairs and converts the volumes from RDF to regular volumes. This operation can only be performed on a device group.
If you select all pairs for a delete pair action, then the option to remove the devices from the device group, or the local or remote Storage Group, is not displayed, as it would render the device group, storage group, or SRDF/Metro group unmanageable.
Before you begin:
SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.
You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.
This procedure supports the deletion of SRDF pairs from a SRDF/Metro group.
To delete SRDF pairs:
Procedure
1. Select the storage system.
2. Select Data Protection > SRDF.
3. Click Device Groups, SRDF/Metro or Storage Groups.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:
l Group level:
n Select a group and select Delete Pair.
n Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
n Select the Half Delete option if deleting one side of the volume pair.
n Optional: Select Remove from local Storage Groups and Remove from remote Storage Groups if the pair deletion results in devices that are no longer SRDF protected, and results in the related device groups becoming invalid.
n Only one side of the RDF device pairs that are removed from the SRDF/Metro session will remain host-accessible when the operation completes. The Keep R1 or Keep R2 option is used to specify the side that should remain host-accessible.
n Click Advanced Options. Select the advanced SRDF session options and click OK.

n Do one of the following:
– Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
– Expand Add to Job List, and click Run Now to perform the operation now.
l Pair level:
n Select a group and open it.
n Select one or more pairs and click Delete Pair.
n Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (only applicable for device groups).
n Select the Half Delete option if deleting one side of the volume pair.
n Optional: Deselect the selected (by default) Remove from local Storage Groups, Remove from remote Storage Groups, and Remove from Device Groups check boxes. If you deselect the selected defaults, you will be warned if the pair deletion results in devices that are no longer SRDF protected, and results in the related device groups becoming invalid. This option is not displayed if all pairs are selected.
n Click Advanced Options. Select the advanced SRDF session options and click OK.
n Do one of the following:
– Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
– Expand Add to Job List, and click Run Now to perform the operation now.
Moving SRDF pairs
This procedure explains how to move the SRDF pair from one SRDF group to another. The move type can be a full move or a half move. A half move specifies to move only the local half of the RDF pair. When using this action on an RDF 1 type pair, only the R1 volume is moved. When using this action on an RDF 2 type pair, only the R2 volume is moved.
This procedure supports moving SRDF pairs to a SRDF/Metro group.
To move SRDF pairs:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Select a group and select Move.
4. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (only applicable for device groups).
5. Select New SRDF Group.

6. Select Full Move or Half Move.
7. Optional: Select Use Consistency Exempt. This allows volumes to be added, removed, or suspended without affecting the state of the SRDF/A session.
8. Only one side of the RDF device pairs that are moved from the SRDF/Metro session will remain host-accessible when the operation completes. The Keep R1 or Keep R2 option is used to specify the side that should remain host-accessible.
9. Click Advanced Options. Select the advanced SRDF session options and click OK.
10. Do one of the following:
l Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
Setting SRDF mode
This procedure explains how to set the mode of operation for an SRDF configuration. SRDF modes determine the following:
l How R1 volumes are remotely mirrored to R2 volumes across the SRDF links
l How I/Os are processed in an SRDF solution
l When acknowledgments are returned to the production host that issued a write I/O command
Before you begin:
SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.
The Adaptive Copy Mode value Enabled: WP Mode is not available if the R1 mirror of an SRDF pair is on a storage system running HYPERMAX OS 5977 or higher.
Setting SRDF devices in the non-Metro SRDF mirror to operate in Synchronous mode is not allowed.
You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.
To set SRDF mode:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
l Group level:
n Select a group and select Set Mode.

n Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
n Select SRDF Mode, Adaptive Copy Mode, and AC Skew to set the type of SRDF session modes.
n Select Use Consistent to set consistent transition from asynchronous to synchronous mode.
l Pair level:
n Select a group and click the number next to SRDF Pairs.
n Select one or more pairs and select Set Mode.
n Select SRDF Mode, Adaptive Copy Mode, and AC Skew to set the type of SRDF session modes.
n Select Use Consistent to set consistent transition from asynchronous to synchronous mode.
4. Click Advanced Options. Select the advanced SRDF session options and click OK.
5. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
Viewing SRDF volume pairs
This procedure explains how to view and manage the volume pairs in a SRDF group.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF.
3. Select a device group from the list and open the SRDF Pair List view.
The following properties display:
Show Group Details: Displays the following device group properties:
Group Valid —Indicates if device group is valid or invalid for SRDF management.
Application ID —Application name managing SRDF actions.
Vendor ID —Vendor name.
Group Creation Time —Group creation time stamp.
Group Modify Time —Group modification time stamp.
Remote Symmetrix —Remote storage system ID.
Volume Pacing Exempt State —Indicates if volume pacing exempt is enabled.
Write Pacing Exempt State —Indicates if write pacing exempt is enabled.

Effective Write Pacing Exempt State —Indicates if effective write pacing exempt is enabled.
Local tab: Displays the following local SRDF link properties:
Source Volume —Source volume ID.
Source LDev —Source logical volume ID.
Group —SRDF group ID.
Remote Symmetrix —Remote storage system ID.
Target Volume —Target volume ID.
State —State of the RDF volume pairs.
Volume State —State of the source volume.
Remote Volume State —State of the remote volume.
SRDF Mode —SRDF copy type.
Local R1 Invalid —Number of invalid R1 tracks on the source volume.
Local R2 Invalid —Number of invalid R2 tracks on the source volume.
Remote R1 Invalid —Number of invalid R1 tracks on the target volume.
Remote R2 Invalid —Number of invalid R2 tracks on the target volume.
Hop2 tab: Displays the following remote SRDF link properties:
Source LDev —Source logical volume ID.
Concurrent Volume —Concurrent volume ID.
SRDF Group —SRDF group ID.
Remote Symmetrix —Remote storage system ID.
Target Volume —Target volume ID.
State —State of the RDF volume pairs.
Volume State —State of the source volume.
Remote Volume State —State of the remote volume.
The following controls are available:
Details — Viewing SRDF volume pair details on page 412
Establish — Establishing SRDF pairs on page 421
Split — Splitting SRDF pairs on page 436
Suspend — Suspending SRDF pairs on page 436
Restore — Restoring SRDF pairs on page 433
Resume — Resuming SRDF links on page 429
Failover — Failing over on page 422
Failback — Failing back on page 423
Set SRDF/A — Setting SRDF/A controls to prevent cache overflow on page 431
Invalidate — Invalidating R1/R2 volumes on page 424
Ready — Making R1/R2 volumes ready on page 425

Not Ready — Making R1/R2 volumes not ready on page 426
R1 Update — Updating R1 volumes on page 438
RW Enable — Read/write enabling R1/R2 volumes on page 428
Write Disable — Read/write disabling R1/R2 volumes on page 429
RW Disable R2 — Read/write disabling R2 volumes on page 427
Refresh — Refreshing R1 or R2 volumes on page 430
Set Mode — Setting SRDF mode on page 409
Viewing SRDF volume pair details
Procedure
1. Select the storage system.
2. Select Data Protection > SRDF to open the SRDF dashboard.
3. Select a device group from the list and open the SRDF Pair List view.
4. On the Local tab, select the pair and open its details view.
The following properties display:
Device Group —Device group ID.
Source Volume —Source volume ID.
Source LDev —Source logical device ID.
SRDF Group —SRDF Group ID.
Remote Symmetrix —Remote storage system ID.
Remote SRDF Group —Remote SRDF Group ID.
Target Volume —Target volume ID.
Pair State —Indicates volume pair state.
SRDF Mode —SRDF copy type.
Adaptive Copy Mode —Indicates if adaptive copy mode is enabled.
Consistency State —Indicates consistency state.
Consistency Exempt —Indicates if consistency is exempt.
Link Status —Indicates link state.
SRDF Domino —Indicates SRDF Domino state.
SRDF Hop2 Group —SRDF Hop2 Group ID.
Source Volume Invalid R1 Track Count —Number of invalid R1 tracks on source volume.
Source Volume Invalid R2 Track Count —Number of invalid R2 tracks on source volume.
Source Volume SRDF State —Indicates source volume SRDF state.
Source Volume SRDF Type —Indicates source volume SRDF type.
Source Volume Track Size —Source volume track size.
Target Volume Invalid R1 Track Count —Number of invalid R1 tracks on target volume.

Target Volume Invalid R2 Track Count —Number of invalid R2 tracks on target volume.
Target Volume SRDF State —Indicates target volume SRDF state.
Target Volume Track Size —Target volume track size.
SRDF/A Pacing Capable —Indicates if the SRDF pair allows write pacing capability.
Configured Group-level Exempt State —Indicates if group-level write pacing exemption capability is enabled or disabled.
Effective Group-level Exempt State —Indicates if effective group-level write pacing exemption capability is enabled or disabled.
Group Level Pacing State —Indicates if group level write pacing is enabled or disabled.
Volume Level Pacing State —Indicates if volume level write pacing is enabled or disabled.
SRDF/A Consistency Protection —Indicates SRDF/A consistency protection state.
SRDF/A Average Cycle Time —Average cycle time (seconds) configured for this session.
SRDF/A Minimum Cycle Time —Minimum cycle time (seconds) configured for this session.
SRDF/A Cycle Number —SRDF/A cycle number.
SRDF/A DSE Autostart —Indicates DSE autostart state.
SRDF/A Session Number —SRDF/A session number.
SRDF/A Session Priority —Priority used to determine which SRDF/A sessions to drop if cache becomes full. Values range from 1 to 64, with 1 being the highest priority (last to be dropped).
SRDF/A Duration Of Last Cycle —The cycle time (in secs) of the most recently completed cycle. It should be noted that in a regular case the cycles switch every ~30 sec; however, in most cases the collection interval is in minutes, which means some cycle times will be skipped. This is an important counter to look at to figure out if SRDF/A is working as expected.
SRDF/A Flags —RDFA Flags:
(C)onsistency: X = Enabled, . = Disabled, - = N/A
(S)tatus: A = Active, I = Inactive, - = N/A
(R)DFA Mode: S = Single-session, M = MSC, - = N/A
(M)sc Cleanup: C = MSC Cleanup required, - = N/A
(T)ransmit Idle: X = Enabled, . = Disabled, - = N/A
(D)SE Status: A = Active, I = Inactive, - = N/A
DSE (A)utostart: X = Enabled, . = Disabled, - = N/A
SRDF/A Uncommitted Track Counts —Number of uncommitted tracks.
SRDF/A Number of Volumes in Session —Number of volumes in session.
SRDF/A Session Uncommitted Track Counts —Number of uncommitted session tracks.

SRDF/A R1 DSE Used Track Count —Number of tracks used for R1 DSE.
SRDF/A R1 Cache In Use Percent —Percent of R1 cache used.
SRDF/A R1 Shared Track Count —Number of R1 shared tracks.
SRDF/A R1 to R2 Lag Time —Time that R2 is behind R1 (RPO). This is calculated as the last cycle time plus the time since last switch. In a regular case, the cycles switch every ~30 sec and the samples are taken every few minutes, therefore this counter may not show very significant data; however, when cycles elongate beyond the sample time, this counter can help indicate an estimate of the RPO.
SRDF/A R2 DSE Used Track Count —Number of tracks used for R2 DSE.
SRDF/A R2 Cache In Use Percent —Percent of R2 cache used.
SRDF/A Session Minimum Cycle Time —Minimum cycle time (seconds) configured for this session.
SRDF/A Transmit Idle State —Indicates SRDF/A transmit idle state.
SRDF/A Transmit Idle Time —Time the transmit cycle has been idle.
Suspended State —Suspended state.
Sqar Mode —Indicates if SRDF pair is in a SQAR configuration.
There are also links to views for objects contained in and associated with the SRDF group. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking the number next to SRDF Group will open a view listing the volumes contained in the SRDF group.
Viewing SRDF volume pair details
This procedure explains how to view an SRDF pair's SRDF group.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups.
3. Select a device group from the list and open the SRDF Pair List view.
4. On the Local tab, select the pair and open its details view.
5. Click the number next to SRDF Group to open the pair's SRDF Group view.
The following properties display:
Group —RDF group number.
SRDF Group Label —RDF group label.
Remote SRDF Group —Remote SRDF Group ID.
Remote Symmetrix —Remote Symmetrix ID.
SRDF Group Flags —SRDF group flags.
Volume Count —Number of volumes in the group.
Copy Jobs —Maximum number of RDF copy jobs per RDF group.
Link Limbo (sec) —Number of seconds (0-10) for the Symmetrix system to continue checking the local RDF link status.
SRDF/A Flags —RDFA Flags:

(C)onsistency: X = Enabled, . = Disabled, - = N/A
(S)tatus: A = Active, I = Inactive, - = N/A
(R)DFA Mode: S = Single-session, M = MSC, - = N/A
(M)sc Cleanup: C = MSC Cleanup required, - = N/A
(T)ransmit Idle: X = Enabled, . = Disabled, - = N/A
(D)SE Status: A = Active, I = Inactive, - = N/A
DSE (A)utostart: X = Enabled, . = Disabled, - = N/A
Minimum Cycle Time —Minimum cycle time (seconds) configured for this session.
Session Priority —Priority used to determine which SRDF/A sessions to drop if cache becomes full. Values range from 1 to 64, with 1 being the highest priority (last to be dropped).
Transmit Idle Time —Whether SRDF/A Transmit Idle state is active for the RDF group.
Viewing SRDF protected storage group pairs
The SRDF SG pair list displays a notification if a capacity mismatch exists between R1 and R2 devices. The mismatch can be R1 > R2 or R1 < R2.
To view SRDF group volumes, refer to Viewing SRDF group volumes on page 454.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups.
3. Click SRDF.
4. Select a storage group instance and open the Storage Group pair list view.
5. Click the number next to SRDF pairs to open the SRDF pair list view.
Two tabs are displayed: Local and Hop2. The non-metro leg of a concurrent RDF pair is viewable in the SRDF/Metro view and the SRDF/Metro leg of the concurrent RDF pair is viewable in the standard RDF view.
The following properties display in the Local tab:
Source Volume —The name of the source volume.
Source Type —The source type of the source volume.
SRDF Group —RDF group number.
Target Volume —The target volume ID.
State —The state of the storage group pair. Possible values are:

- Consistent
- Failed Over
- Invalid
- Partitioned
- R1 Updated
- R1 Update in progress
- Suspended
- Synchronization in progress
- Synchronized
- Transmit Idle

If Unisphere detects an asynchronous state change event for an SRDF group from Solutions Enabler, it updates the Unisphere state for the SRDF group and its related SRDF device groups and SRDF storage groups. The Storage Group list view must be refreshed so that the latest state is reflected.

The following properties display in the Hop2 tab:

Concurrent Volume — The name of the concurrent volume.
Symmetrix ID — Storage system ID.
SRDF Group — RDF group number.
Remote Symmetrix — Remote Symmetrix ID.
Target Volume — The target volume ID.
State — The state of the storage group pair. Possible values are:

- Consistent
- Failed Over
- Invalid
- Partitioned
- R1 Updated
- R1 Update in progress
- Suspended
- Synchronization in progress
- Synchronized
- Transmit Idle

If Unisphere detects an asynchronous state change event for an SRDF group from Solutions Enabler, it updates the Unisphere state for the SRDF group and its related SRDF device groups and SRDF storage groups. The Storage Group list view must be refreshed so that the latest state is reflected.

SRDF Mode — The SRDF copy mode.

The following controls are available, depending on the operating environment:

Note: You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

Note: The dialogs associated with the controls listed below do not display the Use 2nd Hop option if the hop2 is SRDF/Metro.

Note: In the event of a concurrent SRDF SG where one leg is SRDF/Metro and one is not SRDF/Metro, the action launching the dialog (Metro or non-Metro) preselects the correct RDFG in the combination box and disables edits on it. The selected RDFG is the one for the SRDF mode of the launching SG.

- Viewing SRDF protected storage group pair properties on page 417
- Establish — Establishing SRDF pairs on page 421
- Split — Splitting SRDF pairs on page 436
- Suspend — Suspending SRDF pairs on page 436
- Restore — Restoring SRDF pairs on page 433
- Resume — Resuming SRDF links on page 429
- Delete Pair — Deleting SRDF pairs on page 407
- Move — Moving SRDF pairs on page 408
- Set Mode — Setting SRDF mode on page 409
- Set Volume Attributes > Invalidate — Invalidating R1/R2 volumes on page 424
- Set Volume Attributes > Ready — Making R1/R2 volumes ready on page 425
- Set Volume Attributes > R1 Update — Updating R1 volumes on page 438
- Set Volume Attributes > RW Enable — Read/write enabling R1/R2 volumes on page 428
- Set Volume Attributes > Write Disable — Read/write disabling R1/R2 volumes on page 429
- Set Volume Attributes > RW Disable R2 — Read/write disabling R2 volumes on page 427
- Set Volume Attributes > Refresh — Refreshing R1 or R2 volumes on page 430
- Set SRDF/A — Setting SRDF/A controls to prevent cache overflow on page 431

Viewing SRDF protected storage group pair properties

Procedure
1. Select the storage system.

2. Select DATA PROTECTION > Storage Groups.
3. Click SRDF to open the storage group list view.
4. Select a storage group and click the icon.
5. Click the number next to SRDF pairs to open the SRDF pair list view.
6. Select a pair and click the icon to open the SRDF pair list properties panel.

The following properties display, depending on the operating environment:

Storage Group — The storage group ID.
Local Volume — The local volume ID.
SRDF Group Number — SRDF group number.
Remote SRDF Group Number — Remote SRDF group number.
Remote Volume — The remote volume ID.
Pair State — The state of the SRDF pair.
SRDF Mode — The SRDF mode.
Adaptive Copy Mode — The adaptive copy mode.
Adaptive Copy Skew — The adaptive copy skew.
Consistency State — The consistency state.
Consistency Exempt — Indicates consistency exempt status.
Link Status — Indicates link state.
Link Domino — Indicates link Domino state.
Local Volume Invalid R1 Track Count — Indicates local volume invalid R1 track count.
Local Volume Invalid R2 Track Count — Indicates local volume invalid R2 track count.
Local Volume SRDF State — Indicates SRDF state of the local volume.
Local Volume SRDF Type — Indicates SRDF type of the local volume.
Local Volume Remote Write Pacing Track Count — Indicates local volume remote write pacing track count.
Local Volume Track Size — Indicates track size of the local volume.
Remote Volume Invalid R1 Track Count — Indicates remote volume invalid R1 track count.
Remote Volume Invalid R2 Track Count — Indicates remote volume invalid R2 track count.
Remote Volume SRDF State — Indicates SRDF state of the remote volume.
Remote Volume Remote Write Pacing Track Count — Indicates remote volume remote write pacing track count.
Remote Volume Track Size — Indicates track size of the remote volume.
SRDF/A Pacing Capable — Indicates SRDF/A pacing capability.
Configured Group Level Exempt State — Configured group level exempt state indication.
Effective Group Level Exempt State — Effective group level exempt state indication.

Volume Level Pacing State — Volume level pacing state indication.
SRDF/A Consistency Protection — SRDF/A consistency protection indication.
SRDF/A Average Cycle Time — SRDF/A average cycle time.
SRDF/A Minimum Cycle Time — SRDF/A minimum cycle time.
SRDF/A Cycle Number — SRDF/A cycle number.
SRDF/A Session Number — SRDF/A session number.
Transmit Queue Depth of R1 Side — Transmit queue depth of the R1 side.
SRDF/A Uncommitted Tracks Count — SRDF/A uncommitted tracks count.
SRDF/A Number of Volumes in Session — SRDF/A number of volumes in session.
SRDF/A Session Uncommitted Tracks Count — SRDF/A session uncommitted tracks count.
SRDF/A R1 DSE Used Track Count — SRDF/A R1 DSE used track count.
SRDF/A R1 Cache In Use Percent — SRDF/A R1 cache in use percent.
SRDF/A R1 Shared Track Count — SRDF/A R1 shared track count.
SRDF/A R1 to R2 Lag Time — SRDF/A R1 to R2 lag time.
SRDF/A R2 DSE Used Track Count — SRDF/A R2 DSE used track count.
SRDF/A R2 Cache In Use Percent — SRDF/A R2 cache in use percent.
SRDF/A Session Minimum Cycle Time — SRDF/A session minimum cycle time.
SRDF/A Transmit Idle State — SRDF/A transmit idle state.
SRDF/A Transmit Idle Time — SRDF/A transmit idle time.
Suspended State — Suspended state.
SQAR Mode — SQAR mode status (enabled or disabled).

There are also links to views displaying objects contained in and associated with the SRDF pair. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking the number next to SRDF Group Number opens a view listing the related SRDF groups.

Deleting SRDF pairs

Deleting SRDF pairs cancels the dynamic SRDF pairing by removing the pairing information from the storage system and converting the volumes from SRDF to regular volumes. This operation can be performed on a storage group, an SRDF/Metro group, or a device group. To delete SRDF pairs from the SRDF List Volumes view, refer to Deleting SRDF pairs from the SRDF List Volumes View on page 421.
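The bookkeeping that a delete pair performs can be sketched as a toy model. The function and its volume dictionaries below are hypothetical illustrations only (the real operation is executed by the storage system, not by client code): a full delete converts both sides of the pair back to regular volumes, while a half delete converts only the specified side.

```python
# Toy model of SRDF delete-pair bookkeeping (illustrative only; not a
# Unisphere or Solutions Enabler API).

def delete_pair(r1, r2, half_delete=False, half_side="R1"):
    """Cancel dynamic SRDF pairing information for a volume pair.

    r1 and r2 are dicts with a 'type' key ('R1', 'R2', or 'regular').
    A full delete converts both volumes to regular volumes; a half
    delete cancels the pairing information for one side only.
    """
    if half_delete:
        side = r1 if half_side == "R1" else r2
        side["type"] = "regular"   # only this side loses its SRDF pairing
    else:
        r1["type"] = "regular"     # both sides revert to regular volumes
        r2["type"] = "regular"
    return r1, r2
```

For example, a half delete with `half_side="R2"` leaves the R1 volume still typed as an SRDF source while its former partner becomes a regular volume.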
Half deleting SRDF pairs cancels the dynamic SRDF pairing information for one side (R1s or R2s) of the specified volume pairs and converts the volumes from RDF to regular volumes. This operation can only be performed on a device group.

If you select all pairs for a delete pair action, the option to remove the devices from the device group, or from the local or remote storage group, is not displayed, so that the device group, storage group, or SRDF/Metro group is not rendered unmanageable.

Before you begin:

SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

This procedure supports the deletion of SRDF pairs from an SRDF/Metro group.

To delete SRDF pairs:

Procedure
1. Select the storage system.
2. Select Data Protection > SRDF.
3. Click Storage Groups, Device Groups, or SRDF/Metro.
4. Do the following, depending on whether you want to perform the operation at the group level or pair level:
- Group level:
  - Select a group, click more, and select Delete Pair.
  - Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
  - Select the Half Delete option if deleting one side of the volume pair.
  - Optional: Select Remove from local Storage Groups and Remove from remote Storage Groups if the pair deletion results in devices that are no longer SRDF protected, and results in the related device groups becoming invalid.
  - Only one side of the RDF device pairs that are removed from the SRDF/Metro session will remain host-accessible when the operation completes. The Keep R1 or Keep R2 option is used to specify the side that should remain host-accessible.
  - Click Advanced Options. Select the advanced SRDF session options and click OK.
  - Do one of the following:
    - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
    - Expand Add to Job List, and click Run Now to perform the operation now.
- Pair level:
  - Select a group and click the icon.
  - Select one or more pairs and click Delete Pair.
  - Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (only applicable for device groups).
  - Select the Half Delete option if deleting one side of the volume pair.

  - Optional: Deselect the selected (by default) Remove from local Storage Groups, Remove from remote Storage Groups, and Remove from Device Groups check boxes. If you deselect the selected defaults, you will be warned if the pair deletion results in devices that are no longer SRDF protected, and results in the related device groups becoming invalid. This option is not displayed if all pairs are selected.
  - Click Advanced Options. Select the advanced SRDF session options and click OK.
  - Do one of the following:
    - Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
    - Expand Add to Job List, and click Run Now to perform the operation now.

Deleting SRDF pairs from the SRDF List Volumes View

To delete SRDF pairs from the SRDF List Volumes view:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups.
3. Select a group and click the icon.
4. Click the number next to Volumes.
5. Select a volume, click more, and select Delete Pair to open the Delete Pairs dialog box.
6. Select the Half Delete option if deleting one side of the volume pair.
7. Optional: Deselect the selected (by default) Remove from Local Storage Groups, Remove from Remote Storage Groups, and Remove from Device Groups check boxes. If you deselect the selected defaults, you will be warned if the pair deletion results in devices that are no longer SRDF protected, and results in the related device groups becoming invalid. This option is not displayed if all pairs are selected.
8. Optional: Select Use Force.
9. Click OK.

Establishing SRDF pairs

Before you begin

SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

You can run an establish operation on a cascaded R1 -> R21 -> R2 configuration if any of the storage systems in the cascaded configuration is running HYPERMAX OS Q1 2015 SR or later.
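The Full and Incremental session types offered when establishing pairs differ in how much data is copied from R1 to R2. As a rough sketch of general SRDF resynchronization behavior (not a Unisphere API; the function name and track counts are hypothetical), a full establish copies every source track, while an incremental establish copies only the tracks currently marked invalid (owed) to the target:

```python
# Simplified model of the data copied by a Full vs. Incremental establish.
# Illustrative only; the real resynchronization is performed by the array.

def tracks_to_synchronize(total_tracks, invalid_tracks, session_type="Incremental"):
    """Estimate how many source (R1) tracks an establish copies to the R2.

    Full copies the entire source volume; Incremental copies only the
    tracks marked invalid against the target.
    """
    if session_type == "Full":
        return total_tracks
    if session_type == "Incremental":
        return invalid_tracks
    raise ValueError("session_type must be 'Full' or 'Incremental'")
```

For a 1,000-track volume with 25 invalid tracks, an incremental establish in this model copies 25 tracks where a full establish copies all 1,000.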

To establish SRDF pairs:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
- Group level:
  a. Select a group and click Establish.
  b. Select Full or Incremental session type.
  c. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
  d. Select Bias or Witness (only applicable for SRDF/Metro). Witness, if available, is the default option. If Witness is not available, Bias is set by the system and the radio buttons are disabled.
- Pair level:
  a. Select a group, click the icon, and click the number next to SRDF Pairs.
  b. Select one or more pairs and click Establish.
  c. Select Full or Incremental establish type.
  d. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
  e. Select Bias or Witness (only applicable for SRDF/Metro). Witness, if available, is the default option. If Witness is not available, Bias is set by the system and the radio buttons are disabled.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Failing over

Before you begin

If the target (R2) volume is on a storage system running HYPERMAX OS 5977 or higher, and the mode of the source (R1) volume is Adaptive Copy Write Pending, SRDF will set the mode to Adaptive Copy Disk.

As a result of a failover (with establish or restore) operation, a cascaded R1 -> R21 -> R2 configuration can be created if any of the storage systems in the cascaded configuration is running HYPERMAX OS Q1 2015 SR or later.

In a period of scheduled downtime for maintenance, or after a serious system problem which has rendered either the host or the storage system containing the source (R1) volumes unreachable, no read/write operations can occur on the source (R1) volumes.

In this situation, the fail over operation should be initiated to make the target (R2) volumes read/write enabled to their local hosts. The fail over operation is not allowed on the non-Metro SRDF mirror if it results in a Metro device becoming write-disabled (WD).

To initiate a failover:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
- Group level:
  a. Select a group, click more, and select Failover.
  b. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (only applicable for device groups).
  c. Select the fail over type.
- Pair level:
  a. Select a group, click the icon, and click the number next to SRDF Pairs.
  b. Select one or more pairs, click more, and select Failover.
  c. Select the fail over type.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Failing back

Before you begin

SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.

A fail back operation is performed when you are ready to resume normal SRDF operations by initiating read/write operations on the source (R1) volumes, and stopping read/write operations on the target (R2) volumes. The target (R2) volumes become read-only to their local hosts while the source (R1) volumes are read/write enabled to their local hosts.

To initiate a failback:

Procedure
1. Select the storage system.

2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
- Group level:
  a. Select a group, click more, and select Failback.
  b. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (only applicable for device groups).
  c. Select the fail back type.
- Pair level:
  a. Select a group, click the icon, and click the number next to SRDF Pairs.
  b. Select one or more pairs, click more, and select Failback.
  c. Select the fail back type.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Invalidating R1/R2 volumes

Before you begin

SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

This procedure explains how to run internal checks to see if a volume swap is valid. To invoke this operation, the RDF pairs at the source must already be Suspended and Write Disabled or Not Ready.

To invalidate R1/R2 volumes:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:

- Group level:
  a. Select a group, click more, and select Set Volume Attributes > Invalidate.
  b. Select R1 or R2 volume type.
  c. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
- Pair level:
  a. Select a group, click the icon, and click the number next to SRDF Pairs.
  b. Select one or more pairs, click more, and select Set Volume Attributes > Invalidate.
  c. Select side R1 or R2.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Making R1/R2 volumes ready

Before you begin

SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

To make R1 or R2 volumes ready to their local hosts:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
- Group level:
  a. Select a group, click more, and select Set Volume Attributes > Ready.
  b. Select side R1 or R2.
  c. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).

- Pair level:
  a. Select a group, click the icon, and click the number next to SRDF Pairs.
  b. Select one or more pairs, click more, and select Set Volume Attributes > Ready.
  c. Select R1 or R2 volume type.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Making R1/R2 volumes not ready

Before you begin

SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

This procedure explains how to set the source (R1) or the target (R2) volumes not ready to the local host.

To make R1/R2 volumes not ready:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
- Group level:
  a. Select a group, click more, and select Set Volume Attributes > Not Ready.
  b. Select side R1 or R2.
  c. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
- Pair level:
  a. Select a group, click the icon, and click the number next to SRDF Pairs.

  b. Select one or more pairs, click more, and select Set Volume Attributes > Not Ready.
  c. Select R1 or R2 volume type.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Read/write disabling R2 volumes

Before you begin

SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

To read/write disable R2 volumes:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
- Group level:
  a. Select a group, click more, and select Set Volume Attributes > RW Disable R2.
  b. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
- Pair level:
  a. Select a group, click the icon, and click the number next to SRDF Pairs.
  b. Select one or more pairs, click more, and select Set Volume Attributes > RW Disable R2.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:

- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Read/write enabling R1/R2 volumes

Before you begin

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

This procedure explains how to write enable the R1 (source) or R2 (target) volumes to their local hosts.

To read/write enable R1/R2 volumes:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
- Group level:
  a. Select a group, click more, and select RW Enable.
  b. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
  c. Select RW Enable R1s or RW Enable R2s volume type.
- Pair level:
  a. Select a group, click the icon, and click the number next to SRDF Pairs.
  b. Select one or more pairs, click more, and select RW Enable.
  c. Select R1 or R2 volume type.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.
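The Set Volume Attributes actions in the preceding sections all change how the selected R1 or R2 volumes present to their local hosts. The lookup table below is a conceptual summary only; the state strings are illustrative simplifications, not Unisphere output:

```python
# Illustrative mapping from Set Volume Attributes menu actions to the
# (simplified) host-access state they leave on the selected R1/R2 volumes.
VOLUME_ATTRIBUTE_ACTIONS = {
    "Ready": "ready to local host",
    "Not Ready": "not ready to local host",
    "RW Enable": "read/write enabled to local host",
    "Write Disable": "write disabled (WD) to local host",
    "RW Disable R2": "reads and writes disabled on the R2 side",
}

def host_access_after(action):
    """Return the simplified host-access state a given action results in."""
    return VOLUME_ATTRIBUTE_ACTIONS[action]
```

For example, in this model Write Disable leaves a volume readable but not writable, while Not Ready removes it from host access entirely.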

Resuming SRDF links

Before you begin

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

This procedure explains how to resume I/O traffic on the SRDF links for all remotely mirrored SRDF pairs in the group.

To resume SRDF links:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Select a group, click more, and select Resume.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Read/write disabling R1/R2 volumes

Before you begin

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

This procedure explains how to write disable source (R1) volumes or target (R2) volumes to their local hosts. The Write Disable R1 operation is not allowed on the non-Metro RDF mirror if it results in a Metro device becoming write-disabled (WD).

To write disable R1/R2 volumes:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups and click the SRDF tab.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
- Group level:

  - Select a group, click more, and click Write Disable.
  - Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
  - Select Write Disable R1s or Write Disable R2s volume type.
- Pair level:
  - Select a group and click the icon.
  - Select one or more pairs, click more, and select Write Disable.
  - Select R1 or R2 volume type.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Refreshing R1 or R2 volumes

Before you begin

To invoke this operation, the SRDF pair(s) must be in one of the following states:
- Suspended and Write Disabled at the source
- Suspended and Not Ready at the source
- Failed Over with the -force option specified

This operation is rejected if the target has invalid local (R2) tracks.

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

The Refresh R1 action marks any changed tracks on the source (R1) volume to be refreshed from the target (R2) side. The Refresh R2 action marks any changed tracks on the target (R2) volume to be refreshed from the source (R1) side.

To refresh volumes:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:

- Group level:
  a. Select a group, click more, and select Set Volume Attributes > Refresh.
  b. Select R1 or R2 volume type.
  c. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
- Pair level:
  This action can also be run from the pair level details view. Select a pair and click the icon.
  a. Select a group, click the icon, and click the number next to SRDF Pairs.
  b. Select one or more pairs, click more, and select Set Volume Attributes > Refresh.
  c. Select R1 or R2 volume type.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
- Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
- Expand Add to Job List, and click Run Now to perform the operation now.

Setting SRDF/A controls to prevent cache overflow

Before you begin

SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view, and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest, then you need to go to the view of the relevant storage system.

This procedure explains how to activate or deactivate SRDF/A control actions that detect cache overflow conditions and take corrective action to offload cache or slow down the host I/O rates to match the SRDF/A service rates.

To activate or deactivate SRDF/A controls:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Select a group, click more, and select Asynchronous > Set SRDF/A.
4. Select Activate SRDF/A or Deactivate SRDF/A.
5. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).

6. Select Activate Type or Deactivate Type.
7. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
8. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

Setting consistency protection
Before you begin
To set consistency protection:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Select a group, click more, and select Asynchronous > Set Consistency.
4. Select Enable or Disable.
5. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (only applicable for device groups).
6. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
7. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

Resetting original device identity
After deleting an SRDF/Metro pair, the unbiased devices keep the new identity. This procedure explains how to reset the original device identity.
To reset the original device identity:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF groups.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
l Group level:
Select a former unbiased SRDF/Metro storage group and click Reset SRDF/Metro Identity to open the Reset Original Identity dialog box.

l Pair level:
n Select STORAGE > Volumes.
n Filter the view to display volume(s) that were formerly part of an SRDF/Metro pair.
n Do one of the following:
– Select a volume and click Reset SRDF/Metro Identity.
– Select a volume, click more, and then click Reset SRDF/Metro Identity.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

Restoring SRDF pairs
This procedure explains how to restore data from the target (R2) volumes to the source (R1) volumes. When you fully restore SRDF pairs, the entire contents of the R2 volume are copied to the R1 volume. When you incrementally restore the R1 volume, only the new data that was changed on the R2 volume while the RDF group pair was split is copied to the R1 volume.
Before you begin
SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.
You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest then you need to go to the view of the relevant storage system.
To restore SRDF pairs:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
l Group level:
a. Select a group and click Restore.
b. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
c. Select Full or Incremental restore type.
d. Select Witness or Bias (only applicable for SRDF/Metro). Witness, if available, is the default option. If Witness is not available, Bias is set by the system and the radio buttons are disabled.

l Pair level:
a. Select a group, click more, and click the number next to SRDF Pairs.
b. Select one or more pairs and click Restore.
c. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
d. Select Full or Incremental restore type.
e. Select Witness or Bias (only applicable for SRDF/Metro). Witness, if available, is the default option. If Witness is not available, Bias is set by the system and the radio buttons are disabled.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

Setting bias location
This procedure explains how to set Bias. If Bias is chosen to be set as part of the Suspend operation, the side with the Bias is the side that the host can see after the Suspend operation completes.
Note
Set Bias cannot be invoked for a witness protected SRDF/Metro group.
To set bias:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Select a group, click more, and select Set Bias.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

Setting the SRDF GCM flag
This procedure supports the setting of the SRDF GCM flag at the Storage Group level and at the individual volume level.

The Geometry Compatible Mode (GCM) parameter modifies how a storage system running HYPERMAX OS 5977 or later manages the size of a volume. When the GCM attribute is set, the volume is treated as ½ a cylinder smaller than its true configured size. This enables a volume on a storage system running HYPERMAX OS 5977 to be paired with a volume on a storage system running Enginuity 5876, when the 5876 volume has an odd number of cylinders.
Before you begin:
SRDF requires HYPERMAX OS 5977 or later.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF groups.
3. Do the following, depending on whether you want to perform the operation at the group level or volume level:
l Group level:
a. Select a storage group and click Set SRDF GCM to open the Set GCM dialog box.
b. Click On to set the GCM flag or Off to unset the flag.
Note
The only way to unset this flag is to unmap the device, which requires an outage at the host and would mean losing access to volumes.
c. Click OK.
l From the Storage Volumes view:
a. Select Storage > Storage Volumes to open the Storage Volumes view.
b. Select a storage group and click Set SRDF GCM to open the Set GCM dialog box.
c. Click On to set the GCM flag or Off to unset the flag.
Note
The only way to unset this flag is to unmap the device, which requires an outage at the host and would mean losing access to volumes.
d. Click OK.

Setting volume status
After deleting an SRDF/Metro pair, the volumes can be in a Not Ready state. This dialog allows you to set the volume state.
Before you begin:
SRDF requires HYPERMAX OS 5977 or later.
To set the volume state:
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.

3. Select a former unbiased SRDF/Metro storage group and click Set Volume Status.
4. Click OK.

Splitting SRDF pairs
This procedure explains how to stop SRDF pair mirroring.
Before you begin
SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.
You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest then you need to go to the view of the relevant storage system.
To split SRDF pairs:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
l Group level:
a. Select a group and click Split.
b. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
c. Select Use Immediate for immediate split on asynchronous devices.
l Pair level:
a. Select a group, click more, and click the number next to SRDF Pairs.
b. Select one or more pairs and click Split.
c. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
d. Select Use Immediate for immediate split on asynchronous devices.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

Suspending SRDF pairs
This procedure explains how to stop data transfer between SRDF pairs.
Before you begin:

You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard RDF view. If you are viewing a storage system not associated with either side of the pair of interest then you need to go to the view of the relevant storage system.
To suspend SRDF pairs:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
l Group level:
n Select a group and click Suspend.
n Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
n Select Use Immediate or Use Consistency Exempt.
n Click Move Bias to move the Bias from one side to the other (only applicable for SRDF/Metro). The side with the Bias set is the side that the host can see after the suspend action completes. This option is not allowed until all the devices in the SRDF/Metro config, both new and existing, are in the ActiveActive or ActiveBias SRDF pair state.
l Pair level:
n Select a group, click more, and click the number next to SRDF Pairs.
n Select one or more pairs and click Suspend.
n Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
n Select Use Immediate or Use Consistency Exempt.
n Click Move Bias to move the Bias from one side to the other (only applicable for SRDF/Metro). The side with the Bias set is the side that the host can see after the suspend action completes. This option is not allowed until all the devices in the SRDF/Metro config, both new and existing, are in the ActiveActive or ActiveBias SRDF pair state.
n Only one side of the RDF device pairs that are suspended from the SRDF/Metro session will remain host-accessible when the operation completes. The Keep R1 or Keep R2 option is used to specify the side that should remain host-accessible. This applies to storage systems running PowerMaxOS 5978 only.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
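The suspend dialog options above can also be expressed programmatically. The sketch below builds the JSON body for a group-level SRDF Suspend action against the Unisphere REST API; the endpoint path and the payload field names ("suspend", "immediate", "consExempt", "hop2") are assumptions modeled on Unisphere's replication resources, not confirmed by this help text, so verify them against the REST API guide for your release.

```python
# Sketch: building a Suspend action body for the Unisphere REST API.
# Field names and endpoint layout are ASSUMPTIONS -- check the REST API
# guide for your Unisphere for PowerMax release before use.

def build_suspend_payload(immediate=False, consistency_exempt=False,
                          use_2nd_hop=False):
    """Build the JSON body for a group-level SRDF Suspend action.

    The keyword arguments mirror the dialog options described above:
    Use Immediate, Use Consistency Exempt, and Use 2nd Hop.
    """
    payload = {
        "action": "Suspend",
        "suspend": {
            "immediate": immediate,
            "consExempt": consistency_exempt,  # assumed field name
        },
    }
    if use_2nd_hop:
        payload["hop2"] = True  # assumed field name
    return payload

# The request itself would look roughly like this (hypothetical URL,
# not executed here):
#
#   requests.put(
#       f"https://{server}:8443/univmax/restapi/90/replication/symmetrix/"
#       f"{array_id}/storagegroup/{sg_id}/rdf_group/{rdfg_num}",
#       json=build_suspend_payload(immediate=True),
#       auth=(user, password), verify=False)
```

As with the dialog, the optional flags are omitted from the body unless explicitly selected, so the storage system applies its defaults.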

Swapping SRDF personalities
This procedure explains how to swap the SRDF volume designations for a specified device group. It changes source (R1) volumes to target (R2) volumes and target (R2) volumes to source (R1) volumes.
Half swapping SRDF personalities swaps one side of the RDF device designations for a specified group. It changes source (R1) volumes to target (R2) volumes or target (R2) volumes to source (R1) volumes.
Before you begin
SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.
To swap SRDF personalities:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Select a group, click more, and select Swap.
4. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration.
5. For optional Refreshing R1 or R2 volumes on page 430, select R1, R2, or None.
6. For optional half swapping, select Half Swap.
When the SRDF device pairs of an SRDF/Metro configuration are Not Ready (NR) on the link, and the SRDF pair state is Partitioned, a half swap operation is allowed. If the half swap is issued to the R2, the SRDF link to the R1 must be unavailable. If the half swap is issued to the R1, the SRDF link to the other side must be available and the SRDF pair must be seen as R1 – R1 (duplicate pair).
7. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
8. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

Updating R1 volumes
This procedure explains how to incrementally update R1 volumes with changed tracks from R2 volumes.
Before you begin
SRDF requires Enginuity version 5876 or HYPERMAX OS 5977 or higher.
You are not able to perform SRDF/Metro control actions at the SG level on the SRDF/Metro pairs in a standard SRDF view and you are not allowed to perform standard SRDF actions on the SRDF/Metro leg in a standard SRDF view. If you are viewing a storage system not associated with either side of the pair of interest then you need to go to the view of the relevant storage system.
To update R1 volumes:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
l Group level:
a. Select a group, click more, and click Set Volume Attributes > R1 Update.
b. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (not applicable if the hop2 is SRDF/Metro).
c. Select Remote if the R1 volumes are remote.
l Pair level:
a. Select a group, click more, and click the number next to SRDF Pairs.
b. Select one or more pairs, click more, and select Set Volume Attributes > R1 Update.
c. Select Remote if the R1 volumes are remote.
4. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
5. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

SRDF session options

Bypass
Available with action: Establish, Failback, Failover, Restore, Incremental Restore, Split, Suspend, Swap, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable
Description: Bypasses the exclusive locks for the local and/or remote storage system during SRDF operations. Use this option only if you are sure that no other SRDF operation is in progress on the local and/or remote storage systems.

Consistent
Available with action: Activate
Description: Allows only consistent transition from async to sync mode.

Consistency Exempt
Available with action: Half Move, Move, Suspend
Description: Allows you to add or remove volumes from an RDF group that is in Async mode without requiring other volumes in the group to be suspended.

Establish
Available with action: Failover
Description: Fails over the volume pairs, performs a dynamic swap, and incrementally establishes the pairs. This option is not supported when volumes operating in Asynchronous mode are read/write on the RDF link. To perform a fail over operation on such volumes, specify the Restore option detailed elsewhere in this table.

Force
Available with action: Establish, Incremental Establish, Restore, Incremental Restore, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable, Swap
Description: Overrides any restrictions and forces the operation, even though one or more paired volumes may not be in the expected state. Use caution when checking this option because improper use may result in data loss.

Immediate
Available with action: Suspend, Split, Failover
Description: Causes the suspend, split, and failover actions on asynchronous volumes to happen immediately.

NoWD
Description: No write disable - bypasses the check to ensure that the target of the operation is write disabled to the host. This applies to the source (R1) volumes when used with the Invalidate R1 option and to the target (R2) volumes when used with the Invalidate R2 option.

SymForce
Available with action: Restore, Incremental Restore, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable, Swap
Description: Forces an operation on the volume pair, including pairs that would be rejected. Use caution when checking this option because improper use may result in data loss.

RecoverPoint Tag
Available with action: Restore, Failback
Description: Specifies that the operation will be performed on RecoverPoint volumes.

Refresh R1
Available with action: R1 Update, Swap
Description: Marks any changed tracks on the source (R1) volume to be refreshed from the target (R2) side.

Refresh R2
Available with action: Swap
Description: Marks any changed tracks on the target (R2) volume to be refreshed from the source (R1) side.

Remote
Available with action: Restore, Incremental Restore, Failback
Description: When performing a restore or failback action with the concurrent link up, data copied from the R2 to the R1 will also be copied to the concurrent R2. These actions require this option.

Restore
Available with action: Failover
Description: When the fail over swap completes, invalid tracks on the new R2 side (formerly the R1 side) will be restored to the new R1 side (formerly the R2 side). When used together with the Immediate option, the fail over operation will immediately deactivate the SRDF/A session without waiting two cycle switches for the session to terminate.

Star
Available with action: Establish, Failback, Failover, Restore, Incremental Restore, Split, Suspend, Write Disable R1, Ready R1, Ready R2, RWDisableR2, Enable, Disable
Description: Selecting this option indicates that the volume pair is part of an SRDF/Star configuration. SRDF/Star environments are three-site disaster recovery solutions that use one of the following:
l Concurrent SRDF sites with SRDF/Star
l Cascaded SRDF sites with SRDF/Star
This technology replicates data from a primary production (workload) site to both a nearby remote site and a distant remote site. Data is transferred in SRDF/Synchronous (SRDF/S) mode to the nearby remote site (referred to as the synchronous target site) and in SRDF/Asynchronous (SRDF/A) mode to the distant remote site (referred to as the asynchronous target site). SRDF/Star is supported on Enginuity 5876. The Solutions Enabler SRDF Family CLI Product Guide contains more information on SRDF/Star.

SRDF session modes

Adaptive Copy: Allows the source (R1) volume and target (R2) volume to be out of synchronization by a number of I/Os that are defined by a skew value.

Adaptive Copy Disk Mode: Data is read from the disk and the unit of transfer across the SRDF link is the entire track. While less global memory is consumed, it is typically slower to read data from disk than from global memory. Additionally, more bandwidth is used because the unit of transfer is the entire track, and because it is slower to read data from disk than from global memory, device resynchronization time increases.

Adaptive Copy WP Mode: The unit of transfer across the SRDF link is the updated blocks rather than an entire track, resulting in more efficient use of SRDF link bandwidth. Data is read from global memory rather than from disk, thus improving overall system performance. However, the global memory is temporarily consumed by the data until it is transferred across the link. This mode requires that the device group containing the RDF pairs with R1 mirrors be on a storage system running Enginuity 5876.

Synchronous: Provides the host access to the source (R1) volume on a write operation only after the storage system containing the target (R2) volume acknowledges that it has received and checked the data.

Asynchronous: The storage system acknowledges all writes to the source (R1) volumes as if they were local devices. Host writes accumulate on the source (R1) side until the cycle time is reached and are then transferred to the target (R2) volume in one delta set. Write operations to the target device can be confirmed when the current SRDF/A cycle commits the data to disk by successfully de-staging it to the R2 storage volumes. For storage systems running Enginuity 5876, you can put an RDF relationship into Asynchronous mode when the R2 device is a snap source volume.

AC Skew: Adaptive Copy Skew - sets the number of tracks per volume that the source volume can be ahead of the target volume. Values are 0 - 65535.

RBAC roles for performing local and remote replication actions
The table below details the roles needed to perform SRDF local and remote replication actions.
Note
Unisphere for PowerMax does not support RBAC Device Group management.
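The option-to-action pairings in the SRDF session options table lend themselves to a small lookup that a script can consult before submitting a control action. The helper below is illustrative only; the dictionary is transcribed from a few rows of the table above and is not part of any Dell EMC API.

```python
# Sketch: a validity check for SRDF session options, transcribed from a
# few rows of the "SRDF session options" table above. Illustrative only.

ALLOWED_ACTIONS = {
    "Immediate": {"Suspend", "Split", "Failover"},
    "Consistency Exempt": {"Half Move", "Move", "Suspend"},
    "Remote": {"Restore", "Incremental Restore", "Failback"},
    "Refresh R2": {"Swap"},
}

def check_session_option(option, action):
    """Return True if the table permits combining `option` with `action`."""
    if option not in ALLOWED_ACTIONS:
        raise ValueError(f"option not in lookup: {option!r}")
    return action in ALLOWED_ACTIONS[option]
```

For example, `check_session_option("Immediate", "Suspend")` returns True, while `check_session_option("Immediate", "Establish")` returns False, matching the Immediate row of the table.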

Each of the following SRDF actions requires the Remote Replication role (the Local Replication and Device Manager roles are not marked for these actions): SRDF Delete, SRDF Establish, SRDF Failback, SRDF Failover, SRDF Invalidate, SRDF Move, SRDF Not Ready, SRDF R1 Update, SRDF Ready, SRDF Refresh, SRDF Restore, SRDF Resume, SRDF RW Disable R2, SRDF RW Enable, SRDF Set Bias, SRDF Set Consistency, SRDF Set Mode, SRDF Set SRDFA, SRDF Split, SRDF Suspend, SRDF Swap, SRDF Write Disable.

Understanding Virtual Witness
The Witness feature supports a third party that the two storage systems consult if they lose connectivity with each other, that is, their SRDF links go out of service. When this happens, the Witness helps to determine, for each SRDF/Metro session, which of the storage systems should remain active (volumes continue to be read and write accessible to hosts) and which goes inactive (volumes not accessible).
Prior to the HYPERMAX OS 5977 Q3 2016 release, a Witness could only be a third storage system that the two storage systems involved in an SRDF/Metro session could both connect to over their SRDF links. The HYPERMAX OS 5977 Q3 2016 or higher release adds the ability for these storage systems to instead use a Virtual Witness (vWitness) running within a management virtual application (vApp) deployed by the customer.
The following Virtual Witness tasks can be performed from Unisphere:

Viewing Virtual Witness instances
Adding a Virtual Witness
Viewing Virtual Witness instance details
Enabling a Virtual Witness
Disabling a Virtual Witness
Removing a Virtual Witness

Adding SRDF Virtual Witness instances
Before you begin
Unisphere provides monitoring and management for SRDF/Metro Virtual Witness instances on Virtual Witness capable storage systems running HYPERMAX OS 5977 Q3 2016 or higher.
A Virtual Witness instance needs to be created for both participating arrays. See Understanding Virtual Witness on page 444 for additional information.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Virtual Witness to open the Virtual Witness list view.
3. Click Create.
4. Type values for the following:
l Virtual Witness Name —User-defined Virtual Witness instance name.
l IP/DNS —IPv4 or IPv6 address, or DNS name from embedded Guest that is associated with the Virtual Witness instance.
5. Optional: Select the Add Virtual Witness to remote arrays checkbox and select the arrays (these arrays support the Virtual Witness functionality) that are to have the same Virtual Witness added.
6. Do one of the following:
l Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

Removing SRDF Virtual Witness instances
Before you begin
Unisphere provides monitoring and management for SRDF/Metro Virtual Witness instances on Virtual Witness capable storage systems running HYPERMAX OS 5977 Q3 2016 or higher.
You cannot remove a Virtual Witness instance that is in use (protecting one or more SRDF/Metro sessions).
See Understanding Virtual Witness on page 444 for additional information.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Virtual Witness to open the Virtual Witness list view.
3. Select a Virtual Witness instance and click DELETE.
4. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

Set state for SRDF Virtual Witness instances
Before you begin
Unisphere provides monitoring and management for SRDF/Metro Virtual Witness instances on Virtual Witness capable storage systems running HYPERMAX OS 5977 Q3 2016 or higher.
The Virtual Witness disable operation may or may not require additional force flags, based on whether it is currently protecting SRDF/Metro sessions and whether an alternate witness is available. If the vWitness is currently protecting Metro sessions, the storage system performs a search for replacement Witnesses (virtual or physical) to use. You cannot disable a Virtual Witness instance that is in use (protecting one or more SRDF/Metro sessions).
The Set State operation changes the state of the Virtual Witness instance from enabled to disabled or from disabled to enabled. See Understanding Virtual Witness on page 444 for additional information.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Virtual Witness to open the Virtual Witness list view.
3. Select a Virtual Witness instance and click Set State.
Note: When disabling an enabled Virtual Witness instance:
l Click Advanced Options and select the Use Force check box. The command fails if the virtual Witness is currently in use (protecting an SRDF/Metro session) and there is another witness (either virtual or physical) that is available to take over for it. The force flag is needed in order to continue.
l Click Advanced Options and select the Use SymForce check box. The command fails if the virtual Witness is currently in use (protecting an SRDF/Metro session) and there is no other witness (either virtual or physical) that is available to take over for it. The symforce flag is needed in order to continue.
4. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

Viewing SRDF Virtual Witness instances
Before you begin
Unisphere provides monitoring and management for SRDF/Metro Virtual Witness instances on Virtual Witness capable storage systems running HYPERMAX OS 5977 Q3 2016 or higher.
A Virtual Witness needs to be created for both participating arrays. See Understanding Virtual Witness on page 444 for additional information.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Virtual Witness to open the Virtual Witness list view.
The following properties display, depending on the operating environment:
l Witness name —User-defined Virtual Witness instance name.
l State —State of the Virtual Witness instance.
l Alive —Flag to indicate if the Virtual Witness instance is alive.
l In Use —Flag to indicate if the Virtual Witness instance is in use.
The following controls are available, depending on the operating environment:
l Viewing SRDF Virtual Witnesses details on page 447
l Create —Adding SRDF Virtual Witness instances on page 445
l Set State —Set state for SRDF Virtual Witness instances on page 446
l Delete —Removing SRDF Virtual Witness instances on page 445

Viewing SRDF Virtual Witnesses details
Before you begin
Unisphere provides monitoring and management for SRDF/Metro Virtual Witness on Virtual Witness capable storage systems running HYPERMAX OS 5977 Q3 2016 or higher.
See Understanding Virtual Witness on page 444 for additional information.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Virtual Witness to open the Virtual Witness list view.
3. Select a Virtual Witness instance and click the details icon to open the Details view.
The following properties display:
l Witness name —User-defined witness name.

l IP/DNS —IPv4 or IPv6 address, or DNS name from embedded Guest that is associated with the Virtual Witness instance.
l Port —Port associated with the Virtual Witness instance.
l Alive —Flag to indicate if the Virtual Witness instance is alive.
l State —State of the Virtual Witness instance.
l InUse —Flag to indicate if the Virtual Witness instance is in use.
l Duplicate —Flag to indicate if the Virtual Witness instance is a duplicate. A duplicate witness is a witness which has the same unique ID as another witness on the storage system, for example, in the case where it was added twice.
l SRDF Groups —Number of SRDF groups.
There are links to views for objects associated with the Virtual Witness instance. Each group link is followed by the name of the group, or by a number, indicating the number of objects in the corresponding view. For example, clicking SRDF Groups opens the view listing the SRDF Groups associated with the Virtual Witness instance.

Creating SRDF/A DSE pools
Before you begin
SRDF/A DSE pools are supported on storage systems running Enginuity 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF/A DSE Pools to open the SRDF/A DSE Pools list view.
3. Click Create.
You can also create DSE pools from the DSE pools details view.
4. Type a Pool Name.
DSE pool names can contain up to 12 alphanumeric characters. The only special character allowed is the underscore ( _ ). The name DEFAULT_POOL is reserved for SAVE volumes that are enabled and not in any other pool.
5. Select the volumes to add.
6. Optional: Click the slider bar to enable the new pool member(s).
7. Click OK.

Deleting SRDF/A DSE pools
Before you begin
SRDF/A DSE pools are supported on storage systems running Enginuity 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF/A DSE Pools to open the SRDF/A DSE Pools list view.
3. Select a pool and click Delete.

4. Click OK.
Adding volumes to SRDF/A DSE pools
Before you begin
SRDF/A DSE pools are supported on storage systems running Enginuity 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF/A DSE Pools to open the SRDF/A DSE Pools list view.
3. Select a pool and click Add.
4. Select the volumes to add.
5. Optional: Click the slider bar to enable the new pool member(s).
6. Click OK.
Removing volumes from SRDF/A DSE pools
Before you begin
SRDF/A DSE pools are supported on storage systems running Enginuity 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF/A DSE Pools to open the SRDF/A DSE Pools list view.
3. Select a pool and click Remove.
4. Click OK.
Enabling all volumes in SRDF/A DSE pools
Before you begin
SRDF/A DSE pools are supported on storage systems running Enginuity 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF/A DSE Pools to open the SRDF/A DSE Pools list view.
3. Select a pool, click the more actions icon, and select Enable All.
4. Click OK.
Disabling all volumes in SRDF/A DSE pools
Before you begin
SRDF/A DSE pools are supported on storage systems running Enginuity 5876.
Procedure
1. Select the storage system.

2. Select DATA PROTECTION > SRDF/A DSE Pools to open the SRDF/A DSE Pools list view.
3. Select a pool, click the more actions icon, and select Disable All.
4. Click OK.
Viewing SRDF/A DSE pools
Before you begin
SRDF/A DSE pools are supported on storage systems running Enginuity 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF/A DSE Pools to open the SRDF/A DSE Pools list view.
Use this list view to display and manage the SRDF/A DSE pools on a storage system.
The following properties display:
l Name —Name of the pool.
l DSE Pool Configuration —Configuration of the volumes in the pool.
l Technology —Technology on which the volumes in the pool reside.
l Emulation —Emulation type.
l Pool State —Whether the pool is Enabled or Disabled.
l % Used —Percent of pool used.
The following controls are available:
l Details — Viewing SRDF DSE pool details on page 450
l Create — Creating SRDF/A DSE pools on page 448
l Add — Adding volumes to SRDF/A DSE pools on page 449
l Delete — Deleting SRDF/A DSE pools on page 448
l Enable All — Enabling all volumes in SRDF/A DSE pools on page 449
l Disable All — Disabling all volumes in SRDF/A DSE pools on page 449
l Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945
Viewing SRDF DSE pool details
Before you begin
SRDF/A DSE pools are supported on storage systems running Enginuity 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF/A DSE Pools to open the SRDF/A DSE Pools list view.

3. Select the pool and click Details to open its Details view.
Use the SRDF/A DSE Pool Details view to display and manage an SRDF/A DSE pool.
The following properties display:
l Array ID —Storage system on which the pool resides.
l DSE Pool Name —Name of the pool.
l Pool Type —Pool type.
l Emulation —Emulation type.
l RAID Protection —Protection level of the volumes in the pool.
l Technology —Technology on which the volumes in the pool reside.
l Pool State —Whether the pool is Enabled or Disabled.
l Num Volumes —Number of volumes in the pool.
l Disabled Volumes —Number of disabled volumes in the pool.
l Enabled Volumes —Number of enabled volumes in the pool.
l Capacity (GB) —Sum of all enabled and disabled volumes in the pool.
l Enabled Capacity (GB) —Sum of all enabled volumes in the pool.
l Free Capacity (GB) —Total free space in GB.
l % Used —Percentage of pool used.
l Used (GB) —Total used space in GB.
l Free (GB) —Total free space in GB.
The properties panel provides links to views for objects contained in and associated with the pool. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking the number next to Num Volumes opens a view listing the SAVE volumes contained in the pool.
Creating TimeFinder/Snap pools
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > TimeFinder Snap Pools to open the TimeFinder Snap Pools list view.
3. Click Create.
4. Type a Pool Name. Snap pool names can contain up to 12 alphanumeric characters. The only special character allowed is the underscore (_). The name DEFAULT_POOL is reserved for SAVE volumes that are enabled and not in any other pool.
5. Select one or more volumes.
6. Optional: To enable new volumes in the pool, select Enable new pool Member. The total enabled pool capacity in GB is displayed.
7. Click OK.
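The naming rules above (up to 12 alphanumeric characters, underscore as the only special character, DEFAULT_POOL reserved) can be checked before submitting a create request. A minimal sketch in Python; the function name and its use are illustrative and not part of Unisphere:

```python
import re

# Rules from the procedure above: pool names hold up to 12 alphanumeric
# characters, the underscore is the only special character allowed, and
# DEFAULT_POOL is reserved for enabled SAVE volumes not in any other pool.
POOL_NAME_RE = re.compile(r"^[A-Za-z0-9_]{1,12}$")

def is_valid_pool_name(name: str) -> bool:
    """Return True if 'name' is a legal, non-reserved snap/DSE pool name."""
    return bool(POOL_NAME_RE.match(name)) and name != "DEFAULT_POOL"
```

The same rules apply to SRDF/A DSE pool names, so one helper can front both create dialogs.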

Adding volumes to TimeFinder/Snap pools
TimeFinder/Snap pools are supported on storage systems running Enginuity OS 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > TimeFinder Snap Pools to open the TimeFinder Snap Pools list view.
3. Select a pool and click Details to open its Details view.
4. Click the number next to Num Volumes to open the SAVE Volumes view.
5. Select one or more volumes and click Add.
6. Select one or more volumes.
7. Optional: To enable new volumes in the pool, select Enable new pool Member. The total enabled pool capacity in GB is displayed.
8. Click OK.
Enabling all volumes in TimeFinder/Snap pools
TimeFinder/Snap pools are supported on storage systems running Enginuity OS 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > TimeFinder Snap Pools.
3. Select a snap pool, click the more actions icon, and select Enable All.
4. Click OK.
Disabling all volumes in TimeFinder/Snap pools
TimeFinder/Snap pools are supported on storage systems running Enginuity OS 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > TimeFinder Snap Pools.
3. Select a snap pool, click the more actions icon, and select Disable All.
4. Click OK.
Deleting TimeFinder/Snap Pools
TimeFinder/Snap pools are supported on storage systems running Enginuity OS 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > TimeFinder Snap Pools to open the TimeFinder Snap Pools list view.

3. Select a pool and click Delete.
4. Click OK.
Removing volumes from TimeFinder/Snap pools
TimeFinder/Snap pools are supported on storage systems running Enginuity OS 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > TimeFinder Snap Pools to open the TimeFinder Snap Pools list view.
3. Select a pool and click Details to open its Details view.
4. Click the number next to Num Volumes to open the SAVE Volumes view.
5. Select one or more volumes and click Remove.
6. Click OK.
Viewing TimeFinder/Snap pools
TimeFinder/Snap pools are supported on storage systems running Enginuity OS 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > TimeFinder Snap Pools to open the TimeFinder Snap Pools list view.
Use the TimeFinder Snap Pools list view to display and manage the TimeFinder/Snap pools on a storage system.
The following properties display:
l Name —Name of the pool.
l Configuration —Configuration of the volumes in the pool.
l Technology —Technology on which the volumes in the pool reside.
l Emulation —Emulation type.
l Capacity (GB) —Capacity in GB.
l Pool State —Whether the pool is Enabled or Disabled.
l % Used —Percentage of pool used.
l Used (GB) —Total used space in GB.
l Free (GB) —Total free space in GB.
The following controls are available:
l Details — Viewing TimeFinder/Snap pool details on page 454
l Create — Creating TimeFinder/Snap pools on page 451
l Add — Adding volumes to TimeFinder/Snap pools on page 452
l Delete — Deleting TimeFinder/Snap Pools on page 452

l Enable All — Enabling all volumes in TimeFinder/Snap pools on page 452
l Disable All — Disabling all volumes in TimeFinder/Snap pools on page 452
l Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945
Viewing TimeFinder/Snap pool details
TimeFinder/Snap pools are supported on storage systems running Enginuity OS 5876.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > TimeFinder Snap Pools to open the TimeFinder Snap Pools list view.
3. Select a pool and click Details to open its Details view.
The following properties display:
l Array ID —Storage system on which the pool resides.
l Name —Name of the pool.
l Pool Type —Pool type.
l RAID Protection —Protection level of the volumes in the pool.
l Technology —Technology on which the volumes in the pool reside.
l Pool State —State of the pool (Enabled or Disabled).
l Num Volumes —Number of volumes in the pool.
l Disabled Volumes —Number of disabled volumes in the pool.
l Enabled Volumes —Number of enabled volumes in the pool.
l Capacity (GB) —Sum of all enabled and disabled volumes in the pool.
l Enabled Capacity (GB) —Sum of all enabled volumes in the pool.
l Free (GB) —Total free space in GB.
l Used (GB) —Total used space in GB.
There are links to views for objects contained in and associated with the pool. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking the number next to Num Volumes opens a view listing the SAVE volumes contained in the TimeFinder snap pool.
The Performance Views panel links you to the performance monitor and analyze views for the snap pool. This panel only displays when the Performance option is installed, and displays with inactive links if the selected storage system is not registered for data collection.
Viewing SRDF group volumes
This procedure explains how to view the volumes in an SRDF group:

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups to open the SRDF groups list view.
3. Select the SRDF group and click Details to open its Details view.
4. Click the number next to SRDF Group Volumes to open the SRDF Volumes list view.
The following properties display:
Volumes —Local volume ID.
Configuration —SRDF configuration.
Remote Symmetrix —Remote storage system ID.
Remote SRDF Group —Remote SRDF group ID.
Target Volume —Target volume ID.
State —Session state of the pair.
Pair State —Volume pair state.
Remote Volume State —State of the remote volume.
SRDF Mode —SRDF copy type.
Viewing SRDF protected storage groups
Unisphere 8.1 and higher provides SRDF monitoring and management for storage groups. This includes SRDF/Metro protected storage groups on storage systems running HYPERMAX OS 5977 or higher. Only single-hop SRDF is supported for SRDF/Metro; that is, concurrent or cascaded setups are not supported. See Managing remote replication sessions on page 402 for additional information.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups.
3. Click the SRDF tab.
The following properties display, depending on the operating environment:
l Storage Group —User-defined storage group name.
l States —The state of the storage group. Possible values are:
n ActiveActive
n ActiveBias
n Consistent
n Failed Over
n Invalid
n Partitioned
n R1 Updated
n R1 Update in progress
n Suspended

n Synchronization in progress
n Synchronized
n Transmit Idle
If Unisphere detects an asynchronous state change event for an SRDF group from Solutions Enabler, it updates the Unisphere state for the SRDF group and its related SRDF device groups and SRDF storage groups. The Storage Group list view must be refreshed so that the latest state is reflected.
l Modes —The SRDF modes.
l SRDF Type —The SRDF type. SGs with volumes having multiple SRDF types display multiples here, for example, R1 and R2.
l SRDF Groups —The SRDF group number. Concurrent SRDF setups list multiple SRDF group numbers.
Click the more actions icon to view the following additional properties:
l Capacity (GB) —Total capacity of the storage group in GB.
l SRDF Pairs —Number of associated SRDF pairs.
l Masking Views —The number of associated masking views.
l Emulation —The emulation type (ALL, FBA, CKD).
l Group Type —The group type.
l Bias Type —The bias type.
l Production Volumes —The number of production volumes.
l Last Updated —The date and time of the last update.
The following controls are available, depending on the operating environment and the mode:
l Establish — Establishing SRDF pairs on page 421
l Split — Splitting SRDF pairs on page 436
l Suspend — Suspending SRDF pairs on page 436
l Restore — Restoring SRDF pairs on page 433
l Resume — Resuming SRDF links on page 429
l Failover — Failing over on page 422
l Failback — Failing back on page 423
l Swap — Swapping SRDF personalities on page 438
l Move — Moving SRDF pairs on page 408
l Delete Pair — Deleting SRDF pairs on page 407
l Set Mode — Setting SRDF mode on page 409
l Set Bias — Setting bias location on page 434
l Set Volume Attributes > Invalidate — Invalidating R1/R2 volumes on page 424
l Set Volume Attributes > Ready — Making R1/R2 volumes ready on page 425
l Set Volume Attributes > Not Ready — Making R1/R2 volumes not ready on page 426

l Set Volume Attributes > R1 Update — Updating R1 volumes on page 438
l Set Volume Attributes > RW Enable — Read/write enabling R1/R2 volumes on page 428
l Set Volume Attributes > Write Disable — Read/write disabling R1/R2 volumes on page 429
l Set Volume Attributes > RW Disable R2s — Read/write disabling R2 volumes on page 427
l Set Volume Attributes > Refresh — Refreshing R1 or R2 volumes on page 430
l Asynchronous > Set SRDF/A — Setting SRDF/A controls to prevent cache overflow on page 431
l Asynchronous > Set Consistency — Setting consistency protection on page 350
Viewing related SRDF groups
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups.
3. Click the SRDF tab.
4. Select a storage group and click Details to open the Storage Group pair list view.
5. Click the number next to SRDF pairs to open the SRDF pair list view.
6. Select a pair and click Details to open the SRDF pair list properties panel.
7. Click the number next to SRDF Group Number to open the related SRDF groups list view.
The following properties display, depending on the operating environment:
SRDF Group Number —SRDF group number.
SRDF Group Label —SRDF group label.
Remote SRDF Group Number —Remote SRDF group number.
Remote Symmetrix ID —Remote Symmetrix ID.
Volumes Count —Indicates the volume count.
Creating SRDF groups
SRDF groups provide a collective data transfer path linking volumes of two separate storage systems. These communication and transfer paths are used to synchronize data between the R1 and R2 volume pairs associated with the RDF group. At least one physical connection must exist between the two storage systems within the fabric topology.
Before you begin: The maximum number of supported RDF groups differs by Enginuity version:

Enginuity        Per director   Per storage system   Per port   Group numbers
5977 or higher   250            250                  250        1 to 250
5876             64             250                  64         1 to 250
l When specifying a local or remote director for a storage system running HYPERMAX OS 5977, you can select one or more SRDF ports.
l If the RDF interaction includes a storage system running HYPERMAX OS 5977, then the other storage system must be running Enginuity 5876. In addition, in this interaction the maximum storage system volume number allowed on the system running HYPERMAX OS 5977 is FFFF (65535).
To create an SRDF group:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups and click Create SRDF Group, or select CREATE SRDF GROUP from the REPLICATION dashboard.
3. Select a Communication Protocol to use when moving data across the SRDF links. The value you select here populates the Director Port list.
4. Select a Remote Array ID.
5. To refresh the remote storage system information, click Scan. The scan operation looks for SRDF-capable systems known to Unisphere.
6. Type an SRDF Group Label (name).
7. Click SRDF/Metro Witness Group. This checkbox is selectable when the local storage system and the selected remote storage system are both Witness capable.
8. Select a local SRDF Group Number.
9. Select the local director ports through which the group will communicate.
10. Click Advanced Options to set the advanced options, as described next.
Setting Advanced options:
a. Select a local Link Limbo Period. This is a length of time for the storage system to continue checking the local SRDF link status. (The range is 0-120 seconds; the default is 10.) If the link status is Not Ready after the link limbo time, the volumes are made Not Ready to the link.
b. Select (enable) Local Link Domino for the local group. With this feature enabled from either the local or remote side of the group's RDF links, failure of the group's last remaining link makes all source (R1) volumes in the group unavailable (not ready) to their host when an R1-side operation occurs. This ensures that the data on the source (R1) and target (R2) devices is always in synch.
c. Select (enable) Local Auto Link Recovery for the local group. With this feature enabled, once the link failure is corrected, volumes that were ready to their host before the failure are automatically restored to the ready state.

d. Click OK.
11. Select a Remote SRDF Group Number.
12. Select the remote director ports through which the group will communicate.
13. Click Advanced Options to set the advanced options, as described next.
Setting Advanced options:
a. Select a Remote Link Limbo Period. This is a length of time for the storage system to continue checking the remote SRDF link status. (The range is 0-120 seconds; the default is 10.) If the link status is Not Ready after the link limbo time, the volumes are made Not Ready to the link.
b. Select (enable) Remote Link Domino for the remote group. With this feature enabled from either the local or remote side of the group's RDF links, failure of the group's last remaining link makes all source (R1) volumes in the group unavailable (not ready) to their host when an R1-side operation occurs. This ensures that the data on the source (R1) and target (R2) volumes is always in synch.
c. Select (enable) Remote Auto Link Recovery for the remote group. With this feature enabled, once the link failure is corrected, volumes that were ready to their host before the failure are automatically restored to the ready state.
d. Click OK.
e. A summary page, displaying all values and options selected, is displayed.
14. Optional: Set one or more of the following:
l Select (enable) Software Compression for the local group. This enables SRDF software data compression for SRDF groups defined on GigE or Fibre Channel. Although you can enable/disable software compression on the R2 side, the setting of software compression on the R1 side is what enables or disables the feature.
l Select (enable) Hardware Compression for the local group. This enables SRDF hardware data compression on an SRDF group defined on a GigE director. Although you can enable/disable hardware compression on the R2 side, the setting of hardware compression on the R1 side is what enables or disables the feature. This feature requires PowerMaxOS 5978 or higher.
15. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
Modifying SRDF groups
To modify SRDF groups:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups to open the SRDF groups list view.
3. Select a group and click Modify.

4. Do any number of the following steps:
a. Select a new Local Array through which the group will communicate. When specifying a local or remote array for a storage system running HYPERMAX OS 5977, you can select one or more SRDF ports.
b. Select a new Remote Array through which the group will communicate.
5. Select Advanced Options and do any number of the following steps:
a. Select a local Link Limbo Period. This is a length of time for the storage system to continue checking the local SRDF link status. (The range is 0-120 seconds; the default is 10.) If the link status is Not Ready after the link limbo time, the volumes are made Not Ready to the link.
b. Select (enable) Link Domino for the local group. With this feature enabled from either the local or remote side of the group's RDF links, failure of the group's last remaining link makes all source (R1) volumes in the group unavailable (not ready) to their host when an R1-side operation occurs. This ensures that the data on the source (R1) and target (R2) devices is always in synch.
c. Select (enable) Auto Link Recovery for the local group. With this feature enabled, once the link failure is corrected, volumes that were ready to their host before the failure are automatically restored to the ready state.
d. Select (enable) Software Compression for the local group. This enables SRDF software data compression for SRDF groups defined on GigE or Fibre Channel. Although you can enable/disable software compression on the R2 side, the setting of software compression on the R1 side is what enables or disables the feature. This feature requires Enginuity 5876 or later.
e. Select (enable) Hardware Compression for the local group. This enables SRDF hardware data compression on an SRDF group defined on a GigE director. Although you can enable/disable hardware compression on the R2 side, the setting of hardware compression on the R1 side is what enables or disables the feature. This feature requires Enginuity 5876 or later.
f. Select a remote Link Limbo Period. This is a length of time for the storage system to continue checking the remote SRDF link status. (The range is 0-120 seconds; the default is 10.) If the link status is Not Ready after the link limbo time, the volumes are made Not Ready to the link.
g. Select (enable) Link Domino for the remote group. With this feature enabled from either the local or remote side of the group's RDF links, failure of the group's last remaining link makes all source (R1) volumes in the group unavailable (not ready) to their host when an R1-side operation occurs. This ensures that the data on the source (R1) and target (R2) volumes is always in synch.
h. Select (enable) Auto Link Recovery for the remote group. With this feature enabled, once the link failure is corrected, volumes that were ready to their host before the failure are automatically restored to the ready state.
6. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
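The numeric settings described in the create and modify procedures can be sanity-checked before a request is submitted. A minimal illustrative sketch in Python, using only limits stated in this section (group numbers 1 to 250, link limbo 0-120 seconds with a default of 10, per-director group maximums by Enginuity version); the helper itself is not part of any Unisphere API:

```python
# Limits taken from the Creating/Modifying SRDF groups sections:
# group numbers run 1-250, the link limbo period runs 0-120 seconds
# (default 10), and per-director group maximums differ by version.
MAX_GROUPS_PER_DIRECTOR = {"5876": 64, "5977": 250}

def validate_srdf_group(group_number: int, link_limbo: int = 10) -> list:
    """Collect human-readable problems with the proposed group settings."""
    problems = []
    if not 1 <= group_number <= 250:
        problems.append("SRDF group number must be between 1 and 250")
    if not 0 <= link_limbo <= 120:
        problems.append("link limbo period must be between 0 and 120 seconds")
    return problems
```

An empty result means the values fall inside the documented ranges; anything else can be shown to the operator before the job is added to the job list.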

Setting SRDF/A DSE attributes
Procedure
1. Select a storage system.
2. Select DATA PROTECTION > SRDF Groups.
3. Select a group, click the more actions icon, and select SRDF/A DSE Setting.
4. Select the pool. For systems running HYPERMAX OS 5977, this option may not be available.
5. Type the percentage of the storage system's write pending limit (Threshold). Once the cache usage of all active groups in the storage system exceeds this limit, data tracks for this group start to spill over to disks. Possible values are from 20 to 100, with 50 being the default.
6. (Optional) Select (enable) Autostart to automatically start the feature for the group when an SRDF/A session is activated. For systems running HYPERMAX OS 5977, Autostart is always enabled.
7. Manually Activate/Deactivate the SRDF/A Delta Set Extension (DSE) feature. DSE allows SRDF/A cache to be extended by offloading some or all of the session cycle data to preconfigured disks or pools. Possible values are:
l No change —Leaves the current setting.
l Activate —Activates the feature for the local side of the SRDF link.
l Activate Both Sides —Activates the feature for both sides of the SRDF link.
l Deactivate —Deactivates the feature for the local side of the SRDF link.
l Deactivate Both Sides —Deactivates the feature for both sides of the SRDF link.
This feature is supported with thin devices.
8. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
Setting SRDF/A group attributes
Procedure
1. Select a storage system.
2. Select DATA PROTECTION > SRDF Groups to open the SRDF groups list view.
3. Select a group, click the more actions icon, and select SRDF/A Setting.

4. Type the Minimum Cycle Time. This is the minimum amount of time (in seconds) the storage system waits before attempting to perform an SRDF/A cycle switch. Possible values range from 1 to 60 seconds.
5. Type the Session Priority. This priority is used to determine which SRDF/A session to drop if cache is full. Possible values range from 1 (highest) to 64 (lowest).
6. Select Transmit Idle Enabled to preserve the data in cache (if the link is idle) and then retry transmitting the data. This option must be enabled on both local and remote sides.
7. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
Setting SRDF/A pace attributes
Procedure
1. Select a storage system.
2. Select DATA PROTECTION > SRDF Groups.
3. Select a group, click the more actions icon, and select SRDF/A Pacing Setting.
4. Type the maximum I/O delay to apply to each host write I/O when the pacing algorithm is invoked (Pacing Delay). Possible values range from 1 to 1,000,000 usec (0.000001 to 1 second), with 50,000 (0.05 seconds, or 50 ms) being the default.
5. Type the minimum cache percentage at which host write pacing will start (Threshold). Possible values range from 1 to 99, with 60% being the default.
6. (Optional) Select to set the threshold on both the R1 and R2 sides (Both Sides).
7. (Optional) Set the following write pacing attributes for the RDF group, the volumes in the group, or both:
a. Select (enable) the SRDF/A write pacing feature to automatically start when an SRDF/A session is activated (Group Pacing Autostart and Volume Pacing Autostart). This feature must be activated for host write I/O pacing to be invoked.
b. Manually Activate/Deactivate the SRDF/A write pacing feature for the RDF group. Setting this option to No Change leaves the current write pacing setting. SRDF/A write pacing can only be activated when the SRDF/A session is active.
8. Do one of the following:
l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
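The ranges quoted in the three SRDF/A procedures above (DSE threshold 20-100%, minimum cycle time 1-60 seconds, session priority 1-64, pacing delay 1-1,000,000 usec, pacing threshold 1-99%) can be collected into one client-side check. A minimal illustrative sketch; the table and function are assumptions for the example, not part of any Unisphere API:

```python
# Documented ranges for the SRDF/A settings described above, stored as
# (minimum, maximum, default) tuples; None means no default is stated.
SRDFA_RANGES = {
    "dse_threshold_pct": (20, 100, 50),
    "min_cycle_time_sec": (1, 60, None),
    "session_priority": (1, 64, None),
    "pacing_delay_usec": (1, 1_000_000, 50_000),
    "pacing_threshold_pct": (1, 99, 60),
}

def check_srdfa_setting(name: str, value: int) -> bool:
    """Return True if 'value' is inside the documented range for 'name'."""
    low, high, _default = SRDFA_RANGES[name]
    return low <= value <= high
```

For example, a pacing delay of 50,000 usec (the default, 50 ms) passes the check, while a session priority of 65 is rejected.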

Swapping SRDF groups
Before you begin
l If the target (R2) volume is on a storage system running HYPERMAX OS 5977 or later, and the mode of the source (R1) volume is Adaptive Copy Write Pending, SRDF sets the mode to Adaptive Copy Disk.
l As a result of a swap operation, a cascaded R1 -> R21 -> R2 configuration can be created if any of the storage systems in the cascaded configuration is running HYPERMAX OS Q1 2015 SR or later.
When you swap the SRDF personality of the designated SRDF volumes, the source (R1) volumes become target (R2) volumes and the target (R2) volumes become source (R1) volumes.
To swap SRDF groups:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups.
3. Select an SRDF group, click the more actions icon, and select Swap Groups.
4. Select the mirror to refresh.
5. Do one of the following:
l Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
Setting consistency protection
To set consistency protection:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Storage Groups > SRDF or DATA PROTECTION > Device Groups > SRDF.
3. Select a group, click the more actions icon, and select Asynchronous > Set Consistency.
4. Select Enable or Disable.
5. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration (only applicable for device groups).
6. Click Advanced Options to set the advanced options. Select the advanced options and click OK.
7. Do one of the following:

l Expand Add to Job List and click Add to Job List Now to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
Deleting SRDF groups
To delete SRDF groups:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups.
3. Select the SRDF group and select Delete.
4. Optional: Click Advanced Options and select the Use Force check box. This forces the operation.
5. Optional: Click Advanced Options and select the Use SymForce check box. This forces the operation when the operation would normally be rejected.
6. Do one of the following:
l Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
Viewing SRDF groups
The SRDF SG list displays a notification if a capacity mismatch exists between R1 and R2 devices. The mismatch can be R1 > R2 or R1 < R2.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups to open the SRDF groups list view.
Use the SRDF groups list view to display and manage SRDF groups.
The following properties display:
SRDF Group —SRDF group number.
SRDF Group Label —SRDF group label, for example, Async, Metro, Witness.
Remote SRDF group —Remote RDF group number.
Type —Type of group, for example, Dynamic or Witness.
SRDF Mode —SRDF modes associated with the SRDF group.
Online —Indication if online.
Transmit Idle —Time the transmit cycle has been idle.
Volumes Count —Number of volumes in the group.
The following controls are available:
Details — Viewing SRDF group details on page 465

Create SRDF Group — Creating SRDF groups on page 457
Modify — Modifying SRDF groups on page 459
Create Pairs — Creating SRDF pairs on page 404
Delete — Deleting SRDF groups on page 464
Create SRDF Connection — Creating SRDF connections on page 403
Swap Groups — Swapping SRDF groups on page 463
SRDF/A Pacing Setting — Setting SRDF/A pace attributes on page 462
SRDF/A Setting — Setting SRDF/A group attributes on page 461
SRDF/A DSE Setting — Setting SRDF/A DSE attributes on page 461
Assign Dynamic Cache Partition — Assigning dynamic cache partitions on page 945
Viewing SRDF group details
To view SRDF group details:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups to open the SRDF groups list view.
3. Select the SRDF group and click Details to open its Details view.
The properties listed depend on the specifics of the storage system. Some or all of the following properties display:
SRDF Group Number —SRDF group number.
SRDF Group Label —SRDF group label.
SRDF Group Volumes —SRDF group volumes.
Director Identity —Director identifier(s).
Remote SRDF Group —Remote group number(s).
Remote Array ID —Remote storage system serial ID(s).
Remote Director Identity —Remote director identifier(s).
SRDF Modes —SRDF modes. Possible values are: N/A, Adaptive Copy, Synchronous, Asynchronous, Active, and Metro.
Prevent Auto Link Recovery —Indicates the state of preventing automatic data copy across SRDF links upon recovery.
Copy Jobs —Maximum number of SRDF copy jobs per SRDF group.
Prevent RAs Online Upon Power On —Indicates the state of preventing the SRDF directors from automatically coming back online with power on.
Link Domino —Sets the domino mode for the source (R1) volumes.
Link Config —Link configuration.
Director Config —Indicates the Fibre adapter type.
SRDF Group Configuration —RA group configuration. Possible values are: Dynamic, Static, Witness.

Link Limbo (sec) —Number of seconds (0-10) for the storage system to continue checking the local SRDF link status.
Minimum Cycle Time —Minimum cycle time (seconds) configured for this session.
Transmit Idle Time —Time the transmit cycle has been idle.
Transmit Idle Enabled —Whether SRDF/A Transmit Idle state is active for the SRDF group.
Dynamic Cache Partition Name —Cache partition name.
SRDF/A Mode —The SRDF/A mode. The status of the property can be Single-session, MSC, or N/A.
MSC Cleanup Required —Indicates if MSC cleanup is required. The status of the property can be Yes, No, or N/A.
SRDF/A Session Status —The SRDF/A session status. The status of the property can be Active, Inactive, or N/A.
SRDF/A Consistency Protection —Indicates if consistency protection is enabled. The status of the property can be Enabled, Disabled, or N/A.
SRDF/A DSE Status —Indicates if SRDF/A DSE is active.
SRDF/A DSE Autostart —Indicates if SRDF/A DSE is automatically enabled when an SRDF/A session is activated for the group.
SRDF/Metro —SRDF/Metro. Possible values are: Yes, No.
SRDF/Metro Witness Degraded —SRDF/Metro Witness Degraded. Possible values are: Yes, No.
SRDF/A DSE Threshold —Percentage of the storage system's write pending limit.
SRDF/A Write Pacing Status —Indicates if SRDF/A write pacing is active.
SRDF/A Write Pacing Delay —Maximum delay allowed for host I/O in seconds.
SRDF/A Write Pacing Threshold —Minimum cache percentage at which host write pacing will start.
Group Pacing Auto Start —Indicates if group pacing auto start is enabled or disabled on the SRDF group.
Device Pacing Supported —Indicates if SRDF/A device pacing is supported.
Group Level Pacing State —Indicates if group level write pacing is enabled or disabled.
Device Pacing Activated —Group-level pacing status of the SRDF/A session. The status of the feature can be Active, Inactive, or N/A.
SRDF Software Compression —Indicates if software compression is enabled or disabled on the SRDF group.
SRDF Single Round Trip —Indicates if single round trip is enabled or disabled on the SRDF group.
SRDF Hardware Compression —Indicates if hardware compression is enabled or disabled on the SRDF group.
SRDF Software Compression Support —Indicates if SRDF software compression is supported on the storage system.

SRDF Hardware Compression Support —Indicates if SRDF hardware compression is supported on the storage system.
Star Mode —Indicates if the SRDF group is in a Star configuration.
SQAR Mode —Indicates if the SRDF group is in a SQAR configuration.
Links are also provided to views for objects contained in and associated with the SRDF group. Each link is followed by a number, indicating the number of objects in the corresponding view. For example, clicking the number next to SRDF Group Volumes will open a view listing the volumes contained in the SRDF group.

Viewing SRDF protected device groups
The SRDF dashboard provides you with a single place to monitor and manage SRDF sessions on a storage system. This includes device group types R1, R2, and R21. See Managing remote replication sessions on page 402 for additional information.
Before you begin: SRDF requires Enginuity version 5876 or HYPERMAX OS 5977. The following configurations are not supported:
l An R21 or R22 SRDF device on a system running HYPERMAX OS 5977.
l A cascaded SRDF configuration containing a system running HYPERMAX OS 5977.
l A concurrent R22 configuration containing a system running HYPERMAX OS 5977.
To access the SRDF dashboard:
Procedure
1. Select the storage system.
2. Select Data Protection > Device Groups.
3. Click SRDF.
The following properties display:
l Device Group —Device group name.
l Standard —Number of standard volumes.
l BCV —Number of BCV volumes.
l State —Current state of device group.
l Group Type —Device group type.
l Group Valid —Indicates if the group is valid or invalid for SRDF management.
The following controls are available:
l Establish — Establishing SRDF pairs on page 421
l Split — Splitting SRDF pairs on page 436
l Suspend — Suspending SRDF pairs on page 436
l Restore — Restoring SRDF pairs on page 433
l Resume — Resuming SRDF links on page 429
l Failover — Failing over on page 422

l Failback — Failing back on page 423
l Swap — Swapping SRDF personalities on page 438
l Set SRDF/A — Setting SRDF/A controls to prevent cache overflow on page 431
l Set Consistency — Setting consistency protection on page 350
l Move — Moving SRDF pairs on page 408
l Invalidate — Invalidating R1/R2 volumes on page 424
l Ready — Making R1/R2 volumes ready on page 425
l Not Ready — Making R1/R2 volumes not ready on page 426
l R1 Update — Updating R1 volumes on page 438
l RW Enable — Read/write enabling R1/R2 volumes on page 428
l Write Disable — Read/write disabling R1/R2 volumes on page 429
l RW Disable R2s — Read/write disabling R2 volumes on page 427
l Refresh — Refreshing R1 or R2 volumes on page 430
l Set Mode — Setting SRDF mode on page 409
l Delete Pair — Deleting SRDF pairs on page 407

Resuming SRDF links
This procedure explains how to resume I/O traffic on the SRDF links for all remotely mirrored RDF pairs in a group.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > SRDF Groups.
3. Do the following, depending on whether you want to perform the operation at the group level or pair level:
Group level:
a. Select a group, click more, and select Resume to open the Resume dialog box.
b. Select the Use 2nd Hop option if including the second hop of a cascaded SRDF configuration.
c. Click Advanced Options to set the advanced SRDF session options. Select the advanced options and click OK.
d. Do one of the following:
l Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.
Pair level:

a. Select a group and open the SRDF pair list view.
b. Select one or more pairs, click more, and select Resume to open the Resume dialog box.
c. Click Advanced Options to set the advanced SRDF session options. Select the advanced options and click OK.
d. Do one of the following:
l Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, and click Run Now to perform the operation now.

Viewing SRDF group volumes
This procedure explains how to view the volumes in an SRDF group:
Procedure
1. Select the storage system.
2. Select Data Protection > SRDF Groups to open the SRDF groups list view.
3. Select the SRDF group and click Details to open its Details view.
4. Click the number next to SRDF Group Volumes to open the SRDF Volumes list view.
The following properties display:
Volumes —Local volume ID.
Configuration —SRDF configuration.
Remote Symmetrix —Remote storage system ID.
Remote SRDF Group —Remote SRDF group ID.
Target Volume —Target volume ID.
State —Session state of the pair.
Pair State —Volume pair state.
Remote Volume State —State of the remote volume.
SRDF Mode —SRDF copy type.

SRDF/A control actions
All of the following are variants of the Activate action; each entry lists the activate type, the write pacing type (in parentheses), and a description:
DSE (Write Pacing Type: N/A) —Activates the SRDF/A Delta Set Extension feature, which extends the available cache space by using device SAVE pools.
Write Pacing (Group) —Activates SRDF/A write pacing at the group level. This feature extends the availability of SRDF/A by preventing conditions that result in cache overflow on both the R1 and R2 sides. Group level write pacing is supported on Symmetrix systems running Enginuity 5876 and higher.
Write Pacing (Group & Volume) —Activates SRDF/A write pacing at the group level and the volume level.
Write Pacing (Volume) —Activates SRDF/A write pacing at the volume level. Volume write pacing is supported on Symmetrix systems running Enginuity 5876 and higher.
Write Pacing Exempt (N/A) —Activates write pacing exempt. Write pacing exempt allows you to remove a volume from write pacing.

RDFA flags
Flag —Status
(C)onsistency —X = Enabled, . = Disabled, - = N/A
(S)tatus —A = Active, I = Inactive, - = N/A
(R)DFA Mode —S = Single-session, M = MSC, - = N/A
(M)sc Cleanup —C = MSC Cleanup required, - = N/A
(T)ransmit Idle —X = Enabled, . = Disabled, - = N/A
(D)SE Status —A = Active, I = Inactive, - = N/A
DSE (A)utostart —X = Enabled, . = Disabled, - = N/A
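The flag legend above can be decoded programmatically. The sketch below is illustrative only: the field order and per-character legends are taken from the RDFA flags table, but the helper itself (its name and its 7-character input format) is an assumption, not a Unisphere or Solutions Enabler API.

```python
# Decode a 7-character RDFA flag string, one character per column
# (C, S, R, M, T, D, A). Field order and legends follow the table above;
# this helper is a hypothetical convenience, not part of Unisphere.
RDFA_FLAG_FIELDS = [
    ("Consistency",   {"X": "Enabled", ".": "Disabled", "-": "N/A"}),
    ("Status",        {"A": "Active", "I": "Inactive", "-": "N/A"}),
    ("RDFA Mode",     {"S": "Single-session", "M": "MSC", "-": "N/A"}),
    ("MSC Cleanup",   {"C": "MSC Cleanup required", "-": "N/A"}),
    ("Transmit Idle", {"X": "Enabled", ".": "Disabled", "-": "N/A"}),
    ("DSE Status",    {"A": "Active", "I": "Inactive", "-": "N/A"}),
    ("DSE Autostart", {"X": "Enabled", ".": "Disabled", "-": "N/A"}),
]

def decode_rdfa_flags(flags):
    """Map each flag character to its readable value, e.g. "XAS-XAX"."""
    if len(flags) != len(RDFA_FLAG_FIELDS):
        raise ValueError("expected exactly 7 flag characters")
    return {name: legend[ch]
            for ch, (name, legend) in zip(flags, RDFA_FLAG_FIELDS)}
```

For example, decoding "XAS-XAX" reports consistency enabled, an active single-session SRDF/A group with no MSC cleanup pending, transmit idle enabled, and DSE active with autostart enabled.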

SRDF group modes
The following values can be set for SRDF groups:
Synchronous —Provides the host access to the source (R1) volume on a write operation only after the storage system containing the target (R2) volume acknowledges that it has received and checked the data.
Asynchronous —The storage system acknowledges all writes to the source (R1) volumes as if they were local volumes. Host writes accumulate on the source (R1) side until the cycle time is reached and are then transferred to the target (R2) volume in one delta set. Write operations to the target volume can be confirmed when the current SRDF/A cycle commits the data to disk by successfully de-staging it to the R2 storage volumes. For storage systems running Enginuity 5876, you can put an RDF relationship into Asynchronous mode when the R2 volume is a snap source volume.
Semi Synchronous —The storage system containing the source (R1) volume informs the host of successful completion of the write operation when it receives the data. The RDF (RA) director transfers each write to the target (R2) volume as the RDF links become available. The storage system containing the target (R2) volume checks and acknowledges receipt of each write.
AC WP Mode On —(Adaptive copy write pending) The storage system acknowledges all writes to the source (R1) volume as if it was a local volume. The new data accumulates in cache until it is successfully written to the source (R1) volume and the remote director has transferred the write to the target (R2) volume.
AC Disk Mode On —For situations requiring the transfer of large amounts of data without loss of performance. Use this mode to temporarily transfer the bulk of your data to target (R2) volumes, then switch to synchronous or semi synchronous mode.
Domino Mode On —Ensures that the data on the source (R1) and target (R2) volumes are always in sync. The storage system forces the source (R1) volume to a Not Ready state to the host whenever it detects that one side in a remotely mirrored pair is unavailable.
Domino Mode Off —The remotely mirrored volume continues processing I/Os with its host, even when an SRDF volume or link failure occurs.
AC Mode Off —Turns off the AC disk mode.
AC Change Skew —Modifies the adaptive copy skew threshold. When the skew threshold is exceeded, the remotely mirrored pair operates in the predetermined SRDF state (synchronous or semi-synchronous). As soon as the number of invalid tracks drops below this value, the remotely mirrored pair reverts back to the adaptive copy mode.
(R2 NR If Invalid) On —Sets the R2 device to Not Ready if there are invalid tracks.
(R2 NR If Invalid) Off —Turns off the (R2 NR If Invalid) On mode.

Understanding RecoverPoint
RecoverPoint provides block-level continuous data protection and continuous remote replication for on-demand protection and recovery at any point in time, and enables you to implement a single, unified solution to protect and/or replicate data across heterogeneous servers and storage.

RecoverPoint operations on Unisphere require Enginuity 5876 on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.

Tagging and untagging volumes for RecoverPoint (storage group level)
Before you begin
l Volumes that are part of an RDF pair cannot be tagged for RecoverPoint.
l RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the Symmetrix system.
l This feature is not supported on storage systems running HYPERMAX OS 5977.
This procedure explains how to tag (enable) or untag (disable) volumes for RecoverPoint. Enabling volumes makes them accessible to the RecoverPoint appliance.
Procedure
1. Select the storage system.
2. Select Storage > Storage Groups.
3. Do one of the following:
l To tag the storage group, select it, click more, and select Tag for RecoverPoint.
l To untag the storage group, select it, click more, and select Untag for RecoverPoint.
4. Click OK.

Tagging and untagging volumes for RecoverPoint (volume level)
Before you begin
l Volumes that are part of an RDF pair cannot be tagged for RecoverPoint.
l RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system.
l This feature is not supported on storage systems running HYPERMAX OS 5977.
This procedure explains how to tag (enable) or untag (disable) volumes for RecoverPoint. Enabling volumes makes them accessible to the RecoverPoint appliance.
Procedure
1. Select the storage system.
2. Select Storage > Volumes.
3. In the All Volumes panel, expand the type of volume to tag or untag.
4. Do one of the following:
l To tag volumes, select the volumes, click more, and select Tag for RecoverPoint.

l To untag volumes, select the volumes, click more, and select Untag for RecoverPoint.
5. Click OK.

Untagging RecoverPoint tagged volumes
Before you begin
This feature is not supported on storage systems running HYPERMAX OS 5977.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Open Replicator.
3. Click the RecoverPoint Volumes tab.
Opens the RecoverPoint Volumes view.
4. Select a volume and click Untag.
5. Click OK.

Viewing RecoverPoint copies
Before you begin
l RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
l To perform this operation, you must be a monitor or higher.
This procedure explains how to view the RecoverPoint copies for a particular consistency group.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems.
Opens the RecoverPoint list view.
3. Select a RecoverPoint system and click the number next to Consistency Groups.
Opens the Consistency Group list view.
4. Select a RecoverPoint consistency group and click the number next to Copies.
Opens the Copies list view, which lists the copies for the selected consistency group.
The following properties display:
l Copy Name —Name of copy.
l State —State of the copy. Valid values are Enabled or Suspended.
l Copy Size (GB) —Size of the copy.
l Copy Role —Current role of the copy. Valid values are Active or Replica.
l RTO (MB) —Recovery time objective.

l Journal State —Indicates the state of the journal. Valid values include Locked and Distributing.
l Journal Size (GB) —Size of the journal, in GB.
The following controls are available:
l — Viewing RecoverPoint copy details on page 474

Viewing RecoverPoint copy details
Before you begin
l RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
l To perform this operation, you must be a monitor or higher.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems.
Opens the RecoverPoint list view.
3. Select a RecoverPoint system and click the number next to Consistency Groups.
Opens the Consistency Group list view.
4. Select a RecoverPoint consistency group and click the number next to Copies.
5. Select a copy and click Details.
Opens the copy's details view.
The following properties display:
l Name —Name of copy.
l State —State of the copy. Valid values are Enabled or Suspended.
l Role —Current role of the copy. Valid values are Active or Replica.
l Copy Size —Size of the copy.
l Journal Size —Size of the journal, in GB.
l Journal State —Indicates the state of the journal. Valid values include Locked and Distributing.
l Journal Volume Name —Name of the journal volume.
l Cluster —Name of the cluster.
l RTO (seconds) —Recovery time objective, in seconds.
l Journal Size Limit —Journal size limit.
l AllowDistribOfLargeSnaps —Allow distribution of large snapshots.
l AllowSymmWithOneRPA —Allow storage system with one RPA.
l ActivePrimaryRPA —Active primary RPA.
l FastForwardBound —Fast forward bound.

l NumCopySplitters —Number of copy splitters.
l NumCopyVolumes —Number of copy volumes.
l NumJournalVolumes —Number of journal volumes.
l PhoenixDevices —Phoenix devices.
l TspWritesCleared —TSP writes cleared.
l UserSnapshot —User snapshot.
l Production Copy —Production copy.
l Volumes —Number of associated volumes.
l Copy Capacity (GB) —Capacity of the copy, in GB.

Viewing RecoverPoint sessions
Procedure
1. Select a storage system.
2. Select DATA PROTECTION > Open Replicator.
3. Click the RecoverPoint Sessions tab.
4. Use the RecoverPoint Sessions list view to view RecoverPoint sessions on the storage system.
The following properties display:
Cluster name —Session name.
Control volume —Control volume name.
Remote volume —Remote volume name.
Status —Session status.
Protected Tracks —Number of protected tracks.
The following controls are available:
— Viewing RecoverPoint session details on page 475

Viewing RecoverPoint session details
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Open Replicator.
3. Click the RecoverPoint Sessions tab.
4. Select a session and click Details.
Opens the session details view.
The following properties display:
l Cluster Name —Session name.
l Control Volume —Control volume name.
l Remote Volume —Remote volume name.
l Remote Volume Specification —Indicates the remote volume name format.

l Status —Session status.
l Copy Pace —Copy pace value.
l Protected Tracks —Number of protected tracks.

Viewing RecoverPoint storage groups
Before you begin
RecoverPoint operations on Unisphere require Enginuity 5876. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
To perform this operation, you must be a monitor or higher.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems.
3. Select a RecoverPoint system and click the number next to Consistency Groups to open the Consistency Group list view.
4. Select a RecoverPoint consistency group and click the number next to Copies to open the Copies list view.
5. Select a copy and click Details to open the details view.
6. In the properties panel, click the number next to Storage Groups.
The following information displays:
Name —Name of the storage group.
Volumes —Number of volumes in the group.
Masking views —Number of associated masking views.
FAST_Policy —FAST policy associated with the RecoverPoint storage group.
Capacity —Capacity of the storage group.
Child SG —For parent storage groups, this field displays the number of child storage groups; otherwise, this field displays zero.

Viewing RecoverPoint tagged volumes
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Open Replicator.
3. Select the RecoverPoint Volumes tab to open the RecoverPoint Volumes list view.
The following properties display:
Name —Volume name.
Type —Volume type.
Status —Volume status.
Reserved —Indicates if the volume is reserved.
Capacity (GB) —Volume capacity in GB.
Emulation —Volume emulation type.

The following controls are available:
— Viewing RecoverPoint tagged volume details on page 477
Untag — Tagging and untagging volumes for RecoverPoint (volume level) on page 472

Viewing RecoverPoint tagged volume details
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Open Replicator.
3. Select the RecoverPoint Volumes tab to open the RecoverPoint Volumes list view.
4. Select a volume and click Details to open its Details view.
This view allows you to view the volume details. The following properties display:
Masking Info —Number of masking groups.
Storage Groups —Number of storage groups.
FBA Front End Paths —Number of FBA front end paths.
RDF Info —Number of SRDFs.
Volume Name —Volume name.
Physical Name —Physical name.
Volume Identifier —Volume identifier.
Type —Volume configuration.
Encapsulated Volume —Whether the volume is encapsulated. Relevant for external disks only.
Encapsulated WWN —World Wide Name for the encapsulated volume. Relevant for external disks only.
Encapsulated Device Flag —Device flag for the encapsulated volume. Relevant for external disks only.
Encapsulated Device Array ID —Array ID for the encapsulated volume. Relevant for external disks only.
Status —Volume status.
Reserved —Whether the volume is reserved.
Capacity (GB) —Volume capacity in GBs.
Capacity (MB) —Volume capacity in MBs.
Capacity (CYL) —Volume capacity in cylinders.
Emulation —Volume emulation.
Symmetrix ID —Storage system on which the volume resides.
Symmetrix Vol ID —Storage volume name/number.
HP Identifier Name —User-defined volume name (1-128 alphanumeric characters), applicable to HP-mapped devices. This value is mutually exclusive of the VMS ID.

VMS Identifier Name —Numeric value (not to exceed 32766) with relevance to VMS systems. This value is mutually exclusive of the HP ID.
Nice Name —Nice name generated by the storage system.
WWN —World Wide Name of the volume.
External ID WWN —External ID World Wide Name of the volume.
DG Name —Name of the device group in which the volume resides, if applicable.
CG Name —Name of the composite group in which the volume resides, if applicable.
Attached BCV —Defines the attached BCV to be paired with the standard volume.
Attached VDEV TGT Volume —Volume to which this source volume would be paired.
RDF Type —RDF configuration.
Geometry - Type —Method used to define the volume's geometry.
Geometry - Number of Cylinders —Number of cylinders, as defined by the volume's geometry.
Geometry - Sectors per Track —Number of sectors per track, as defined by the volume's geometry.
Geometry - Tracks per Cylinder —Number of tracks per cylinder, as defined by the volume's geometry.
Geometry - 512 Block Bytes —Number of 512-byte blocks, as defined by the volume's geometry.
Geometry Capacity (GB) —Geometry capacity in GBs.
Geometry Limited —Indicates whether an encapsulated volume has a Symmetrix cylinder size larger than the reported user-defined geometry.
GCM —Indicator of GCM.
SSID —Subsystem ID.
Capacity (Tracks) —Capacity in tracks.
SA Status —Volume SA status.
Host Access Mode —Host access mode.
Pinned —Whether the volume is pinned.
RecoverPoint Tagged —Whether or not the volume is tagged for RecoverPoint.
Service State —Service state.
Defined Label Type —Type of user-defined label.
Dynamic RDF Capability —RDF capability of the volume.
Mirror Set Type —Mirror set for the volume and the volume characteristic of the mirror.
Mirror Set DA Status —Volume status information for each member in the mirror set.
Mirror Set Invalid Tracks —Number of invalid tracks for each mirror in the mirror set.

Priority QoS —Priority value assigned to the volume. Valid values are 1 (the highest) through 16 (the lowest).
Dynamic Cache Partition Name —Name of the cache partition.
XtremSW Cache Attached —Whether the volume is currently controlled by cache cards.
Compressed Size (GB) —Size of the compressed volume.
Compressed Ratio (%) —Percentage of volume compressed.
Compressed Size Per Pool (GB) —Size of the compressed pool.
Optimized Read Miss —Cacheless read miss status.
System Managed —The storage system determines the appropriate optimized read miss mode.

Protecting storage groups using RecoverPoint
Before you begin
l RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
l To perform this operation, you must be a StorageAdmin.
l The storage group being replicated must be masked to the host.
l The storage group being replicated must not contain any volumes that are already tagged for RecoverPoint.
l Connectivity to the RecoverPoint system/cluster is available.
l RecoverPoint 4.1 is set up and operational. For each cluster in the setup, gatekeepers and repository volumes must be configured in their relevant masking view. Unisphere uses a default journal masking view naming convention.
l Depending on the options selected as part of the Protect Storage Group wizard and the existing configuration, values for some options might populate automatically.
Procedure
1. Select the storage system.
2. Select STORAGE > Storage Groups.
3. Select the storage group and click Protect.
4. On the Select Technology page, select Remote Replication using RecoverPoint.
5. Click NEXT.
6. On the Configure RecoverPoint page, specify the following information:
l RecoverPoint System —RecoverPoint system.
l RecoverPoint Group Name —Name of the RecoverPoint group.
l RecoverPoint Cluster —RecoverPoint cluster.
l Production Name —Name of the production.
l Data Initiator Group —Data initiator group.
l Journal Thin Pool —Journal thin pool.

l Journal Port Group —Journal port group.
l Journal Initiator Group —Journal initiator group.
7. Click NEXT.
8. On the Add Copies page, specify the following information:
l RecoverPoint Cluster —RecoverPoint cluster.
l Copy Name —Name of the RecoverPoint copy.
l Mode —Specify whether the mode is Synchronous or Asynchronous.
l Array —Storage system.
l Target Storage Group —Specify whether the RecoverPoint copy targets a new storage group or an existing group.
l Copy Storage Group —Name of the storage group to be copied.
l Data Thin Pool —Name of the data thin pool.
l Data Port Group —Name of the data port group.
l Journal Thin Pool —Name of the journal thin pool.
l Journal Port Group —Name of the journal port group.
9. Click Add Copy.
Lists the copy in the Copy Summary table.
10. Click NEXT.
11. On the FINISH page, verify your selections. To change any of them, click BACK. Some changes may require you to make additional changes to your configuration.
12. Do one of the following:
l Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
l Expand Add to Job List, then click Run Now to perform the operation now.

Viewing RecoverPoint volumes
Before you begin
RecoverPoint operations on Unisphere require Enginuity 5876. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
To view information on RecoverPoint tagged volumes, refer to Viewing RecoverPoint tagged volumes on page 476.
To perform this operation, you must be a monitor or higher.
This procedure explains how to view the RecoverPoint volumes for a particular consistency group.
To view RecoverPoint volumes:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems to open the RecoverPoint Systems view.

3. Select a RecoverPoint system and click Details.
4. Click the number next to Consistency Groups.
5. Select a RecoverPoint consistency group and click Details.
6. Click the number next to Replication Sets.
7. Select the replication set and click Details.
8. Click the number next to Volumes.
The following properties display:
Volume Name —Name of the volume.
Capacity (GB) —Capacity, in GB, of the volume.
Replication Set —RecoverPoint replication set.
Copy Name —RecoverPoint copy.
Storage Type —Type of storage system.
Array ID —Array ID.
Vendor —Vendor of the volume.
Product Name —Storage product installed.

Viewing RecoverPoint clusters
Before you begin
RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems.
Opens the RecoverPoint Systems list view.
3. Select a RecoverPoint system.
4. Click the number next to Clusters.
Opens the RecoverPoint Clusters table view.
The following information displays:
l Cluster Name —Name of the cluster.
l RecoverPoint Appliances —Number of RecoverPoint appliances.
l IPv4 Address —IP address, in IPv4 format. If an IPv6 address is used, this column has the value "N/A".
l IPv6 Address —IP address, in IPv6 format. If an IPv4 address is used, this column has the value "N/A".
l RPA Type —RecoverPoint appliance type.
l Maintenance Mode —Maintenance mode in use.
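The "N/A" convention for the two address columns above can be sketched as a small helper. This is an illustrative assumption only (the function and its field names are hypothetical; Unisphere computes these column values itself):

```python
def cluster_ip_columns(ipv4=None, ipv6=None):
    """Return the (IPv4 Address, IPv6 Address) column values for a
    RecoverPoint cluster row; whichever address family is not in use
    displays "N/A". Hypothetical helper, not a Unisphere API.
    """
    return (ipv4 or "N/A", ipv6 or "N/A")
```

For example, a cluster reachable only over IPv4 at 10.247.12.1 would display ("10.247.12.1", "N/A") in the two columns.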

Viewing RecoverPoint cluster details
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems.
Opens the RecoverPoint Systems list view.
3. Select a RecoverPoint system and click the number next to Clusters.
Opens the cluster list view.
4. Select a cluster and click Details.
Opens the cluster details view.
The following properties display:
l Cluster Name —Name of the cluster.
l IPv4 Address —IP address, in IPv4 format. If an IPv6 address is used, this column has the value "N/A".
l IPv6 Address —IP address, in IPv6 format. If an IPv4 address is used, this column has the value "N/A".
l RecoverPoint Appliances —Number of RecoverPoint appliances.
l RecoverPoint Splitters —Number of RecoverPoint splitters.
l Software Serial ID —Serial ID of the software.
l RPA Type —RecoverPoint appliance type.
l Timezone —Time zone.
l Maintenance Mode —Maintenance mode in use.
l Internal Cluster Name —Internal name of the cluster.

Viewing RecoverPoint splitters
Before you begin
RecoverPoint operations on Unisphere require Enginuity 5876 on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
This procedure explains how to view RecoverPoint splitters.
To view RecoverPoint splitters:
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems.
Opens the RecoverPoint Systems list view.
3. Select a RecoverPoint system.
4. Click the number next to Clusters.
Opens the cluster list view.
5. Select a cluster and click Details.

Opens the cluster details view.
6. Click the number next to RecoverPoint Splitters to open the Splitters list view.
The following information displays:
Name —Name of the splitter.
Array ID —Array ID of the splitter.
Array Type —Array type of the splitter.
Status —Status of the splitter.
Attached RPA Cluster —Number of attached clusters.

Viewing RecoverPoint appliances
Before you begin
RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems.
Opens the RecoverPoint Systems list view.
3. Select a RecoverPoint system.
4. Click the number next to Clusters.
Opens the cluster list view.
5. Select a cluster and click Details.
Opens the cluster details view.
6. Click the number next to RecoverPoint Appliances.
Opens the RecoverPoint Appliances view and displays the following information:
l Name —Name of the RecoverPoint appliance.
l Status —Status of the RecoverPoint appliance.
l WAN (IP) —Wide Area Network (WAN) IP address.
l Management IPv4 —IP address, in IPv4 format.
l Local Fibre Connectivity —Local RPA Fibre connectivity.
l Remote Fibre Connectivity —Remote RPA Fibre connectivity.

RecoverPoint systems
Managing RecoverPoint discovery
To discover a RecoverPoint system, see Discovering RecoverPoint Systems on page 484.
Updating RecoverPoint discovery information
To update RecoverPoint discovery information, see Updating RecoverPoint discovery information on page 484.
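The discovery operation referenced above collects a system name, an IPv4 address, a port, and credentials. A client-side check of those inputs might look like the following sketch. The field names mirror the Discover RecoverPoint System dialog, but the validation rules themselves (non-empty fields, a 1-65535 port, matching passwords) are reasonable assumptions for illustration, not documented Unisphere behavior:

```python
def validate_discovery_input(system_name, system_ipv4, port,
                             username, password, confirm_password):
    """Return a list of problems with the discovery inputs (empty = OK).

    Hypothetical client-side checks only; Unisphere performs its own
    validation when the dialog is submitted.
    """
    problems = []
    if not system_name:
        problems.append("System Name is required")
    # Minimal dotted-quad check for the System IPv4 field.
    octets = system_ipv4.split(".")
    if len(octets) != 4 or not all(
            o.isdigit() and 0 <= int(o) <= 255 for o in octets):
        problems.append("System IPv4 must be a valid IPv4 address")
    if not (1 <= port <= 65535):
        problems.append("Port must be between 1 and 65535")
    if not username:
        problems.append("System Username is required")
    if password != confirm_password:
        problems.append("Passwords do not match")
    return problems
```

Calling the helper with complete, consistent values returns an empty list; each missing or malformed field adds one entry to the result.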

Discovering RecoverPoint Systems

Before you begin
• RecoverPoint operations on Unisphere require Enginuity 5876 on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
• This operation requires StorageAdmin privileges.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems.
3. Click Create.
4. In the Discover RecoverPoint System dialog box, type the following information:
   • System Name —RecoverPoint system name.
   • System IPv4 —System IP address, in IPv4 format.
   • Port —System port number.
   • System Username —System username.
   • System Password —System password.
   • Confirm System Password —Re-enter system password.
5. Click OK.

Deleting RecoverPoint systems

Before you begin
RecoverPoint operations on Unisphere require Enginuity 5876 on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems.
3. Select a RecoverPoint system and click Delete RecoverPoint System.
4. Click OK.

Updating RecoverPoint discovery information

Before you begin
• RecoverPoint operations on Unisphere require Enginuity 5876 on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
• This operation requires StorageAdmin privileges.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems.
3. Select a RecoverPoint system.

4. Click Update Discovery Information.
5. Type the following information:
   • Port —System port number.
   • System Username —System username.
   • System Password —System password.
   • Confirm System Password —Re-enter system password.
6. Click OK.

Viewing RecoverPoint systems

Before you begin
RecoverPoint operations on Unisphere require Enginuity 5876 on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.

This procedure explains how to view previously discovered RecoverPoint systems.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems to open the RecoverPoint Systems view.
   The following properties display:
   • System Name —Name of the system.
   • IPv4 Address —IP address of the system.
   • Port —Port of the system.
   • Clusters —Number of RPA clusters in the system.
   • Consistency Groups —Number of consistency groups associated with the system.
   • Error Events —Number of events reported for the system.
   • Error Alerts —Number of alerts reported for the system.
   The following controls are available:
   • Viewing RecoverPoint system details on page 485
   • Create — Discovering RecoverPoint Systems on page 484
   • Update Discovery Information — Updating RecoverPoint discovery information on page 484
   • Delete RecoverPoint System — Deleting RecoverPoint systems on page 484

Viewing RecoverPoint system details

Before you begin
RecoverPoint operations on Unisphere require Enginuity 5876. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems to open the RecoverPoint Systems view.
3. Select the system and click the details icon to open its Details view.
   The following properties display:
   • RecoverPoint Systems —Name of the system.
   • Consistency Groups —Number of consistency groups associated with the system.
   • Clusters —Number of RPA clusters in the system.
   • Critical Alerts Count —Number of critical alerts.
   • OK Alerts Count —Number of OK alerts.
   • Warning Alerts Count —Number of warning alerts.
   • Critical Events Count —Number of critical events.
   • Warning Events Count —Number of warning events.
   • Events Error Count —Number of event errors.

RecoverPoint consistency groups

Viewing RecoverPoint consistency groups

Before you begin
• RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
• To perform this operation, you must be a monitor or higher.

This procedure explains how to view the consistency groups used to protect the RecoverPoint volumes.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint.
   Opens the RecoverPoint list view.
3. Select a RecoverPoint system and click the number next to Consistency Groups.
   Opens the Consistency Group list view, which lists the consistency groups on the selected RecoverPoint system.
   The following properties display:
   • Consistency Group —Consistency group name.
   • Group Enabled —Consistency group state.
   • Link States —Lists the states of associated links.
   • Source Capacity (GB) —Source capacity in GB.
   • Primary RPA —Primary RecoverPoint appliance number.

   • Production Copy —Name of the production copy.
   The following controls are available:
   • Viewing RecoverPoint consistency group details on page 487
   • Copies — Viewing RecoverPoint copies on page 473
   • Replication Sets — Viewing RecoverPoint replication sets on page 488
   • Active Links — Viewing RecoverPoint links on page 489

Viewing RecoverPoint consistency group details

Before you begin
• RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
• To perform this operation, you must be a monitor or higher.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint.
   Opens the RecoverPoint list view.
3. Select a RecoverPoint system and click the number next to Consistency Groups.
4. Select a consistency group and click the details icon.
   Displays the properties of that consistency group:
   • Group State —State of the group.
   • Group Setting —Group setting.
   • Production Copy —Name of the production copy.
   • Copies —Number of associated copies.
   • Replication Sets —Number of associated replication sets.
   • Active Links —Number of active links.
   • Passive Links —Number of passive links.
   • Link States —Lists the states of associated links.
   • Distributed Group —Distributed group.
   • Managed by RecoverPoint —Indicates if the consistency group is managed by RecoverPoint.
   • Read Only Replica Volumes —Read-only replica volumes.

RecoverPoint replication sets

Viewing RecoverPoint replication sets

Before you begin
• RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
• To perform this operation, you must be a monitor or higher.

This procedure explains how to view the RecoverPoint replication sets for a particular consistency group.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint.
   Opens the RecoverPoint list view.
3. Select a RecoverPoint system and click the number next to Consistency Groups.
   Opens the Consistency Group list view.
4. Select a RecoverPoint consistency group and click the number next to Replication Sets.
   Opens the Replication Sets list view, which lists replication sets associated with the selected consistency group.
   The following properties display:
   • Name —Name of the replication set.
   • Capacity (GB) —Source capacity, in GB.
   • Production Volume Capacity (GB) —Production volume capacity, in GB.
   • Volumes —Number of associated volumes.
   The following control is available:
   • Viewing RecoverPoint replication set details on page 488

Viewing RecoverPoint replication set details

Before you begin
• RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
• To perform this operation, you must be a monitor or higher.

This procedure explains how to view the details of a RecoverPoint replication set.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint.
   Opens the RecoverPoint list view.

3. Select a RecoverPoint system and click the number next to Consistency Groups.
   Opens the Consistency Group list view.
4. Select a RecoverPoint consistency group and click the number next to Replication Sets.
5. Select a replication set and click the details icon.
   The following properties display:
   • Name —Name of the replication set.
   • Volumes —Number of associated volumes.
   • Volume Name —Name of associated volume.
   • Production Volume Capacity (GB) —Production volume capacity, in GB.
   • Capacity (GB) —Source capacity, in GB.

RecoverPoint links

Viewing RecoverPoint links

Before you begin
• RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
• To perform this operation, you must be a monitor or higher.

This procedure explains how to view the RecoverPoint links for a particular consistency group.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint Systems.
   Opens the RecoverPoint Systems list view.
3. Select a RecoverPoint system and click the number next to Consistency Groups.
   Opens the Consistency Group list view.
4. Select a RecoverPoint consistency group and click the number next to Links.
   Opens the Links list view, which lists the links associated with the selected consistency group.
   The following properties display:
   • Name —Name of the RecoverPoint link.
   • Transfer Enabled —Indicates if the transfer state is enabled for this RecoverPoint link.
   • Role —Current role of the copy. Valid values are Active or Replica.
   • Link State —Indicates if the link state is active or paused.
   • Local Link —Indicates if the link is local.
   • Protection Mode —Protection mode.
   • RPO (seconds) —Recovery point objective, in seconds.

   The following control is available:
   • Viewing RecoverPoint link details on page 490

Viewing RecoverPoint link details

Before you begin
• RecoverPoint operations on Unisphere require Enginuity 5876 or higher on the storage system. RecoverPoint operations are not supported on storage systems running HYPERMAX OS 5977 or higher.
• To perform this operation, you must be a monitor or higher.

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > RecoverPoint.
   Opens the RecoverPoint list view.
3. Select a RecoverPoint system and click the number next to Consistency Groups.
   Opens the Consistency Group list view.
4. Select a RecoverPoint consistency group and click the number next to Links.
5. Select a link and click the details icon.
   Opens the link's details view. The following properties display:
   • Name —Name of the RecoverPoint link.
   • Transfer State —Indicates if the transfer state is enabled for this RecoverPoint link.
   • Link State —Indicates if the link state is active or paused.
   • Local —Indicates if the link is local.
   • RPO (seconds) —Recovery point objective, in seconds.
   • First Copy —First copy.
   • Second Copy —Second copy.
   • Protection Mode —Protection mode.
   • Replication Over WAN —Indicates if replication over WAN is supported.
   • WAN Compression —Specifies what WAN compression, if any, is being used.
   • Bandwidth Limit —Bandwidth limit.
   • Deduplication —Specifies if deduplication is enabled.
   • Snapshot Granularity —Snapshot granularity.

Creating Open Replicator copy sessions

Before you begin
When the ORS control volumes are on a storage system running HYPERMAX OS 5977 or higher, the following session options cannot be used:

• Push
• Differential
• Precopy

There are many rules and limitations for running Open Replicator sessions. Refer to the Solutions Enabler Migration CLI Product Guide before creating a session. For a quick reference, refer to Open Replicator session options.

Procedure
1. Select the storage system.
2. Select Data Protection > Open Replicator > Open Replicator SAN View.
3. Filter the items displayed in the Filtered LUNs panel by selecting items within the Control Ports, Remote Ports, and Remote Volumes panels.
4. Select one or more volumes within the Filtered LUNs panel and click Create.
5. Click Copy Session.
6. Select a Copy Direction and Copy Operation.
7. Click Next.
   The Source - Remote Volumes list shows the remote volumes from the Open Replicator remote volumes list view. The Target - Control Volumes list shows all the control volumes that can be paired with the remote volumes. For a cold push session, one control volume can concurrently push data to up to 16 remote volumes. For cold pull, hot push, and hot pull sessions, only one control volume can push/pull to one remote device.
8. Select a remote volume and target volume, then click Add Pair.
   If the pair is valid, it is added to the Volume Pairs list.
9. Click Remove Pair to edit the Volume Pairs list.
10. Click Next.
11. Enter a Session Name.
12. Enter a Copy Pace value (0 = fastest to 9 = slowest). With offline copying, there is a slight pause between each track write. You can speed up a copy operation by reducing or eliminating this pause: while the session is in the CopyInProgress or CopyOnAccess state, set a pace value lower than the default of 5. Setting the copy pace to 0 eliminates this pause. This feature is not supported when the ORS control volume is on a storage system running HYPERMAX OS 5977.
13. Select the Open Replicator session options and click Next.
14. Review the session Summary and click Finish to create the session, or click Back to edit session options.
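The pairing rules above (up to 16 remote volumes per control volume for a cold push; 1:1 for cold pull, hot push, and hot pull) can be checked before submitting a session. The following is a minimal illustrative sketch of that validation logic, not part of any Dell EMC API; the function and parameter names are invented for this example.

```python
# Illustrative sketch only: models the ORS pairing rules described above.
# A cold push allows up to 16 remote volumes per control volume; cold pull,
# hot push, and hot pull sessions are limited to one remote per control.

MAX_COLD_PUSH_REMOTES = 16

def validate_pairs(pairs, operation, temperature):
    """pairs: list of (control_volume, remote_volume) tuples.
    operation: 'push' or 'pull'; temperature: 'hot' or 'cold'.
    Returns (ok, message)."""
    fanout = {}
    for control, remote in pairs:
        fanout.setdefault(control, set()).add(remote)
    # Only a cold push may fan out beyond one remote volume per control.
    limit = MAX_COLD_PUSH_REMOTES if (operation, temperature) == ("push", "cold") else 1
    for control, remotes in fanout.items():
        if len(remotes) > limit:
            return False, f"{control}: {len(remotes)} remote volumes exceeds limit of {limit}"
    return True, "OK"
```

For example, a cold push pairing one control volume with 16 remotes passes, a 17th remote fails, and any hot session with two remotes on one control fails.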
Activating Open Replicator session

Before you begin
The copy session must be in a created or recreated state before you can activate it.

Procedure
1. Select the storage system.

2. Select Data Protection > Open Replicator > Open Replicator Sessions to open the Open Replicator Sessions list view.
3. Select a session and click Activate to open the Activate Session dialog box.
4. Select a copy option.
   Refer to Open Replicator session options for session copy and control options.
5. Click OK.

Recreating Open Replicator sessions

Before you begin
Recreate operations are not supported when the ORS control volume is on a storage system running HYPERMAX OS 5977.

Procedure
1. Select the storage system.
2. Select Data Protection > Open Replicator > Open Replicator Sessions.
3. Select a session and click Recreate.
4. Optional: Select the PreCopy or Force checkbox, or both checkboxes.
5. Click OK.

Restoring Open Replicator sessions

Before you begin
• The restore operation restores the copy session back to the control volume by pulling back only the changed tracks from the remote volume. The session must have been created with differential copying, and must be in the copied state. Hot or cold differential push sessions can be restored.
• Restore operations are not supported when the ORS control volume is on a storage system running HYPERMAX OS 5977.

Procedure
1. Select the storage system.
2. Select Data Protection > Open Replicator > Open Replicator Sessions.
3. Select a session and click Restore.
4. Select any number of the available options.
   Refer to Open Replicator session options for session control options.
5. Click OK.

Renaming Open Replicator sessions

Before you begin
Rename operations are not supported when the ORS control volume is on a storage system running HYPERMAX OS 5977.

Procedure
1. Select the storage system.
2. Select Data Protection > Open Replicator > Open Replicator Sessions.
3. Select a session and click Rename.

4. Type a new name for the session.
5. Click OK.

Removing Open Replicator sessions

Before you begin
Removing Open Replicator sessions is not supported when the ORS control volume is on a storage system running HYPERMAX OS 5977.

Procedure
1. Select the storage system.
2. Select Data Protection > Open Replicator > Open Replicator Sessions.
3. Select a session, click Remove, and click OK.
   An error message is displayed if the session is in a state that does not allow the session to be removed.

Setting Open Replicator session background copy mode

Before you begin
Setting the background copy mode to precopy is not supported when the ORS control volume is on a storage system running HYPERMAX OS 5977.

This procedure sets the session background copy mode for an ORS session that has already been created.

Procedure
1. Select the storage system.
2. Select Data Protection > Open Replicator > Open Replicator Sessions.
3. Select a session and click Set Mode.
4. Select the background copy mode. Refer to Open Replicator session options for session control options.
5. Click OK.

Setting Open Replicator session donor update off

This procedure deactivates donor update for a session that was created with donor update.

Procedure
1. Select the storage system.
2. Select Data Protection > Open Replicator > Open Replicator Sessions to open the Open Replicator Sessions list view.
3. Select a session and click Set Donor Update Off to open the Donor Update Off dialog box.
4. Select the Open Replicator session options.
5. Click OK.

Setting Open Replicator session front end zero detection off

This procedure deactivates front end zero detection for a session that was created with front end zero detection.

Procedure
1. Select the storage system.
2. Select Data Protection > Open Replicator > Open Replicator Sessions to open the Open Replicator Sessions list view.
3. Select a session and click Set Frontend Zero Off to open the Frontend Zero Off dialog box.
   Refer to Open Replicator session options for session control options.
4. Click OK.

Setting Open Replicator session pace

Before you begin
This feature is not supported on storage systems running HYPERMAX OS 5977 or higher.

This procedure sets how fast data copies between volumes during an ORS session. Values can range from 0 to 9, with 0 being the fastest pace and 9 being the slowest pace. If set to 0, there is no inserted delay time and the replication proceeds as fast as possible. Values of 1 - 9 add delays, which take longer to complete copying but conserve system resources. The default for both online (hot) replication and offline (cold) replication is 5.

Procedure
1. Select the storage system.
2. Select Data Protection > Open Replicator > Open Replicator Sessions.
3. Select a session and click Set Pace.
4. Type a Pace value (0 = fastest to 9 = slowest).
5. Click OK.

Setting Open Replicator ceiling

The Open Replicator ceiling value is the percentage of bandwidth available for background copying. You should only set this value after understanding the bandwidth being used by other applications. By default, the ceiling value is NONE.

Procedure
1. Select a storage system.
2. Select System > System Dashboard > Front End Directors to open the Front End Directors list view.
3. Select a director and click Set ORS Ceiling to open the Set ORS Ceiling dialog box.
4. Type an Open Replicator Ceiling value from 1 (minimum) to 100 (maximum) and click OK.

Terminating Open Replicator sessions

Procedure
1. Select the storage system.

2. Select Data Protection > Open Replicator > Open Replicator Sessions to open the Open Replicator Sessions list view.
3. Select a session and click Terminate to open the Terminate confirmation dialog box.
4. Select terminate options.
   Refer to Open Replicator session options for session control options.
5. Click OK.

Viewing Open Replicator sessions

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Open Replicator and click Open Replicator Sessions.
   Use this view to view and manage Open Replicator sessions.
   The following properties display:
   • Session —ORS session name.
   • Control Volume —Control volume name.
   • Remote Volume —Remote volume name.
   • Status —Session status.
   • Protected Tracks —Number of protected tracks.
   The following controls are available:
   • Viewing Open Replicator session details on page 496
   • Activate — Activating Open Replicator session on page 491
   • Terminate — Terminating Open Replicator sessions on page 494
   • Front End Zero Off — Setting Open Replicator session front end zero detection off on page 493
   • Donor Update Off — Setting Open Replicator session donor update off on page 493
   • Rename — Renaming Open Replicator sessions on page 492. This option is not available for systems running HYPERMAX OS 5977 or higher.
   • Remove — Removing Open Replicator sessions on page 493. This option is not available for systems running HYPERMAX OS 5977 or higher.
   • Restore — Restoring Open Replicator sessions on page 492. This option is not available for systems running HYPERMAX OS 5977 or higher.
   • Recreate — Recreating Open Replicator sessions on page 492. This option is not available for systems running HYPERMAX OS 5977 or higher.
   • Set Mode — Setting Open Replicator session background copy mode on page 493
   • Set Pace — Setting Open Replicator session pace on page 494. This option is not available for systems running HYPERMAX OS 5977 or higher.

Viewing Open Replicator session details

Procedure
1. Select the storage system.
2. Select DATA PROTECTION > Open Replicator and click Open Replicator Sessions.
3. Select a session and click the details icon to open the session details view.
   Depending on the configured system, some or all of the following properties display:
   • Session —ORS session name.
   • Control Volume —Control volume name.
   • Remote Volume —Remote volume name.
   • Remote Volume Specification —Remote volume specification. (Not applicable for storage systems running HYPERMAX OS 5977 or higher.)
   • Status —Session status.
   • Percent Complete —Percent of tracks copied. (Not applicable for storage systems running HYPERMAX OS 5977 or higher.)
   • Copy Pace —Copy Pace value (0 = fastest to 9 = slowest; default is 5). (Not applicable for storage systems running HYPERMAX OS 5977 or higher.)
   • Protected Tracks —Number of protected tracks.
   • Modified Tracks —Number of modified tracks. (Not applicable for storage systems running HYPERMAX OS 5977 or higher.)
   • Background Copy —Indicates if background copying is enabled.
   • Differential Copy —Indicates if differential copying is enabled.
   • Pull Session —Indicates if the session is a pull session (Yes) or a push session (No).
   • Cold Copy Session —Indicates if the session is a cold copy session (Yes) or a hot copy session (No).
   • Donor Update —Indicates if donor update is enabled.
   • RecoverPoint Session —Indicates if the session is a RecoverPoint session. (Not applicable for storage systems running HYPERMAX OS 5977 or higher.)
   • Standard ORS Session —Indicates if the session is a standard session. (Not applicable for storage systems running HYPERMAX OS 5977 or higher.)
   • Front-End Zero —Indicates if front-end zero detection is enabled.

Viewing Open Replicator SAN View

Procedure
1. Select the storage system.
2. Select Data Protection > Open Replicator > Open Replicator SAN View.
Use this view to select remote volumes in the Filtered LUNs panel for use in Open Replicator copy sessions. The list of volumes can be filtered further by

selecting items within the Control Ports, Remote Ports, and Remote Volumes panels.

The following controls are available:
• Create Copy Session — Creating Open Replicator copy sessions on page 490
• Rescan —Causes a rescan operation to be performed.

Open Replicator session options

Depending on the operation you are performing, some of the following options may not apply.

• Consistent (used with Activate) —Causes the volume pairs to be consistently activated.
• Consistent (used with Donor Update Off) —Consistently stops the donor update portion of a session and maintains the consistency of data on the remote volumes.
• Copy (used with Create) —Volume copy takes place in the background. This is the default for both pull and push sessions.
• Cold (used with Create) —The control volume is write disabled to the host while the copy operation is in progress. A cold copy session can be created as long as one or more directors discovers the remote device.
• Differential (used with Create) —Creates a one-time full volume copy. Only sessions created with the differential option can be recreated. For push operations, this option is selected by default. For pull operations, this option is cleared by default (no differential session). This option is not supported when the ORS control volume is on a storage system running HYPERMAX OS 5977.
• Donor Update (used with Create) —Causes data written to the control volume during a hot pull to also be written to the remote volume.
• Incremental (used with Restore) —Maintains a remote copy of any newly written data while the Open Replicator session is restoring.

• Force (used with Terminate, Restore) —Select the Force option if the copy session is in progress. This allows the session to continue to copy in its current mode without donor update.
• Force (used with Donor Update Off) —Select the Force option if the copy session is in progress. This allows the session to continue to copy in its current mode without donor update.
• Force Copy (used with Activate) —Overrides any volume restrictions and allows a data copy. For a push operation, remote capacity must be equal to or larger than the control volume extents, and vice versa for a pull operation. The exception to this is when you have pushed data to a remote volume that is larger than the control volume and you want to pull the data back; in that case, you can use the Force Copy option.
• Front-End Zero Detection (used with Create) —Enables front end zero detection for thin control volumes in the session. Front end zero detection looks for incoming zero patterns from the remote volume and, instead of writing the incoming data of all zeros to the thin control volume, deallocates the group on the thin volume.
• Hot (used with Create) —Hot copying allows the control device to be read/write online to the host while the copy operation is in progress. All directors that have the local devices mapped are required to participate in the session. A hot copy session cannot be created unless all directors can discover the remote device.
• Nocopy (used with Activate) —Temporarily stops the background copying for a session by changing the state from CopyInProg to CopyOnAccess or CopyOnWrite.
• Pull (used with Create) —A pull operation copies data to the control device from the remote device.

• Push (used with Create) —A push operation copies data from the control volume to the remote volume. This option is not supported when the ORS control volume is on a storage system running HYPERMAX OS 5977.
• Precopy (used with Create, Recreate) —For hot push sessions only, begins immediately copying data in the background before the session is activated. This option is not supported when the ORS control volume is on a storage system running HYPERMAX OS 5977.
• SymForce (used with Terminate) —Forces an operation on the volume pair, including pairs that would be rejected. Use caution when checking this option because improper use may result in data loss.

Open Replicator flags

• C (Background copying) —X = Enabled; . = Disabled
• D (Differential copying) —X = Enabled; . = Disabled
• S (Copy direction) —X = Pushing data to the remote volume(s); . = Pulling data from the remote volume(s)
• H (Copy operation) —X = Hot copy session; . = Cold copy session
• U (Donor update) —X = Enabled; . = Disabled
• T (Session type) —M = Migration session; R = RecoverPoint session; S = Standard ORS session
• Z (Front-end zero detection) —X = Enabled; . = Disabled
• * —Failed session can be reactivated.
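The flag legend above decodes mechanically. The following is an illustrative sketch of such a decoder; the fixed column order C D S H U T Z is an assumption made for this example, not a documented output format.

```python
# Illustrative sketch: decodes an ORS session flag string using the legend
# above. The column order C, D, S, H, U, T, Z is an assumption for this
# example only.

def decode_ors_flags(flags):
    """flags: 7-character string, one character per flag column."""
    c, d, s, h, u, t, z = flags
    session_type = {"M": "Migration", "R": "RecoverPoint", "S": "Standard ORS"}[t]
    return {
        "background_copy": c == "X",          # C: X = enabled, . = disabled
        "differential_copy": d == "X",        # D: X = enabled, . = disabled
        "direction": "push" if s == "X" else "pull",   # S: copy direction
        "copy_operation": "hot" if h == "X" else "cold",  # H: hot or cold
        "donor_update": u == "X",             # U: X = enabled, . = disabled
        "session_type": session_type,         # T: M, R, or S
        "front_end_zero_detection": z == "X", # Z: X = enabled, . = disabled
    }
```

For example, "XX.XXS." decodes as a hot pull session with background copying, differential copying, and donor update enabled, of the standard ORS type.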

Understanding non-disruptive migration (NDM)

Non-disruptive migration (NDM) allows you to migrate storage group (application) data in a non-disruptive manner, with no downtime, from source arrays running Enginuity OS 5876 Q3 2016 or higher to target arrays running HYPERMAX OS 5977 Q3 2016 or higher. Source side service levels are automatically mapped to target side service levels. NDM applies to open systems/FBA devices only. NDM supports the ability to compress data on all-flash storage systems while migrating.

From Unisphere 8.4 onwards, an NDM session can be created on a storage group containing session target volumes (R2s) where the SRDF mode is synchronous. The target volumes of an NDM session may also have a SRDF/Synchronous session added after the NDM session is in the cutover sync state.

The following NDM tasks can be performed from Unisphere:
• Setting up a migration environment
• Optional: Preparing a NDM session
• Creating a NDM session
• Viewing NDM sessions
• Viewing NDM session details
• Cutting over a NDM session
• Optional: Stop synchronizing data after NDM cutover
• Optional: Start synchronizing data after NDM cutover
• Committing a NDM session
• Cancelling a NDM session
• Optional: Recovering a NDM session
• Viewing migration environments
• Adding a migration environment
• Removing a migration environment

Preparing a non-disruptive migration (NDM) session

Non-disruptive migration of storage groups using SRDF is supported between a source storage system running Enginuity 5876 Q3 2016 or higher and a target storage system running HYPERMAX OS 5977 Q3 2016 or higher.

See Understanding non-disruptive migration (NDM) on page 500 for additional information.

There are two paths through the migration creation wizard. The default flow is for creating a migration session between two arrays (see Creating a non-disruptive migration (NDM) session on page 502). The secondary flow allows the user to prepare for a data migration with recommendations on the ports to be used for an optimal candidate migration result. When the prepare path is run (this is an option that can be run before the create path), you have the option to save your preparation to a Migration report containing zoning information. You need to implement the zoning

before running the Create scenario in anticipation of the migration creation. If the plan is changed after running the prepare, these port groups need to be renamed or removed. If the user chooses the prepare path first, the same Symmetrix and SRP must be selected when running the second path for creating the actual migration session.

Before you begin:
• To perform this procedure, you must be an Administrator or Storage Admin.
• The data migration environment exists between two candidate arrays.
• The selected storage group is a masked candidate storage group.
• The selected storage group does not contain only gatekeepers.
• The local array must have online RDF ports.
• Unisphere is registered for performance data processing on the source and target arrays. When you register a storage system for performance data collection, it takes at least two intervals (by default, 5 minutes each) before performance data begins to populate in the Unisphere GUI and charts.

To prepare a migration session:

Procedure
1. Select a storage system running Enginuity 5876 Q3 2016 or higher.
2. Select STORAGE > Storage Groups.
3. Select a storage group.
4. Click Migrate.
5. Select the target storage system.
6. Select the target Storage Resource Pool (SRP). Not specifying an SRP is allowed for data migration creation.
7. Select a port group.
8. Click NEXT.
9. Do the following:
   • Select Prepare Data Migration.
     If the source or target array is remote to this instance of Unisphere, performance data processing is not registered on the target array, or there has not been sufficient time (at least two intervals, by default 5 minutes each) to gather performance data, an error popup informs you of this and the NEXT button is disabled.
     If any source port groups do not already exist on the target array, a panel is displayed allowing the user to select ports for any port group(s) to be created. All port group(s) involved in this migration are displayed.
Any port group(s) that need to be created on the target array are at the top, and any that already exist are at the bottom. Any existing port group(s) have the text "Already configured" in the title. Any port group to be created displays a selectable list of ports. This list of ports includes all available ports on the target array, but to avoid overlap, port(s) already in use by any existing target array port group(s) are filtered out of the list.
The port table within the panel contains the following columns:
• Port — The port identifier of a target array port in Dir:Port format, with a checkbox for selection.
• Utilization — A bar indicating a utilization score for the port. A lower score indicates lower utilization. This is the default sort column for the list.
• Initiators — A number indicating how many initiators, from the list of all initiators in the corresponding source Masking View associated with the source Storage Group, are present in the Login History Table for the port on the target array.
Ports are selected by default based on the Utilization value. The number of ports selected by default is equal to the number of ports in the source port group, or to the number of ports still available in the original list. You can override these selections, but you must select at least one port.
• Click NEXT.
• On the Summary page, review the details. The summary includes information on any port group(s) and ports that you selected. There is also a suitability score for the entire migration request, indicating the expected impact of the migrated application on the target array's front-end ports. A message, indicating whether or not the selected front-end ports have sufficient performance capacity for the incoming load, is displayed.
Do one of the following:
  • Optional: Click Save Migration report to save the report to your chosen location. You need to implement zoning based on the information in the Migration report. You need to implement zoning before running the Create scenario, as well as creating the required port groups on the target array, in anticipation of the migration creation. If the plan is changed after running the prepare, these port groups need to be renamed or removed.
  • Click Finish to perform the port group(s) creation (if any) on the target array, depending on your selections.
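The default port pre-selection described above (rank the available target ports by Utilization score, then take as many ports as the source port group has, or as many as remain) can be sketched as follows. This is an illustrative reconstruction, not Unisphere's actual implementation; the field names "port" and "utilization" are assumptions.

```python
# Sketch of the panel's default port pre-selection (illustrative only;
# field names "port" and "utilization" are assumptions, not product API).
def default_port_selection(target_ports, source_port_count):
    """Pick default ports for a port group to be created on the target.

    target_ports: list of dicts like {"port": "FA-1D:4", "utilization": 0.3},
        already filtered to exclude ports in use by existing target
        port groups (as the panel does).
    source_port_count: number of ports in the source port group.
    """
    # Lower utilization score means lower utilization, so rank ascending;
    # this mirrors the panel's default sort on the Utilization column.
    ranked = sorted(target_ports, key=lambda p: p["utilization"])
    # Select as many ports as the source port group has, or as many as
    # are still available, whichever is smaller.
    count = min(source_port_count, len(ranked))
    return [p["port"] for p in ranked[:count]]
```

You can override the pre-selected ports in the panel, but at least one port must remain selected.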
Note: Clicking Finish does not create the migration session.

Creating a non-disruptive migration (NDM) session

Non-disruptive migration of storage groups using SRDF is supported between a source storage system running Enginuity 5876 Q3 2016 or higher and a target storage system running HYPERMAX OS 5977 Q3 2016 or higher. When migrating a storage system from HYPERMAX OS 5977 to PowerMaxOS 5978, a create with precopy option is supported by Unisphere. The precopy option allows storage to be provisioned on the target array without making the devices visible to the host. This allows the application to continue running on the source array while data is being copied to the target. See Understanding non-disruptive migration (NDM) on page 500 for additional information.

There are two paths through the migration creation wizard. The default flow is for creating a migration session between two arrays. The secondary flow allows the user to prepare for a data migration (see Preparing a non-disruptive migration (NDM) session on page 500).

Before you begin:

To perform this procedure you must be an Administrator or Storage Admin.
The data migration environment exists between two candidate arrays.
The selected storage group is a masked candidate storage group.
The selected storage group does not contain only gatekeepers.
The initiators in the storage group's Masking Views are visible to the target array running HYPERMAX OS 5977 or higher.
The local array must have online SRDF ports.
You are allowed to select a port group name on the target array to use as part of the migrated Masking View. This port group must exist on the target array.
When migrating a storage system from HYPERMAX OS 5977 to PowerMaxOS 5978, a create with precopy option is supported by Unisphere. The precopy option allows storage to be provisioned on the target array without making the devices visible to the host. This allows the application to continue running on the source array while data is being copied to the target.

To create a migration session:

Procedure

1. Select a storage system running Enginuity 5876 Q3 2016 or higher.
2. Select STORAGE > Storage Groups.
3. Select a storage group.
4. Click Migrate.
5. Select the target storage system.
6. Select the target Storage Resource Pool (SRP). The default SRP is selected in the SRP combo box if it can be calculated. Not specifying an SRP is allowed for data migration creation.
7. Select a port group.
8. Click NEXT.
9. Do the following:
• Select Create Data Migration.
• Optional: Uncheck the Compression check box to turn off compression. Compression is only allowed on All Flash systems running the HYPERMAX OS 5977 Q3 2016 Service Release or higher.
• Optional: Click Precopy.
• Click NEXT.
• On the Summary page, review the details. The summary includes information on any Masking View(s) that would be created by this migration and any port group(s) and Host/Host Group(s) that you selected.
• Do one of the following:

  • Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
  • Expand Add to Job List, and click Run Now to perform the operation now. Review the contents of the feedback dialog. After a successful migration, a dialog is displayed. Select Go to Migrations list view, No further action at this time, or Close.

Results

If the host can scan the new paths on its own, the migration moves to the CutoverReady state. If a user rescan is needed, the migration state moves to the Created state.

Viewing the non-disruptive migration (NDM) sessions list

This procedure explains how to view the list of non-disruptive migration (NDM) sessions. See Understanding non-disruptive migration (NDM) on page 500 for additional information.

To view the migration sessions list:

Procedure

1. Select the storage system.
2. Select DATA PROTECTION > Migrations.
3. Select the Storage Groups tab.

The following properties display:
• Storage Group — Name of the storage group.
• State — Migration state. An icon representing the state is also displayed. Failed states are represented in red, in-progress states are represented using the refresh icon, and states after successful completion of actions are green.
• Source — Source storage system.
• Target — Target storage system.

The following controls are available:
• Viewing migration details on page 504
• Cutover — Cutting over a migration session on page 506
• Commit — Committing a migration session on page 507
• Ready Target — Readying the migration target on page 506
• Recover — Recovering a migration session on page 509
• Sync — Synchronizing data after non-disruptive migration (NDM) cutover on page 507
• Cancel Migration — Cancelling a migration session on page 508

Viewing migration details

This procedure explains how to view the migration details for a specific data migration.

See Understanding non-disruptive migration (NDM) on page 500 for additional information.

To view the migration details:

Procedure

1. Select the storage system.
2. Select DATA PROTECTION > Migrations.
3. Select the Storage Groups tab.
4. Select a storage group and open the Migrations details view.

The following items are displayed:
• Storage Group — Name of the storage group.
• State — Migration state.
• Source — Source storage system.
• Target — Target storage system.
• Capacity (GB) — Capacity of the storage group in GB.
• Synched Capacity (GB) — Synchronized capacity of the storage group in GB.
• A storage group table displaying the source status and target status for each storage group associated with the migration.
• A masking view table displaying the source status and target status for each masking view associated with the migration.
• A Port Group table. Selecting a row in the masking view table populates the Port Group table. The table displays the source status and target status for the selected masking view.
• A Host/Host Group table. Selecting a row in the masking view table populates the Host/Host Group table. The table displays the source status and target status for the selected masking view.

Select an item in the storage group table to view the following volume information:
• Source Volume — Identity of the source volume.
• Source Status — Status of the source volume.
• Target Volume — Identity of the target volume.
• Target Status — Status of the target volume.

Select an item in the Port Group table to view the following port information:
• Symmetrix — Storage system ID.
• Port Name — Identity of the port.
• Status — Status of the port.

Select an item in the Host/Host Group table to view the following initiator information:
• Initiator — Identity of the initiator.
• Source Status — Source status.

• Target Status — Target status.

Readying the migration target

Before you begin

To perform this procedure you must be a Storage Admin. The migration must be in the Precopy state.

When migrating a storage system from HYPERMAX OS 5977 to PowerMaxOS 5978, a create with precopy option is supported by Unisphere. The precopy option allows storage to be provisioned on the target array without making the devices visible to the host. This allows the application to continue running on the source array while data is being copied to the target. The Ready Target operation results in the target devices becoming visible to the host, and configures the data migration to allow simultaneous access to both the source and target devices.

Procedure

1. Select the storage system.
2. Select DATA PROTECTION > Migrations.
3. Click the Storage Groups tab.
4. Select the storage group and click Ready Target.
5. Do one of the following:
• Click Add to Job List to add this task to the job list, from which you can schedule or run the task at your convenience. For more information, refer to Scheduling jobs on page 920 and Previewing jobs on page 920.
• Expand Add to Job List, and click Run Now to perform the operation now.

Results

If the operation is successful, a success message appears indicating that the Ready Target operation was successful and that a host discovery needs to be performed. The state at this stage is Migrating. Once the host discovery has been performed and all data is synchronized between the source and target arrays, the migration state changes to Synchronized. If the command was unsuccessful, an error message appears detailing the reason for the failure. If the Ready Target operation runs to completion with a failed status, the migration has a status of 'Ready Target Failed'.
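The state progression described in these Results can be sketched as a small transition table. The state names (Precopy, Migrating, Synchronized, Ready Target Failed) come from this help text, but the table itself is an illustrative reconstruction and the event names are assumptions, not a product API.

```python
# Illustrative reconstruction of the Ready Target state progression
# described above; the event names are assumptions, not part of the product.
TRANSITIONS = {
    # Ready Target succeeds: target devices become host-visible.
    ("Precopy", "ready_target_ok"): "Migrating",
    # Ready Target runs to completion with a failed status.
    ("Precopy", "ready_target_failed"): "Ready Target Failed",
    # Host discovery plus full data synchronization completes the move.
    ("Migrating", "sync_complete"): "Synchronized",
}

def next_state(state, event):
    # Unknown (state, event) pairs leave the migration state unchanged.
    return TRANSITIONS.get((state, event), state)
```

For example, a session in the Precopy state moves to Migrating after a successful Ready Target, and to Synchronized once host discovery and data synchronization finish.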
Cutting over a migration session

The cutover operation results in the storage array running HYPERMAX OS 5977 Q3 2016 or higher becoming the active array. See Understanding non-disruptive migration (NDM) on page 500 for additional information.

Before you begin:

To perform this procedure you must be an Administrator or Storage Admin.
The state of the migration session is CutoverReady.

To cut over a migration session: