NetApp Load Sharing Mirrors Tutorial


NetApp Load Sharing Mirrors

NetApp Load Sharing Mirrors are the focus of this video tutorial. Load Sharing Mirrors are mirror copies of FlexVol volumes which provide redundancy and load balancing.

 


The load balancing is for read traffic only. Read requests can go to the mirror copies but write requests always go to the source volume to keep one consistent copy of the data.

 

I gave an overview of NetApp Load Sharing Mirrors in my earlier SnapMirror Engine tutorial but I’ll cover them in more detail here.

 

Load Sharing Mirrors are always in the same cluster as the source volume. They provide intra-cluster replication, not inter-cluster.

 

In the diagram below, you’ll see we have a single cluster. We’ve got the source volume, which is the single read/write copy, and then we’ve created a Load Sharing Mirror for that source volume on each node in the cluster.

 

When we create Load Sharing Mirrors for a volume, it’s best practice to create one on every node in the cluster, including on the same node as the source volume. Each mirror copy is created individually.

 

Diagram: NetApp Load Sharing Mirrors

 

Read Requests

 

Load Sharing Mirror volumes are read-only. The source volume is the only read-write copy. When you create the Load Sharing Mirror, it’s automatically mounted for you; you don’t need to mount it yourself. The Load Sharing Mirror destination volumes are automatically mounted into the namespace with the same path as the source volume.

 

Read requests will be serviced by the node the client connects to if that node has a Load Sharing Mirror. If it doesn’t, the request will be serviced by another node over the cluster interconnect.

 

Typically, when we’re configuring Load Sharing Mirrors for a volume, we’ll create one on every single node. That way, no matter which node the client request comes in on, the read requests will be serviced by that node.

 

Read requests are always directed to Load Sharing Mirror volumes, not the source volume. Write requests always go to the source volume.

 

Because read requests always go to a Load Sharing Mirror, you should also include a Load Sharing Mirror copy on the same node where the source volume resides. Otherwise, if a node has only the source volume, any client read request that came in on that node would be serviced by a different node over the cluster interconnect.

 

Write Requests

 

To make changes, clients must access the source volume by using a special “.admin” path, which is automatically generated when you create the Load Sharing Mirror. You give your clients the normal path to gain read-only access to a volume. When they access that volume, they’re accessing a read-only copy, which would be a mirror on whichever node they actually hit.

 

Load Sharing Mirrors are read-only. You can still make changes to the data, but to do so you have to map a drive or mount the special “.admin” path, which directs your writes to the source volume.

 

The diagram below shows all of our read-only mirrors. In the example, the volume is named vol1, so the client maps a drive to the vol1 share as normal to get read-only access.

 

To access the writable copy, they would have to map a drive to the special “.admin” share. For vol1, it would be ‘/.admin/vol1’. You do not need to create this special admin path. It’s created for you when you create the Load Sharing Mirror.
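As a rough sketch for NFS clients, assuming the volume is junctioned at /vol1 and the SVM has a data LIF named svm-lif1 (the LIF name and mount points here are hypothetical), the two paths would be mounted something like this:

# Read-only access, served by a Load Sharing Mirror copy
mount -t nfs svm-lif1:/vol1 /mnt/vol1

# Read-write access to the source volume via the .admin path
mount -t nfs svm-lif1:/.admin/vol1 /mnt/vol1_rw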

 

The default access for Load Sharing Mirror volumes is read-only, so they’re only suitable for data which does not typically require any changes. Any infrequent changes to the data will usually come from only a few select people. Give those people the information about the “.admin” path. Everybody else can connect to the normal path and get read-only access.

 

Diagram: The .admin path

 

Protocol Support

 

Load Sharing Mirrors have full support for the CIFS and NFSv3 protocols only. They do not support NFSv4 or SAN protocol clients. NFSv4 clients are compatible with Load Sharing Mirror volumes, but they will always be directed to the source volume for both reads and writes rather than load balanced to the mirror copies for reads. SAN protocols are not compatible, so do not configure Load Sharing Mirrors for volumes that host LUNs.

 

Redundancy

 

Because Load Sharing Mirrors are automatically mounted into the namespace with the same path as the source volume, they provide redundancy with no administrator intervention. If the source volume becomes temporarily unavailable, read access to the volume will still be provided through the Load Sharing Mirror volumes without you having to do anything additional.

 

Changes to the data will not be possible, however, until the source volume comes back online because it’s the only writable copy. If the source volume is permanently unavailable, you can promote one of the Load Sharing Mirror volumes to be the new writable source copy.

 

SnapMirror Promote

 

To do this, we use the “snapmirror promote” command. This performs a failover from the original source volume to one of our destination read-only mirror volumes. The promote command is specific to Load Sharing Mirrors. Data Protection Mirrors and SnapVault do not use this command.
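For example, to promote the mirror copy named DeptA_root_LS1 in the DeptA SVM (the names here match the configuration example later in this article), the command would look something like this:

snapmirror promote -destination-path DeptA:DeptA_root_LS1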

 

When the promote command is run, the new source volume will assume the identity and take over the mirror relationships of the old (original) source volume. After you do this, the Load Sharing Mirror replication will carry on as before, using the new source volume. You won’t need to re-create the replication configuration.

 

Any previous client write access will be redirected from the original source volume to the promoted destination volume before the original source volume is destroyed. If the original source volume is still physically in the cluster when you are promoting another volume, it will be destroyed when you run the ‘snapmirror promote’ command.

 

Use Cases

 

Load Sharing Mirrors are useful for frequently read but infrequently updated data such as shared binary files or static websites. They use asynchronous replication, so they’re not suitable for data that is frequently changed.

 

Client read requests can be served out-of-date data between replications. We obviously don’t want that happening, so we don’t want the volume being subjected to a lot of changes.

 

Also, to make changes, your users have to come in on that special “.admin” path. Normally we won’t give that access to many users, so there won’t be many changes.

 

You can use Load Sharing Mirrors on volumes that have a mix of both NFSv3 and NFSv4 clients. Load Sharing Mirrors are compatible with NFSv4 clients, but those clients will always use the source volume, not the mirrors. Don’t configure Load Sharing Mirrors on volumes that have NFSv4 clients only, as you’ll only be wasting disk space. The mirrors would never be used.

 

SVM Root Volumes

 

Configuring Load Sharing Mirrors for your SVM root volumes is a best practice and officially recommended. If the root volume of an SVM is unavailable, NAS clients can’t access the namespace and therefore can’t access any data in any of the SVM’s volumes.

 

For this reason, it’s a best practice to create a Load Sharing Mirror for the root volume on each node in the cluster to give you good redundancy. This ensures the namespace remains available in the event of a node outage or a failover.

 

You should not store user data in the root volume of an SVM. We’re going to have Load Sharing Mirrors configured there, which are for read-only data, so we don’t want any changes happening in the root volume. It should only be used for junction paths to the other volumes in the SVM, not for any user data.

 

Store your user data in your normal volumes, which are mounted underneath the SVM root volume. SAN client connections such as Fibre Channel, FCoE, or iSCSI do not depend on the SVM root volume, so their incompatibility with Load Sharing Mirrors is not a problem.

 

A one-hour replication schedule is recommended for your SVM root volumes.

 

Configuration

 

Diagram: NetApp Load Sharing Mirrors Configuration

 

The first thing we have to do is create our destination mirror volumes. We do this with a standard “volume create” command, as shown in the example below. The volume we’re creating is named “DeptA_root_LS1”, so in this example we’re creating mirrors for the root volume of the SVM “DeptA” (Department A).

 

We’ve specified the aggregate “aggr1” and a size of 20 megabytes. Ensure the size of the destination volume is the same as the source volume. We then need to specify the extra parameter type “DP”, which says that this volume is going to be used as a mirror destination. It also makes it a read-only volume.

 

Notice when we run the “volume create” command that we’ve specified the type as “DP” even though it’s an “LS” mirror. When we create the volume, we always specify type “DP” for Load Sharing Mirrors, Data Protection mirrors, and for SnapVault mirrors as well.
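Putting those parameters together, the command would look something like this (using the SVM, volume, and aggregate names from this example):

volume create -vserver DeptA -volume DeptA_root_LS1 -aggregate aggr1 -size 20MB -type DP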

 

Next, we use the “snapmirror create” command to configure the replication, using the destination path. The format of this is the SVM name, then a colon, and then the volume name.

 

In the example here, the destination path is “DeptA:DeptA_root_LS1”, which is the destination volume that we just created. The source path is “DeptA:DeptA_root”, which is the root volume for Department A. The type is now “LS”.

 

When you run the “snapmirror create” command, the type will be either LS, DP, or XDP. We’re configuring a Load Sharing Mirror here, so in this case it’s LS.

 

Finally, specify the schedule. The recommended schedule for our SVM root volumes is 1 hour.
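As a sketch, using the built-in hourly schedule and the paths described above, the command would look something like this:

snapmirror create -source-path DeptA:DeptA_root -destination-path DeptA:DeptA_root_LS1 -type LS -schedule hourly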

 

The “volume create” and the “snapmirror create” commands must be run for every destination volume. It’s recommended you do this for each node in the cluster. As an example, if we had a 4-node cluster, our volumes would be DeptA_root_LS1 on node 1, LS2 on node 2, and so on.
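As a rough sketch of that per-node pattern, the commands for the second mirror copy would look something like this (the aggregate name “aggr2” on node 2 is hypothetical):

volume create -vserver DeptA -volume DeptA_root_LS2 -aggregate aggr2 -size 20MB -type DP
snapmirror create -source-path DeptA:DeptA_root -destination-path DeptA:DeptA_root_LS2 -type LS -schedule hourly

The same pair of commands would then be repeated for DeptA_root_LS3 and DeptA_root_LS4 on the remaining nodes.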

 

Once we’ve run the “snapmirror create” command, the replication is configured but it doesn’t actually start yet. We also need to initialize it. The command to do that is “snapmirror initialize-ls-set”, where we specify the source path. This will kick off the replication to every destination volume for that source volume.
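For the example above, that would look something like this:

snapmirror initialize-ls-set -source-path DeptA:DeptA_root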

 

Additional Resources

Using a Load Sharing Mirror to Balance Loads from NetApp

 

Want to practice the NetApp Load Sharing Mirror features on your laptop? Download my free step-by-step guide ‘How to Build a NetApp ONTAP Lab for Free’

 

Click Here to get my ‘Data ONTAP Complete’ NetApp Training Course.

 

Text by Alex Papas.

Alex Papas has been working with Data Center technologies for the last 20 years. His first job was in local government; since then he has worked in areas such as the building sector, finance, education and IT consulting. Currently he is the Network Lead for Costa, one of the largest agricultural companies in Australia. The project he’s working on right now involves upgrading a VMware environment running on NetApp storage with migration to a hybrid cloud DR solution. When he’s not knee deep in technology you can find Alex performing with his band 2am.
