NetApp Fabric Pool Tutorial

In this NetApp tutorial, you’ll learn about Fabric Pool, an ONTAP storage tiering technology. It keeps your hot, recently accessed data on high-performance primary storage, and tiers off your cold data to lower-performance but lower-cost object storage. Scroll down for the video and also text tutorial.

 

NetApp Fabric Pool Video Tutorial

YouTube video

Just completed NetApp Storage training! I found the instructor Neil Anderson to be amazing. He’s extremely competent, has spent time in the industry, and knows the platform very well. He colors the materials with some real world examples, which is always helpful to understand the differences between doing something in the lab and doing it in the real world.
I manage several NetApp arrays, and found myself going to the real platform to see how we’ve implemented the concepts presented here. Very happy I picked up this course!

Charles Lawton

Fabric Pool Storage Tiering

 

Fabric Pool was first introduced in ONTAP version 9.2. It is a fully fledged storage tiering technology that performs automated, policy-based data movement between your storage tiers: the performance tier and the capacity tier.

 

Fabric Pool Storage Tiering

 

It optimizes performance and reduces costs by storing data in a tier based on whether the data is frequently accessed or not. Frequently accessed hot data stays on a performance tier aggregate. Cold data is moved to an object store capacity tier.

 

Your frequently accessed data gets high performance at a higher cost, while your less frequently accessed data gets the low cost but lower performance of object storage. This gives you a good balance between performance and cost.

 

Aggregate and Object Store Support

 

If you're using an AFF or FAS system, the performance tier aggregates must be SSD only; you cannot use HDD or Flash Pool aggregates. In Cloud Volumes ONTAP and ONTAP Select, HDD aggregates are also supported, but SSD is still preferred. For the object storage, you can use AWS, Azure, IBM Cloud, or StorageGRID for the capacity tier.

 

Aggregate and Object Store Support

 

Fabric Pool Tiering Policies

 

Snapshot-only and Backup tiering policies were available when Fabric Pool first came out in ONTAP 9.2, and the Auto tiering policy became available later, in version 9.4.

 

When you're setting this up, Fabric Pool associates a performance tier aggregate with the object storage capacity tier and the tiering policies are applied at the volume level. So when you first set it up, you associate an aggregate with an object store.

 

You can associate multiple aggregates with the same object store, but you cannot associate a single aggregate with multiple object stores. So you can have several different aggregates all tiering into the same large object store.
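As a rough sketch, here's how that association looks from the ONTAP CLI. The object store, bucket, server, and aggregate names below are placeholders, and the access keys are elided:

```
cluster1::> storage aggregate object-store config create -object-store-name my-store -provider-type AWS_S3 -server s3.amazonaws.com -container-name my-bucket -access-key <access-key> -secret-password <secret-key>

cluster1::> storage aggregate object-store attach -aggregate aggr1 -object-store-name my-store
```

Be aware that attaching an object store to an aggregate is permanent; you cannot detach it again afterwards.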

 

Fabric Pool Tiering Policies

 

The aggregate is the performance tier, and the object store is the capacity tier. But for configuring your policies, you can get more granular than the aggregate level: policies are actually configured at the volume level. That means different volumes in the same aggregate, using the same performance tier, can have different policies applied and be treated differently.
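For example, two volumes in the same aggregate can be given different tiering policies. The SVM and volume names here are placeholders:

```
cluster1::> volume modify -vserver svm1 -volume vol1 -tiering-policy snapshot-only
cluster1::> volume modify -vserver svm1 -volume vol2 -tiering-policy auto
```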

 

Temperature and Tiering Scans

 

The time to wait before data is marked as cold and moved off to the capacity tier can be configured from 2 to 63 days for the Snapshot-only and the Auto policies. A temperature scan monitors the activity of each block in Fabric Pool aggregates. A tiering scan then collects the cold 4K blocks and packages them into 4MB objects to PUT into the object store.
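The cooling period can be adjusted per volume with the `-tiering-minimum-cooling-days` option, which requires the advanced privilege level. A sketch, with placeholder names:

```
cluster1::> set -privilege advanced
cluster1::*> volume modify -vserver svm1 -volume vol1 -tiering-minimum-cooling-days 14
```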

 

Temperature and Tiering Scans

 

Fabric Pool Tiering Policies: Snapshot-only

 

The Snapshot-only tiering policy tiers cold snapshot-only blocks to the capacity tier. Snapshot blocks are marked cold and tiered after two days by default. Now, blocks that are in both the Active File System and a snapshot remain on the performance tier. This is going to happen in normal operations, right?

 

Say you're taking a snapshot every few hours. You've got your files in the Active File System; when you take a snapshot, those blocks go into the snapshot, but they're also still in the Active File System. Maybe a few days later somebody deletes a file, and at that point the blocks are removed from the Active File System and exist only in the snapshot.

 

Now, with the Snapshot-only tiering policy, blocks that are only in a snapshot, and not also in the Active File System, can be marked to be tiered off to the object store. Even so, you still need normal backups of the performance tier aggregate, despite having snapshots that have been tiered off.

 

Fabric Pool Tiering Policies: Snapshot-only

 

What you could think is, "Well, it's more than two days now. I'm tiering them every two days, so all snapshots older than two days have been moved over to the capacity tier, and as long as I'm backing up the capacity tier, I don't need to back them up on the performance tier."

 

Well no, it doesn't work like that. Even if a snapshot is a week old, any blocks that are also in the Active File System remain on the performance tier. They're not over in the capacity tier; therefore, you still need to do your normal backups of the performance tier as well.

 

Fabric Pool Tiering Policies: Backup

 

The next policy we've got is the Backup tiering policy, which is for SnapMirror or SnapVault target volumes only. On the destination side with the Backup policy, the temperature and tiering scans do not apply to those volumes. All data is tiered directly to the capacity tier.
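A sketch of applying it to a SnapMirror destination volume (the SVM and volume names are placeholders; note that in ONTAP 9.6 and later, the Backup policy was superseded by the All policy):

```
cluster1::> volume modify -vserver svm_dst -volume vol1_dst -tiering-policy backup
```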

 

Fabric Pool Tiering Policies: Backup

 

Fabric Pool Tiering Policies: Auto

 

All data in an Auto policy volume is eligible to be tiered to the capacity tier. Data is marked cold and tiered after 31 days by default. Any blocks that haven't been touched for 31 days are going to get tiered off, but you can change that timer if you want to.

 

Now I said, "All data in the volume." Actually, metadata always remains on the performance tier. So again, you might see a file that has been tiered off and think, "Oh, well I don't need to include that in my backup set on the performance tier." But you do, because the metadata is still there. Fabric Pool is not any kind of backup solution. You still need to do your normal backups; Fabric Pool does not replace them.

 

Fabric Pool Tiering Policies: Auto

 

Cold Blocks Retrieval

 

We've talked about when blocks are going to be tiered off to the object storage, but maybe a client needs to retrieve those blocks later. Maybe we've got a file that has not been touched for a while. It's gone off to object storage, and then somebody needs to retrieve that file again.

 

Well, obviously it's going to need to be fetched from the object storage. When that happens, it's going to be marked as hot again, because it's been frequently accessed, and it's going to be copied back onto the performance tier again. There are some times when it's not going to end up back on the performance tier though.

 

If the policy on the volume is Auto and the read is sequential rather than random, the data stays marked as cold. With the Backup policy, it also makes sense that retrieved data stays on the object store. And if the performance tier is over 70% full, retrieved blocks stay on the object store, because we want to maximize the capacity available on the SSDs.

 

Obviously, if it hadn't been touched for a while and it just gets retrieved once, there's a good chance it won't be retrieved again for some time. To maximize the capacity available on the performance tier, it remains on the object store.

 

Cold Blocks Retrieval

 

Fabric Pool

 

When we do the configuration, it is the aggregate that gets associated with the object store, and it's quite likely there'll be multiple volumes in that aggregate. The different volumes can have different policies applied, but one sort of policy that wasn't mentioned yet is that you can also turn tiering off.

 

Maybe you've got one volume that you want to have the Snapshot-only policy, another volume that you want to have auto turned on, and another volume in the same aggregate that you don't actually want to use Fabric Pool. You can do that by just turning it off for that particular volume.
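Turning tiering off for a volume is just another value of the same setting, for example (placeholder names again):

```
cluster1::> volume modify -vserver svm1 -volume vol3 -tiering-policy none
```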

 

Fabric Pool

 

Deduplication, compression, and compaction savings, if enabled, are retained when data moves from the performance tier to the capacity tier. Some tools are available to help you decide whether enabling Fabric Pool on an aggregate is a good idea or not.

 

There is an Inactive Data Report that you can turn on, which will show how much data in an aggregate is cold. If a lot of data would be marked as cold, that aggregate is a good candidate for enabling Fabric Pool.
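A sketch of enabling and checking inactive data reporting on an aggregate (the aggregate name is a placeholder, and the exact field names may vary between ONTAP releases):

```
cluster1::> storage aggregate modify -aggregate aggr1 -is-inactive-data-reporting-enabled true
cluster1::> storage aggregate show-space -fields performance-tier-inactive-user-data
```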

 

There's the Object Store Profiler as well, which gives you performance statistics for your object storage. You want to check that the object storage is going to give you decent performance before you enable this.
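The Object Store Profiler is run from the CLI at the advanced privilege level, something like this (the object store and node names are placeholders):

```
cluster1::> set -privilege advanced
cluster1::*> storage aggregate object-store profiler start -object-store-name my-store -node node1
cluster1::*> storage aggregate object-store profiler show
```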

 

NetApp Fabric Pool Configuration Example

 

YouTube video

 

Additional Resources

NetApp FabricPool: https://docs.netapp.com/us-en/ontap/cloud/fabricpool-concept.html

FabricPool Best Practices: https://www.netapp.com/media/17239-tr4598.pdf

FabricPool Requirements: https://docs.netapp.com/us-en/flexpod/hybrid-cloud/cloud-fabricpool_fabricpool_requirements.html

 

Want to practice NetApp storage on your laptop for free? Download my free step-by-step guide 'How to Build a NetApp ONTAP Lab for Free'

 

Click Here to get my 'NetApp ONTAP 9 Storage Complete' training course, the highest rated NetApp course online with a 4.8 star rating from over 1000 public reviews.

Text by Libby Teofilo, Technical Writer at www.flackbox.com

With a mission to spread network awareness through writing, Libby consistently immerses herself into the unrelenting process of knowledge acquisition and dissemination. If not engrossed in technology, you might see her with a book in one hand and a coffee in the other.