This video covers the benefits of external SAN (Storage Area Network) and NAS (Network Attached Storage) storage, as compared to DAS (Direct Attached Storage), which is dedicated to one particular server - either internal hard drives or an external enclosure. External SAN and NAS storage is provided by companies such as NetApp, Dell EMC, IBM and Hitachi Data Systems. Scroll down for the video and also text tutorial.
Want the complete course? Click here to get the Introduction to SAN and NAS Storage course for free
The 11 Top Benefits of SAN and NAS Storage – Video Tutorial
Fantastic course for someone who is already in IT and looking to get involved with storage networks. I didn’t realize how much there was to know, and how different it was from your typical networking. If you’re a network or system administrator and are looking to experiment with something a bit different, this course will make sure you know the fundamentals of how a SAN is supposed to work.
The 11 main benefits of external SAN and NAS storage are:
Disk Utilisation

Most people see this as the main benefit of SAN and NAS storage. If you're using traditional direct attached storage, you'll typically get utilisation of around 30%. With centralised storage, we can get a figure closer to 80%. Let's say we have 50 servers which we expect will each require 300GB of storage space. If you're using DAS, you're not going to put exactly 300GB of disk capacity in each of those servers; you're probably going to put in 500GB, because you want to leave some room for unexpected growth, and fitting larger disks in a server later requires an outage and is really inconvenient.
With centralised storage, we don't have that problem. We have one centralised pool of storage, and we can slice it up and distribute it to the different servers exactly how we want, then easily change it on the fly. If I've got servers that require 300GB of disk space, I give them 300GB. If it later turns out that they need more, I can easily give it to them when they need it, and typically I can do this non-disruptively. I move from 'Just In Case' to 'Just In Time', saving money because I don't need to buy the physical disks until they're actually required.
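To make the numbers above concrete, here's a minimal Python sketch of the over-provisioning cost in the 50-server scenario. It's a deliberately simplified model (the article's 30% vs. 80% figures reflect real-world fragmentation and growth patterns too, so this toy calculation lands at different percentages):

```python
# Hypothetical numbers from the scenario above: 50 servers each need 300GB,
# but under DAS each is fitted with 500GB of disk "just in case".
servers = 50
needed_gb = 300
das_provisioned_gb = 500

das_total = servers * das_provisioned_gb   # GB purchased up front with DAS
used_total = servers * needed_gb           # GB actually required
das_utilisation = used_total / das_total   # how much of the purchase is used

# With a centralised pool we can provision exactly what's needed and grow
# volumes on the fly, so purchased capacity tracks actual usage.
san_total = used_total
san_utilisation = used_total / san_total   # 100% in this simple model

print(f"DAS: {das_total} GB purchased, utilisation {das_utilisation:.0%}")
print(f"SAN: {san_total} GB purchased, utilisation {san_utilisation:.0%}")
```

Even in this idealised model, DAS wastes 10TB of purchased capacity across the 50 servers.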
Thin Provisioning

This is directly related to disk utilisation. Let's say I've got 10 servers, and I think they might need up to 500GB of storage space each. 10 servers x 500GB = 5TB of total capacity required. In a traditional distributed storage environment, I would need to buy the full 5TB of disk space up front, even though much of it might only be needed in the future. But when I'm using centralised storage, I can implement thin provisioning.
Let's say that the servers are only going to initially be using 200GB of storage space each. 10 servers x 200GB storage = 2TB capacity. With thin provisioning, I buy only the 2TB of storage that will be used now, but I make it look to the servers as if they actually have the full 500GB each. It looks like there's 5TB storage space there, but I've only actually paid for 2TB, and when they do need the additional space I buy the disks at that time. I'm moving from a 'just in case' model of purchasing storage space, to a 'just in time' model. This gives me cost savings on hardware, rack space, power and cooling, and the savings are multiplied as hard disk cost tends to come down over time.
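The 10-server example above can be sketched in a few lines of Python. The `ThinPool` class here is hypothetical, purely for illustration (real arrays implement this in firmware, not like this), but it shows the core idea: advertise the full logical size, allocate physical space only on write:

```python
# Minimal sketch of thin provisioning (illustrative, not a real array's API).
# Each volume advertises a large logical size, but physical blocks are only
# consumed from the shared pool when data is actually written.
class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb   # disks actually purchased
        self.allocated_gb = 0            # physical space consumed so far
        self.volumes = {}                # volume name -> advertised size

    def create_volume(self, name, logical_gb):
        # Advertise the full size to the host; consume no physical space yet.
        self.volumes[name] = logical_gb

    def write(self, name, gb):
        # Physical space is allocated only on first write ("just in time").
        if self.allocated_gb + gb > self.physical_gb:
            raise RuntimeError("Pool exhausted - time to buy more disks")
        self.allocated_gb += gb

pool = ThinPool(physical_gb=2000)            # 2TB bought up front
for i in range(10):
    pool.create_volume(f"server{i}", 500)    # each host sees 500GB
    pool.write(f"server{i}", 200)            # but only writes 200GB

print(sum(pool.volumes.values()))  # 5000 GB advertised to the hosts
print(pool.allocated_gb)           # 2000 GB physically consumed
```

The hosts collectively see 5TB, but only the 2TB actually in use has been paid for; more disks are added to the pool only when writes approach the physical limit.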
Deduplication and Compression
This gives additional storage efficiency. I've got multiple servers all using the same centralised storage; if any blocks on disk are repeated, I can remove the duplicate blocks and keep just one copy. Similarly, I can use compression at the file level to reduce the amount of space used and get the same benefit. For workloads with high amounts of duplicated and compressible data (such as virtualised environments where multiple virtual machines have the same operating system, patches and applications), this can give huge savings in the amount of disk space required.
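A minimal sketch of block-level deduplication, assuming a simple content-hash scheme (the `DedupStore` class is hypothetical; production arrays do this inline or post-process with fixed-size blocks and fingerprint databases):

```python
# Illustrative block-level deduplication: store each unique block once,
# keyed by a content hash, and keep per-file lists of pointers to blocks.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}   # hash -> block data (one copy per unique block)
        self.files = {}    # filename -> ordered list of block hashes

    def write_file(self, name, data, block_size=4):
        hashes = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)   # duplicates stored only once
            hashes.append(h)
        self.files[name] = hashes

    def read_file(self, name):
        # Reassemble the file by following its block pointers.
        return b"".join(self.blocks[h] for h in self.files[name])

store = DedupStore()
# Two VMs whose images are mostly identical share most blocks on disk.
store.write_file("vm1", b"AAAABBBBCCCCDDDD")
store.write_file("vm2", b"AAAABBBBCCCCEEEE")
print(len(store.blocks))   # 5 unique blocks stored instead of 8
```

The two "VM images" total eight blocks logically, but only five unique blocks hit the disk, which is exactly why virtualised environments deduplicate so well.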
Resiliency

Centralised storage systems are always built with very high degrees of resiliency because they will almost always be mission-critical systems for the enterprise. If a disk fails, that's taken care of by RAID; if a disk shelf fails, that's taken care of by mirroring; if a controller fails, a redundant peer controller can take over; and we can replicate our data between storage systems in different sites, which gives us a backup in case we lose the entire data center.
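To show why RAID survives a disk failure, here's a tiny sketch of XOR parity (the mechanism behind single-parity RAID levels such as RAID 4/5). The data values are arbitrary; the point is that any one lost disk in a stripe can be rebuilt from the survivors:

```python
# Illustrative RAID-style parity: with XOR parity across a stripe, the
# contents of any one failed disk can be rebuilt from the remaining disks.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data_disks = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]  # one stripe of data
parity = reduce(xor_bytes, data_disks)                # stored on parity disk

# Disk 1 fails; XOR the surviving data disks with parity to rebuild it.
survivors = [data_disks[0], data_disks[2], parity]
rebuilt = reduce(xor_bytes, survivors)
print(rebuilt == data_disks[1])   # True - the lost disk is recovered
```

XOR is its own inverse, so XOR-ing the parity with every surviving disk cancels their contributions and leaves exactly the failed disk's data.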
Centralised Management

If we've got 50 servers, it's much easier to manage the storage for them all on one centralised system rather than distributed individually across each of those 50 servers.
Centralised Backup

Managing backups is very inconvenient and time consuming if we have 50 different tape drives on our 50 different servers and we're managing them all individually. If we're consolidated on centralised storage, then we can centralise our backup solution as well, which is much easier to manage. Storage systems can also back up to remote disk (rather than tape), which reduces space requirements and backup windows, and doesn't require loading and unloading of physical media.
Snapshots

Snapshots are a point-in-time copy of the file system which can be used as a convenient short-term backup. Snapshots consist of pointers to the original blocks on disk rather than being a new copy of the data, so they initially take up no space and complete nearly instantaneously. If data gets corrupted or somebody accidentally deletes a file, we can recover very quickly. Snapshots do not replace a long-term offsite backup solution, but they're great for very quick and convenient short-term backups and restores.
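The pointer mechanism above can be sketched with a toy redirect-on-write volume (the `Volume` class is hypothetical; real filesystems such as those using copy-on-write implement this far more elaborately). Taking the snapshot copies only pointers, so it is instant and initially consumes no data space:

```python
# Illustrative snapshot: a snapshot is just a copy of the block *pointers*.
# New writes go to fresh blocks, leaving the snapshotted blocks untouched.
class Volume:
    def __init__(self):
        self.blocks = {}     # block id -> data
        self.pointers = {}   # logical block number -> block id
        self.next_id = 0

    def write(self, lbn, data):
        # Redirect-on-write: always write to a new block, never in place.
        self.blocks[self.next_id] = data
        self.pointers[lbn] = self.next_id
        self.next_id += 1

    def snapshot(self):
        return dict(self.pointers)   # pointer copy only - near-instant

    def read(self, lbn, pointers=None):
        p = pointers if pointers is not None else self.pointers
        return self.blocks[p[lbn]]

vol = Volume()
vol.write(0, "important data")
snap = vol.snapshot()          # no data copied, no extra space used yet
vol.write(0, "corrupted!")     # an accidental overwrite later on
print(vol.read(0))             # 'corrupted!'  - the live volume
print(vol.read(0, snap))       # 'important data' - instant restore point
```

Space is only consumed over time as the live volume diverges from the snapshot, which is why snapshots are cheap short-term protection but not a substitute for offsite backup.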
Replication

We can replicate data from our main site to a disaster recovery site, giving us a backup if the main site fails. We can also load balance incoming client requests for read-only data between the different sites. (We can't do this for writable data, as we need to maintain one consistent copy of the data.)
Virtualisation

Software such as VMware and Hyper-V allows us to run multiple virtual servers on the same underlying physical hardware. We can have a Linux web server, an Exchange mail server and a SQL database server all running on the same physical box, for example, and this is transparent to each of those virtual servers. The killer feature of virtualisation software is the ability to move virtual servers between physical servers on the fly while they are still running. This means the virtual servers can keep running with no outages even if their underlying physical server fails or is taken down for maintenance. External storage is a requirement for this feature.
Boot from SAN

If I'm using a SAN protocol, I can have my servers boot from disks on the remote storage; they don't have to have a single disk drive in the servers themselves. This is a very popular option with blade servers. Again, this gives savings in hardware costs, rack space, power and cooling.
Storage Tiering

We can have disks with differing performance in the system, such as high-performance SSD drives and high-capacity (but lower-performance) SATA drives. We can keep hot data which is accessed frequently on the SSDs, and older archived data on the SATA drives. Most storage systems have features which can automatically move data from high-performance to lower-performance disks as it's accessed less frequently over time.
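A minimal sketch of an automated tiering policy, with a made-up access-count threshold (real arrays use much richer heat maps and move sub-volume chunks on a schedule):

```python
# Illustrative automated tiering: place each chunk of data on the SSD or
# SATA tier based on how frequently it has been accessed recently.
ACCESS_THRESHOLD = 5   # hypothetical policy: "hot" if accessed 5+ times

blocks = {
    "db-index":   {"accesses": 120, "tier": "sata"},  # hot, misplaced
    "old-backup": {"accesses": 1,   "tier": "ssd"},   # cold, misplaced
    "vm-image":   {"accesses": 40,  "tier": "ssd"},   # hot, already right
}

def rebalance(blocks):
    # Promote hot data to SSD, demote cold data to SATA.
    for name, b in blocks.items():
        b["tier"] = "ssd" if b["accesses"] >= ACCESS_THRESHOLD else "sata"

rebalance(blocks)
print(blocks["db-index"]["tier"])    # 'ssd'  - hot data promoted
print(blocks["old-backup"]["tier"])  # 'sata' - cold data demoted
```

The result is that expensive SSD capacity is spent only on the data that actually benefits from it, while bulk archive data sits on cheaper disks.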
Free ‘Introduction to SAN and NAS Storage’ training course
The video is an excerpt from my Introduction to SAN and NAS Storage training. You can get the entire series for free here:
When you're ready to take your storage knowledge to the next level, you can get my 'Data ONTAP Complete' NetApp Training Course.