Follow me through this journey of installing our NetApp A200. We just purchased it as a replacement for our NetApp FAS8020, and all of our servers are going to benefit from this 12TB all-flash array. The hardware outside of the NetApp is as follows: 2x Cisco Fabric Interconnects, 7x Cisco UCSB-B200-M4 blades, and 2x Cisco Nexus 9K switches. The pictures and details to follow show my configuration from the ground up, including diagrams of where the cables run and how all the systems are connected. (Whiteboard diagram coming; I just need to snap a picture of it.)
Here are the current connections. You can follow a few of the cables in this photo (the blue ones go to our Nexus switch for management):
…and the rest hook up just above the A200 to our fabric interconnects in an X-crossed pattern:
Since the NetApp is hooked directly into the Fabric Interconnects (from here on out, known as the F.I.), there were a few configuration changes that needed to be made there. First, the ports needed to be set to appliance mode. They also needed to be given the correct VLAN privileges. In my case, we created VLAN 10 to reach the NetApp's initial out-of-the-box IPs and then moved the addresses to VLAN 126 for NFS.
This screenshot shows the ports set to VLAN 10; this was later changed to VLAN 126 for NFS to the ESXi hosts.
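I did all of this through the UCS Manager GUI, but for reference the same appliance-port and VLAN work can be sketched in the UCSM CLI. This is only a rough outline of the idea, not my exact steps; the slot/port numbers and VLAN name here are placeholders:

```
# Hypothetical UCSM CLI sketch -- slot/port and VLAN values are placeholders
UCS-A# scope eth-storage
UCS-A /eth-storage # create vlan VLAN10 10            # appliance VLAN for the out-of-box network
UCS-A /eth-storage # scope fabric a
UCS-A /eth-storage/fabric # create interface 1 17     # put port 1/17 into appliance mode
UCS-A /eth-storage/fabric/interface # commit-buffer
```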
Once the F.I. was configured and the VLANs were configured on the Nexus for the e0M ports (the default from NetApp was a 10.10.10.0/24 network, which I called VLAN 10), it was time to hop into the NetApp's System Manager! The first thing that needed to be done in there was to change the password and then re-IP each of the e0? ports. e0a and e0b are used by the cluster, so those were left alone, but e0c through e0f needed to be moved from the default 10.10.10.x network on each controller to something more usable by my infrastructure.
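The same password and re-IP work can also be done from the cluster shell instead of System Manager. A rough sketch, where the SVM name, LIF name, node, and addresses are made-up examples rather than my actual values:

```
cluster1::> security login password -username admin
cluster1::> network interface modify -vserver svm_nfs -lif nfs_lif1 -home-node cluster1-01 -home-port e0c -address 192.168.126.21 -netmask 255.255.255.0
```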
Each port was assigned an IP address in the VLAN 126 subnet, with the exception of e0M, which was put on my management VLAN 2. With each port recognized, we moved on to creating interface groups. We grouped them so that, between the 8 connections, we could effectively lose 1 F.I. and 1 storage controller with the NetApp remaining active/active.
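The CLI equivalent of building one of those interface groups looks roughly like this. The ifgrp name, node, member ports, and mode are assumptions on my part; the right mode depends on how your F.I. appliance ports are configured:

```
cluster1::> network port ifgrp create -node cluster1-01 -ifgrp a0a -distr-func ip -mode multimode
cluster1::> network port ifgrp add-port -node cluster1-01 -ifgrp a0a -port e0c
cluster1::> network port ifgrp add-port -node cluster1-01 -ifgrp a0a -port e0d
```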
This next screenshot shows how the failover group is set:
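A failover group like the one in the screenshot can also be sketched from the CLI. Again, the SVM, group, and LIF names below are placeholders, not my real ones:

```
cluster1::> network interface failover-groups create -vserver svm_nfs -failover-group fg_nfs -targets cluster1-01:a0a,cluster1-02:a0a
cluster1::> network interface modify -vserver svm_nfs -lif nfs_lif1 -failover-group fg_nfs -failover-policy broadcast-domain-wide
```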
So with 2 of those active and ready for failover, we are ready to serve up some data… well, almost. We still needed to create some volumes. A new SVM was created to house the NFS and CIFS data, and in my case we made 3 volumes, 2 of which lived in the same aggregate. The first aggregate held a volume for servers (VMDKs and general VMware storage) as well as the volume for the CIFS share, and in the second aggregate we placed the volume for the VMware VDI storage. (ISOs was a test volume that may actually be used for its name one day.)
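The SVM and volume layout above can be sketched from the cluster shell like this. The SVM, aggregate, and volume names and the sizes are illustrative guesses, not my actual values:

```
cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1 -rootvolume-security-style unix
cluster1::> volume create -vserver svm1 -volume vol_servers -aggregate aggr1 -size 4TB -junction-path /vol_servers
cluster1::> volume create -vserver svm1 -volume vol_cifs -aggregate aggr1 -size 2TB -junction-path /vol_cifs
cluster1::> volume create -vserver svm1 -volume vol_vdi -aggregate aggr2 -size 4TB -junction-path /vol_vdi
```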
To get these into VMware on VLAN 126, we set up an export policy so that the hosts had the required access to the volumes.
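An export policy for the ESXi hosts can be sketched as follows; the policy name, client subnet, and volume name are assumptions for illustration:

```
cluster1::> vserver export-policy create -vserver svm1 -policyname esxi_hosts
cluster1::> vserver export-policy rule create -vserver svm1 -policyname esxi_hosts -clientmatch 192.168.126.0/24 -protocol nfs -rorule sys -rwrule sys -superuser sys
cluster1::> volume modify -vserver svm1 -volume vol_servers -policy esxi_hosts
```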
Like a glove, we were able to add these new volumes to each of the ESXi hosts and begin moving data… or not! Right before I started migrating, I was informed that ONTAP 9.2 was out and that it offered deduplication at the aggregate level. OMG, I had to have this, and since I was going to move data anyway, why not do the upgrade in line with the migration? So, in the traditional manner, I started down the MyAutoSupport path to get the step-by-step guide on how to update my NetApp. Then I decided to see what this "cluster update" option was that I had noticed under Configuration while browsing around the NetApp. To my great surprise, this was the update "easy button." I followed the AutoSupport instructions all the way to the point where I needed to download the 9.2 image, and then I closed that document. I moved the downloaded image file to an internal HTTP virtual directory so that the NetApp could see it, then continued down the update path:
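For anyone who prefers the shell to the "easy button," the same update can be driven from the CLI. The URL here is a placeholder for wherever you stage the image:

```
cluster1::> cluster image package get -url http://webserver.example.local/ontap/image.tgz
cluster1::> cluster image validate -version 9.2
cluster1::> cluster image update -version 9.2
cluster1::> cluster image show-update-progress
```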
After validating the image, I was prompted to verify that I intended to continue with the upgrade. It outlined the things I needed to be aware of and gave me a checklist, very similar to the AutoSupport guide, to make sure that I was squared away to complete a non-disruptive upgrade of the system. Since I was not housing any data yet, I clicked Next, guns blazing (I was so excited I forgot to grab a screenshot). On the third screen I chose to update, and once more the NetApp ran a pre-check to ensure the upgrade would succeed.
All in all, the whole upgrade took about 45 minutes, but ONTAP 9.2, here I am! With that, I no longer needed to wait. I instantly began moving data (test data, of course). I created an additional VMware desktop pool (or 2) and moved them into the new datastore. I switched from full clones to linked clones for this, since I now have what seems like unlimited I/O, and overall it has saved me a ton of space. I verified that the new pools were fully functional with a UAT session, then rolled the desktop pools out overnight to all the users! So with a total of 55 linked-clone desktops and one SQL/Sage server, and no snapshot backups (in this screenshot, anyway), the numbers are in:
Yes! These numbers are better than I expected to see and further prove that the size of NetApp I started with will be more than enough to last and grow on for a few years (we ended up with 17TB usable before dedupe and compression). This was a very successful deployment, and I am more than pleased with the performance increase I have seen in our environment, and that is just with VMware View on it. I will slowly move more of the production servers onto it, but not before they get cleaned up a little bit. We tested the failover of our interface groups, and I was able to drop 3 of the 4 connections (by turning off the F.I. ports) without VMware ever losing sight of the datastore. With 10Gb throughput on each of those connections, I feel it is a very safe place to keep my data, with outstanding speeds. (I mean just crazy speeds; I can recompose my entire desktop pool in minutes!)
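If you want to pull savings numbers like these from the CLI instead of System Manager, something like this sketch should work; note that `storage aggregate show-efficiency` is the aggregate-level view that, to my understanding, arrived alongside aggregate dedupe in newer ONTAP releases, so availability may vary:

```
cluster1::> storage aggregate show-efficiency
cluster1::> volume efficiency show -vserver svm1
```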
Also, since we are 100% virtual, we use Veeam to back up our environment, and it added and started backing up this new NetApp without a hiccup. Of course, with storage snapshots the backups were already fast, but now VMware doesn't even have time to create the snapshot before it has to remove it, LOL!