Thursday, December 20, 2012

Configuring Backup Exec 2012 to backup to a DataDomain 670 using the OST plugin

DataDomain Configuration

First things first, let’s verify that DD Boost is correctly configured on the DataDomain
  1. Log into your DataDomain appliance and navigate to System Settings – Licenses and verify that the correct licenses are in place.  It should look like this:
    image
  2. Navigate to Data Management – DD Boost – Settings and verify that DD Boost is enabled, like so:
    image
  3. Now click on the + sign to add your BackupExec host.  Enter the hostname and click OK
    image
  4. Next we need to create a DD Boost Storage Unit.  Navigate to Data Management – DD Boost – Storage Units and click Create.  Enter a descriptive name and configure any quotas, if desired.  Click OK
    image
  5. Now we need to enable an interface for DD Boost.  Navigate to Data Management – DD Boost – IP Network.  Highlight and edit an interface group and tick the “Enabled” check box in the resultant dialogue box and click OK.
    image
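If you prefer the command line, roughly the same checks can be done over an SSH session to the DataDomain.  Treat the commands below as a sketch – the exact syntax differs between DD OS releases:

    license show                                  (the DDBOOST licence should be listed)
    ddboost status                                (confirms DD Boost is enabled)
    ddboost storage-unit create BackupExec        (creates the storage unit used later by Backup Exec)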
That takes care of the configuration on the DataDomain side of things.  Let’s move over to Backup Exec.

Backup Exec 2012 Configuration

  1. Download and install the latest version of the EMC DataDomain Boost for Symantec OpenStorage plugin (version 2.5.0.3-314845 at the time of writing).  This file is available from the DataDomain support site (Powerlink login required)
  2. Open up the Backup Exec console, select the Storage tab and click Configure Storage
  3. The type of storage is OpenStorage
    image
  4. Enter a name and description for the DataDomain
    image
  5. Select the DataDomain provider (the DataDomain only shows up once you’ve completed the OST plugin installation as per step 1)
    image
  6. Enter the connection details for your DataDomain device
    image
  7. Enter the Storage Location configured on the DataDomain (BackupExec in our case)
    image
  8. Select the number of concurrent operations allowed to the DataDomain
    image
  9. Click Finish on the Summary screen and confirm that you would like to restart the Backup Exec services
    image
With that completed you’ll be able to select the DataDomain as a deduplication target (as opposed to a B2D device).

Friday, December 14, 2012

Rebuilding the Exchange 2010 Offline Address Book (OAB) from scratch

IT peeps supporting Microsoft Exchange can be divided into two groups – those that have experienced problems with the OAB and those that will.  If you’re in the first camp – I feel your pain.  Those of you lucky enough to be in the second camp – file this away, it might come in handy…no I’m not being facetious.
The process below will blow away your OAB and create a new one, so be mindful.  FWIW I’ve never had any issues with this process.  This is especially effective if you’ve screwed up on the public folder replication during Exchange Migrations (don’t ask).
We rebuild the Exchange 2010 OAB like so:

Create a new Offline Address Book object

  1. Open the Exchange Management Console (EMC) and navigate to Organisation Configuration – Mailbox
  2. Click the Offline Address Book tab.  Right click in the blank area and click New Offline Address Book
  3. Give your OAB a different name than the existing one
  4. Select your Exchange 2010 MBX server as the OAB generation server
  5. Check Include the default Global Address Lists option
  6. Check Enable Web-based Distribution as well as the Enable public folder distribution option
  7. Finish the wizard
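For those who prefer the shell over the EMC, the same OAB can be created from the Exchange Management Shell.  A minimal sketch – "New OAB" and MBX01 below are placeholder names for your new OAB and your Exchange 2010 mailbox server:

    New-OfflineAddressBook -Name "New OAB" -AddressLists "Default Global Address List" -Server MBX01

The web-based and public folder distribution settings can then be adjusted on the resulting object with Set-OfflineAddressBook.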

Restart Exchange Services

  1. Restart the Microsoft Exchange System Attendant service
  2. Restart the Microsoft Exchange File Distribution service
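From an elevated PowerShell prompt on the mailbox server this boils down to the following (service names as per a default Exchange 2010 installation):

    Restart-Service MSExchangeSA
    Restart-Service MSExchangeFDS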

Update and set the OAB as default Offline Address Book

  1. Right-click your newly created OAB and click Update.  This can take a couple of minutes; confirm successful completion via your Application log
  2. Right-click the OAB from step 1 and click Set as default

Assign the OAB to the affected users’ databases

  1. Open the Exchange Management Shell (EMS)
  2. Execute the following:
    Get-MailboxDatabase | Set-MailboxDatabase -OfflineAddressBook "%your new OAB%"
  3. Wait for Outlook to complete its OAB download cycle (this can take as much as 24 hours)
You can safely delete the old OAB once you’ve verified that your clients are successfully downloading the newly created one.
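To confirm that the databases actually picked up the new OAB, a quick check from the EMS does the trick:

    Get-MailboxDatabase | Format-Table Name,OfflineAddressBook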

Wednesday, December 12, 2012

Performing a Non-Disruptive Disk and Shelf Firmware upgrade on a NetApp FAS2040

I received a mail from NetApp this morning, pointing my attention to KB ID 7010014.  In a nutshell, there is a drive firmware upgrade available which lowers drive failure rates.  AutoSupport has also been nagging me about out-of-date DS4243 shelf firmware, so I thought this would be a perfect opportunity to upgrade it all in one go.  It goes without saying that the upgrades must have zero impact on client access.  The process below was run on Data ONTAP Release 8.1 7-Mode.

Update the Disk Qualification Package

  1. Download the latest DQP from the NetApp support site
  2. Extract the files and copy them to the /etc folder on your filer, overwriting the existing files

Update the Disk Shelf Software

  1. Download the appropriate disk shelf software upgrade from the NetApp support site
  2. Extract and copy it to the /etc/shelf_fw folder on your filer
  3. Run the options shelf.fw.ndu.enable command and verify it is set to on
    • If not, enable it with the options shelf.fw.ndu.enable on command
  4. Execute the storage download shelf command to update the shelf firmware and enter yes when prompted
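For reference, the shelf firmware portion boils down to the following console session:

    filer> options shelf.fw.ndu.enable
    filer> options shelf.fw.ndu.enable on        (only needed if the previous command shows it as off)
    filer> storage download shelf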

Update the Disk Firmware

  1. Download the latest disk firmware from the NetApp support site
  2. Verify the following, otherwise you will not be able to do a non-disruptive upgrade
    • Aggregates need to be RAID-DP or mirrored RAID4
    • You need to have functioning spares
  3. Run the options raid.background_disk_fw_update.enable command and verify it is set to on
    • If not, enable it with the options raid.background_disk_fw_update.enable on command
  4. Extract and copy the disk firmware to the /etc/disk_fw folder on your filer
  5. The upgrade should start automatically in a couple of minutes
  6. Repeat for both controllers

Verifying the upgrade

Execute the sysconfig -v command to verify successful installation.
And there we go, we have non-disruptively upgraded the shelf and disk firmware in our filer!

Saturday, December 8, 2012

Migrating CIFS Shares to a new NetApp Filer

What better way to kick off the festive season than with a storage migration (only being slightly ironic!).  A customer uses their existing NetApp kit to provide block storage to vSphere hosts and CIFS shares to Windows clients and they wanted me to do a swap out upgrade.  Migrating the vSphere data is a cinch nowadays, what with Storage vMotion and all, so I’ll just document the CIFS stuff.

  1. First you’ll need to set up a SnapMirror relationship for the CIFS volume between the source and destination filers (no faffing around with robocopy and the like); see the command sketch after this list
  2. Make a backup copy of the /etc/cifsconfig_shares.cfg file
  3. Execute cifs terminate on the source filer (downtime starts here)
  4. Update (quiesce if necessary) and break the SnapMirror relationship
  5. Take the source filer offline
  6. Assign the source filer’s IP to the new filer
  7. Reset the source filer’s account in Active Directory (if applicable)
  8. Execute cifs setup on the new filer
    1. It goes without saying that you will assign the source filer’s hostname to the destination filer, as well as join it to the AD (assuming the source filer was joined)
  9. Execute cifs terminate on the destination filer and replace the cifsconfig_shares.cfg with the backup copy you made in step 2
  10. Execute cifs restart on the destination filer
  11. Test client access
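For reference, the SnapMirror side of the cutover (steps 1 and 4) looks roughly like this from the destination filer’s console – filer and volume names are examples only:

    dstfiler> snapmirror initialize -S srcfiler:cifsvol dstfiler:cifsvol      (initial baseline, run well ahead of the cutover)
    dstfiler> snapmirror update dstfiler:cifsvol                              (final incremental after cifs terminate on the source)
    dstfiler> snapmirror quiesce dstfiler:cifsvol
    dstfiler> snapmirror break dstfiler:cifsvol                               (the destination volume becomes writable)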

Tuesday, October 30, 2012

Creating a WinPE boot disk for SCCM with integrated drivers

I’m busy with a desktop deployment project, utilising SCCM’s OSD functionality.  This required creating a WinPE-based boot image which contains the necessary network and mass-storage drivers to allow the bare-metal deployment process to connect to my SCCM server.

Here is how I did it:

Download and Install the Windows Automated Installation Kit (WAIK)

  1. Download the WAIK from here
  2. Strictly speaking not necessary, but you can download the SP1 supplement from here
  3. Install it – installation is straight-forward, next – next – done kind of thing

Creating the WinPE boot disk

I used c:\dellwinpe as my wim staging folder, you can change it to suit your environment.

  1. Launch an administrative WAIK command prompt (normal command prompt won’t work)
  2. copype x86 c:\dellwinpe
  3. Copy the Dell WinPE driver cab to the c:\dellwinpe folder and extract it so that the extracted WinPE folder sits directly under the root of c:\dellwinpe
  4. dism /Mount-Wim /WimFile:winpe.wim /index:1 /MountDir:Mount
  5. dism /add-driver /driver:"winpe" /image:"mount" /recurse
  6. dism /unmount-wim /mountdir:"mount" /commit
  7. dism /mount-wim /wimfile:"winpe.wim" /index:1 /mountdir:"mount"

Now we add a couple of packages necessary for SCCM integration and deployment

  1. dism /image:"mount" /add-package /packagepath:"C:\Program Files\Windows AIK\Tools\PETools\amd64\WinPE_FPs\WinPE-Scripting.cab"
  2. dism /image:"mount" /add-package /packagepath:"C:\Program Files\Windows AIK\Tools\PETools\amd64\WinPE_FPs\WinPE-WMI.cab"
  3. dism /unmount-wim /mountdir:"mount" /commit

There will now be a file called winpe.wim in your c:\dellwinpe folder, ready to be imported into SCCM as a boot image.
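If you want to sanity-check the driver injection before importing the image into SCCM, you can remount the wim and list the third-party drivers it now contains:

    dism /mount-wim /wimfile:"winpe.wim" /index:1 /mountdir:"mount"
    dism /image:"mount" /get-drivers
    dism /unmount-wim /mountdir:"mount" /discard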

Wednesday, October 10, 2012

Directly Connecting a Brocade 815 HBA to an EMC VNX5300

I’m busy with a project which involves getting two ESXi hosts hooked up to a VNX5300 configured in block mode.  The order we placed with Dell specified Emulex 12000 HBAs, but Dell got creative and shipped Brocade 815s instead.  The only problem was that they didn’t work when directly connected to the front-end ports on the VNX.  I’m documenting the symptoms here as well, so that the next person does not have to battle with this for two days.

The Symptoms

When directly connecting the HBAs to the VNX fiber ports, the following events pop up in the SP event logs:

  • EV_VirtualArrayFeature::_mergeInternalObjects() - No parent for HBA,
  • EV_TargetMapEntry::GetHostInitiatorPort() - NULL HBAPort pointer

Running NaviSECCli.exe -Address 172.20.10.27 port -list -sfpstate outputs the following:

SP Name:             SP A
SP Port ID:          1
SP UID:              50:06:01:60:BE:A0:72:F9:50:06:01:61:3E:A0:72:F9
Link Status:         Up
Port Status:         Online
Switch Present:      NO
SFP State:           Online

This tells us that things are fine on a physical layer, but not much else is happening higher up the stack.

The Fix

First we need to upgrade the HBA firmware to version 3.1.  There are various OS-specific ways to do it; the easiest is probably to download the LiveCD from Brocade.  Since this HBA is not on the ESXi 5.1 HCL we also need to install the driver manually (you need at least v3.1).  I include the steps for the sake of completeness.

  1. Enable SSH on your ESXi host
  2. Use scp for Windows or the following command from a Linux / Mac host:  scp brocade_driver_esx50_v3-1-0-0.tar root@<ip address>:/tmp
  3. SSH into your ESXi host and navigate to the /tmp folder with cd /tmp
  4. Execute tar xf brocade_driver_esx50_v3-1-0-0.tar
  5. Execute ./brocade_install_esxi.sh
  6. Wait for the installation to finish (takes about 1 – 2 mins) and reboot host once done

Now we need to configure the HBA for direct connection or, more technically, FC-AL mode:

  1. SSH into your ESXi host and navigate to /opt/brocade/bin/ by entering cd /opt/brocade/bin/
  2. ./bcu port --topology 1/0 loop
  3. ./bcu port --disable 1/0
  4. ./bcu port --enable 1/0
  5. ./bcu port --topology 2/0 loop
  6. ./bcu port --disable 2/0
  7. ./bcu port --enable 2/0
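To verify that the change took, bcu can also list the ports and their configured topology – something along these lines should now show both ports in loop mode (exact syntax may vary between driver versions):

    ./bcu port --list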

Your ESXi host should now show up as a host on the VNX where you can add it to a storage group and assign LUNs.

Sunday, July 29, 2012

Setting up vSphere Active / Active iSCSI connections to a NetApp FAS2040

I recently had the opportunity to architect a solution consisting of 3 vSphere 5 boxes connecting to a NetApp FAS2040.  Storage connectivity would be via iSCSI.  The storage network would be running off of 2 Cisco 2960G switches, soon to be replaced by stacked Cisco 3750s.

The requirements were stock standard: as high a throughput as possible, with as much redundancy as possible.  This meant going active/active on the iSCSI links.  Here is how I did it.

NetApp FAS2040 Configuration

This little SAN has eight 1Gb Ethernet ports.  Because the Cisco 2960G switches do not support cross-switch link aggregation (this is where the 3750s will come in), I had to come up with a simpler design – what NetApp terms a Single-Mode design.  My design allows for:

  • Two active connections to each controller, thus a total of four active sessions
  • Storage path HA
  • Load balancing across links
  • Use of vSphere-side storage MPIO rather than switch-side configuration

Virtual Interface (VIF) Configuration:

All VIFs are single-mode (active/passive):

  • Cont1_Vif01 - e0a/e0b (e0a active, connected to switch 1; e0b passive, connected to switch 2) IP – 192.168.1.1
  • Cont1_Vif02 - e0c/e0d (e0c active, connected to switch 2; e0d passive, connected to switch 1) IP – 192.168.2.1
  • Cont2_Vif01 - e0a/e0b (e0a passive, connected to switch 1; e0b active, connected to switch 2) IP – 192.168.1.2
  • Cont2_Vif02 - e0c/e0d (e0c passive, connected to switch 2; e0d active, connected to switch 1) IP – 192.168.2.2
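For reference, creating the first of these VIFs on the controller console looks something along these lines (ifgrp on Data ONTAP 8.x, vif on older releases; interfaces and IPs as listed above, netmask assumed):

    Cont1> ifgrp create single Cont1_Vif01 e0a e0b
    Cont1> ifconfig Cont1_Vif01 192.168.1.1 netmask 255.255.255.0 partner Cont2_Vif01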

This image, courtesy of NetApp, explains it infinitely better than my wall of text:-)

image

I also configured partner takeover for all VIFs.  In case of a controller failure this allows the remaining controller to take over its partner’s VIFs.

Ethernet Storage Network Configuration

On the storage network I had to configure 2 critical settings:

  • Spanning Tree Portfast
  • Jumbo Frames

When connecting ESX hosts and NetApp storage arrays to Ethernet storage networks, NetApp highly recommends configuring the Ethernet ports to which these systems connect as RSTP edge ports.  This is done like so:

Switch2960(config)# interface gigabitethernet2/0/2
Switch2960(config-if)# spanning-tree portfast

Next up, Jumbo Frames:

Switch2960(config)# system mtu jumbo 9000
Switch2960(config)# exit
Switch2960# reload

vSphere Configuration

I am in love with vSphere 5, and one of the biggest reasons for that is that a lot of the configuration parameters that used to be command-line only have been moved into the GUI.  Another reason is Multiple TCP Session Support for iSCSI.  This feature enables round-robin load balancing using VMware native multipathing and requires a VMkernel port to be defined for each physical adapter port assigned to iSCSI traffic.  That said, let’s get configuring:

  1. Open your vCenter Server
  2. Select an ESXi host
  3. In the right pane, click the Configuration tab
  4. In the Hardware box, select Networking
  5. In the upper-right corner, click Add Networking to open the Add Network wizard
  6. Select the VMkernel radio button and click Next
  7. Configure the VMkernel by providing the required network information.  NetApp requires separate subnets for active/active iSCSI connections, therefore we will create two VMkernels, on the 192.168.1.x and 192.168.2.x subnets respectively.
  8. Configure each VMkernel to use a single active adapter that is not used by any other iSCSI VMkernel.  Also, each VMkernel must not have any standby adapters.  If using a single vSwitch, it is necessary to override the switch failover order for each VMkernel port used for iSCSI: there must be only one active vmnic, and all others should be set to Unused.
  9. The VMkernels created in the previous steps must be bound to the software iSCSI storage adapter. In the Hardware box for the selected ESXi server, select Storage Adapters.
  10. Right-click the iSCSI Software Adapter and select properties. The iSCSI Initiator Properties dialog box appears
  11. Click the Network Configuration tab
  12. In the top window, the VMkernel ports that are currently bound to the iSCSI software interface are listed
  13. To bind a new VMkernel port, click the Add button. A list of eligible VMkernel ports is displayed. If no eligible ports are displayed, make sure that the VMkernel ports have a 1:1 mapping to active vmnics as described earlier
  14. Select the desired VMkernel port and click OK.
  15. Click Close to close the dialog box
  16. At this point, the vSphere Client will recommend rescanning the iSCSI adapters. After doing this, go back into the Network Configuration tab to verify that the new VMkernel ports are shown as active, as per the image below.

image
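For the command-line inclined, the VMkernel binding in steps 9 to 15 can also be done with esxcli on the host itself.  A sketch, where vmhba33 stands in for your software iSCSI adapter and vmk1/vmk2 for the two iSCSI VMkernel ports:

    esxcli iscsi networkportal add --nic vmk1 --adapter vmhba33
    esxcli iscsi networkportal add --nic vmk2 --adapter vmhba33
    esxcli iscsi networkportal list --adapter vmhba33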

Congratulations, you now have active / active, redundant iSCSI sessions into your NetApp SAN!

Saturday, July 14, 2012

Credibility, Ethics and Bias

 

I've been working in IT for the best part of a decade, but only got into blogging and the whole social media thing in the last year or so.  I really love doing what I do and sharing it with others, but in putting yourself out there you begin to realise how important ethics are.  There is absolutely no difference between me and the next blogger, apart from the quality of the content one puts up and credibility.

Then something dawned on me: credibility is not just something that should shine through in what you put out there for the public to consume, it is even more important to apply those principles in your day-to-day dealings.  It was at about that time that I realised that true credibility is exceedingly rare in IT, in my experience.

In my universe, a very quick way to lose credibility is to shoot down and bad-mouth a product, vendor or technology you know nothing about.  An example - I am in the somewhat unique situation where my job involves presales and architecting products from the two biggest storage vendors out there, namely EMC and NetApp.  As if that's not enough, I also do the HP EVA portfolio.  The storage field is hugely competitive, and it shows.  I take my job seriously, so I make it my business to know the products I work with as well as possible. 

For me it really is all about analysing the customer's technical and business needs and then applying the best technology for those needs.  And believe me, there are enough key differentiators between the various vendors that, combined with the customer's budgetary requirements, you will be able to determine a best-fit solution rather than the one-size-fits-all that most vendors fixate on.  Unfortunately the amount of FUD and misinformation I've heard from people who really should know better is absolutely astounding.  It gets to the point where the vendors are *actively* just advancing their own best interests, with the client and their interests a distant second (or maybe I'm just naive, and that is how it's supposed to work?).

As if that's not bad enough, I also do the entire lifecycle of both vSphere and Hyper-V, from pre-sales through to implementation and support.  The amount of garbage I hear spouted is enough to fill a landfill.  Admittedly most of it comes from the vSphere-supporting side of the fence, but the Microsoft partners are quickly catching up.  A couple of examples I've heard: "Hyper-V does not do the equivalent of vMotion" or "the ESXi hypervisor is 50% faster than Hyper-V".  Complete and utter bollocks, in other words.  As I said, the MS camp is quickly catching up, and with the confidence and maturity that Hyper-V 3 will bring we'll see the MS guys giving as good as they get.

That being said, there will always be a bit of bias inherent in everyone.  You will develop bias through your career, naturally leaning towards the solutions that you sell and implement.  That is normal and there is nothing wrong with it.  By all means challenge the opposition's claims, ask them to back up their statements, ask for facts, see through the normal sales BS and question their value propositions.

What is not right is the stuff I was talking about earlier.  At the risk of repeating myself, we should all try to avoid spreading FUD intentionally.  Spreading it unintentionally is only marginally better, because one should always verify claims before repeating them as gospel.  If NetApp, for example, tells me they scored eleventy billion marks on some benchmark whilst EMC flunked out, I will investigate - EMC does know a thing or two about storage, so there is bound to be a story behind the story.  Conversely, if I hear an EMC partner starting with "No one can touch our Avamar / DataDomain dedupe / our ease of management / etc" my BS detector goes into overdrive.

The ultimate loser here is the customer, who gets bombarded with noise and misinformation from all sides, whose job hinges on making the correct decision, who ultimately needs to put his trust in a vendor more interested in pushing a brand or technology which might or might not solve the problem, and who then needs to explain when a solution does not deliver. 

We need to start putting the customer first in everything we do.  In the short term it might not seem the easy / profitable thing to do, but in the long term you will be rewarded.  Credibility is truly priceless, and once you give it up it is very, very difficult to regain. 

Do The Right Thing.

Sunday, June 3, 2012

Cluster Shared Volume stays in redirected mode

I recently had a perplexing problem on one of my lab servers, which took a lot of head-scratching to solve.  Fortunately I had some time to burn so I managed to get to the bottom of it.

Symptom

If I moved a disk or a CSV to a specific node in my Hyper-V failover cluster it would put the CSV in redirected mode and log the following to the System log

Log Name:      System
Source:        Microsoft-Windows-FailoverClustering
Event ID:      5125
Task Category: Cluster Shared Volume
Level:         Warning
Keywords:     
User:          SYSTEM
Description:
Cluster Shared Volume '\\?\Volume{0bf0b229-9b0e-11e1-8a3a-e4115ba98410}\' ('') has identified one or more active filter drivers on this device stack that could interfere with CSV operations. I/O access will be redirected to the storage device over the network through another Cluster node. This may result in degraded performance. Please contact the filter driver vendor to verify interoperability with Cluster Shared Volumes.
Active filter drivers found:
aksdf (Encryption)

Cause

After a fair bit of head-scratching, rolling back changes and research with Sysinternals Process Monitor, I pinpointed the problem to NetApp Single Mailbox Restore for Exchange.  During installation it installs the aksdf.sys device driver.  A quick Google search showed it to be a driver used for USB dongle licensing.  Weird, since SMBR does not require a dongle.  Anyhow, this device driver conflicts with the CSV and forces it to run in redirected mode.

Solution

The solution is simple – navigate to the HKLM\SYSTEM\CurrentControlSet\Services\aksdf registry key and set the Start value to 4 (disabled), as per the below screenshot.

Image
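If you’d rather not click around in regedit, the same change can be made from an elevated command prompt (reboot the node afterwards for it to take effect):

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\aksdf" /v Start /t REG_DWORD /d 4 /f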

This is not documented anywhere on the NetApp support site, so I will file a bug report.  In mitigation, I can’t see anyone actually running SMBR on one of their production cluster nodes.  Still, it should be trivial for NetApp to patch their installation routine to not install the aksdf.sys device driver.

Wednesday, May 23, 2012

NetApp Single Mailbox Restore (SMR) for Exchange for Virtualised Exchange Servers

NetApp Single Mailbox Restore for Exchange 2010 is, well, a snap to use when your Exchange server is set up the “NetApp way”.  What is the NetApp way, you ask?  Well, in a nutshell, it is when you have your physical Exchange box hooked up to your SAN via iSCSI or FCP.  If you are virtualised then you’ll need to present your disks via RDMs (vSphere) or pass-through disks if you live in MS land.

What I address here is the case where you have an Exchange server virtualised with Hyper-V, with your hard drives attached as VHDs.  Even though this example uses Hyper-V, the principles are also applicable to a vSphere environment.

Mounting the NetApp Snapshot

  1. Open NetApp SnapDrive on a host connected to your Filer via either FCP or iSCSI
  2. Navigate to the Disks node and expand the LUN containing the VHD which in turn contains your Exchange DB’s.
    image
  3. Under Snapshot Copies, right-click the point-in-time snapshot that you wish to restore from and select Connect Disk
    image
  4. The Connect Disk Wizard will start.  Click Next
    image
  5. Select the appropriate snapshot and click Next.
    image
  6. Click Next on the “Important Properties…” screen (don’t change anything here)
    image
  7. Set the LUN type as Dedicated and click Next
    image
  8. Assign a Drive Letter and click Next
    image
  9. Select your initiators and click Next
    image
  10. Select Manual on the Initiator Group Management Screen and click Next
    image
  11. Select the appropriate iGroup and click Next
    image
  12. Click Finish to complete the SnapDrive Connect Disk Wizard
    image

Your NetApp snapshot should now be mounted as a drive accessible through Windows Explorer.  If you browse to it, it should contain the VHD hosting your Exchange DBs.  The next step is to mount the VHD so that it is accessible to SMR.

Mounting the VHD

  1. Open Server Manager.  Navigate to Storage – Disk Management.  Right-click Disk Management and click Attach VHD
    image
  2. Browse to the VHD from the previous section and click OK
    image
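If you prefer to script this step, the same attach can be done with diskpart – the path below is purely an example, point it at the VHD on the snapshot drive you connected earlier:

    diskpart
    select vdisk file="S:\ExchangeSnap\ExchangeDB.vhd"
    attach vdisk

The matching detach vdisk command can be used during the cleanup later on.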

Your VHD will now be mounted with the next available drive letter and accessible via Windows Explorer.  The next and final step will be to mount our mailbox with SMR and get restoring!

Restoring with SMR

  1. Open Single Mailbox Recovery and click File – Open Source
    image
  2. Browse to your source EDB file, ignoring any warnings about missing log files (Hooray for application-aware snapshots!) and click OK
    image
  3. SMR will now process your database and allow you to restore a mailbox, folder or item to PST or an Exchange Server.
    image

Awesome, but once done we have to clean up after ourselves by dismounting the VHD and disconnecting the temporary NetApp SnapShot LUN.

Cleanup

  1. Open Server Manager. Navigate to Storage – Disk Management. Right-click Disk Management and click Detach VHD
    image
  2. Take care to *not* check the “Delete…” box and click OK
    image
  3. Open SnapDrive and go to the Disks node.  Right click your temporary SnapShot LUN and click Disconnect Disk.
    image

Thursday, May 10, 2012

Configuring NetApp SnapManager for Hyper-V (Part 2) – Adding your Hyper-V Failover Cluster


Part one of our little tutorial dealt with correctly setting up and sizing the SnapInfo LUN.  Part deux will show you how to add and configure your cluster for SnapManager for Hyper-V.  Let’s dive in.

Configuring a Hyper-V Failover Cluster

  1. Open up SnapManager for Hyper-V, click the Protection node – Hosts tab and click Add Host.  Enter your host name.  NB! Only enter the NetBIOS name, not the FQDN***
    image
  2. Click Next.  Answer Yes to the dialog box asking you to start the configuration wizard.
    image
  3. The configuration wizard will pop up
    image
  4. Click Next.  Enter the report path location (or choose the default).
    image
  5. Enter the correct notification settings for your environment.
    image
  6. Click Next. Select your Snapinfo path.
    image
  7. Click Next. Admire the exquisitely formatted summary.
    image
  8. Click Finish.  The configuration wizard will now do the necessary to configure your Hyper-V failover cluster.
    image
  9. Once you click close you can start configuring your Hyper-V protection.

***If the Fully Qualified Domain Name (FQDN) is used, SMHV will not be able to recognize the name as a cluster. This is due to the way in which Windows Failover Clustering (WFC) returns the cluster name through WMI calls. Consequently, the host will not be recognized by SMHV as a cluster and will fail to use a clustered LUN as the SnapInfo directory location.
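If you are unsure of the exact NetBIOS cluster name to enter in step 1, it can be read from any of the cluster nodes with PowerShell (this assumes the FailoverClusters module is available):

    Import-Module FailoverClusters
    Get-Cluster | Format-List Name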

Configuring NetApp SnapManager for Hyper-V (Part 1) – Creating the Snapinfo LUN

Simple as this sounds, I found that the process is not as straightforward or as well documented as it could be, especially with regards to creating the clustered SnapInfo LUN and folders.  Consequently I decided to document it with (a first for this blog) screenshots.

I am going to assume that you have already hooked up your hosts to your NetApp system, and that you’ve installed SnapDrive and SnapManager for Hyper-V.

The steps, in a nutshell, are:

  1. Create the Snapinfo LUN
  2. Make the Snapinfo LUN a highly available clustered resource
  3. Configure SnapManager for Hyper-V

Creating the SnapInfo LUN

  1. Create a volume to host your Hyper-V SnapInfo LUN
  2. Open up SnapDrive on one of your Hyper-V cluster nodes, go to the Disks node, and click Create Disk.  This launches the Create Disk Wizard.
    image
  3. Click Next.  Now highlight the volume you created in step 1, then enter a LUN name and description:
    image
  4. Click Next.  Very important – select Shared (Microsoft Cluster Services Only).
    image
  5. Click Next. The following screen should list the active nodes in your Failover Cluster.
    image
  6. Click Next. Select the appropriate options and size for your environment.***
    image
  7. Click Next. Select the initiators to be mapped to the LUN.
    image
  8. Click Next.  Select whether you want to manually select the igroups (collections of initiators) or whether you want the filer to do it automatically.
    image
  9. Click Next. Choose the option to create a new Cluster Group to host the LUN.
    image
  10. Click Next and click Finish to exit the wizard.

image

To recap, the above will:

  • Create a LUN on the volume of your choosing
  • Format the LUN with the NTFS filesystem
  • Add the disk to your Failover Cluster as part of a Cluster group
  • Assign a drive letter to the disk.

***SnapInfo LUN size provisioning:  The NetApp filer stores about 50KB of metadata per VM per snapshot.  Due to the way Hyper-V snapshots work it keeps two snapshot copies per backup, so if we back up 20 VMs once per day the sizing works out as follows:  20 * 50KB = 1MB, doubled = 2MB per day.  NetApp allows us to store 255 snapshots per volume, so we should cater for roughly 510MB in total.  I give it 10GB just because I can.  And because thin provisioning works.