Sunday, November 13, 2011

Problem accessing CSV from passive Cluster Node

I recently had a perplexing problem where Cluster Shared Volumes in a Hyper-V cluster were not working correctly.  The volumes were only accessible from the node currently owning the volume.  Attempts to access the volumes from any of the other nodes resulted in Windows Explorer hanging indefinitely.  Enabling maintenance or redirected mode made no difference.


Event ID 5120 was logged:  Cluster Shared Volume 'Volume1' ('Cluster Disk 1') is no longer available on this node because of 'STATUS_BAD_NETWORK_PATH(c00000be)'. All I/O will temporarily be queued until a path to the volume is re-established.
Event ID 5142 also occurred:  Cluster Shared Volume 'Volume1' ('Cluster Disk 1') is no longer accessible from this cluster node because of error 'ERROR_TIMEOUT(1460)'. Please troubleshoot this node's connectivity to the storage device and network connectivity.


I could ping all nodes over both the Production and Heartbeat network links, and I could access file shares on any node from any other node.


The Problem
After much troubleshooting I realised I had disabled both File and Print Sharing and Client for Microsoft Networks on the Heartbeat NIC on all nodes.  This is a best practice that has been drummed into me since I started working on Microsoft Clustering back when it was still code-named Wolfpack.


The Resolution
I enabled File and Print Sharing and Client for Microsoft Networks and immediately afterwards all my Cluster Shared Volumes started functioning as expected.


The Explanation
It’s documented in MS KB Article 2008795.  When a CSV volume is accessed from a passive (non-coordinator) node, the disk I/O is routed to the owning (coordinator) node through a 'preferred' network adapter, and this requires SMB to be enabled on that adapter.  For SMB connections to work on these network adapters, the aforementioned protocols must be enabled.  Ugh.
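
If you want to check or fix the bindings from the command line, PowerShell on Server 2012 and later can do it – a minimal sketch, assuming the heartbeat adapter is named "Heartbeat" (on 2008 R2 you still tick the checkboxes in the NIC's properties dialog).  ms_server corresponds to File and Print Sharing, ms_msclient to Client for Microsoft Networks:

Get-NetAdapterBinding -Name "Heartbeat" -ComponentID ms_server, ms_msclient
Enable-NetAdapterBinding -Name "Heartbeat" -ComponentID ms_server
Enable-NetAdapterBinding -Name "Heartbeat" -ComponentID ms_msclient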

Saturday, November 12, 2011

Setting up a KMS server on Server 2008 R2

Today we’ll deal with setting up a Microsoft Key Management Server (KMS).  A KMS is used to activate Microsoft Volume Licensed products such as Windows 7, Office 2010 and Windows Server 2008 R2, amongst others.

A KMS server activates a client for a period of 180 days.  The activated machine will contact the KMS every 7 days to renew its activation; if successful, this resets the license counter back to 180 days.  If not, it attempts to connect to the KMS in the background every 2 hours.

If, after 180 days, the machine has not been able to contact the KMS, it will go into a 30-day grace period and notify the user.  After that the machine will enter a reduced-functionality mode until it can again connect to a KMS.

That was quite a mouthful – so let’s get down to setting up a KMS on a Windows Server 2008 R2 host.  In addition we’ll also set it up so that it can activate Office 2010 clients.

Setting up a KMS

  1. Activate Windows with a KMS host key (see the commands after this list).  This will automatically configure the server as a KMS.
  2. Download the Office 2010 KMS Host License Pack.
  3. Enter your KMS host key when prompted (you will get this key from your Microsoft Volume Licensing website).
  4. Make sure to allow the Key Management Service through the Windows Firewall.
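
Step 1 boils down to two commands from an elevated prompt – a minimal sketch (the key below is a placeholder for your actual KMS host key):

slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr.vbs /ato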

Verify that KMS is published in DNS

nslookup -type=srv _vlmcs._tcp.<your DNS domain>

Checking the KMS status on your KMS

From an elevated command prompt, type slmgr.vbs /dlv

Checking the license and activation status on a client

slmgr.vbs /dli

A KMS goes a significant way towards easing your administrative burden, so go ahead and set one up – it’s easy-peasy!

Friday, September 30, 2011

Handy naviseccli Commands

I have been meaning to document this for ages.  I often find myself supporting clients who are located on the other side of a horribly slow WAN / VPN / two-cans-and-a-piece-of-string link.  Slow as in even Navisphere Express times out in the web browser.  That’s when a ninja-admin such as myself whips out his command-line fu.  All commands below are to be entered on a single line; substitute %username%, %password% and <ip address> with your own values.

Front End Port and Back End Bus Speeds

naviseccli -h <ip address> port -list -sfpstate
naviseccli -h <ip address> -set sp a -portid 0 2
naviseccli -h <ip address> backendbus -get -speeds 0

SP cache details
naviseccli -scope 0 -user %username% -password %password% -address <ip address> getcache

Get all the details of the LUNs on the array
naviseccli -scope 0 -user %username% -password %password% -address <ip address> getlun

Review IO Ports on an array
naviseccli -h <ip address> -user %username% -password %password% -scope 0 ioportconfig -list |more

All details from the Array
naviseccli -scope 0 -user %username% -password %password% -address <ip address> getall

SP Reboot and Shutdown
naviseccli -h <ip address> rebootsp
naviseccli -h <ip address> resetandhold

Apart from being faster than the GUI, knowing the naviseccli commands also allows you to incorporate them in scripts, pipe the output, etc.  In other words, it’s a very nice string to have in your bow!
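
As a quick illustration of the scripting angle – a minimal sketch (the IP address, credentials and search terms are placeholders) that dumps the LUN details to a file and filters them for name and capacity:

naviseccli -scope 0 -user admin -password secret -address 192.168.1.50 getlun > luns.txt
findstr /i "Name Capacity" luns.txt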

Hyper-V 3.0 is full of win!

There has been a lot of movement in the virtualization space recently, what with the release of vSphere 5 and Microsoft giving us a sneak peek at the upcoming Hyper-V 3.0.  All in all it seems that Hyper-V is a very rapidly maturing product, and Microsoft is adding the features and scalability so craved by enterprises.  Indeed, it also appears that they are pulling ahead of vSphere in certain areas.

The table below highlights some key performance maximums:

                               vSphere 5    Hyper-V 3.0
Max Nodes per Cluster          32           63
Max VMs per Cluster            3000         4000
Max CPUs per VM                32           32
Max RAM per VM                 1 TB         512 GB
Max VM Disk Size               2 TB         12 TB
Max Processor Cores per Host   160          160
Max RAM per Host               2 TB         2 TB

In addition, Hyper-V 3.0 will also bring the following to the table:

  • Live storage migration
    • This allows you to move your VHDs to another volume whilst the VM is online.  The volume need not reside on shared storage
  • Hyper-V Replica
    • This allows replication via the LAN, for incredibly easy and cost-efficient DR
  • Native NIC teaming support in Windows Server 8
  • Storage De-Duplication
  • Offloaded Data Transfer (ODX), which basically offloads the grunt storage work to an ODX-enabled SAN
  • Virtualization-Aware Domain Controllers
    • You can now make and revert to snapshots of a virtualized domain controller
    • Domain controllers running as a VM can also be cloned

These are some of the more important features Hyper-V 3.0 has to offer.  It is quickly turning into a very viable and cost-effective alternative to vSphere.

Wednesday, August 24, 2011

Start-up Sequence for VMs in a Hyper-V Environment

I’ve had customers phone me several times about powering up their servers after an outage.  If it’s not done properly you can run into a situation whereby you cannot log on to a host to start a VM, because the virtualised DC is still down.  Not good.

So, if my clients or I are recovering from a shutdown, I usually do the following:

  1. Power on the Hyper-V server hosting the VM that holds the PDC emulator FSMO role.  The VM should preferably also be a GC and DNS server, and should refer to itself for either primary or secondary DNS.
  2. Sign on to the Hyper-V host using a Local Administrative Account.
  3. Start the VM.
  4. Log out of the Hyper-V host once the VM has booted successfully.
  5. Log into the Hyper-V host using a “proper” Domain Account.
  6. Start any other virtualised DCs / GCs you might have.
  7. Ensure you have at least one DC / GC running in each AD site.
  8. Boot up the rest of your environment.

Another option (pointed out to me in the comments section) is to set start-up delays on your VMs so that your DCs start up before your other servers; see the sketch after this paragraph.  This seems very basic, but it’s amazing how quickly common sense goes out of the window when the pressure is on to get an environment up and running.
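
On Server 2012 and later the delay can be set with the Hyper-V PowerShell module – a minimal sketch (the VM name and 120-second delay are placeholder values; on 2008 R2 the equivalent setting lives under the VM's Automatic Start Action in Hyper-V Manager):

Set-VM -Name "FileServer01" -AutomaticStartAction Start -AutomaticStartDelay 120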

Another way of dealing with this (my preferred way) is to have the PDC emulator FSMO role on a physical machine.  If you go this route, also ensure that the physical machine is a GC.

Friday, July 22, 2011

Publishing Remote Desktop Gateway (RDG) with TMG 2010

I recently had the pleasure of creating a Remote Desktop Services (RDS), Remote Desktop Gateway (RDG) and RemoteApp environment for a client.  This was a bit more technical and involved than I originally envisioned, no thanks to the scant documentation that exists.  I will detail all of that in a later blog post; for now I will focus on publishing your RDG, RDS and RemoteApp environment through a Microsoft TMG 2010 firewall.

First we have to create an SSL Listener
  1. Specify an IP address for the Listener
  2. Enable both HTTP and SSL connections
  3. For HTTP to HTTPS redirection, select “Redirect all traffic from HTTP to HTTPS”
  4. On the Certificates tab select “Use a single certificate for this web listener” and select an appropriate certificate
  5. Authentication should be set to “No Authentication”
Now we create the actual publishing rule:
  1. Allow
  2. From Anywhere
  3. To – your RDG IP or host name – forward the original host header – requests appear to come from the TMG
  4. Traffic HTTPS
  5. Listener – Select the one we created earlier
  6. Public name – This is the Public DNS name
  7. Paths should be /rdsweb/* and /rpc/*
  8. Authentication delegation – “No Delegation, client may authenticate directly”
These were the steps I had to take to successfully and securely publish the client's RDG to the internet.  Once again I found the existing documentation to be lacking in the extreme.  Hope this helps someone out there.
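
Once the rule is in place, a quick sanity check from outside the network never hurts – a minimal sketch (remote.example.com is a placeholder for your public name):

nslookup remote.example.com
telnet remote.example.com 443

If the connection on 443 opens, browse to https://remote.example.com/rdsweb to confirm the RD Web page is being served through TMG.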

Friday, July 15, 2011

How to Prepare an Offsite replica with DPM 2010

Sometimes the need will arise to back up your DPM replicas to removable storage, for whatever reason.  It might be so that you can recover your DPM server in case of a disaster, or you might even want to use the replicas to seed another DPM server in a DPM-to-DPM DR scenario.  Here is an extremely simple and effective way to accomplish that:

  1. On your DPM Server open an Admin Command Prompt
  2. Navigate to the DPM bin folder (usually C:\Program Files\Microsoft DPM\DPM\bin\)
  3. Execute dpmbackup -db
  4. Execute dpmbackup -replicas
  5. I prefer using robocopy to copy the data to USB (or any alternate) storage, like so: robocopy "C:\Program Files\Microsoft DPM\DPM\Volumes\ShadowCopy" %destination% /e /b
What the above does is create snapshots of the replica volumes and then mount those read-only snapshots under the \Program Files\Microsoft DPM\DPM\Volumes\ShadowCopy folder.  It is therefore a point-in-time replica which you can copy wherever you wish, be it disk or tape.  You can, of course, also use your favorite backup software to back up your replicas – just be sure to configure it to traverse mount points.
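
The whole procedure also wraps up neatly in a small batch file – a minimal sketch, assuming E:\DPMSeed is your removable drive (adjust both paths to suit your installation):

@echo off
rem Dump the DPM database, snapshot the replicas, then copy them off-box
cd /d "C:\Program Files\Microsoft DPM\DPM\bin"
dpmbackup -db
dpmbackup -replicas
robocopy "C:\Program Files\Microsoft DPM\DPM\Volumes\ShadowCopy" E:\DPMSeed /e /b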

Tuesday, June 28, 2011

Installing Windows 7 / 2008 R2 from a USB Stick

Too much of my life is spent staring at install screens - no more I say!  That time is better spent looking at the blinkenlights, drinking coffee or browsing slashdot.  One way of significantly speeding up install time is using a USB device instead of CD/DVD media.  Here is how to create bootable USB Installation media for Windows 7 and Server 2008 R2.
  • Launch the DiskPart utility by typing diskpart at the Start Menu (diskpart ships with all recent versions of Windows, so there is nothing extra to download)
  • Run the list disk command to, surprisingly, list the disks in your system
  • Now run select disk % where the "%" is actually the number of your USB drive (obtained above).
  • Run clean.
  • Now run create partition primary.
  • Let's set the partition to active by entering active
  • Now we format the drive with the FAT32 filesystem via the format fs=fat32 quick command
  • Run the assign command to assign a drive letter to our USB device
  • Copy the entire contents of the installation DVD to your USB device (a simple drag and drop will do)
  • Plug the USB drive into the target system and proceed with the (hopefully faster) installation.
Configuring the relevant BIOS boot options is left as an exercise for the reader.
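
The DiskPart steps can also be run unattended – a minimal sketch, assuming your USB stick is disk 1 (double-check with list disk first; clean is destructive!).  Save the following as usbprep.txt and run diskpart /s usbprep.txt:

select disk 1
clean
create partition primary
active
format fs=fat32 quick
assign
exit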

Thursday, June 23, 2011

Trend Micro OfficeScan Uninstallation

If you, like me, have ever been faced with having to uninstall Trend OfficeScan, only for it to ask for a long-lost password, you were pretty much out of luck.  Trend does have a KB article with manual uninstallation steps, but following them is an exercise in frustration.

Fortunately I recently happened across a "hidden" command that will take care of business.  You need to do the following:
  1. Open a command prompt and navigate to the folder where your Trend AV client is installed, usually c:\Program Files\Trend Micro\*****
  2. Execute the following command: NTRMV.exe -331
This takes care of all the nitty-gritty stuff, stopping the services, removing files and folders and removing all the relevant bits from the registry.
Another great thing about this approach is that it's much easier to script and / or deploy with your favourite management app.
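
For instance, a remote one-liner via psexec – a minimal sketch (the client folder name "OfficeScan Client" and the target PC name are hypothetical; substitute your actual install path):

psexec \\TARGETPC -s "C:\Program Files\Trend Micro\OfficeScan Client\NTRMV.exe" -331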

Monday, March 7, 2011

Repurposing an old Cisco PIX to provide secure public WiFi on a corporate LAN

I am in the planning stages of a fun little project, whereby the client’s goal is to provide secure wireless access to guests over an ADSL link dedicated to this purpose.  Simple enough, but this traffic will travel over the same edge-to-core switches that carry business traffic, so we will have to set up some VLANs.

This client recently retired their ageing PIX firewall and replaced it with new Cisco ASAs.  So instead of chucking the PIX, we will press it into service as the secure gateway / firewall for the public ADSL Internet breakout.

The PIX in this case has two physical interfaces, named ethernet0 and ethernet1.  ethernet0 will be connected to the ADSL and ethernet1 to the LAN.  ethernet1 will be configured in a VLAN, and the switch ports to which the guest APs connect will be configured to do the appropriate VLAN tagging.

Here is how to configure the PIX:

  1. interface ethernet0 auto
  2. interface ethernet1 auto
  3. interface ethernet1 vlan1 physical
  4. interface ethernet1 vlan10 logical
  5. nameif ethernet0 outside security0
  6. nameif ethernet1 inside security100
  7. nameif vlan10 guest_wifi security10

Most client devices nowadays expect DHCP, and since the guests don’t logically touch the corporate network we’ll have to make do with running DHCP off the PIX:

  1. dhcpd address 192.168.202.100-192.168.202.200 guest_wifi
  2. dhcpd dns 192.168.0.1 (substitute this with your ISP’s DNS Server)
  3. dhcpd lease 3600
  4. dhcpd ping_timeout 50
  5. dhcpd enable guest_wifi
  6. ip address guest_wifi 192.168.202.1 255.255.255.0

And that should work rather brilliantly – I will know for sure in about a week’s time when I implement it.
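
For completeness, the guests will also need a default route and NAT out of the outside interface – a minimal sketch in PIX 6.3 syntax (10.0.0.2 is a placeholder for the ADSL router’s IP):

route outside 0.0.0.0 0.0.0.0 10.0.0.2 1
global (outside) 1 interface
nat (guest_wifi) 1 192.168.202.0 255.255.255.0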

Friday, March 4, 2011

Creating a Hyper-V Cluster after the fact, or, how to preserve and add existing VMs to a Cluster.

I was faced with an interesting challenge recently.  A client was running two standalone Hyper-V hosts, each hosting about 4 VMs, on a Storage Area Network (SAN).  I had previously installed the SAN to provide increased IO performance for their SCADA (Citect, for those taking notes) system.

This was essentially a very effective proof of concept as far as the client was concerned and they wished to take advantage of the more advanced features offered by Clustered Hyper-V (stuff like live migration etc.).

This posed a challenge, because we needed to convert the LUNs occupied by the VMs to highly available Cluster Shared Volumes (CSVs).  In Hyper-V, a VM needs to be hosted on a CSV in order to be made highly available.  So off I went trying to figure out a non-disruptive way to convert all my LUNs to CSVs without losing any data.  This is what I came up with.

  1. Shut down your VM(s)
  2. Open Disk Management on your Hyper-V host and remove the drive letter from the LUN hosting the VM
  3. Open Failover Cluster Manager (FCM) –> Storage –> Add Disk –> Select Disk from Step 2 –> Click OK
  4. Still in FCM – go to Cluster Shared Volumes –> Add Storage –> select the disk you added in Step 3
  5. Open up Hyper-V Manager on the same host (notice the VM status is critical because you removed the drive letter).  Remove the VM
  6. Create a new VM, opt to store it under the %systemdrive%\ClusterStorage folder which was created automatically when you performed Step 4.  VERY IMPORTANT – Do not add any disks to the VM!
  7. Right-click the VM you created in Step 6 and choose Edit Settings.  Add the original VM’s disks (the boot drive must be added to IDE controller 0).  The existing VHDs will be found in %systemdrive%\ClusterStorage
  8. Open FCM – Go to Services and Applications – in the Action pane select Configure a Service or Application –> select Virtual Machine –> Check the VM created in Step 6 –> Complete Wizard
  9. Ensure that the VM is connected to the correct network in Hyper-V Manager
  10. Because we are connecting a new NIC to the VM you will have to re-specify the IP address inside the VM once the VM has started up

Rinse and repeat for all existing VMs you want to make highly available.  The Microsoft way would be to export the VMs and import them again.  Nothing wrong with that, apart from the fact that it takes a lot of time and storage, depending on the size of the VM.  My way is quick and easy, and it works!
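
Steps 3, 4 and 8 can also be done with the FailoverClusters PowerShell module on 2008 R2 – a minimal sketch (the disk and VM names are placeholders for your own):

Import-Module FailoverClusters
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Add-ClusterVirtualMachineRole -VMName "MyVM"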

Wednesday, February 23, 2011

Partial/No Redundancy on iSCSI Datastores

Expensive fiber SANs are beyond the budget of a lot of my clients, therefore a lot of my time is spent in iSCSI environments.  I’ve noticed in all instances that the Multipathing Status for all my iSCSI datastores is Partial/No Redundancy when viewed on the Storage Views tab in vCenter.  This bothers me, because I always go to great lengths to ensure that I set up my iSCSI multipathing correctly.

I therefore breathed a big sigh of relief when I discovered that this behaviour is a bug as confirmed by VMware Technical Support. The rule for displaying the “Multipathing Status” is as follows:

Full Redundancy – If you have 2 separate adapters and 2 separate paths to the datastore
Partial/No Redundancy – If there is one path which is Up
Unknown – If there is at least one path with an “Unknown” status
All Paths Down – No way to reach the datastore

When using the software iSCSI initiator you will only ever have one adapter; this implies a single point of failure, which gives us the dreaded “Partial/No Redundancy” status.  So, as things stand now, software iSCSI will always be displayed with a degraded status.  Methinks VMware should develop separate rules / algorithms for fiber and iSCSI SANs…
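
In the meantime you can confirm from the ESXi shell that your paths really are up and in use, regardless of what Storage Views claims – a minimal sketch (run on any host with the software initiator configured):

esxcfg-mpath -b
esxcli storage nmp device list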

Saturday, January 22, 2011

HP ProLiant NIC Teaming with Windows Server 2008 R2 Hyper-V

Installing a new HP ProLiant server typically involves booting the server from the HP SmartStart DVD and using that to do a guided OS installation.  This, quite handily, installs the HP ProLiant Support Pack (PSP), which takes care of all drivers for you.

Turns out that if you follow this process and then install the Hyper-V role later on, there is a possibility of running into problems.

In a nutshell, you need to install the OS, the Hyper-V role and all the latest MS updates and hotfixes first.  Only then should you install the HP NIC teaming software.

This HP whitepaper (Using HP ProLiant Network Teaming Software with Microsoft Windows Server 2008 Hyper-V or with Microsoft Windows Server 2008 R2 Hyper-V) explains in detail what the proper procedure is, and some of the issues you might run into.  As an extra it also delves into setting up VLANs.