Tuesday, October 30, 2012

Creating a WinPE boot disk for SCCM with integrated drivers

I’m busy with a desktop deployment project, utilising SCCM’s OSD functionality. This required creating a WinPE-based boot image containing the necessary network and mass-storage drivers, so that the bare-metal deployment process can connect to my SCCM server.

Here is how I did it:

Download and Install the Windows Automated Installation Kit (WAIK)

  1. Download the WAIK from here
  2. Strictly speaking not necessary, but you can download the SP1 supplement from here
  3. Install it – installation is straightforward, next – next – done kind of thing

Creating the WinPE boot disk

I used c:\dellwinpe as my wim staging folder; change it to suit your environment.

  1. Launch an administrative WAIK command prompt (normal command prompt won’t work)
  2. copype x86 c:\dellwinpe
  3. copy the Dell WinPE driver cab to the c:\dellwinpe folder and extract it so that the extracted WinPE folder sits directly under the root of c:\dellwinpe
  4. dism /Mount-Wim /WimFile:winpe.wim /index:1 /MountDir:mount
  5. dism /add-driver /driver:"winpe" /image:"mount" /recurse
  6. dism /unmount-wim /mountdir:"mount" /commit
  7. dism /mount-wim /wimfile:"winpe.wim" /index:1 /mountdir:"mount"
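The steps above can be collected into a single batch run from the elevated WAIK command prompt. The staging folder and x86 architecture are from my environment; adjust to suit yours:

```shell
REM Assumes the Dell WinPE driver cab has already been extracted to c:\dellwinpe\winpe.
copype x86 c:\dellwinpe
cd /d c:\dellwinpe

REM Mount the boot image, inject every driver found under .\winpe, then commit.
dism /Mount-Wim /WimFile:winpe.wim /index:1 /MountDir:mount
dism /Image:mount /Add-Driver /Driver:winpe /Recurse
dism /Unmount-Wim /MountDir:mount /Commit
```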

Now we add a couple of packages necessary for SCCM integration and deployment. Since the image was created with copype x86, the packages must come from the x86 folder (use the amd64 folder only if you built a 64-bit boot image):

  1. dism /image:"mount" /add-package /packagepath:"C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\WinPE-Scripting.cab"
  2. dism /image:"mount" /add-package /packagepath:"C:\Program Files\Windows AIK\Tools\PETools\x86\WinPE_FPs\WinPE-WMI.cab"
  3. dism /unmount-wim /mountdir:"mount" /commit
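Before importing the image, it is worth confirming that the drivers and packages actually landed. A quick check is to remount the image and list its contents with DISM's standard query switches, then discard the mount:

```shell
REM Remount read-only just to inspect, then discard (no changes to commit).
dism /Mount-Wim /WimFile:winpe.wim /index:1 /MountDir:mount
dism /Image:mount /Get-Drivers
dism /Image:mount /Get-Packages
dism /Unmount-Wim /MountDir:mount /Discard
```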

There will now be a file called winpe.wim in your c:\dellwinpe folder, ready to be imported into SCCM as a boot image.

Wednesday, October 10, 2012

Directly Connecting a Brocade 815 HBA to an EMC VNX5300

I’m busy with a project which involves getting two ESXi hosts hooked up to a VNX5300 configured in block mode. The order we placed with Dell specified Emulex 12000 HBAs, but Dell got creative and shipped Brocade 815s instead. The only problem was that they didn’t work when directly connected to the front-end ports on the VNX. I’m documenting the symptoms here as well, so that the next person does not have to battle for two days.

The Symptoms

When directly connecting the HBAs to the VNX fiber ports, the following events pop up in the SP event logs:

  • EV_VirtualArrayFeature::_mergeInternalObjects() - No parent for HBA,
  • EV_TargetMapEntry::GetHostInitiatorPort() - NULL HBAPort pointer

Running NaviSECCli.exe -Address port -list -sfpstate outputs the following:

SP Name:             SP A
SP Port ID:          1
SP UID:              50:06:01:60:BE:A0:72:F9:50:06:01:61:3E:A0:72:F9
Link Status:         Up
Port Status:         Online
Switch Present:      NO
SFP State:           Online

This tells us that things are fine at the physical layer, but not much else is happening higher up the stack.

The Fix

First we need to upgrade the HBA firmware to version 3.1. There are various OS-specific ways to do it; the easiest is probably to download the live CD from Brocade. Since this HBA is not on the ESXi 5.1 HCL, we also need to install the driver (at least v3.1). I include the steps for the sake of completeness:

  1. Enable SSH on your ESXi host
  2. Use an scp client on Windows, or the following command from a Linux / Mac host: scp brocade_driver_esx50_v3-1-0-0.tar root@<ip address>:/tmp
  3. SSH into your ESXi host and navigate to the /tmp folder with cd /tmp
  4. Execute tar xf brocade_driver_esx50_v3-1-0-0.tar
  5. Execute ./brocade_install_esxi.sh
  6. Wait for the installation to finish (takes about 1 – 2 mins) and reboot host once done
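The copy-and-install sequence above, as a single run of commands (the tarball name and the <ip address> placeholder are as per my environment):

```shell
# From a Linux / Mac workstation: copy the driver bundle to the ESXi host.
scp brocade_driver_esx50_v3-1-0-0.tar root@<ip address>:/tmp

# Then, over SSH on the ESXi host: extract, install, and reboot.
cd /tmp
tar xf brocade_driver_esx50_v3-1-0-0.tar
./brocade_install_esxi.sh
reboot
```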

Now we need to configure the HBA for direct connection, or more technically, FC-AL (arbitrated loop) mode:

  1. SSH into your ESXi host and navigate to /opt/brocade/bin/ by entering cd /opt/brocade/bin/
  2. ./bcu port --topology 1/0 loop
  3. ./bcu port --disable 1/0
  4. ./bcu port --enable 1/0
  5. ./bcu port --topology 2/0 loop
  6. ./bcu port --disable 2/0
  7. ./bcu port --enable 2/0
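Before heading back to Unisphere, you can confirm that both ports picked up the new topology by querying them with bcu (the --query switch is my assumption for this bcu version; check ./bcu --help if it differs):

```shell
# On the ESXi host: query each port and check the reported topology is loop.
cd /opt/brocade/bin/
./bcu port --query 1/0
./bcu port --query 2/0
```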

Your ESXi host should now show up as a host on the VNX where you can add it to a storage group and assign LUNs.