If you perform a hardware change that affects a VM guest machine, you may find that you have non-present network adaptors configured with existing IP addresses that won’t load when you boot the machine. When you try to configure another network device with one of those IP addresses, you may be notified that the address is already assigned to a network card that is no longer present in the system. Follow this article to show the ghosted non-present devices so that you can remove them:
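For reference, the usual trick for revealing these ghosted devices is to set an environment variable before launching Device Manager from the same command prompt (then enable View – Show hidden devices). A sketch of the commands:

```
set devmgr_show_nonpresent_devices=1
start devmgmt.msc
```

The variable only affects processes started from that prompt, so Device Manager must be launched from the same window.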
I’ve spent the last few days working on setting up a split physical-virtual Windows Server 2003 cluster in order to run SQL Server 2005. The plan was to move a physical Windows 2003 cluster running SQL Server 2000 onto the new environment so that we could eventually use VMware SRM to replicate the virtual host to the DR environment. In the event of a whole site failure we could use the replicated SAN storage and the SRM VM copy in the DR site to bring up the database.
The construction of the cluster is actually quite simple. We built the physical host and configured the SAN storage and pathing so that it could see all of the LUNs, then initialised all of the disks and formatted them. We left the VMware virtual node switched off at this point until we had built the Windows cluster using Cluster Administrator and had all of the disks online in the default cluster groups. We then added the storage to the VMware machine’s configuration using virtual SCSI adaptors and RDM (raw device mapping) disks in physical compatibility mode. When we booted the second (virtual) server we could see all of the LUNs in Disk Management, but they were all uninitialised/unknown disks.
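For anyone repeating this, a physical compatibility mode RDM pointer can also be created from the ESX host command line with vmkfstools (the -z flag is pass-through/physical mode; -r would be virtual mode). The naa identifier, datastore and VM folder names below are placeholders, not our actual values:

```
# Create a physical-mode RDM mapping file for a SAN LUN, so the guest
# issues SCSI commands straight through to the array (required for
# clustering across physical and virtual nodes).
vmkfstools -z /vmfs/devices/disks/naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx \
  /vmfs/volumes/datastore1/sqlnode2/clusterdisk-rdm.vmdk
```

The mapping file then gets attached to the VM on a separate virtual SCSI adaptor with the bus sharing mode set appropriately for the cluster.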
The next step was to run Cluster Administrator on the second node and add it into an existing cluster (i.e. the one belonging to the physical node). This worked, almost, apart from the fact that the quorum disk had a different path due to the virtual SCSI adaptor and disk names – so Cluster Administrator doesn’t know whether you have access to the same quorum disk. This is easy to fix: just use the Advanced (minimal) configuration option during the Add Node wizard to disable the disk heuristics and regain control of the installation.
Get this far and the cluster is probably built successfully – but you still won’t see the disks being recognised correctly until you begin to fail them over from the physical node to the virtual one. As you do this, the storage is recognised automatically and assumes the correct drive letters. Nice!
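The failover can be driven from the command line as well as from Cluster Administrator. A sketch using cluster.exe – the group and node names here are examples, not our real ones:

```
REM Move each disk group across to the virtual node; as each group
REM comes online there the storage is recognised and picks up the
REM correct drive letters.
cluster group "Cluster Group" /moveto:VIRTUALNODE
cluster group "SQL Group" /moveto:VIRTUALNODE
```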
Having struggled to save the configuration in SANSurfer for the QLogic HBA, I was led incorrectly down the path of thinking that the default password ‘config’ was wrong. In fact, I came across the article below that made me realise that the problem was driver related. The default Microsoft drivers being used by the QLogic card were dated 2002, and after downloading and installing the later drivers the problem was resolved. But not so fast – the QLogic advisory for the latest STOR drivers requires you to install the STOR miniport drivers and a hotfix before they will work.
Link to Qlogic forum page that solved the issue with “Failed to save configuration” in SANSurfer:
Link to the Microsoft fix for the Windows Server 2003 STOR miniport drivers:
Link to the Microsoft hotfix for the Windows Server 2003 STOR storage drivers:
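A quick way to check which driver the HBA is actually loading is driverquery, which ships with Windows Server 2003. The ql2300 module name below assumes a QLogic 23xx-series card – substitute your own miniport name:

```
REM List verbose driver details and pick out the QLogic miniport entry;
REM a 2002 link date indicates the in-box Microsoft driver is still in use.
driverquery /v /fo list | findstr /i "ql2300"
```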
I spent a while looking for the right CDs until a colleague mentioned it. Windows Server 2003 R2 comes as two CDs; however, only the second actually contains the R2 code. The first CD is identical to the original release of Windows Server 2003 (Enterprise or Standard). Therefore, if you want to build an R2 server but can’t find R2 CD1, you can use the original CD1 and then just continue the installation with R2 CD2.