Virtual machine drops iSCSI drives during vMotion

Recently, during maintenance downtime early one Sunday morning, our VMware administrator was vMotioning numerous VMs so the host environment blades on the Cisco UCS could be patched. One of the VMs was a Windows 2008 R2 server with more than 10 iSCSI connections to targets residing on an EMC Celerra NS-40G NAS. All migrations were going well until this particular VM was moved: all of the iSCSI drives disappeared from the Windows OS. vMotion is supposed to be transparent to the guest, so this should not have happened, but this time it had a negative effect on the VM: an unexpected glitch of the kind that crops up from time to time. The iSCSI initiator in the Windows Server OS still registered the connections as connected and online, but Windows disk management could not see the drives.

[Screenshot: Celerra iSCSI targets in the Windows iSCSI initiator]

I disconnected and then reconnected the targets in the iSCSI client, but the Windows OS still would not see the drives. I restarted both the iSCSI initiator service and the Server service in the Windows OS with the same result. Rescanning for storage in Windows disk management did not help, and EMC tech support ran a check on the NAS end and came up with nothing. Eventually, I went into Celerra Manager, deleted the LUN masking for the target, and then re-added it. The Windows Server OS was then able to see the drives. I was hesitant to do this at first, as I was unsure what effect it would have on the drive letter assignments on the server; in the end it had no effect on them, and all drives were re-established exactly as they previously were.

[Screenshot: Celerra iSCSI target LUN masking]
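For what it is worth, the Windows-side checks could also be scripted. Here is a minimal sketch (assuming a Windows host with the built-in diskpart and iscsicli tools, run from an elevated prompt); it would not have fixed this particular issue, since the LUN masking had to be rebuilt on the Celerra side, but it is a handy first pass:

    import subprocess

    def list_iscsi_sessions() -> str:
        """Show active iSCSI sessions as reported by the Microsoft iSCSI initiator CLI."""
        result = subprocess.run(["iscsicli", "SessionList"],
                                capture_output=True, text=True, check=True)
        return result.stdout

    def rescan_disks() -> None:
        """Ask Windows to rescan storage, same as 'Rescan Disks' in Disk Management.

        diskpart reads commands from stdin; it requires administrator rights.
        """
        subprocess.run(["diskpart"], input="rescan\nexit\n", text=True, check=True)

    if __name__ == "__main__":
        print(list_iscsi_sessions())  # confirm the initiator still sees the Celerra targets
        rescan_disks()                # then force Windows to re-enumerate the disks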

Importance of disk offset

SAN Guy at CLARiiON BLOGS has a good article that explains disk alignment. It is a very important subject for performance, and it recently came up at my data center: an EHR application is experiencing performance issues due to fragmentation (of course) and an improper disk alignment on the LUN. Our DBA just ran a SQL 2005 best-practices check against the database that resides on this particular LUN, and it spit out a recommendation of a 64 KB offset. SAN Guy taught me a few things on this important subject.
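To make the arithmetic concrete, here is a quick illustrative check of my own (not from SAN Guy's article) of whether a partition's starting offset lands on a 64 KB boundary; the 63-sector start is the classic misaligned default on older Windows installs:

    SECTOR_BYTES = 512          # traditional sector size
    ALIGN_BYTES = 64 * 1024     # the 64 KB offset the best-practices report recommends

    def is_aligned(start_offset_bytes: int, boundary: int = ALIGN_BYTES) -> bool:
        """True if the partition's starting offset falls exactly on the boundary."""
        return start_offset_bytes % boundary == 0

    # Legacy default: partition starts at sector 63 (31.5 KB), so every 64 KB I/O
    # straddles two stripe elements on the array.
    print(is_aligned(63 * SECTOR_BYTES))    # False
    # Aligned layout: partition starts at 64 KB (sector 128).
    print(is_aligned(128 * SECTOR_BYTES))   # True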

An overview of MAID

SGI, a technical computing manufacturer, produces a line of storage systems that incorporate green technologies. The COPAN line of storage products uses the MAID storage methodology, which provides the ability to power down disks in the array while they are not in use, conserving power as a result. This is managed via internal "Power Managed RAID software" (SGI, 2010). MAID, the acronym for Massive Array of Idle Disks, is suited to environments built around long-term storage of write-once, read-occasionally (WORO) data. MAID can be used in virtual disk libraries, also known as EDLs (Enterprise Disk Libraries). This is energy-efficient storage. SGI's COPAN solution could be tough competition for the Centera line from EMC. As of the time of this writing, I have not yet seen MAID technology on the archive systems from EMC.

A Massive Array of Idle Disks (MAID) consists of a large disk group, often hundreds or thousands of drives, configured into RAID groups. Through internal power-management code, only the drives that are needed are active at any given time, which reduces both drive wear and power consumption.
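As a purely illustrative sketch (this is not SGI's actual Power Managed RAID software, just the idea in miniature), the policy boils down to spinning a drive up on access and spinning it back down after an idle timeout:

    IDLE_TIMEOUT = 600.0  # seconds a drive may sit idle before spin-down (illustrative value)

    class Drive:
        def __init__(self, drive_id: str):
            self.drive_id = drive_id
            self.spinning = False
            self.last_access = 0.0

        def read(self, now: float) -> None:
            # Only drives that are actually being accessed get powered on.
            if not self.spinning:
                print(f"{self.drive_id}: spin up")
                self.spinning = True
            self.last_access = now

        def power_management_pass(self, now: float) -> None:
            # Spin down any drive that has been idle longer than the timeout.
            if self.spinning and now - self.last_access > IDLE_TIMEOUT:
                print(f"{self.drive_id}: spin down")
                self.spinning = False

    # A WORO archive: one drive is read once; the rest never leave the idle state.
    array = [Drive(f"disk{i}") for i in range(8)]
    array[2].read(now=0.0)
    for drive in array:
        drive.power_management_pass(now=700.0)   # only disk2 spins down; the others never spun up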

Reference:

SGI. (2010). MAID vs. Enterprise MAID. Retrieved November 2, 2010, from http://www.sgi.com/products/storage/maid/what.html

Thesis research: SSD vs. FC drive benchmark tests – part I

I am writing my graduate thesis on the subject of the Solid State Drive (SSD). By the way, the D stands for Drive, not Disk, as an SSD contains no disks. Now, with that out of the way…

I have been benchmarking a new SSD array that I have added to my company's SAN: an EMC CLARiiON Cx4-480 system running on 4Gb/s fiber. It will be 8Gb/s soon, but we are waiting on the NAS code (an EMC NS-40G) to catch up so it will support 8Gb/s; the firmware on the NAS currently supports only up to 4Gb/s. The SAN is held together with two Brocade 4900 FC switches.

About the disks that I will be testing and comparing:

Disks used: (5) EMC 70GB SSD and (5) 300GB FC disks.

SSD:

66.639 GB raw capacity – FC SSD – Manufacturer: STEC – Model: ZIV2A074 CLAR72 – Serial: STM0000E9CFD – 4Gbps

FC:

268.403 GB raw capacity – FC – Manufacturer: SEAGATE – Model: STE30065 CLAR300 – Serial: 3SJ09XWW – 4Gbps

  • Created RAID5 (RAID group 100) on the five SSDs, model ZIV2A074, 66.639GB each.
  • Created RAID5 (RAID group 101) on the five 300GB FC disks: Seagate 15K.7 ST3300657FC.
  • LUN 104 is assigned drive letter W: (disk 3), resides on RG 100, and is named "SSD".
  • LUN 108 is assigned drive letter X: (disk 4), resides on RG 101, and is named "FC" (usable capacity for both RAID groups is worked out in the short sketch after this list).
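For reference, the usable capacity of each 4+1 RAID5 group works out as follows (a quick back-of-the-envelope calculation from the raw sizes above; the capacity the array actually reports after binding will differ slightly):

    def raid5_usable(disk_count: int, disk_size_gb: float) -> float:
        """RAID5 gives up one disk's worth of capacity to parity: usable = (N - 1) x disk size."""
        return (disk_count - 1) * disk_size_gb

    print(f"SSD RG 100: {raid5_usable(5, 66.639):.1f} GB usable")   # ~266.6 GB
    print(f"FC  RG 101: {raid5_usable(5, 268.403):.1f} GB usable")  # ~1073.6 GB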

The test server was set up with one dual-port 4Gb/s HBA and runs Windows Server Standard with 1GB of RAM.

SAN Management: EMC Navisphere 6.28.21.

Network: 4Gb/s fiber with two Brocade 4900B FC switches. Host HBA: Emulex LightPulse FC2243.

Host connection via EMC PowerPath v5.2

Test I/O is generated by Microsoft SQLIOSim, an I/O generation utility that simulates the I/O patterns found in various versions of Microsoft SQL Server: SQL Server 2005, SQL Server 2000, and SQL Server 7.0. Brent Ozar, a SQL expert, has a good video on using SQLIO on his web site at brentozar.com. I have learned a few things from him and am using his SQLIO tips for my benchmarking.

The monitoring will be done with EMC Navisphere Analyzer and SUN StorageTek Workload Analysis Tool (SWAT).

Here is a preliminary test on SSD vs. FC data rates using SQLIOSim to generate I/O and SWAT to record the results:

SSD:

[Chart: SSD performance survey – data rate]

FC:

[Chart: FC performance survey – data rate]

So far there is not much of a difference; the Fibre Channel drives are keeping up with the SSDs. Of course, this is only a preliminary test, and other tests at this point are giving similar results. I am continuing to plan my testing methodology.

Measuring disk performance

Here is a video I found on SearchDataCenter that explains how noise-induced vibration can affect disks:

About EMC Celerra NAS checkpoints

Here is a good intro to EMC Celerra NAS checkpoint technology. Checkpoints, also known as snapshots, are point-in-time images of a file system and can be used for quick recovery in the event of file system corruption or data loss:

EMC CLARiiON Cx4 integrated thin provisioning

We just upgraded over the weekend from the Cx3 to the new EMC CLARiiON Cx4. The technician who came out did a good job: in about seven hours the SAN was fully functional and we were back in business. We took advantage of the downtime to do some patching on host systems. The Cx4 has 8Gb fiber ports, compared to the Cx3's 4Gb ports; however, we could not use the 8Gb modules because the NAS, an EMC NS-40G, runs code that is not 100% compatible with the Cx4 fiber module code. We will run everything on the 4Gb/s fiber modules for now until the NAS code catches up, and then the 8Gb modules will be installed. This is not a problem, as the 4Gb modules are still very fast and provide sufficient throughput for our electronic health record (EHR) system.

One useful improvement in the Cx4 is the addition of thin pools: one can create pools and thin LUNs that consume minimal physical storage up front and expand dynamically when the need for more storage arises. See the white paper from EMC on virtual (thin) provisioning here.
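As a rough illustration of the concept (my own toy model, not EMC's implementation), a thin LUN presents its full logical size to the host right away but only draws physical capacity from the pool as extents are actually written:

    class ThinPool:
        """Toy model of thin provisioning: physical extents are consumed only on first write."""
        def __init__(self, physical_gb: float):
            self.physical_gb = physical_gb
            self.consumed_gb = 0.0

    class ThinLUN:
        def __init__(self, pool: ThinPool, logical_gb: float):
            self.pool = pool
            self.logical_gb = logical_gb   # the size the host sees immediately
            self.written = set()           # 1 GB extents that have actually been written

        def write_extent(self, extent_index: int) -> None:
            if extent_index not in self.written:
                if self.pool.consumed_gb + 1 > self.pool.physical_gb:
                    raise RuntimeError("pool out of space - time to add disks to the pool")
                self.pool.consumed_gb += 1  # physical space is drawn from the pool on demand
                self.written.add(extent_index)

    pool = ThinPool(physical_gb=500)
    lun = ThinLUN(pool, logical_gb=2000)    # the host sees 2 TB right away
    for extent in range(40):                # but 40 GB of writes...
        lun.write_extent(extent)
    print(pool.consumed_gb)                 # ...consume only 40 GB of physical capacity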