Creating highly resilient Software Defined Storage pools by implementing a Direct Attached Storage over Fibre Channel environment

In today’s datacenters for small and medium-sized businesses, there is a constant buzz about using the latest and greatest storage appliances to achieve the highest IOPS ever seen, or the most amazing throughput for your applications and containers.  For the average company of fewer than 5,000 employees, the ongoing pressure to procure and implement name-brand, expensive solutions is a constant drain on IT administrators, who must assess current usage and balance potential performance and productivity gains against the cost and ongoing support of these solutions.  Add in the constant pressure to adopt the next big thing, and many administrators and CTOs face a non-stop barrage of ads, cold calls, and high-pressure sales tactics designed to scare or influence them into spending copious amounts of money on solutions that are often more hype than need.  Sometimes simple is better.  In the current environment of uncertainty and thrifty investment, leveraging what a company already has at its disposal, while allowing scale as needed without vendor lock-in, is critical.

There are many Software Defined Storage vendors out there, but one of the earliest and most consistently high-performing has been DataCore.  DataCore’s SANSymphony software provides block storage via iSCSI or Fibre Channel to consumption servers such as Windows, Linux, Hyper-V, or VMware hosts.  SANSymphony offers many desirable features that benefit this type of environment.  Storage pooling is important when reusing existing storage devices.  Caching, deduplication, and compression all conserve storage space.  Auto and manual tiering options, QoS, mirror pathways, and load balancing all help to optimize storage pools and extract maximum performance from them.

To get the highest possible performance from SAS SSDs, use direct access to the disks, either internally or through a Direct Attached Storage JBOD.  There are some disadvantages to this design model.  In the event of a storage node or cable failure, storage capacity decreases significantly: storage within the node, or attached to it, is no longer usable while the node is down.  The data itself is susceptible to additional failures until the node failure is resolved and data resynchronization occurs.  Performance is also reduced, as there are fewer storage nodes providing block storage to the consumption nodes.  In real-world scenarios, it can take several hours for spare parts to arrive and repairs to be completed.  This is avoidable by keeping additional storage nodes available in a purely redundant form, but that is rarely cost-effective.  ATTO Technology’s XstreamCORE Intelligent Bridges can help a perceptive engineer design highly resilient models that reduce risk and alleviate performance bottlenecks during outages.

The ATTO XstreamCORE is a line of intelligent protocol bridges that take external SAS block storage, such as JBODs of SSDs/HDDs or RAID arrays, and present it as either Fibre Channel LUNs or iSCSI/iSER targets.  The Fibre Channel option can create a binding between FC initiators and the SAS drives, giving the nodes exclusive access to the drives as if they were Direct Attached Storage, hence the term DAS over FC.  Fibre Channel extends the distance and allows for full disaggregation.  If we now look at the environment above where a node fails, we have options.  Because SANSymphony writes a unique signature to the disks it manages, we can quickly and easily remap those drives to a new DataCore SANSymphony node, even a freshly spun-up VM template.  This VM running SANSymphony can ingest the disks, be configured to join the storage node cluster, and start its data sync within minutes as compared to hours.  Data is now fully protected, risk to the data is minimized, and performance is optimal.

The diagram below shows the design along with a potential failover design.  The JBODs represented could also be RAID arrays.  Presenting RAID volumes through the XstreamCORE ensures they will remain available to the FC initiators of your choosing.  Note that for lower numbers of storage nodes, no FC switching is required.

Primary Design:

Failover scenario:

Humbled to be named a vExpert

So many people to thank. Duncan, Richard, Tom, and many more.

I will be expanding this quite a bit and revamping it to help encourage the process, but for now:

Media Prep for creating a virtual ISO usable by ESXi for a MacOS installation

Again please note these instructions are for use on virtualized Apple hardware only!

In a previous blog we covered how to virtualize a MacPro 6,1 or a Mac Mini in a few steps.  A large part of that was how to create a VMware-bootable ISO to install the MacOS from (and how to generate it on the Mac system).  Here is an update for Mojave, as the process has changed slightly.

hdiutil attach /Applications/Install\ macOS\ Mojave.app/Contents/SharedSupport/InstallESD.dmg -noverify -mountpoint /Volumes/Mojave

hdiutil create -o ./MojaveBase.cdr -size 7316m -layout SPUD -fs HFS+J

hdiutil attach ./MojaveBase.cdr.dmg -noverify -mountpoint /Volumes/install_build

asr restore -source /Applications/Install\ macOS\ Mojave.app/Contents/SharedSupport/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase

rm -rf /Volumes/OS\ X\ Base\ System/System/Installation/Packages

mkdir -p /Volumes/OS\ X\ Base\ System/System/Installation/Packages

cp -R /Volumes/Mojave/Packages/* /Volumes/OS\ X\ Base\ System/System/Installation/Packages/

hdiutil detach /Volumes/OS\ X\ Base\ System/

hdiutil detach /Volumes/Mojave/

mv ./MojaveBase.cdr.dmg ./BaseSystem.dmg

hdiutil create -o ./Mojave.cdr -size 8965m -layout SPUD -fs HFS+J

hdiutil attach ./Mojave.cdr.dmg -noverify -mountpoint /Volumes/install_build

asr restore -source /Applications/Install\ macOS\ Mojave.app/Contents/SharedSupport/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase

cp ./BaseSystem.dmg /Volumes/OS\ X\ Base\ System/

hdiutil detach /Volumes/OS\ X\ Base\ System/

hdiutil convert ./Mojave.cdr.dmg -format UDTO -o ./Mojave.iso

mv ./Mojave.iso.cdr ~/Desktop/Mojave.iso

rm ./Mojave.cdr.dmg

Virtualizing a MacPro and creating MacOS virtual machines

My personal compiled list of how to accomplish this.  Please note this compilation is intended for legal use of hardware and software and should be used within the Apple EULA.

Primary credit goes to William Lam @lamw and the resources he created and linked on Virtually Ghetto. Most of this is simply a summary of many of his blogs on the topic.

Step 1: Strong Recommendations to consider before you start:

This is pretty important.  When I first started this project I skipped right to making an ESXi host without asking “what do I need from this host BEFORE I overwrite the SSD?”  That was a huge mistake and added quite a bit of time to my overall discovery of what worked for me.  I strongly recommend that you back up the MacOS on your MacPro before installing ESXi onto the SSD.  I would also STRONGLY recommend that you create your MacOS boot media ISO for your VMs (covered in the next step) prior to installing ESXi, if that is the MacOS version you will install.

Step 2: Media prep for bootable MacOS media

You will need to log into a MacOS system which still has its install dmg on it.  Where you see a name such as Sierra.app in the commands below, change it if you are using a different version of MacOS (ElCapitan.app, for example).  Run all of these commands in a terminal window.  Once complete, you MAY have issues moving the ISO cleanly to a USB stick due to the stick’s formatting (it will falsely report that the ISO is too large for the USB stick).  To resolve this, format the USB stick as exFAT with Rufus; this should allow you to move the ISO from the Mac to the machine you are using to host the ISO file for the MacOS VM install.
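That “falsely report” behavior is a FAT32 limitation: FAT32 caps individual files at just under 4 GiB regardless of the stick’s total capacity, while exFAT has no such per-file limit.  A minimal sketch of the check (the iso_fits_fat32 helper name is my own, not part of any tool):

```shell
# Hypothetical helper: FAT32 cannot store a single file of 4 GiB or more,
# so a ~7 GB install ISO is rejected even on a 16 GB stick. Reformatting
# the stick as exFAT removes the per-file limit and lets the copy succeed.
iso_fits_fat32() {
  size=$(wc -c < "$1")          # file size in bytes; portable on macOS and Linux
  [ "$size" -lt 4294967296 ]    # FAT32 maximum file size is 4 GiB minus one byte
}
```

Usage: `iso_fits_fat32 ~/Desktop/Sierra.iso || echo "reformat the stick as exFAT"`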

hdiutil attach /Applications/Install\ macOS\ Sierra.app/Contents/SharedSupport/InstallESD.dmg -noverify -nobrowse -mountpoint /Volumes/install_app

hdiutil create -o /tmp/Sierra.cdr -size 7316m -layout SPUD -fs HFS+J

hdiutil attach /tmp/Sierra.cdr.dmg -noverify -nobrowse -mountpoint /Volumes/install_build

asr restore -source /Volumes/install_app/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase

rm /Volumes/OS\ X\ Base\ System/System/Installation/Packages

cp -rp /Volumes/install_app/Packages /Volumes/OS\ X\ Base\ System/System/Installation/

cp -rp /Volumes/install_app/BaseSystem.chunklist /Volumes/OS\ X\ Base\ System/BaseSystem.chunklist

cp -rp /Volumes/install_app/BaseSystem.dmg /Volumes/OS\ X\ Base\ System/BaseSystem.dmg

hdiutil detach /Volumes/install_app

hdiutil detach /Volumes/OS\ X\ Base\ System/

hdiutil convert /tmp/Sierra.cdr.dmg -format UDTO -o /tmp/Sierra.iso

mv /tmp/Sierra.iso.cdr ~/Desktop/Sierra.iso

 

Step 3: ESXi Media Preparation:

I personally download the ISO I wish to use as the install media from the VMware download site, then use ESXi-Customizer to embed the drivers I want or need into it.  Once you download ESXi-Customizer (which is a DOS program), I recommend making a script to call it up; MUCH easier than using a DOS window!  Depending on the version of ESXi you want to use and the MacPro, you may need to embed a SATA driver or, worst case, an ethernet driver.  If you are going to create your ESXi media with drivers such as the ATTO ThunderLink Fibre Channel adapter, so that you can install to a remote LUN, this is the step to do it in.  Once I have the ISO I will use, I use Rufus to create a bootable USB stick to boot the host from.  NOTE: if you are installing ESXi 6.5 or a version that does NOT prefer the vmklinux drivers, you may have to do some extra steps; see my caveats below.

 

Step 4: ESXi Installation:

Hold the Option key (Alt on a non-Mac keyboard) as you power on the MacPro with the USB stick inserted into a USB slot.  This lets you choose which device to boot the host from.  Follow the prompts and install as you would on any other host.  HINT: nic 0 is the nic on the right and nic 1 is the nic on the left.  I recommend using vCenter and adding the host to your datacenter.

Step 5: Creating the VMs

VM creation is normal except that you select “Other” for the OS family during VM creation, then the exact version of MacOS you will be installing, or the highest version of MacOS listed if you are installing a newer version.

Step 6: Installing MacOS

I have either uploaded the ISO to a datastore and mounted it to the VM, or used a local ISO mount of a CD-ROM to the VM, to install the MacOS instance.  I typically do this from the vCenter VM console viewer for that VM rather than attaching media as part of VM creation; it’s easier for me to disconnect and remember.  I usually download the VMware Tools from the repository and manually use the dmg to install them after the VM is up and running.

Caveats

  1. No DirectPath I/O.  Sorry, it is a limitation in ESXi’s ability to pass devices directly through to the MacOS.  I have tried everything I can think of to make it work, without success.
  2. Sometimes Apple changes drivers.  If the ESXi installer cannot see the SATA drive or the NIC on your initial install, it’s likely that the base drivers shipped with ESXi cannot see the hardware.  Troubleshooting this is a bit of a pain.  My PERSONAL approach is to try a different (newer or older) version of ESXi and, once the problem is resolved, simply upgrade to the newer version.
  3. ESXi 6.5 and the vmklinux issue.  If you try to install on 6.5 where vmklinux is not enabled by default (meaning the installer wants to use all native drivers), you will need to tweak your install media.  How do you know this is happening?  If your install fails at different points, usually between the initial load and the actual install, then it’s probably this issue.  Think about it from the installer’s perspective: it can see most of the hardware but cannot interact with some of it.  What I usually do is the following (again, TY William):  *** NOTE that I did NOT have to do this with the 6.5U1 release.
    1. Use an unzip program (I use 7-Zip) to open the .iso installer
    2. There will be 2 copies of boot.cfg to edit: one in the root of the ISO and one at /efi/boot/boot.cfg
    3. In these files there is a line (close to the top) that says kernelopt=runweasel. Append to this line so that it reads kernelopt=runweasel preferVmklinux=True
    4. Once you have saved the changes, re-make the ISO with an ISO creator, and you should be able to boot using this ISO file and the previously mentioned steps.
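The boot.cfg edits above can also be scripted.  A minimal sketch, assuming the ISO contents have already been unpacked (e.g. with 7-Zip) into a working directory; the prefer_vmklinux function name and the directory argument are my own:

```shell
# Hedged sketch: appends preferVmklinux=True to the kernelopt line in both
# copies of boot.cfg inside an unpacked ESXi installer ISO directory ($1).
# sed -i.bak edits in place and keeps .bak backups (works with GNU and BSD sed).
prefer_vmklinux() {
  for f in "$1/boot.cfg" "$1/efi/boot/boot.cfg"; do
    sed -i.bak 's/^kernelopt=runweasel$/kernelopt=runweasel preferVmklinux=True/' "$f"
  done
}
```

Usage: `prefer_vmklinux ./esxi-iso` after extracting the ISO into ./esxi-iso, then rebuild the ISO with your ISO creator as described in step 4.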

 

 

Links to the above-mentioned files and tools you may need.  NOTE: many of these tools are unsupported but simply make life easier for the non-PowerCLI folks (like me).

 

ESXi-Customizer (A few caveats are on the page itself such as no Win 10 support)

https://www.v-front.de/p/esxi-customizer.html#download

 

Rufus (I recommend the portable version)

https://rufus.akeo.ie