In today’s datacenters for small and medium-sized businesses, there is a constant buzz about deploying the latest and greatest storage appliances to deliver the highest IOPS ever seen, or the most amazing throughput for applications and containers. For the average company of fewer than 5,000 employees, the ongoing pressure to procure and implement expensive name-brand solutions is a constant drain on IT administrators, who must assess current usage and balance potential performance and productivity gains against the cost and ongoing support of those solutions. Add in the pressure to adopt the next big thing, and many administrators and CTOs face a non-stop barrage of ads, cold calls, and high-pressure sales tactics designed to scare or influence them into spending copious amounts of money, often on more hype than need. Sometimes simple is better. In the current environment of uncertainty, a thrifty design that leverages what a company already has at its disposal, and that scales as needed without vendor lock-in, is critical.
There are many Software Defined Storage vendors to choose from, but one of the earliest and most consistently high-performing has been DataCore. DataCore’s SANSymphony software provides block storage via iSCSI or Fibre Channel to consumption servers such as Windows, Linux, Hyper-V, or VMware hosts. SANSymphony offers many desirable features that benefit this type of environment: storage pooling is important when reusing existing storage devices; caching, deduplication, and compression all conserve storage space; and automatic and manual tiering, QoS, mirror pathways, and load balancing all help optimize the storage pools and extract maximum performance from them.
To get the highest possible performance from SAS SSDs, use direct access to the disks, either internally or through a Direct Attached Storage (DAS) JBOD. This design model has some disadvantages, however. If a storage node or a cable fails, storage capacity decreases significantly: storage inside or attached to the node is no longer usable while the node is down. Until the failure is resolved and the data is resynchronized, the data itself is susceptible to additional failures. Performance is also reduced, since fewer storage nodes are serving block storage to the consumption nodes. In real-world scenarios, it can take several hours for the storage node to receive spare parts and complete repairs. This is avoidable by keeping additional storage nodes purely for redundancy, but that is rarely cost effective. ATTO Technology’s XstreamCORE intelligent bridges can help a perceptive engineer design highly resilient models that reduce risk and alleviate performance bottlenecks during outages.
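The impact described above can be sketched with simple arithmetic. The following is a hypothetical illustration only: the node counts and per-node capacity and IOPS figures are made-up example numbers, not measurements of any particular product.

```python
# Illustrative sketch: how much of a pooled-storage cluster survives
# when storage nodes fail. All figures are hypothetical examples.

def surviving_fraction(total_nodes: int, failed_nodes: int) -> float:
    """Fraction of pooled capacity/throughput still online."""
    if failed_nodes > total_nodes:
        raise ValueError("cannot fail more nodes than exist")
    return (total_nodes - failed_nodes) / total_nodes

def pool_impact(total_nodes: int, failed_nodes: int,
                tb_per_node: float, iops_per_node: int) -> dict:
    """Remaining capacity and IOPS after a failure."""
    frac = surviving_fraction(total_nodes, failed_nodes)
    return {
        "capacity_tb": total_nodes * tb_per_node * frac,
        "iops": total_nodes * iops_per_node * frac,
        "surviving_pct": round(frac * 100, 1),
    }

# With only two mirrored nodes, one failure halves capacity and IOPS,
# and the surviving copy runs unprotected until repairs finish.
print(pool_impact(total_nodes=2, failed_nodes=1,
                  tb_per_node=100, iops_per_node=500_000))
```

The sketch shows why a multi-hour repair window matters: the whole time the node is down, the pool runs at a reduced fraction of its capacity and throughput.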
The ATTO XstreamCORE is a line of intelligent protocol bridges that takes external SAS block storage, such as JBODs of SSDs/HDDs or RAID arrays, and presents it as Fibre Channel LUNs or iSCSI/iSER targets. The Fibre Channel option can create a binding between FC initiators and the SAS drives, giving the nodes exclusive access to drives they can use as if they were Direct Attached Storage, hence the term DAS over FC. Fibre Channel extends the distance and allows for full disaggregation. If we now look at the environment above where a node fails, we have options. Because SANSymphony writes a unique signature to the disks it is managing, we can quickly and easily remap those drives to a new DataCore SANSymphony node, even a freshly spun-up VM template. This VM running SANSymphony can ingest the disks, configure itself to join the storage node cluster, and start its data sync within minutes rather than hours. Data is once again fully protected, risk to the data is minimized, and performance returns to optimal.
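The recovery flow above can be sketched in code. This is a minimal conceptual model, assuming a signature-matching step like the one described; the class and function names are illustrative stand-ins, not the actual DataCore or ATTO APIs.

```python
# Conceptual sketch of the failover flow: drives carrying the pool's
# signature are remapped to a replacement storage-node VM. Names here
# (Drive, StorageNode, remap_drives) are hypothetical, for illustration.

from dataclasses import dataclass, field

@dataclass
class Drive:
    serial: str
    signature: str          # pool signature written by the SDS layer

@dataclass
class StorageNode:
    name: str
    drives: list = field(default_factory=list)

def remap_drives(available: list, expected_signature: str,
                 new_node: StorageNode) -> StorageNode:
    """Attach only the drives that carry the expected pool signature,
    so the replacement node ingests exactly the failed node's disks."""
    for d in available:
        if d.signature == expected_signature:
            new_node.drives.append(d)
    return new_node

# Drives still visible over FC after the original node failed:
drives = [Drive("S1", "pool-A"), Drive("S2", "pool-A"), Drive("S3", "other")]
replacement = remap_drives(drives, "pool-A", StorageNode("sds-vm-02"))
print([d.serial for d in replacement.drives])  # ['S1', 'S2']
```

The key design point is that the drives, not the failed server, hold the identity: because the bridge keeps them reachable over FC, a fresh node can claim them by signature and resume synchronization.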
The diagram below shows the design along with a potential failover design. The JBODs shown could also be RAID arrays; presenting RAID volumes through the XstreamCORE ensures they remain available to the FC initiators of your choosing. Note that for smaller numbers of storage nodes, no FC switching is required.