
Features and Benefits

List of unique features implemented in our Virtual Data Center.

High Speed Caching

  • Speed up performance
  • Accelerates disk I/O response from existing storage
  • Uses x86-64 CPUs and memory from DataCore nodes as powerful, inexpensive “mega caches”
  • Anticipates next blocks to be read, and coalesces writes to avoid waiting on disks

In the process of virtualizing disks, DataCore software speeds up reads and writes. The acceleration is largely attributable to the large memories and powerful processors of the x86-64 servers on which the software runs. Up to 1 TB of cache may be configured on each DataCore node, so disk requests are handled at electronic memory speeds. Caching identifies I/O patterns and uses them to foresee which blocks to read next into RAM from the back-end disks. The result is speedy fulfillment of the next request from memory, without mechanical disk delays.

Caching on a DataCore node is essentially a level 1 cache with a response time of less than 20 microseconds, compared to a response time of some hundreds of microseconds for the caches on the disk array. The aim of both caches is to hide the much longer delay of physical disk I/O, which is in the range of 4 000 to 6 000 microseconds.
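
The two caching ideas described above can be sketched in a few lines of Python. This is a hypothetical illustration, not DataCore's implementation: `ReadAheadCache`, its block-number interface, and the prefetch depth are all assumptions made for the example.

```python
# Hypothetical sketch of high speed caching: sequential read-ahead
# plus write coalescing in front of a slow backing store.

class ReadAheadCache:
    def __init__(self, backend, prefetch=4):
        self.backend = backend      # block number -> data (models the disk)
        self.cache = {}             # RAM cache: block number -> data
        self.dirty = {}             # coalesced writes awaiting one flush
        self.last_block = None
        self.prefetch = prefetch

    def read(self, block):
        if block in self.dirty:               # newest data wins
            return self.dirty[block]
        if block in self.cache:               # cache hit: RAM speed
            return self.cache[block]
        data = self.backend[block]            # cache miss: go to disk
        self.cache[block] = data
        if self.last_block is not None and block == self.last_block + 1:
            # Sequential pattern detected: pre-read the next blocks so the
            # following requests are served from memory, not the disk.
            for b in range(block + 1, block + 1 + self.prefetch):
                if b in self.backend:
                    self.cache[b] = self.backend[b]
        self.last_block = block
        return data

    def write(self, block, data):
        self.dirty[block] = data              # absorb the write in RAM

    def flush(self):
        self.backend.update(self.dirty)       # one coalesced disk update
        self.dirty.clear()
```

After two sequential reads, the following blocks are already in RAM, and several writes to the same region reach the disk as a single update.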

With DataCore High Speed Caching in use, the performance of the Storage Tiers in the Virtual Data Center is as follows:

Tier    Disks     RAID      Peak IOPS          DataCore             Peak IOPS
                            (Physical Disks)   Performance Boost    (Virtual Pools)
Tier 0  SSD       RAID 1+0  200 000 IOPS       Approx. 10%          220 000 IOPS
Tier 1  FC 15K    RAID 1+0  10 000 IOPS        Approx. 200%         20 000 IOPS
Tier 2  SAS 10K   RAID 1+0  10 000 IOPS        Approx. 200%         20 000 IOPS
Tier 3  SAS 7.2K  RAID 1+0  4 200 IOPS         Approx. 300%         12 000 IOPS
Tier 4  SAS 7.2K  RAID 5    1 500 IOPS         Approx. 300%         4 500 IOPS

Sync Mirroring (High Availability)

  • Real-time I/O replication for High-Availability
  • Architect N+1 redundant grids for continuous availability
  • Eliminate SAN or storage as a single point of failure
  • Enhance survivability using physically separate nodes in different locations
  • Mirrored virtual disks behave like one, multi-ported shared drive, while automatically updating the two copies simultaneously

DataCore Sync Mirroring allows us to double-protect every bit of data: on top of the standard RAID protection, it adds SAN-based synchronous data replication between two independent and separate disk arrays/controllers.

As far as non-stop storage access is concerned, synchronous mirroring reaps the laurels. It handles the real-time replication of I/Os for the ultimate in continuous availability. Single points of disruption or failure are eliminated by having two nodes store the data simultaneously in conjunction with the host’s multipath I/O (MPIO) or Asymmetric Logical Unit Access (ALUA) drivers.
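The core rule of synchronous mirroring can be sketched as follows. This is a simplified, hypothetical model (the class and its two-node layout are assumptions for illustration): a write completes only after both copies are updated, and reads can fail over to the surviving node.

```python
# Hypothetical sketch of sync mirroring: a mirrored virtual disk
# behaves like one multi-ported drive backed by two nodes.

class MirroredVirtualDisk:
    def __init__(self, node_a, node_b):
        self.nodes = [node_a, node_b]   # two independent storage nodes

    def write(self, block, data):
        # Update the two copies simultaneously; the write is acknowledged
        # to the host only once BOTH nodes hold the data.
        for node in self.nodes:
            node[block] = data
        return "ack"

    def read(self, block, preferred=0):
        # Either node can serve reads; if the preferred node has lost the
        # block (node failure), fall over to the other copy.
        for i in (preferred, 1 - preferred):
            if block in self.nodes[i]:
                return self.nodes[i][block]
        raise KeyError(block)
```

In a real deployment the host's MPIO or ALUA driver performs the path failover; here a plain fallback loop stands in for it.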

Policy Based Auto Tiering

  • Access frequency determines which disk blocks should be moved into a different tier
  • Adapts to provide most demanding workloads with speediest response
  • Senses sustained hot spots within files or databases
  • Works at sub-LUN level for best granularity

Automated storage tiering monitors I/O behavior, determines frequency of use, and then moves blocks of data to the most appropriate tier or class of storage device. DataCore Auto-Tiering automatically “promotes” the most frequently used blocks to the speediest tier (Enterprise SSDs) to ensure top performance, while the least frequently used blocks are moved to slower tiers (10/15K FC HDDs or 7.2K SAS HDDs).
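
The promote/demote decision described above can be sketched like this. The class, the thresholds, and the measurement window are assumptions for illustration, not DataCore's actual policy engine; the point is that per-block (sub-LUN) access counts, not whole LUNs, drive placement.

```python
# Hypothetical sketch of policy-based auto-tiering: access frequency
# decides which tier each block lives in.

from collections import Counter

class AutoTieringPool:
    def __init__(self, tiers=("SSD", "15K FC", "7.2K SAS")):
        self.tiers = tiers
        self.placement = {}          # block -> tier index (0 = fastest)
        self.heat = Counter()        # sub-LUN access frequency

    def access(self, block):
        self.heat[block] += 1
        # New blocks start on the slowest (cheapest) tier.
        self.placement.setdefault(block, len(self.tiers) - 1)

    def rebalance(self, hot_threshold=100, cold_threshold=5):
        # Promote sustained hot spots; demote rarely used blocks.
        for block, count in self.heat.items():
            if count >= hot_threshold:
                self.placement[block] = 0                    # fastest tier
            elif count <= cold_threshold:
                self.placement[block] = len(self.tiers) - 1  # slowest tier
        self.heat.clear()            # start a fresh measurement window
```

A hot spot inside a database file ends up on SSD after a rebalance, while a cold log block stays on 7.2K SAS.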

Online Snapshots

  • Quick capture of point-in-time images
  • Quick recovery at disk speeds to a known good state
  • Back-up window elimination
  • Provision of “live” copy of environment for analysis, development and testing
  • Trigger from Microsoft VSS-compatible applications and VMware vCenter

You will love online snapshots once you have tried them. Snapshots capture a known good point in time that may be used for a number of purposes without scheduling lengthy back-up windows. A snapshot may provide a recovery point to undo a patch or file deletion, or it may feed business intelligence analysis. Snapshots are also frequently used to verify new software upgrades during testing and development, prior to production.

Snapshots are extremely useful in cloning working system images to provision identical new servers or new virtual desktops. Although snapshot utilities are commonly found in operating systems, server hypervisors, backup software, and disk arrays, capturing them at the SAN level brings some major benefits: no consumption of host resources, no dependency on host software, and no need for mutually compatible disk arrays.
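
One common way such point-in-time images work is copy-on-write, sketched below. This is an assumed, generic mechanism for illustration (the class and its dictionaries are hypothetical): taking the snapshot is instant, and only blocks overwritten afterwards are copied.

```python
# Hypothetical copy-on-write sketch of an online snapshot: capture is
# instant because data is only copied when a block is later overwritten.

class SnapshotDisk:
    def __init__(self):
        self.blocks = {}        # live data: block -> data
        self.snapshots = []     # one dict of preserved blocks per snapshot

    def snapshot(self):
        self.snapshots.append({})          # instant: nothing copied yet
        return len(self.snapshots) - 1     # snapshot id

    def write(self, block, data):
        for snap in self.snapshots:
            if block not in snap:
                # Preserve the pre-change data for each open snapshot.
                snap[block] = self.blocks.get(block)
        self.blocks[block] = data

    def read_snapshot(self, snap_id, block):
        # Point-in-time view: the saved copy if the block changed since
        # the snapshot, otherwise the live (unchanged) block.
        snap = self.snapshots[snap_id]
        return snap[block] if block in snap else self.blocks.get(block)
```

Writes after the snapshot leave the point-in-time view intact, which is what makes it a reliable recovery point.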

Continuous Data Protection (CDP) & Recovery

  • Return to an earlier point-in-time without taking explicit backups
  • Dial back to restore an arbitrary point-in-time within a period of 48 hours
  • Logs and timestamps all I/Os to the designated virtual disks
  • No need to quiesce or interrupt applications
  • Easy to turn on and revert from

Companies often have to undo data modifications that had an adverse effect on their business. The changes may have been made in error, or they may be byproducts of malware.

Falling back to the latest snapshot or backup could mean losing many updates that transpired before the problem occurred. Continuous Data Protection (CDP) is the smart choice for restoring a point in time that falls between the longer intervals covered by your snapshots and backups.

Acting like an undo button, CDP continuously logs and timestamps I/Os written to selected virtual disks, allowing you to go back to a time of your choice within the past 48 hours.
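
The log-and-replay idea can be sketched as follows. This is a minimal, hypothetical model (the class, the replay scheme, and the second-based timestamps are assumptions for the example): every write is appended to a timestamped log, and any moment inside the 48-hour window can be reconstructed by replaying the log up to that point.

```python
# Hypothetical sketch of CDP: a timestamped, append-only write log
# that can rebuild the disk state at an arbitrary past moment.

class CDPDisk:
    RETENTION = 48 * 3600            # 48-hour window, in seconds

    def __init__(self):
        self.log = []                # (timestamp, block, data), append-only

    def write(self, timestamp, block, data):
        self.log.append((timestamp, block, data))
        # Drop entries that have aged out of the 48-hour window.
        cutoff = timestamp - self.RETENTION
        self.log = [e for e in self.log if e[0] >= cutoff]

    def restore(self, point_in_time):
        # Replay the log up to the chosen moment: the "undo button".
        state = {}
        for ts, block, data in self.log:
            if ts <= point_in_time:
                state[block] = data
        return state
```

Restoring to a moment just before a bad write (say, a malware-corrupted block) returns the last good version without touching any snapshot or backup.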

High Availability (HA)

High Availability continuously monitors the servers in a pool and automatically restarts virtual machines on alternate servers in the event of hardware failures. It also automatically restarts virtual machines in the event of OS failures. This increases the availability of applications.
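
The restart decision can be sketched in a few lines. This is a hypothetical simplification (the function, its dictionaries, and the least-loaded placement rule are assumptions for illustration), not the product's actual failover logic:

```python
# Hypothetical sketch of the HA restart step: virtual machines on a
# failed server are restarted on a healthy peer in the pool.

def failover(servers, vms):
    """servers: name -> alive?   vms: vm name -> hosting server."""
    healthy = [s for s, alive in servers.items() if alive]
    if not healthy:
        raise RuntimeError("no healthy server left in the pool")
    restarted = {}
    for vm, host in vms.items():
        if not servers[host]:
            # Host failed: restart the VM on the least-loaded healthy server.
            target = min(healthy, key=lambda s: list(vms.values()).count(s))
            vms[vm] = target
            restarted[vm] = target
    return restarted
```

A real implementation would detect the failure via heartbeats and honor restart priorities; this sketch only shows the placement step.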

Distributed Resource Scheduler (DRS)

DRS continuously monitors virtual machines and physical servers to optimally align compute capacity to application requirements based on business priorities.