This post is my personal technical note on the Cisco HyperFlex hyperconverged solution, drafted during my personal learning. It is not intended to cover every part of the solution, and some notes are based on my own understanding of it. My intention in drafting this note is to outline the key solution elements for quick readers.
This note is based on Cisco HX Data Platform release 3.5. Some figures in this post are referenced from Cisco public documents.
Solution Components:
The HyperFlex solution is composed of the components below:
- Hardware: Cisco HX server nodes (2-64 per cluster)
- Hardware: Cisco UCS Fabric Interconnect (not required for HX Edge)
- Software: Customized Hypervisor (VMware/Hyper-V)
- Software: Controller Virtual Machine (CVM)
- Software: vCenter for VMware
HX Server Node
Each HyperFlex HX node is composed of the hardware parts below, which vary by model:
- CPU
- Memory
- Physical disks (HDD/SSD; 1 dedicated SSD for housekeeping and 1 dedicated SSD for cache)
- Network adapter
Each HyperFlex HX node includes the software parts below:
- Customized Hypervisor (VMware or Hyper-V)
- IOVISOR (a VIB for VMware) – helps direct IO to the required nodes; it keeps working independently of CVM failures, so IO can be redirected to other nodes.
- Hypervisor API – utilizes hypervisor APIs (for example VMware VAAI) to provide efficient storage operations, such as file-system-based cloning (copying the index table for a fast clone; see the sketch after this list).
- Controller Virtual Machine (CVM)
- The CVM performs deduplication, compression, data lookup, caching, logging, etc.
- The hypervisor passes data through to the CVM, and the CVM writes it to the physical disks.
- The CVM presents an NFS datastore to the hypervisor.
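To illustrate the index-table cloning mentioned above, here is a minimal Python sketch (all names are mine for illustration; this is not the HX Data Platform's actual code). Cloning copies only the table of block pointers, so the clone shares the underlying data blocks:

```python
# Minimal sketch of pointer-based (index-table) cloning, in the spirit of
# VAAI-offloaded clones. All names here are illustrative, not HX internals.

class FileIndex:
    """A file's index table: pointers into a shared block store."""
    def __init__(self, block_ids):
        self.block_ids = list(block_ids)

BLOCK_STORE = {1: b"boot", 2: b"os", 3: b"app"}  # shared physical blocks

def fast_clone(src):
    # Copy only the pointer table; the data blocks are never touched.
    return FileIndex(src.block_ids)

vm_disk = FileIndex([1, 2, 3])
clone = fast_clone(vm_disk)
print(clone.block_ids == vm_disk.block_ids)       # True: same pointers
print([BLOCK_STORE[b] for b in clone.block_ids])  # data served from shared blocks
```

Because only metadata is copied, such a clone completes in near-constant time regardless of the file size.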
Converged and Compute-only Nodes
- A cluster can mix converged nodes and compute-only nodes.
- Compute-only nodes still run a CVM and the customized hypervisor for IOVISOR.
- The CVM on a compute-only node needs far less CPU/memory.
- Since release 3.5, the number of compute-only nodes can be up to double the number of converged nodes (see the sketch below).
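As a quick way to sanity-check the sizing rules above, here is an illustrative helper (my own sketch, not a Cisco tool), assuming 2-64 nodes per cluster and, from release 3.5, up to twice as many compute-only as converged nodes:

```python
def validate_cluster(converged, compute_only, release_3_5_or_later=True):
    """Illustrative check of the HX sizing rules noted above."""
    total = converged + compute_only
    if not 2 <= total <= 64:                  # 2-64 nodes per cluster
        return False
    ratio = 2 if release_3_5_or_later else 1  # 2:1 compute-only allowed since 3.5
    return compute_only <= ratio * converged

print(validate_cluster(8, 16))   # True: 16 compute-only on 8 converged (2:1)
print(validate_cluster(8, 17))   # False: exceeds the 2:1 ratio
```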
Cluster and File System
Key information about the cluster and file system:
- A single cluster can scale from 2 to 64 nodes.
- Stretched clusters are supported.
- The converged HX nodes combine to form the distributed file system StorFS (technology from SpringPath, acquired by Cisco in 2017).
- StorFS is a log-structured file system.
- RF2 (Replication Factor 2) or RF3 is chosen during installation and can NOT be changed without redeployment.
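Since the replication factor fixes how many copies of each block are kept, a rough usable-capacity estimate follows directly (a simplified calculation of my own that ignores metadata overhead and deduplication/compression savings):

```python
def usable_capacity_tb(raw_tb, rf):
    """Each block is stored `rf` times, so usable space is roughly raw / rf."""
    assert rf in (2, 3), "HX offers RF2 or RF3, fixed at install time"
    return raw_tb / rf

print(usable_capacity_tb(100.0, rf=2))            # 50.0 TB usable
print(round(usable_capacity_tb(100.0, rf=3), 1))  # 33.3 TB usable
```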
A disk-based cache is present in each HX server node:
- Cache drive: one SSD in each HX node is dedicated to caching.
- Read cache – hybrid models only (most-frequently-used and most-recently-used caching methods).
- Write cache – used the same way on both hybrid and all-flash models.
- Active (serving) / passive (staging) cache.
- Destaging is triggered by the active cache: when the active cache is full, the active and passive caches are swapped (see the sketch after this list).
- The write cache holds a number of copies determined by the RF level:
- 2 copies with RF2 (primary/secondary)
- 3 copies with RF3 (primary/secondary/tertiary)
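The active/passive swap is easy to model. Below is a toy sketch of the behavior described above (segment sizes and names are mine): writes land in the active segment, and when it fills, the segments swap and the old active data is destaged. In the real system destaging runs asynchronously; here it is synchronous for clarity.

```python
class WriteCache:
    """Toy active (serving) / passive (staging) write-cache segments."""
    def __init__(self, segment_size):
        self.segment_size = segment_size
        self.active, self.passive = [], []
        self.destaged = []                  # stands in for the capacity disks

    def write(self, block):
        if len(self.active) >= self.segment_size:
            self._swap_and_destage()
        self.active.append(block)

    def _swap_and_destage(self):
        # Swap roles: the full segment becomes passive and is destaged.
        self.active, self.passive = self.passive, self.active
        self.destaged.extend(self.passive)  # flush old active to capacity tier
        self.passive.clear()

cache = WriteCache(segment_size=4)
for blk in range(10):
    cache.write(blk)
print(cache.destaged)   # blocks flushed so far: [0, 1, 2, 3, 4, 5, 6, 7]
```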
Write IO Data Flow (RF3 as example)
- IOVisor chooses the primary node for the write, then the secondary and tertiary nodes.
- The IO is acknowledged only after all three copies are written to cache.
- Data is destaged from cache to the capacity disks when the active cache is full, or on other triggers in certain scenarios (a simplified sketch follows).
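A minimal sketch of the RF3 write path as I understand it (node selection by hashing is my simplification of how IOVisor distributes data; all names are illustrative):

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d", "node-e"]

def replica_nodes(block_key, rf=3):
    """Pick primary, secondary, tertiary nodes for a block (simplified hashing)."""
    start = int(hashlib.md5(block_key.encode()).hexdigest(), 16) % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(rf)]

def write_block(block_key, data, caches):
    targets = replica_nodes(block_key)
    for node in targets:                    # write to every replica's cache
        caches.setdefault(node, {})[block_key] = data
    # Acknowledge the IO only once all copies are in cache.
    return all(block_key in caches[n] for n in targets)

caches = {}
acked = write_block("vm1/disk0/block42", b"payload", caches)
print(acked, replica_nodes("vm1/disk0/block42"))
```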
Read IO Data Flow
- All-flash models always read from the primary copy.
- Hybrid models read evenly across all data copies (see the sketch below).
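And a matching sketch of the read-path difference (again illustrative names only): all-flash always serves reads from the primary copy, while hybrid spreads reads across all copies:

```python
import itertools

def make_reader(model, copies):
    """Return a function that picks which data copy serves each read."""
    if model == "all-flash":
        return lambda: copies[0]     # always the primary copy
    rr = itertools.cycle(copies)     # hybrid: spread reads evenly
    return lambda: next(rr)

copies = ["primary", "secondary", "tertiary"]
af_read = make_reader("all-flash", copies)
hy_read = make_reader("hybrid", copies)
print([af_read() for _ in range(4)])  # ['primary', 'primary', 'primary', 'primary']
print([hy_read() for _ in range(4)])  # ['primary', 'secondary', 'tertiary', 'primary']
```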
The VMware VMDirectPath I/O feature is used to accelerate reads and writes by passing the physical disks directly through to the CVM.
…Continue reading in the second part via the link below:
Cisco HyperFlex Personal Technical Notes (Part 2) | InfraPCS