EMC VMAX3 ARCHITECTURE
The VMAX3 architecture builds on the older DMX architecture, but has some fundamental differences. A traditional storage subsystem consists of discrete components: host adaptors that connect to the outside world, disk adaptors and enclosures for the data storage, and a large memory cache to speed up access. A VMAX combines all of these items into an 'engine', so the architecture is 'engine based'. Each VMAX3 engine contains two directors, and each director contains host and disk adaptors, a CPU complex, and cache memory. The engine also includes cooling fans and redundant power supplies. If more capacity is required, the VMAX can be upgraded simply by adding another engine, up to a maximum of 8, depending on the VMAX model. Engines are connected together by a Virtual Matrix, so each engine has redundant interfaces to the Dynamic Virtual Matrix dual InfiniBand fabric interconnect.
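The engine-based building block lends itself to a simple structural sketch. The Python below is illustrative only; the class names are invented, and the default counts (two front-end and two back-end modules and 64 GB of cache per director) are taken from figures later in this article, not from an EMC data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Director:
    """One of the two directors inside a VMAX engine."""
    front_end_modules: int = 2   # host-facing I/O modules
    back_end_modules: int = 2    # disk-facing I/O modules
    cache_gb: int = 64           # this director's share of global memory
    matrix_ports: int = 2        # redundant links to the Virtual Matrix

@dataclass
class Engine:
    """A VMAX engine: two directors plus shared fans and power supplies."""
    directors: List[Director] = field(default_factory=lambda: [Director(), Director()])

@dataclass
class VMAXArray:
    engines: List[Engine] = field(default_factory=lambda: [Engine()])

    def add_engine(self, max_engines: int = 8) -> None:
        """Scale out by adding an engine, up to the model maximum of 8."""
        if len(self.engines) >= max_engines:
            raise RuntimeError("model maximum reached")
        self.engines.append(Engine())

array = VMAXArray()
array.add_engine()
print(len(array.engines), "engines,", len(array.engines) * 2, "directors")
```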
A VMAX3 hypervisor enables Microsoft Windows and Linux/UNIX clients to share files over NFS, CIFS, and SMB 3.0, while Fibre Channel access is still supported for high-bandwidth, latency-sensitive block applications. The hypervisor also allows the VMAX to achieve SRDF consistency across multiple arrays without needing an external host to manage the consistency and cycle switching. The embedded eNAS data movers run on the VMAX3 hypervisor as virtual machine containers, with virtualised versions of the control stations and data movers. The active data movers and control stations run on different directors from the standbys to ensure the highest availability.
VMAX models
EMC now offers three main models of VMAX3 -- the 100K, 200K and the 400K. Capacity is largely determined by the number of Drive Array Enclosures (DAEs) each model can have: the 100K can have two DAEs, the 200K four, and the 400K eight. The 200K can only have two DAEs if 2.5" and 3.5" drives are mixed.
The VMAX 100K is the entry model of the VMAX3 systems. It can scale up to 4 VMAX3 engines with up to 96 CPU cores per array, and can be configured with up to 128 front-end ports and 1 PB of usable capacity. With a VMAX3 engine and up to 720 drives in a single rack, the system has a small footprint for its capacity. It can use FAST.X to extend VMAX3 data services to externally tiered workloads such as the EMC XtremIO all-flash array or non-EMC storage. It also has the option of EMC ProtectPoint software for direct backup to a Data Domain system, and of data-at-rest encryption.
The EMC VMAX 200K can hold up to 2 PB of data. Apart from the increased capacity, its features are similar to the 100K.
The EMC VMAX 400K has the option of 2.5" SAS drives and flash drives, and a fully configured array comes with 32 Intel Xeon processors, up to 2 TB of mirrored RAM, and up to 4 PB of usable capacity. File services are embedded on the array, so it is easy to converge block, file and mainframe workloads. The VMAX 400K can scale up to 8 VMAX3 engines with up to 384 CPU cores, 256 front-end ports and up to 5,760 drives. It can be used to consolidate OLTP, mainframe, Big Data, and block/file workloads.
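The model figures quoted above can be collected into one place. The dictionary below simply restates the numbers from the preceding paragraphs; fields the text does not state for a model are omitted.

```python
# Figures as quoted in the paragraphs above; omitted fields were not stated.
VMAX3_MODELS = {
    "100K": {"max_engines": 4, "max_cores": 96, "max_fe_ports": 128,
             "usable_pb": 1, "drives_per_rack": 720},
    "200K": {"usable_pb": 2},  # otherwise similar to the 100K
    "400K": {"max_engines": 8, "max_cores": 384, "max_fe_ports": 256,
             "max_drives": 5760, "usable_pb": 4},
}
```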
Cache
The cache memory is mirrored between the two directors in an engine, and in configurations with 2 or more engines it is mirrored between engines.
Internally, the engine components communicate locally, so cache access within an engine is local. However, the engines must also communicate with each other to support the Enginuity global memory concept. To achieve this, the cache is virtualised, and each engine communicates with the other engines over the Virtual Matrix interconnect (RapidIO in the original VMAX, InfiniBand in the VMAX3). When a director gets a cache request it checks the location: if the slot is local it is served at memory-bus speeds; if it is remote, the request is packaged up and sent to the remote director for processing.
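The local-versus-remote decision can be sketched as follows. This is a toy illustration assuming a simple slot-ownership lookup; the class and function names are invented for the example and are not the Enginuity implementation.

```python
class Director:
    def __init__(self, name: str):
        self.name = name
        self.local_cache: dict[int, bytes] = {}  # this director's share of global memory

def matrix_forward(owner: Director, slot: int) -> bytes:
    """Stand-in for packaging a request and sending it over the Virtual Matrix."""
    return owner.local_cache[slot]

def read_slot(requester: Director, owner: Director, slot: int) -> bytes:
    if owner is requester:
        return requester.local_cache[slot]   # local hit: served at memory-bus speed
    return matrix_forward(owner, slot)       # remote: shipped across the interconnect

a, b = Director("7A"), Director("8B")
b.local_cache[42] = b"cached track"
print(read_slot(a, b, 42))                   # b'cached track'
```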
Device Adaptors
Internally, the VMAX uses 4 Gb/s communications end to end for internal connectivity and the Fibre Channel drives, with support for 8 Gb/s FICON or Fibre Channel host connections. The back-end architecture is FC-AL.
To achieve this, each director within a VMAX engine contains two back-end I/O modules and two front-end I/O modules.
The back-end I/O modules provide access to four drive enclosures using a single Quad Small Form-factor Pluggable (QSFP) connector. The QSFP splits into four smaller cables which connect to the drive enclosures.
The front-end I/O modules can be configured for Fibre Channel, iSCSI or FICON.
Virtual Matrix
The VMAX architecture extends the direct matrix principle used in the older Symmetrix subsystems, but the matrix is now virtual. Each engine has four virtual matrix ports, two on each director, which connect it to the other engines through two Matrix Interface Board Enclosures (MIBEs).
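Because each director has one matrix port into each of the two MIBEs, every director pair has two independent paths. A quick illustrative enumeration (the four-director, two-fabric layout below assumes a two-engine system):

```python
from itertools import permutations

directors = ["dir1", "dir2", "dir3", "dir4"]   # e.g. a two-engine system
fabrics = ["MIBE-A", "MIBE-B"]                 # the dual Virtual Matrix fabrics

# One port into each fabric per director gives every director pair
# two independent physical paths.
paths = {(src, dst): [f"{src} -> {fab} -> {dst}" for fab in fabrics]
         for src, dst in permutations(directors, 2)}

assert all(len(p) == 2 for p in paths.values())  # no single point of failure
print(paths[("dir1", "dir4")])
```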
System Bays and Storage Bays
Similar to the DMX-3 and DMX-4 arrays, the VMAX has two types of bays:
The system bay contains all the VMAX engines, plus the system bay standby power supplies (SPS), an Uninterruptible Power Supply (UPS), the Matrix Interface Board Enclosure (MIBE), and a service processor server with a Keyboard-Video-Mouse (KVM) assembly. Each system bay can support up to 720 2.5" drives or up to 360 3.5" drives, or a mix of the two.
The Symmetrix V-Max array storage bay is similar to the storage bay of the DMX-3 and DMX-4 systems. It consists of eight to sixteen drive enclosures, 48 to 240 drives, eight SPS modules, and unique cabling compared with the DMX series. The storage bay is configured with up to 120 disk drives when half populated or 240 disk drives when fully populated. Drives, LCCs, power supplies, and blower modules are fully redundant, hot swappable, and enclosed inside Disk Array Enclosures (DAEs). One DAE holds 15 physical disk drives and one storage bay holds up to 16 DAEs, hence a storage bay has a maximum of 240 disks (16 x 15).
The V-Max starts with a single-cabinet, entry-level configuration that can hold 120 disks. This can be extended by adding up to 10 more frames, each holding 240 disks. The latest release also allows the system to be split into two separated frames. One of the difficulties in machine-hall design is leaving room for frames to grow as cabinets are added to increase capacity; because the V-Max system bays can now be split into two frames up to 25 m apart, this becomes much easier to manage.
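The drive counts above reduce to simple arithmetic; the sketch below just replays the figures from the last two paragraphs (15 drives per DAE, 16 DAEs per storage bay, a 120-drive entry cabinet and up to 10 storage bays).

```python
DRIVES_PER_DAE = 15
DAES_PER_STORAGE_BAY = 16

drives_per_bay = DRIVES_PER_DAE * DAES_PER_STORAGE_BAY  # 240 per fully populated bay
entry_cabinet = 120                                     # single-cabinet entry system
max_storage_bays = 10

print(drives_per_bay)                     # 240
print(max_storage_bays * drives_per_bay)  # 2400 drives across ten storage bays
```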
More on Architecture
The Symmetrix V-Max family includes two options for scalability and growth. The V-Max series scales from 48 to 2,400 disks and provides 2 petabytes of usable protected capacity when configured entirely with 1 TB SATA disks. The V-Max SE scales from 48 to 360 disks and is intended for smaller capacity needs that still require Symmetrix performance, availability, and functionality.
The V-Max architecture comprises up to 8 engines. Each engine is a pair of directors, and each director is a 2-way quad-core Intel Xeon 5400 system with up to 64 GB of memory. It provides support for Fibre Channel, iSCSI, Gigabit Ethernet, and FICON connected hosts. Front-end and back-end connectivity has doubled over the DMX-4, with up to 128 host ports and 128 disk channels. The V-Max also leverages 2.3 GHz multi-core processors. The new Virtual Matrix provides the interconnect that enables resources to be shared across all V-Max engines, enabling massive scale-out.
The Virtual Matrix architecture replaces individual, function-specific directors with Symmetrix V-Max engines, each containing a portion of global memory and two directors capable of managing front-end, back-end, and remote connections simultaneously. Scalability has improved in all aspects: front-end connectivity, global memory, back-end connectivity, and usable capacity. The increased usable disk capacity is the result of an increase in global memory combined with a significant reduction in metadata overhead, allowing 2,400 devices to be configured with RAID types other than RAID 1 and resulting in a dramatic increase in usable capacity. The Virtual Matrix is redundant and dual active, and supports all global memory references, all messaging, and all management operations, including internal discovery and initialization, path management, load balancing, failover, and fault isolation within the array. The Symmetrix V-Max array comprises 1 to 8 V-Max engines. Each V-Max engine contains two integrated directors, and each director has two connections to the Matrix Interface Board Enclosure (MIBE) via its System Interface Board (SIB) ports. Since every director has two separate physical paths to every other director via the Virtual Matrix, the interconnect is highly available with no single point of failure.
Each director also has 8 back-end 4 Gb/s FC ports (provided by quad-port HBAs) and various options for the front end, including 8 x 4 Gb/s FC ports. In the full configuration of 128 4 Gb/s FC ports on both the front and back ends, the expectation is that the system could deliver 40 GB/s if there are no bottlenecks in the system architecture.
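A rough check on that 40 GB/s figure, assuming approximately 400 MB/s of usable throughput per 4 Gb/s FC port (an assumption, allowing for 8b/10b encoding overhead):

```python
FE_PORTS = 128
USABLE_MBPS_PER_PORT = 400     # assumed effective rate per 4 Gb/s FC port

aggregate_gbps = FE_PORTS * USABLE_MBPS_PER_PORT / 1000
print(aggregate_gbps)          # 51.2 GB/s of aggregate port bandwidth, so the
                               # quoted 40 GB/s leaves ~20% headroom for overheads
```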
V-Max Engine Architecture
A full V-Max system comprises 11 racks. The center rack holds the V-Max engines; the other 10 are storage bays, each holding up to 240 drives. There are 160 disk array enclosures in total, 64 directly connected and 96 daisy chained, driven by the 8 V-Max engines.
When configuring the Symmetrix, there are different types of hyper devices that can be configured (summarized in the sketch after this list). For example:
Standard devices (STD) are configured for normal production operations.
Business Continuance Volumes (BCV) are configured for TimeFinder/Mirror replication.
Virtual Devices (VDEV) are configured for TimeFinder/Snap local pointer-based replication.
Dynamic Reallocation Volumes (DRV) are configured for Symmetrix Optimizer hyper relocation.
TDEV devices are virtual cache-only devices that can grow in capacity.
Save devices provide the backing storage for TimeFinder/Snap and TDEV devices.
R1 and R2 devices are configured for SRDF remote replication.
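As referenced above, here is the device-type list restated as a small Python enum; the descriptions simply paraphrase the definitions in the list and are not official EMC terminology.

```python
from enum import Enum

class HyperType(Enum):
    STD  = "standard device for normal production operations"
    BCV  = "business continuance volume for TimeFinder/Mirror replication"
    VDEV = "virtual device for TimeFinder/Snap pointer-based replication"
    DRV  = "dynamic reallocation volume for Symmetrix Optimizer relocation"
    TDEV = "thin, cache-only device that can grow in capacity"
    SAVE = "save device backing TimeFinder/Snap and TDEV data"
    R1   = "SRDF remote replication source"
    R2   = "SRDF remote replication target"

print(HyperType.TDEV.value)
```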
Virtual devices can reduce wasted disk space because the actual data is kept in a common pool; only what is used is allocated in the common pool, and the pool is shared by many TDEV devices. For example, if the host sees a 100 GB virtual TDEV device, the TDEV itself uses no disk space; the save pool contains the actual data, and only 20 GB is allocated until more space is required. The allocation is managed by EMC software.
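The 100 GB example translates directly into code. The sketch below is a minimal model of thin allocation, assuming a single shared save pool; the class names are invented for illustration.

```python
class SavePool:
    """A common pool of backing storage shared by many thin devices."""
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocated_gb = 0

    def allocate(self, gb: int) -> None:
        if self.allocated_gb + gb > self.capacity_gb:
            raise RuntimeError("save pool exhausted")
        self.allocated_gb += gb

class TDEV:
    """A thin device: the host sees the full size, space is allocated on write."""
    def __init__(self, presented_gb: int, pool: SavePool):
        self.presented_gb = presented_gb   # size the host sees
        self.used_gb = 0
        self.pool = pool                   # shared with many other TDEVs

    def write(self, gb: int) -> None:
        self.pool.allocate(gb)             # space comes from the common pool
        self.used_gb += gb

pool = SavePool(capacity_gb=500)
tdev = TDEV(presented_gb=100, pool=pool)
tdev.write(20)
print(tdev.presented_gb, tdev.used_gb, pool.allocated_gb)   # 100 20 20
```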