Celerra
Celerra is a discontinued[1] NAS device produced by EMC Corporation, available either as an integrated unit or as a NAS head that can be added to an independent EMC storage array such as a Clariion or a Symmetrix. It supports the SMB, NFS, FTP, NDMP, TFTP and MPFS protocols. A Celerra Unified Storage device uses a Clariion storage array as its storage layer and also provides iSCSI and Fibre Channel block-level storage.
It was introduced in October 1996 for the NAS market as "Symmetrix Network File Storage" and later renamed to Celerra.[2]
Celerra was promoted as a platform for virtualization.[3]
Optional features included de-duplication, replication, NDMP and storage tiering.
Celerra runs a real-time operating system called Data Access in Real Time (DART). DART is a modified, embedded UNIX kernel (only about 32 MB) with additional functionality, such as a Fibre Channel HBA driver and Ethernet link bonding, added so that it can operate as a file server.
Celerra is based on the same X-blade architecture as the Clariion. It is available with a single data mover X-blade or with multiple data movers in an active-passive N+1 configuration.
NetApp offers comparable products with similar features and protocol support, apart from the ability to use block-level Fibre Channel.
In 2011, EMC introduced the new VNX series of unified storage disk arrays intended to replace both Clariion and Celerra products.[4] In early 2012, Clariion and Celerra were discontinued.
Data Access in Real Time
| Developer | EMC Corporation |
|---|---|
| OS family | Unix-like |
| Working state | Current |
| Source model | Proprietary |
| Kernel type | Real-time kernel |
Data Access in Real Time (DART) is the real-time operating system used by EMC Celerra. It is an embedded real-time operating system comprising a modified UNIX kernel and dedicated file-server software that together transfer files and multimedia data across a network using a variety of network protocols.
In Summary
The DART file server software executes entirely in the kernel, a real-time kernel whose design is based on rate-monotonic scheduling. The DART kernel environment is not a generic user-application environment; instead, it provides a run-time environment for the file service, an embedded, dedicated application. The DART file server software is linked with the kernel into a single system image that is loaded for execution at boot time.
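DART's internal scheduler interfaces are not publicly documented, but the rate-monotonic principle itself is simple: among periodic tasks, the one with the shortest period is given the highest fixed priority. The following standalone C sketch, with hypothetical task names and periods, illustrates only that priority-assignment rule, not DART's actual API.

```c
#include <stdio.h>

/* Rate-monotonic scheduling in a nutshell: among periodic tasks, the task
 * with the shortest period gets the highest fixed priority.  The task names
 * and periods below are hypothetical and exist only to show the rule. */

struct task {
    const char *name;
    unsigned period_ms;   /* how often the task must run */
    int priority;         /* assigned priority: larger = more urgent */
};

int main(void)
{
    /* hypothetical periodic tasks of a file server */
    struct task tasks[] = {
        { "network-interrupt-poll", 5,   0 },
        { "nfs-request-service",    20,  0 },
        { "log-flush",              500, 0 },
    };
    int n = sizeof tasks / sizeof tasks[0];

    /* assign priorities monotonically in the period: shorter period,
     * higher priority (simple O(n^2) ranking for clarity) */
    for (int i = 0; i < n; i++) {
        int rank = 0;
        for (int j = 0; j < n; j++)
            if (tasks[j].period_ms > tasks[i].period_ms)
                rank++;               /* tasks with longer periods rank below */
        tasks[i].priority = rank + 1; /* 1 = lowest, n = highest */
    }

    for (int i = 0; i < n; i++)
        printf("%-24s period=%4u ms priority=%d\n",
               tasks[i].name, tasks[i].period_ms, tasks[i].priority);
    return 0;
}
```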
The DART Software Architecture
DART is organized into seven layers, which handle all data movement in DART; all DART functionality is directed at implementing the processes contained in these layers. Starting with the layer closest to the hardware and ending with the layer closest to the user interface, DART's functionality is organized as follows:
- Layer 1: Operating system, consisting of the kernel and the kernel debugger.
- Layer 2: Hardware device drivers, consisting of media, network, and SCSI driver components.
- Layer 3: I/O layer, consisting of Continuous Media Network (CMNET), UDP, TCP/IP, CAM, and Storage components.
- Layer 4: File systems layer, consisting of Virtual File System (VFS), Security, and shared files within a Cluster.
- Layer 5: Programming Interfaces, consisting of Remote Procedure Call (RPC), Common File System (CFS), UNIX File System (UFS) and Continuous Media File System (CMFS). The Uthread (UNIX-like thread) component overlaps and interfaces with this layer and with Layer 6.
- Layer 6: Application layer, consisting of Continuous Media Stream (CMSTREAM), Common Internet File System (CIFS), PAX, NDMP, NFS, File Transfer Protocol daemon (FTPD), ONCRPC, HTTP, NIS, and SNMP components.
- Layer 7: System management and control, consisting of Management and Configuration and System components.
Layers 1–4 comprise functions performed within the kernel. Programmers typically use components in the higher-numbered layers (5 and 6) to add data-moving applications.
DART’s RPC framework component provides both client and server facilities, which are implemented over STREAMS, not sockets. DART acts as an RPC server in the context of NFS, and as an RPC client in the context of NIS.
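The STREAMS interface exchanges discrete messages with a stream head rather than a byte stream over a socket. As a rough illustration, the following C fragment shows how an already-marshalled RPC record might be handed to a stream using the XSI STREAMS putmsg() call; the descriptor, the surrounding setup, and DART's own stream modules are assumptions, since they are not publicly documented.

```c
#include <stropts.h>   /* XSI STREAMS: struct strbuf, putmsg() */
#include <stdio.h>

/* Hand one already XDR-encoded RPC record to the stream head.  This is a
 * minimal sketch for a platform that provides XSI STREAMS; `fd` is assumed
 * to be an open stream descriptor. */
int send_rpc_record(int fd, const void *record, int len)
{
    struct strbuf data;

    data.maxlen = 0;              /* ignored by putmsg() */
    data.len    = len;            /* length of the data part */
    data.buf    = (char *)record; /* the marshalled RPC message */

    /* no control part, default priority band */
    if (putmsg(fd, NULL, &data, 0) < 0) {
        perror("putmsg");
        return -1;
    }
    return 0;
}
```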
DART external environment: DART is designed to provide rapid data movement and information sharing across a variety of hardware platforms in a networked environment. All data transfers are either request-response (pull type, for file transfers) or streaming (push type, for multimedia data), with time-sharing, real-time, or isochronous characteristics.
RPC on DART
Two types of RPC are implemented on DART: traditional RPC and ONC RPC. Because of DART's multi-threaded architecture, traditional RPC has been implemented on DART with several modifications. For example, on SunOS the RPC code generator, RPCGEN, assumes a single-threaded UNIX process that calls into the library directly. DART, however, has no UNIX processes; instead it uses true multi-threading, in which n threads pick up RPC messages as they arrive. Therefore, when using traditional RPC on DART, an application must perform such functions as registering with the portmapper and extracting credentials and other security information from the message (or checking security without extracting it).
Most of the responsibility falls on the application developer: thread creation, initialization, endpoint creation, creating the main loop, processing the stream, reading messages, and so on. The Collector, a general synchronizer (not particular to RPC), must be declared explicitly, whereas in ONC RPC the collector is built in.
In addition, traditional RPC provides a basic structure for receiving a message, but the application developer must then decode it from its XDR representation and analyze it, and the client side is not fully implemented. Finally, while RPC is primarily a synchronous protocol, it is used asynchronously in DART in conjunction with mutexes, condition variables, and other types of locks.
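For comparison, the following C sketch shows the same chores on a conventional single-threaded UNIX system using the classic Sun RPC library: clearing and registering the program with the portmapper, decoding arguments by hand with XDR, and running the dispatch loop. The program and procedure numbers are hypothetical, and none of this reflects DART's internal thread or Collector machinery.

```c
#include <rpc/rpc.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>

#define EXAMPLE_PROG      0x20000001  /* hypothetical program number */
#define EXAMPLE_VERS      1
#define EXAMPLE_PROC_ECHO 1

/* Dispatch routine: decode the request with XDR, handle it, encode the reply. */
static void example_dispatch(struct svc_req *rqstp, SVCXPRT *transp)
{
    char *msg = NULL;

    switch (rqstp->rq_proc) {
    case NULLPROC:
        svc_sendreply(transp, (xdrproc_t)xdr_void, NULL);
        return;
    case EXAMPLE_PROC_ECHO:
        /* the server must pull the arguments apart itself using XDR */
        if (!svc_getargs(transp, (xdrproc_t)xdr_wrapstring, (caddr_t)&msg)) {
            svcerr_decode(transp);
            return;
        }
        svc_sendreply(transp, (xdrproc_t)xdr_wrapstring, (caddr_t)&msg);
        svc_freeargs(transp, (xdrproc_t)xdr_wrapstring, (caddr_t)&msg);
        return;
    default:
        svcerr_noproc(transp);
        return;
    }
}

int main(void)
{
    SVCXPRT *transp;

    /* clear any stale registration, then register with the portmapper */
    pmap_unset(EXAMPLE_PROG, EXAMPLE_VERS);

    transp = svcudp_create(RPC_ANYSOCK);
    if (transp == NULL) {
        fprintf(stderr, "cannot create UDP service\n");
        exit(1);
    }
    if (!svc_register(transp, EXAMPLE_PROG, EXAMPLE_VERS,
                      example_dispatch, IPPROTO_UDP)) {
        fprintf(stderr, "unable to register (EXAMPLE_PROG, UDP)\n");
        exit(1);
    }

    svc_run();   /* single-threaded dispatch loop; never returns */
    return 0;
}
```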
DART implements RPC over both the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP). Each application has a single UDP stream and one TCP stream per connection, with the common IP module acting as a multiplexer.
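On the client side, the classic Sun RPC API makes the transport choice a single string argument, which is a convenient way to picture one logical RPC service being reachable over either UDP or TCP. The sketch below pairs with the hypothetical echo server above; the program and procedure numbers are again invented for illustration.

```c
#include <rpc/rpc.h>
#include <sys/time.h>
#include <stdio.h>

#define EXAMPLE_PROG      0x20000001  /* hypothetical program number */
#define EXAMPLE_VERS      1
#define EXAMPLE_PROC_ECHO 1

int main(int argc, char **argv)
{
    const char *host  = (argc > 1) ? argv[1] : "localhost";
    const char *proto = (argc > 2) ? argv[2] : "udp";   /* "udp" or "tcp" */
    struct timeval timeout = { 5, 0 };
    char *request = "hello";
    char *reply   = NULL;

    /* The same RPC call can be carried over either transport; only the
     * string passed to clnt_create changes. */
    CLIENT *clnt = clnt_create(host, EXAMPLE_PROG, EXAMPLE_VERS, proto);
    if (clnt == NULL) {
        clnt_pcreateerror(host);
        return 1;
    }

    if (clnt_call(clnt, EXAMPLE_PROC_ECHO,
                  (xdrproc_t)xdr_wrapstring, (caddr_t)&request,
                  (xdrproc_t)xdr_wrapstring, (caddr_t)&reply,
                  timeout) != RPC_SUCCESS) {
        clnt_perror(clnt, "echo call failed");
        clnt_destroy(clnt);
        return 1;
    }

    printf("server echoed: %s\n", reply);
    clnt_destroy(clnt);
    return 0;
}
```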
References
- "EMC Discontinues Clariion, Celerra Storage Lines". Archived from the original on 2012-07-12. Retrieved 2011-08-24.
- Press, Gil (September 6, 2016). "A Very Short History Of EMC Corporation". Forbes Magazine. Retrieved December 13, 2017.
- Celerra: Ideal choice for VMware Archived 2009-10-26 at the Wayback Machine, VMware.com
- EMC unveils new VNX unified storage, EMC press release
External links
- Celerra family at EMC
- EMC homepage