
Technology

The advantages of Dell's distributed architecture

- The Philippine Star

MANILA, Philippines - The way information and traffic travel around an enterprise network today is radically different from what was typical even a decade ago. Several factors account for this change.

First, the workforce has become extremely dispersed and mobile. To stay productive, mobile workers need quick, multi-point access to data anytime, anywhere.

Second, the adoption of virtual computing (and its extension, cloud computing) has resulted in much higher server-to-server traffic flow than before.

Third, enterprises now have vastly larger volumes of data to process, store, and analyze than previously. As with virtual and cloud computing, this involves the movement of data to and from servers (or server clusters) and storage assets.

Many enterprises, however, operate networks that do not cater to these factors and the changes in network traffic and volume that they bring about. The result: bottlenecks that show up as sub-optimal network performance, a level of availability lower than what end-users expect, and application delays.

These sub-par networks typically use a traditional three-layer (access, distribution/aggregation, core) monolithic design with a chassis-based switch as its core.

Chassis-based vs distributed core

In any network, overall bandwidth is limited by the bandwidth available at the core. In traditional networks, scaling up means adding switching capacity from the same vendor within the chassis or, once all slots have been taken up, performing a painful, expensive, and potentially disruptive rip-and-replace forklift upgrade.

Core switches in such networks usually contain the vast majority of the intelligence needed to orchestrate the network (to view, monitor, correlate, and manage it, among other tasks), so enterprises invariably end up locked into their switch supplier for the operational life of the equipment.

Traditional networks excel when traffic patterns are predominantly north-south (i.e., in and out of the data center), as happens in a traditional client/server environment with dedicated application servers.

Modern-day network traffic flow is more horizontal: from server to server, virtual machine to virtual machine, and server cluster to server cluster. In a traditional network that cannot be scaled up further, such east-west flows result in compute nodes located across different physical switches not having full bandwidth; as a consequence, QoS drops and latency increases.

Clearly, alternatives to the traditional monolithic network design for the data center are needed. In response, networking vendors are promoting distributed core architecture.

Commonly referred to as “leaf-spine architecture,” this design employs two types of nodes: one that connects servers or top-of-rack elements (the leaf node) and one that interconnects the leaf switches (the spine node). The spine nodes serve as the switch fabric, the leaf nodes as line cards, and the interconnect cabling as the backplane.
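
As a rough illustration of how the two node types relate, the sketch below models a small leaf-spine topology as a simple Python data structure. The node counts and naming are hypothetical, chosen only to show that every leaf connects to every spine, which is what keeps any two servers exactly two hops apart.

```python
# Minimal sketch of a leaf-spine topology (hypothetical node counts).
# Every leaf node has one uplink to every spine node, so traffic between
# any two servers always takes the same leaf -> spine -> leaf path length.

SPINES = [f"spine-{i}" for i in range(4)]   # fabric ("backplane") switches
LEAVES = [f"leaf-{i}" for i in range(8)]    # top-of-rack ("line card") switches

# Build the interconnect: a full mesh of leaf-to-spine links.
links = [(leaf, spine) for leaf in LEAVES for spine in SPINES]

if __name__ == "__main__":
    print(f"{len(LEAVES)} leaves x {len(SPINES)} spines = {len(links)} fabric links")
```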

Most chassis-based core switches are not suited to a distributed core design because they are simply too large and expensive to use in leaf-spine configurations at scale.

Advantages of distributed architecture

Compared to the traditional design, the distributed core architecture offers significant advantages in three critical areas of data center switching:

• Scalability. The distributed core fabric can be massively and cost-effectively scaled through the use of multiple, low-cost Ethernet switches vs. traditional and expensive chassis-based systems.

• Performance. The distributed architecture provides up to 1:1 bandwidth by distributing a leaf-layer node's total uplink capacity equally across all of the spine-layer nodes. This makes it ideal for any-to-any traffic flow (a worked calculation follows this list).

• Resilience. Because each of the elements in a distributed architecture is free-standing, there is no single point of failure. This makes the distributed core architecture inherently more resilient than one centered on a chassis-based core.
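
To make the 1:1 bandwidth point concrete, the short calculation below compares a leaf node's server-facing (downlink) capacity with its spine-facing (uplink) capacity. The port counts are illustrative assumptions, not the specification of any particular Dell model.

```python
# Illustrative oversubscription calculation for a single leaf node.
# Port counts below are hypothetical examples, not a product line-up.

def oversubscription(downlinks, downlink_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing bandwidth to spine-facing bandwidth."""
    down = downlinks * downlink_gbps
    up = uplinks * uplink_gbps
    return down / up

# A leaf with 48 x 10GbE server ports and 4 x 40GbE uplinks is 3:1 oversubscribed.
print(oversubscription(48, 10, 4, 40))   # 3.0

# Matching uplink capacity to downlink capacity (e.g. 16 x 10GbE down,
# 4 x 40GbE up) gives the non-blocking 1:1 case described above.
print(oversubscription(16, 10, 4, 40))   # 1.0
```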

While vendors are promoting distributed core solutions to their customers, they have taken different approaches. Most of the large pure-play networking vendors have taken a monolithic approach to a distributed architecture, using extended standards and incorporating proprietary protocols, operating systems and even proprietary silicon into their products.

So while the core fabric is distributed, it must be managed as a single entity using the vendor’s own management solution and does not interoperate with other vendors’ products.

In contrast, Dell has taken a truly open-standards approach for its distributed core solutions. An open systems infrastructure relies on open, standards-based technology for interfaces, interconnect, the control plane, and other aspects of network operations.

The network thus supports any computing (hypervisors, Ethernet Virtual Bridging, vSwitch) or storage (NAS, NFS, CIFS, iSCSI, Fibre Channel) solution that also supports open standards. Dell’s open systems approach lets data center owners mix and match components, giving them unrivalled flexibility in choosing switches that suit their specific needs.

Dell distributed core systems

Using the Dell Force10 Z9000 fixed-configuration (i.e., non-chassis-based) systems that joined Dell's portfolio of networking products late last year, enterprises have a choice of three distributed core deployment models, depending on the port counts and performance levels they wish to achieve.

Featuring 32 40GbE and 128 10GbE ports, the Z9000 serves as a primary building block in a leaf-spine data center fabric. Purpose-built for leaf-spine networks, the two-rack-unit, 800-watt device provides ultra-fast, ultra-low-latency connectivity at a fraction of the cost of competing chassis-based products. At scale, the Z9000 can support up to 64 spine nodes and 128 leaf nodes to create a massive 160 Tbps core in an extremely small footprint.
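
The headline 160 Tbps figure can be sanity-checked with simple arithmetic. The sketch below assumes, for illustration, that each of the 128 leaf nodes contributes its full 32 x 40GbE of port bandwidth to the fabric.

```python
# Back-of-the-envelope check of the quoted fabric capacity, assuming each
# of the 128 leaf nodes contributes 32 x 40GbE of port bandwidth.

LEAF_NODES = 128
PORTS_PER_NODE = 32
GBPS_PER_PORT = 40

total_gbps = LEAF_NODES * PORTS_PER_NODE * GBPS_PER_PORT
print(f"{total_gbps / 1000:.2f} Tbps")   # ~163.84 Tbps, roughly the 160 Tbps quoted
```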

For those with less demanding requirements, Dell offers the S4810, which serves up industry-leading 1/10GbE top-of-rack densities combined with 40GbE capabilities.

Both the Z9000 and the S4810 come with redundant power supplies, consume little power, and include planned support for the Data Center Bridging (DCB) and Transparent Interconnection of Lots of Links (TRILL) standards now being developed.

Both (and other Dell systems) also run the same market-proven Dell Force10 Operating System (FTOS), a powerful and robust Layer 2 and Layer 3 operating system. 

Based on NetBSD, FTOS is designed for high performance and resiliency. It includes a hardware abstraction layer that makes it easy to port applications across product lines.

Like all Dell Force10 products, the Z9000 and S4810 can be managed using the Dell Force10 Management System for intuitive, GUI-based control. Dell Force10 equipment can also be managed by a range of third-party management platforms through standard management interfaces.
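
As an example of what "standard management interfaces" typically means in practice, the hedged sketch below polls a switch's system description over SNMP using the pysnmp library. The hostname and community string are placeholders, and SNMP is only an assumption about which standard interface a given platform would use.

```python
# Hedged example: reading a switch's sysDescr over SNMP, a common standard
# management interface. Hostname and community string are placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public"),                          # placeholder community
        UdpTransportTarget(("switch.example.com", 161)),  # placeholder host
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```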

Complementing FTOS and the Dell Force10 Management System is the Open Automation Framework, a suite of open, standards-based automation tools for data center operations.

Using these tools together or independently, enterprises can simplify operations in their data centers, big or small, virtual or conventional, while increasing operational efficiency and deployment speed.

To learn more about Dell’s distributed architecture, visit www.dell.com.
