Monday, July 16, 2012

The evolution of the data center: Abstraction layers, STP and TRILL

Read the original at:  http://www.techrepublic.com/blog/networking/the-evolution-of-the-data-center-abstraction-layers-stp-and-trill/5784?tag=nl.e102

Takeaway: Scott Lowe discusses the changing face of the data center, and how abstraction technologies are revolutionizing our old architectures.
The days of the multilayered, heavily tiered data center network are numbered. As organizations have moved further and further into implementing various abstraction technologies, they have been able to rethink how their data centers are designed and move to flat, simple networks that support very different traffic patterns than those of the past.

Abstraction

You might be wondering what I mean by the phrase “abstraction technologies.” Well, there are a few and they all revolve around various virtualization technologies available on the market.

Server virtualization

The rise of the hypervisor was the first development that has led us to where we are today. Data centers today look radically different than the data centers of just 10 years ago. Whereas legacy data centers often operated with hordes of staff nearby to handle physical tasks such as adding new hardware, deploying new operating systems, and cabling new systems, these tasks don't take place nearly as often as they used to. Sure, organizations still need to deploy new hardware from time to time, but from an ongoing operational perspective, service deployment has migrated from a hardware-intensive task to being primarily software-driven. At the same time, the amount of work performed by each physical server has increased, thus reducing the number of physical servers needed to run workloads.
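As a rough illustration of that consolidation effect, here is a minimal Python sketch, using entirely made-up CPU figures, that packs virtual machine workloads onto as few physical hosts as a simple first-fit heuristic allows:

    # Hypothetical illustration: first-fit packing of VM workloads onto hosts.
    # Demand is expressed in CPU cores; a real hypervisor scheduler also weighs
    # memory, I/O, and affinity rules.
    vm_demands = [2, 4, 1, 8, 3, 2, 6, 1]   # cores needed per VM (invented)
    host_capacity = 16                      # cores per physical server (invented)

    hosts = []                              # free capacity left on each host
    for demand in vm_demands:
        for i, free in enumerate(hosts):
            if free >= demand:
                hosts[i] -= demand
                break
        else:
            hosts.append(host_capacity - demand)   # bring another host online

    print(f"{len(vm_demands)} VMs fit on {len(hosts)} physical hosts")

Before virtualization, each of those workloads would typically have claimed a physical box of its own.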
However, this workload abstraction has created other challenges in related data center resources including storage and overall I/O.

Storage virtualization

For many organizations, gone are the days when individual servers carried mass storage in order to meet applications’ needs. We’ve ushered in a new era of massively shared storage, with many different storage tiers operating in tandem in an attempt to balance workload and cost. In order to bring some order to storage chaos, we’ve also seen efforts to abstract storage management duties away from the individual array level and to a higher level that includes pooled storage from across the organization. When you look at storage virtualization from a high level, it looks just like server virtualization. The individual workloads are abstracted away from the underlying hardware while complex software systems make decisions about where a workload belongs.
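To make the placement idea concrete, here is a minimal Python sketch, with invented tier names and performance numbers, of the kind of decision a storage virtualization layer makes when it assigns a workload to a pool:

    # Invented tiers; real arrays expose different metrics and richer policies.
    tiers = [
        {"name": "ssd-pool",  "max_iops": 50000, "cost_per_gb": 2.50},
        {"name": "sas-pool",  "max_iops": 10000, "cost_per_gb": 0.80},
        {"name": "sata-pool", "max_iops": 2000,  "cost_per_gb": 0.20},
    ]

    def place(workload_iops):
        """Pick the cheapest tier that still satisfies the IOPS requirement."""
        candidates = [t for t in tiers if t["max_iops"] >= workload_iops]
        return min(candidates, key=lambda t: t["cost_per_gb"]) if candidates else None

    for iops in (500, 8000, 40000):
        tier = place(iops)
        print(f"{iops:>6} IOPS -> {tier['name'] if tier else 'no tier is fast enough'}")

The individual array no longer matters to the workload; the pooling layer decides where the data belongs, much as a hypervisor decides where a VM runs.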

I/O virtualization

How many of you out there have masses of cables carefully tucked away in your server cabinets? You may have cables intended to connect your servers to many different data networks, additional cables to connect to storage systems (whether iSCSI or Fibre Channel), and a number of other kinds of connections. With the rise of stupendously fast network adapters and switching fabrics that can handle massive workloads, we're seeing vendors emerge that can combine these previously disparate connectivity methods into a single cable per host server, carrying all of the traffic necessary to support that server's workload. With this approach, called I/O virtualization, companies such as Xsigo claim they can reduce by up to 70% the number of cables, interface cards, and switch ports needed to support a complex data center environment.
Continuing with Xsigo as an example, their solution moves the majority of the effort needed to support server communications into a software layer, giving administrators unprecedented granularity and flexibility in managing data center networks. I clearly remember a systems engineering job 12 years ago in which I had to run six separate network cables to each server in three full racks and connect each cable to a network switch port. Today, I could accomplish the same goal with a single cable, one 10 GbE connection and switch port, and the ability to slice and dice that 10 GbE connection any way I want. That's power.
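That "slice and dice" idea can be pictured as a simple allocation exercise: carve one converged 10 GbE link into virtual interfaces, each with its own bandwidth share. The device names and shares in the Python sketch below are purely illustrative and are not Xsigo's actual configuration model:

    # Illustrative only: carving one converged 10 GbE uplink into virtual I/O devices.
    LINK_GBPS = 10.0

    virtual_devices = [
        ("vnic-prod",    4.0),   # production LAN traffic
        ("vnic-vmotion", 2.0),   # live-migration traffic
        ("vhba-storage", 3.0),   # iSCSI or FCoE storage traffic
        ("vnic-mgmt",    0.5),   # management network
    ]

    allocated = sum(share for _, share in virtual_devices)
    assert allocated <= LINK_GBPS, "oversubscribed beyond the physical link"

    for name, share in virtual_devices:
        print(f"{name:<13} {share:>4.1f} Gb/s ({share / LINK_GBPS:.0%} of the link)")
    print(f"headroom: {LINK_GBPS - allocated:.1f} Gb/s")

Re-slicing the link becomes an edit to a table like this, rather than another trip to the rack with a bundle of cables.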

The results

All of this abstraction, while it has enabled all kinds of new opportunities for efficiency and availability, has also significantly changed the way that traffic flows in the data center. Whereas traffic used to flow mostly into and out of the data center, internal traffic within the data center has increased exponentially. Applications have always had to communicate with one another, and different application tiers have had to chat on the network from time to time, but when you start running many workloads on a single host and then constantly moving those running workloads around the data center at will, things change.
Simply put, tools are becoming increasingly bandwidth-intensive in ways that couldn’t be anticipated a decade ago.

Spanning Tree Protocol

For years, organizations have sought to control network issues through the use of protocols such as Spanning Tree Protocol (STP), which has generally worked well even though it introduced its own challenges. STP has been robust and capable.
But STP is based on blocking links rather than on making the most efficient use of available network resources. In these days of high bandwidth needs inside the data center, such brute-force methods, while necessary to solve critical networking issues, can result in less than optimal performance and might even increase costs as traffic is forced over what could be expensive links.
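To see why blocking wastes capacity, consider this simplified Python sketch. It builds a spanning tree over a small, made-up switch topology with a breadth-first search from an arbitrarily chosen root; real STP elects a root bridge and compares path costs via BPDUs, but the net effect is the same: every link that is not on the tree gets blocked:

    from collections import deque

    # Made-up topology: three core switches in a triangle plus one access switch.
    links = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}

    def neighbors(node):
        for a, b in links:
            if a == node:
                yield b
            elif b == node:
                yield a

    # A breadth-first search from the "root bridge" marks the tree links.
    root = "A"
    visited, tree = {root}, set()
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nxt in neighbors(node):
            if nxt not in visited:
                visited.add(nxt)
                tree.add(frozenset((node, nxt)))
                queue.append(nxt)

    blocked = [link for link in links if frozenset(link) not in tree]
    print("forwarding:", sorted(tuple(sorted(link)) for link in tree))
    print("blocked:   ", blocked)   # the B-C link sits idle even when A-C is congested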

Figure A: Spanning Tree is blocking a port.

Transparent Interconnection of Lots of Links (TRILL)

What if, instead of blocking links out of fear of loops, you could constantly make use of all of the network paths at your disposal? You wouldn't be shutting down perfectly good network paths just to guard against a loop.
This is where an emerging technology known as TRILL (Transparent Interconnection of Lots of Links) comes into play. To put it simply, TRILL lets you use all of your network paths all the time without fear of loops, effectively eliminating the need for Spanning Tree in the environment.
Through the use of TRILL, all paths become a single large mesh in which all paths are valid. By enabling the possibility for all paths to remain available at all times, organizations can better support high bandwidth, low latency workloads.
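A rough way to picture the difference is with equal-cost multipath over a similar topology. The Python sketch below uses a hypothetical four-switch mesh and a far simpler computation than the IS-IS link-state routing TRILL actually performs, but it shows the idea: every shortest path stays usable, and flows are spread across all of them instead of being pinned to a single tree:

    from collections import deque

    # Hypothetical four-switch mesh; TRILL itself computes paths with IS-IS.
    adjacency = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C"],
    }

    def shortest_paths(src, dst):
        """Return every equal-cost shortest path from src to dst (unit link costs)."""
        paths, queue, best = [], deque([[src]]), None
        while queue:
            path = queue.popleft()
            if best is not None and len(path) > best:
                break
            if path[-1] == dst:
                best = len(path)
                paths.append(path)
                continue
            for nxt in adjacency[path[-1]]:
                if nxt not in path:
                    queue.append(path + [nxt])
        return paths

    paths = shortest_paths("A", "D")
    print("equal-cost paths:", paths)            # both A-B-D and A-C-D remain active
    for flow in ("web->db", "vm-migration", "backup"):
        chosen = paths[hash(flow) % len(paths)]  # toy per-flow hashing across paths
        print(f"{flow:>13} -> {'-'.join(chosen)}")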

Figure B: TRILL uses all available links all the time.

Debate ensues

As you might imagine, the thought of throwing away a technology that has performed well for years does not sit well with everyone. There is debate in the networking community about just what, if anything, needs to be done to adjust to emergent traffic patterns in data centers.
Defenders of Spanning Tree indicate that the protocol has been around for a long time and companies need to be careful before jumping on what’s new and shiny. Protocols come and protocols go, but it’s the rare one that can last decades, making the jump from the earliest days of Ethernet all the way up to today’s modern 10 GbE speed demons.
On the other side of the debate are those who feel that every protocol has its day and that Spanning Tree is due to die in favor of something a bit more flexible and efficient.
Still others advocate for a combination of the two, replacing a central core network with a TRILL-based mesh core and joining smaller STP domains at strategic locations to reduce the impact of an outage.

Reducing tiers

Finally, let’s talk about the need for three networking layers. Historically, we’ve seen networks designed with three tiers in mind:
  • Core
  • Distribution
  • Access
These three tiers were designed with performance, security, and scalability in mind. Further, the design allowed the deployment of networking gear that lacked the horsepower to combine multiple tiers into a single unit.
In a two-tier network, the distribution layer is eliminated, with its services rolled up into the core or down into the access layer. With modern equipment that can more than keep up, and with vendors selling gear designed for these newly converged tiers, the two-tier approach has become more common.
Two-tier networks are also easier to make redundant and are just as scalable as their three-tier counterparts.
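As a small illustration of the collapsed design, the Python sketch below (switch names invented) uplinks every access switch to both core switches and checks that any two access switches are at most two hops apart, even with one core switch down:

    from itertools import combinations

    cores = ["core1", "core2"]
    access = [f"access{i}" for i in range(1, 7)]

    # Every access switch uplinks to every core switch (collapsed core/distribution).
    links = {(a, c) for a in access for c in cores}

    def hops(src, dst, dead=()):
        """Access-to-access path length through any surviving core switch."""
        shared = [c for c in cores
                  if c not in dead and (src, c) in links and (dst, c) in links]
        return 2 if shared else None

    assert all(hops(a, b) == 2 for a, b in combinations(access, 2))
    assert all(hops(a, b, dead=("core1",)) == 2 for a, b in combinations(access, 2))
    print(f"{len(access)} access switches, {len(links)} uplinks, worst case 2 hops")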

Summary

Times change, and today's data centers are adjusting to the new realities they face. With changing traffic and usage patterns come new ways to think about data center architecture.
What are you doing in your own organization? Are you reducing network tiers? Are you looking at tools such as TRILL to either supplement or supplant STP?
