Eric's Technical Outlet

Learning the hard way so you don't have to

10GbE Is Certainly in Your Future, but Must It Be in Your Present?

It’s interesting to watch how people become accustomed to a certain standard to the point that they have difficulty imagining anything else. One fairly recent trend in some computing circles is the insistence that all new server systems come with 10GbE adapters by default. That position has merit, but it’s not a good rule… yet.

I’m never shy about criticizing “advice” given by amateurs-posing-as-professionals, but this particular bit of guidance is often given out by respectable, intelligent, experienced, and well-meaning individuals. The problem is simply that they’re used to working at or with fairly large installations where gigabit has been ubiquitous for so long that they cannot conceive of anything less.

Usually, I see this recommendation being made by speakers, trainers, company spokespersons, and others I have no interaction with. I was, however, able to talk to one individual about it. He was truly shocked to learn that my last two positions, which spanned 2008 through 2012, were both in facilities that ran 100BASE-TX Ethernet for hundreds of people, and neither was in any hurry to upgrade. The money was a concern, of course, but the bigger issue was that there was simply no way to make the business case for gigabit or better. The only gigabit switches I had in the position immediately prior were those connected to the Hyper-V servers. None of the other systems had any real use for them.

Of course, it’s annoying to be stuck on a 100Mb line when you want to move an ISO file to a system, but beyond that, what common use do average users and servers have for higher speeds? User files are usually no more than a few megabytes in size, database servers are designed to keep the big data local, e-mail and web servers are sized for Internet connection speeds that are still a fraction of LAN speeds, printers can’t even keep up with 10Mb connections, and so on. What you’ll commonly find is that administrators don’t really know what their networks are doing. I’ve used MRTG to watch the traffic load at a facility serving 200 users, and network utilization was nearly flat even during peak hours. Any time it wasn’t, you could expect to find an over-the-network backup job or a problem of some sort.
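
MRTG does this by polling interface byte counters (usually over SNMP) at fixed intervals and graphing the deltas. If you just want a one-off sanity check rather than a full monitoring setup, something like the following minimal sketch gives a rough utilization figure. It assumes a Linux host, an interface named eth0, and a 100Mb link; adjust those for your own environment.

```python
# Rough interface-utilization check, in the spirit of what MRTG does via SNMP.
# Assumptions: Linux host, interface "eth0", 100 Mb/s link -- adjust
# INTERFACE and LINK_BITS_PER_SEC for your environment.
import time

INTERFACE = "eth0"
LINK_BITS_PER_SEC = 100_000_000  # 100 Mb/s
POLL_SECONDS = 60

def read_bytes(interface):
    """Return (rx_bytes, tx_bytes) for the interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(interface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError(f"interface {interface} not found")

rx1, tx1 = read_bytes(INTERFACE)
time.sleep(POLL_SECONDS)
rx2, tx2 = read_bytes(INTERFACE)

# Convert the byte deltas to average bits per second over the interval.
rx_bps = (rx2 - rx1) * 8 / POLL_SECONDS
tx_bps = (tx2 - tx1) * 8 / POLL_SECONDS
print(f"rx: {rx_bps / LINK_BITS_PER_SEC:.1%} of link, "
      f"tx: {tx_bps / LINK_BITS_PER_SEC:.1%} of link")
```

A real MRTG (or similar) deployment does this continuously and keeps history, which is what you actually want for spotting that overnight backup job.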

Step up to the next tier of network, where you have a few thousand users. Here, you’re going to find gigabit everywhere. Your network utilization will be higher, of course, but the network is also ten times faster, and it’s been subnetted and traffic-shaped. What you’ll find is that there’s really not much contention here, either.

So, who is calling for 10GbE as the new default? It’s people whose jobs involve moving a lot of data in a hurry. One of those jobs is virtualization. With the amount of RAM you can put into physical computers now, you might want to migrate guests that cumulatively hold dozens or even hundreds of gigabytes of RAM. Trying to move all of that over a gigabit network can be downright painful. The issue is that these moves are the exception for network traffic, not the norm. Likewise, 10GbE should be the exception.
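
To put rough numbers on that pain, here’s a back-of-the-envelope sketch. The 100 GB cumulative guest memory and the ~70% effective throughput are illustrative assumptions, not measurements, and real live migrations also re-copy memory pages that change mid-move, so treat these as lower bounds.

```python
# Back-of-the-envelope live-migration timing. The guest memory size and the
# effective-throughput factor are illustrative assumptions, not benchmarks.
GUEST_MEMORY_GB = 100          # cumulative RAM of the guests being moved
EFFICIENCY = 0.7               # assume ~70% of line rate after protocol overhead

def transfer_minutes(link_gbps):
    """Minutes to push GUEST_MEMORY_GB across a link of the given speed."""
    bits_to_move = GUEST_MEMORY_GB * 8 * 10**9
    effective_bps = link_gbps * 10**9 * EFFICIENCY
    return bits_to_move / effective_bps / 60

for gbps in (1, 4, 10):
    print(f"{gbps:>2} Gb/s link: ~{transfer_minutes(gbps):.0f} minutes")
```

Roughly 19 minutes at gigabit versus about two minutes at 10GbE under those assumptions. Painful, yes, but also not something most shops are doing all day long.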

Of course, there are other uses. For instance, a new feature in Windows Server 2012, provided by the Hyper-V virtual switch and network teaming, is converged fabric. This is a fancy name that means a lot of disparate logical networks can share a single physical link, or a group of physical links, without being matched in a one-to-one fashion. So, a single 10GbE card can pipe traffic for different VLANs onto and off of a Windows Server 2012 system. This is a great feature. But it can also aggregate a bunch of gigabit adapters and use those in exactly the same way. So, if you haven’t got dozens of gigabytes to move around, the multiple-gigabit path is just as viable as the 10GbE path. The other factor many of these experts aren’t really considering is cost.

The response here is usually to look at the cost of adapters. A typical quad-port gigabit adapter runs around $400-500. A dual-port 10GbE NIC costs around $700. So, for only a few hundred dollars more, you can have five times the bandwidth capacity. For the institution I work in now, if we get more than a couple of people in a room discussing it, it only takes about 15 minutes before the conversation is no longer worth having. That’s how a lot of these decisions get made: sometimes it’s cheaper to just buy the more expensive system than to keep a lot of well-compensated professionals talking about it. For smaller organizations, the math is much more complicated. For one thing, a single adapter, multi-port or not, is a single point of failure, so if you want redundancy, the price just doubled. And the adapter isn’t the only cost: the infrastructure has to support it as well. 10GbE infrastructure can get pricey in a hurry; so can gigabit, which is the other reason 100Mb is still so common. And again, are you really going to use all that capacity?
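
For what it’s worth, here is the raw per-gigabit arithmetic on the adapters alone, using the ballpark prices above and deliberately ignoring switch ports, optics, and cabling, which is exactly where 10GbE gets expensive:

```python
# Cost per usable gigabit for the two adapter options mentioned above.
# Prices are the ballpark figures from this post; switch ports, optics, and
# cabling are deliberately left out, and they matter a great deal for 10GbE.
adapters = {
    "quad-port gigabit": {"price": 450, "ports": 4, "gbps_per_port": 1},
    "dual-port 10GbE":   {"price": 700, "ports": 2, "gbps_per_port": 10},
}

for name, a in adapters.items():
    total_gbps = a["ports"] * a["gbps_per_port"]
    print(f"{name}: {total_gbps} Gb/s total, "
          f"${a['price'] / total_gbps:.0f} per Gb/s")
```

On a per-gigabit basis the 10GbE card looks like a bargain, which is why the real question is whether you’ll ever use that capacity, not whether it’s a good unit price.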

Will prices fall? Certainly. Will data loads grow enough to need it? Not in the near term. Yes, I’m well aware of those who proclaimed we’d never need systems above some now-ridiculously-small number. We’re always going to need bigger systems. But are we all going to need them and, more importantly, how soon? Some data needs are reaching a plateau. It will never take more bandwidth to send the text in this blog post than it does right now. Will the amount of metadata about blog posts like this one expand? Almost certainly. But will it increase exponentially? Probably not. What about something like audio streams? Will they grow? Perhaps, but only if the cost of transmitting data drops enough relative to processing power to justify reducing compression, which is unlikely to happen soon. Video? That will probably continue to grow, but again, exponentially? And soon? What about all this “big data” talk? Most of that “big data” is sitting idle, waiting to be mined, not actively moving across network links. Just because a super-system holds 7 petabytes of data doesn’t mean it needs to move 7 petabytes of data.

So, if you’re hearing these recommendations and feel pushed to jump to 10GbE before you’re certain, don’t panic. If you truly need 10GbE, you probably already know that you need it and you’ve got it in your datacenter or on the way. If your systems are working well as they are, don’t feel like you have to rush to the next big thing. Odds are high that if you aren’t pushing the limits of your equipment today, it will die of old age before you’ll have a compelling reason to jump to 10GbE. It will be there when you really need it.
