The Evolution of Server Technologies

By Dominic Wellington, Chief Evangelist, Moogsoft


Change has always been a constant in IT. We used to know at least the direction that change was going in, but not anymore. How many servers do you have? What do they do? Previously, questions like these were easy to answer. You gave the junior member of the system admin team running shoes and a clipboard, and turned them loose in the datacenter to “count noses.” While servers were occasionally forgotten, eventually somebody would stumble over the dust-covered box, follow the cables, and bring the prodigal back into the fold.

Technology was equally predictable. Moore’s Law meant that transistor counts – and, in practice, performance – doubled roughly every 18 months. OS upgrades came at a predictable pace. RAM and storage technologies also evolved along their own trajectories, but major discontinuities were rare. Today, however, things are quite a bit more complex. As Moore’s Law starts to show its limitations, evolution has continued, but in different directions. A new server is no longer simply twice as fast in clock speed as the one it replaces, and it looks far less like its predecessor than used to be the case.

How Servers Have Evolved

The initial change was the move to server virtualization, so that one physical server could host any number of virtual servers. Yet many IT organizations were unpleasantly surprised to discover that while virtual servers were very easy to create, nobody was taking care of retiring them when they were no longer needed. This explosion of virtual servers quickly led to a major capacity crunch, forcing better management and planning.
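To make that sprawl concrete, here is a minimal sketch of an inventory-drift check, assuming the libvirt Python bindings and a local QEMU/KVM host; the KNOWN_SERVERS set is a hypothetical stand-in for a real CMDB export. It simply flags VMs the hypervisor actually hosts that the inventory has never heard of.

```python
# A minimal sketch: compare the VMs a hypervisor actually hosts against
# the inventory IT thinks it has. Assumes the libvirt Python bindings
# and a local QEMU/KVM host; KNOWN_SERVERS is a hypothetical stand-in
# for a real CMDB export.
import libvirt

KNOWN_SERVERS = {"web-01", "db-01", "app-01"}  # hypothetical CMDB inventory

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        if dom.name() not in KNOWN_SERVERS:
            # A VM nobody retired -- or nobody ever registered.
            print(f"untracked VM: {dom.name()} ({state})")
finally:
    conn.close()
```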

Then came the various flavours of cloud computing. Public cloud meant servers running outside the datacenter, which meant outside the firewall – outside the perimeter. This of course resulted in all sorts of new security and compliance headaches. Procurement, however, was out of IT’s control: users – often developers – needed only a credit card, and adoption happened while IT was still debating what to do.

Private cloud seemed like a better alternative, but when fully implemented as self-service, it often exacerbated the capacity problems that virtualization had first introduced.

Finally, we are now dealing with containers, which despite some attempts to treat them as just another sort of virtual server, represent something rather different, with multiple containers sharing a single underlying operating system. Old-time mainframe and Unix hands tend to scoff at the idea of this sort of partitioning as innovation, but it is the scale of adoption that is making the difference.
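The kernel-sharing point is easy to demonstrate. A minimal sketch, assuming a local Docker daemon and the stock ubuntu image: a container reports the host’s own kernel version, because unlike a virtual machine it does not boot an operating system of its own.

```python
# A minimal sketch, assuming a local Docker daemon and the stock
# "ubuntu" image: a container reports the host's own kernel version,
# because containers share the host kernel instead of booting their own.
import platform
import subprocess

host_kernel = platform.release()  # kernel release on the host

# Ask a throwaway container for its kernel release.
container_kernel = subprocess.check_output(
    ["docker", "run", "--rm", "ubuntu", "uname", "-r"],
    text=True,
).strip()

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")
print("same kernel" if host_kernel == container_kernel else "different kernels")
```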

Why Embrace Evolution?

All of this change has left IT teams sometimes scrambling to keep up with users, especially developers, who want to adopt all the newest technologies right away. Users are also able to access those technologies much more easily than in the past. Previously, the IT department owned the physical data center, and had time to enroll servers in its management processes while space was found in the racks, power and network cables were run, operating system disks were swapped in and out to install the system, and all the many tasks required for a server to start, well, serving were completed.

The strength of the new model of IT is that there is no such bottleneck – but IT management techniques and tools have not kept up. All too often, IT professionals struggle to answer the questions at the beginning of this article: “How many servers do you have? What do they do?” The problem is that knowing the answer is the prerequisite for most of the processes and management solutions that exist today. Without an up-to-date list of systems and dependencies, many things simply stop working.

“How many servers do you have? What do they do? The problem is that knowing the answer is the prerequisite to most processes and management solutions that exist today”

Monitoring has generally been assumed to require knowledge of what is to be monitored. Not having this data leads to silent failures, where some unmonitored piece of infrastructure goes down and turns out to have been critical to some service or functionality. Troubleshooting these situations is a nightmare, because all of the indicators were green – right up until everything burned down, fell over and sank into a swamp.

Processes do exist to monitor such things – that is not the issue. However, they tend to assume that it takes six to eight weeks to provision a server, so there is plenty of time to register it and enrol it in all the various monitoring and tracking systems. That comforting assumption no longer holds true in the age of VMs, containers, and all kinds of clouds. As the famous adage goes, “servers should be cattle, not pets.” Pets have individual names and are coddled and cosseted, healed when they are sick and documented – perhaps obsessively – by their owners. Cattle, on the other hand, get numbers, not names, and are considered entirely interchangeable. If one gets sick, you just get a new one and keep right on going.

This means that you also have to change your thinking from being the loving, dedicated caretaker of a small number of individual pets, to being a farmer in charge of a large herd.
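In practice, the farming model means each new instance enrolls itself in monitoring at boot rather than waiting for a human with a clipboard. A minimal sketch of that self-enrollment, assuming a hypothetical registration endpoint at monitoring.example.com – a real deployment would call its own monitoring tool’s API, typically from cloud-init or a startup script.

```python
# A minimal sketch of "cattle"-style self-enrollment: each new instance
# registers itself with monitoring at boot instead of waiting weeks for
# a manual process. The MONITORING_API endpoint and its payload are
# hypothetical; a real system would use its monitoring tool's own API.
import json
import socket
import urllib.request

MONITORING_API = "https://monitoring.example.com/api/register"  # hypothetical

payload = {
    "hostname": socket.gethostname(),
    "address": socket.gethostbyname(socket.gethostname()),
    "role": "web",             # e.g. injected via cloud-init or user data
    "lifecycle": "ephemeral",  # tells monitoring not to page when it vanishes
}

req = urllib.request.Request(
    MONITORING_API,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print("registered:", resp.status)
```

Tagging the instance as ephemeral is the point: the herd is expected to shrink as well as grow, so a disappearing member is routine, not an incident.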

Evolution is Inevitable

The evolution of IT tools and processes is happening, but the bottleneck is in changing the thinking of the IT organizations themselves. Trying to treat a container in a shared service accessed over the Internet in the same way as the 1U server in the rack in the data center downstairs is a recipe for disaster. IT departments now need to accept the new definitions of what a server is if they are to avoid that disaster. The good news is that the rewards are also significant. Thanks to this evolution in what a server can be, IT has the opportunity to be much faster and more efficient than ever before, delighting users and enabling new business success.
