Netcentric View:

RTOSes in the age of Network Processing

By Bernard Cole
iApplianceWeb
(11/13/01, 03:10:52 AM GMT)

 

As network processors become commonplace in switches and routers on the wide area network and within the Internet Data Center, how will operating systems have to adapt and adjust to the new realities of the "network as data-flow processor?"

The communications segment has risen to dominance in terms of total share of market and engineering design focus. Real-time operating system (RTOS) vendors estimate that the communications segment constitutes as much as 50 to 60 percent of their annual business. Because of a new architectural paradigm that requires embedded processor engines to be more focused on data flow than on traditional real-time control, RTOSes must now operate in extremely complex, nuanced, and demanding environments.

First, there are the performance, real-time response and degree of determinism required by hardware targeted at different segments of the market. One segment is the network core, where data rates are in the 10 to 100 Gbit/sec range. Another is the equipment that fits into local telecom switching centers and wireless base stations, which deals with data rates in the 1 to 10 Gbit/sec range. A third segment is from the curb to the premises, where data rates range from as low as a few tens of thousands to several millions of bits per second.

Datacom switches and network routers divide the work among three planes: data plane processors concerned only with moving packets; control plane processors that direct the data plane processors in packet verification, classification, modification, compression and decompression, encryption and decryption, and traffic shaping; and management plane processors, mostly server-based, that provide the back-end accounting and billing services.
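
To make that division of labor concrete, here is a minimal sketch in C of how a data-plane fast path might hand exception packets off to a control-plane process. All names and structures are invented for illustration; they are not drawn from any particular vendor's API.

```c
/* A minimal sketch of the data plane/control plane split.
 * All names here are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define NO_ROUTE (-1)
#define N_FLOWS  256

typedef struct {
    uint32_t flow_id;
    uint32_t len;
} packet_t;

static int route_table[N_FLOWS];   /* written by the control plane */

/* Data plane: bounded, per-packet work only -- look up and forward. */
static void fast_path(const packet_t *pkt)
{
    int port = route_table[pkt->flow_id % N_FLOWS];
    if (port == NO_ROUTE) {
        printf("punt flow %u to control plane\n", pkt->flow_id);
        return;                    /* exception: slow path takes over */
    }
    printf("forward flow %u -> port %d\n", pkt->flow_id, port);
}

/* Control plane: unbounded work (routing protocols, policy); its only
 * output visible to the data plane is an updated table entry. */
static void control_plane_install(uint32_t flow_id, int port)
{
    route_table[flow_id % N_FLOWS] = port;
}

int main(void)
{
    for (int i = 0; i < N_FLOWS; i++)
        route_table[i] = NO_ROUTE;

    packet_t pkt = { .flow_id = 42, .len = 64 };
    fast_path(&pkt);               /* punted: no route installed yet */
    control_plane_install(42, 3);  /* control plane resolves the flow */
    fast_path(&pkt);               /* now handled entirely in the fast path */
    return 0;
}
```

The point of the split is that the fast path does only bounded, per-packet work, while all unbounded work lives in an ordinary process whose only job is to keep the fast path's tables current.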

Depending on the segment and performance requirements, a variety of hardware architectural solutions are being employed to create a range of specialized data flow engines. Examples include totally new highly parallelized processors, multiprocessor arrays of existing sequential-flow RISC processors, and single-chip arrays of multiple processing elements, to name a few.

Onto these architectures RTOS vendors are attempting to impose a range of solutions that, while superficially similar, have subtle and telling differences that could affect their usefulness in the different niches: object versus process kernels; microkernel versus monolithic kernel; direct interprocess communications-based messaging versus indirect message/mailbox mechanisms; coarse- or fine-grained memory protection; task- versus driver-based interrupt structures.
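
One of those differences is worth a sketch. Direct messaging sends to a specific process and typically blocks for a reply; an indirect mailbox decouples sender and receiver in time. Below is a minimal, single-threaded illustration of the mailbox side in C; a real kernel would add blocking and locking, and all names are invented.

```c
/* A single-threaded illustration of an indirect mailbox mechanism. */
#include <stdbool.h>
#include <stdio.h>

#define MBOX_DEPTH 8

typedef struct {
    int slots[MBOX_DEPTH];
    int head, tail, count;
} mailbox_t;

/* Sender posts to a named mailbox and continues; it never needs to
 * know which task, if any, will eventually consume the message. */
static bool mbox_post(mailbox_t *mb, int msg)
{
    if (mb->count == MBOX_DEPTH)
        return false;              /* full: caller blocks or drops */
    mb->slots[mb->tail] = msg;
    mb->tail = (mb->tail + 1) % MBOX_DEPTH;
    mb->count++;
    return true;
}

/* Receiver fetches whenever it is ready -- unlike a direct
 * send-and-block-for-reply, the two tasks are decoupled. */
static bool mbox_fetch(mailbox_t *mb, int *msg)
{
    if (mb->count == 0)
        return false;              /* empty */
    *msg = mb->slots[mb->head];
    mb->head = (mb->head + 1) % MBOX_DEPTH;
    mb->count--;
    return true;
}

int main(void)
{
    mailbox_t mb = {0};
    int msg;
    mbox_post(&mb, 7);
    if (mbox_fetch(&mb, &msg))
        printf("fetched %d\n", msg);
    return 0;
}
```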

After poring over data sheets and app notes and trying to figure out what is required in this market and what the best OS architecture might be, I finally worked up a common set of requirements across the various applications.

These include extensive multiprocessing support; quick, efficient, direct messaging between cooperating elements, both within a specific location and throughout a network; fine-grained memory segmentation and protection; high availability; in-service upgradability; heartbeat services and checkpointing to allow more sophisticated failure detection; and sophisticated failure-recovery mechanisms. Determinism and real-time interrupt response, by contrast, vary from application to application in this market segment, and they are not always at the top of the list of specifications the engineer is looking for.
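
As an illustration of one item on that list, here is a minimal heartbeat sketch in C: each worker periodically stamps a shared slot, and a supervisor flags any worker whose stamp has gone stale. The names, the use of wall-clock time, and the two-second timeout are all invented for illustration.

```c
/* A minimal heartbeat/failure-detection sketch; names invented. */
#include <stdio.h>
#include <time.h>

#define N_WORKERS       4
#define HEARTBEAT_TTL_S 2      /* silent this long -> presumed failed */

static time_t last_beat[N_WORKERS];

/* Called periodically by each worker task. */
static void heartbeat(int worker)
{
    last_beat[worker] = time(NULL);
}

/* Called by the supervisor; returns a failed worker's id, or -1. */
static int check_liveness(void)
{
    time_t now = time(NULL);
    for (int w = 0; w < N_WORKERS; w++)
        if (now - last_beat[w] > HEARTBEAT_TTL_S)
            return w;          /* stale: trigger recovery or restart */
    return -1;
}

int main(void)
{
    for (int w = 0; w < N_WORKERS; w++)
        heartbeat(w);          /* every worker starts out alive */

    int failed = check_liveness();
    if (failed < 0)
        printf("all workers alive\n");
    else
        printf("worker %d presumed failed\n", failed);
    return 0;
}
```

Paired with checkpointing of each worker's state, a supervisor like this is what lets a high-availability system restart a failed component without losing work.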

Almost every commercial RTOS I've looked at offers some or all of these features. The big question remains: what is the best way to get them? By creating a new RTOS from scratch, or by bolting the various features onto an existing one on a catch-as-catch-can basis?

To find the answer we need to go back to basics -- in this case, back to the OS architecture from which virtually every modern OS claims inheritance, Mach -- and then work forward from there to determine which type of RTOS comes closest and retains the most of the original features.

There are a number of other seminal OSes, such as Amoeba, the original Chorus, Linda, Munin, and AT&T's Plan 9 and its direct descendant Inferno, which I nominated in my book (Prentice-Hall) as my personal favorite. But since Mach is the one OS architects mention to me most often, let's look at it and see if it tells us where we need to go.

Mach, in common with most of the others on my list, places considerable focus on operation in a distributed computing environment. It grew out of research at Carnegie Mellon University in the mid-'80s. A key concept in Mach was a small microkernel providing only the barest essentials for making a system work -- process and memory management, communications and I/O services -- leaving most other traditional operating system functions, such as files and directories, in user space. Mach focused on exploiting parallelism in both the system and application environment, on distributed computing, and on transparent access to network resources. Great reliance was placed on message passing: every machine in a distributed environment ran a user-level network message server, a multithreaded process that performed a variety of messaging services both locally and throughout the network.
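
The message-passing primitive at the center of that design survives in Mach-derived systems today. The compressed sketch below follows the classic Mach 3 interface as I understand it: a task allocates a receive port, and both sending and receiving go through the single mach_msg() call. Error handling is omitted.

```c
/* A compressed sketch of Mach port-based IPC (Mach 3 interface);
 * error handling omitted for brevity. */
#include <mach/mach.h>

typedef struct {
    mach_msg_header_t header;
    int               payload;   /* small inline data; real messages can
                                    also carry out-of-line memory */
} simple_msg_t;

void mach_ipc_sketch(void)
{
    mach_port_t port;
    mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);

    /* Send: even a local interaction is a message to a port. */
    simple_msg_t out = {0};
    out.header.msgh_bits        = MACH_MSGH_BITS(MACH_MSG_TYPE_MAKE_SEND, 0);
    out.header.msgh_size        = sizeof out;
    out.header.msgh_remote_port = port;
    out.payload                 = 42;
    mach_msg(&out.header, MACH_SEND_MSG, sizeof out, 0,
             MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

    /* Receive: the kernel queues messages on the port until asked. */
    simple_msg_t in;
    mach_msg(&in.header, MACH_RCV_MSG, 0, sizeof in,
             port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
}
```

Because the port, not the process, is the addressable entity, a message server can forward the same messages across the network transparently -- which is exactly the distribution story described above.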

While most OSes have borrowed extensively from Mach, they have for the most part done so in bits and pieces, without taking up the distributed aspects of this pioneering effort. However, a few commercial RTOSes do draw heavily from Mach and similar efforts, and seem ideally suited to the highly distributed nature of the network processor-based datacom and networking infrastructure.

There are a handful of commercial RTOSes built on direct message-passing microkernels of no more than 20 Kbytes or so of code, implementing only the most basic services -- interprocess communications, low-level network communications, process scheduling and interrupt dispatching -- with a minimum number of kernel calls, usually no more than 10 to 20 at most. All other higher-level services are provided as needed by optional processes that are virtually indistinguishable from user-written applications.
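
To give a feel for how small that call surface is, here is a hypothetical sketch of the entire kernel-call interface of such a microkernel -- about a dozen calls, with everything else implemented as ordinary user-level processes. Every name is invented for illustration; the style resembles direct message-passing kernels, but this is not any vendor's actual API.

```c
/* A hypothetical microkernel call surface; all names invented. */

/* Interprocess communication: synchronous send/receive/reply. */
int msg_send(int chan, const void *req, int rlen, void *rep, int plen);
int msg_receive(int chan, void *req, int rlen, int *sender);
int msg_reply(int sender, const void *rep, int plen);

/* Process scheduling. */
int proc_create(void (*entry)(void), int priority);
int proc_destroy(int pid);
int proc_yield(void);

/* Interrupt dispatching: an interrupt becomes a message to a driver task. */
int intr_attach(int irq, int chan);
int intr_detach(int irq);

/* Channels for local or network-transparent low-level communication. */
int chan_create(void);
int chan_connect(int node, int pid, int chan);
int chan_destroy(int chan);
```

Everything a traditional OS would put in the kernel -- file systems, network protocol stacks, device management -- becomes a server process reached through msg_send(), no different from any application.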

First, because such microkernels are so small, they can offer a high degree of memory protection and fault tolerance with relatively little overhead, an advantage in many of the multiprocessor-based switch and router designs.

Second, since there is very little code running in the kernel, the kernel is highly unlikely to be the source of any failures. Third, individual processes and components can be started and stopped dynamically, which means they can be altered on the fly without shutting down the system -- a significant benefit in high-availability systems.

Because such kernels are so small, memory protection does not seem to affect performance. Fourth, because of their sophisticated message passing, such RTOSes fit well in both multiprocessor and distributed computing applications: processes can continue to interact no matter where they are located, and operations can easily be synchronized -- a problem all network applications face.
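
Location transparency is what makes that fourth point work. Continuing with the hypothetical call set sketched earlier, the caller's code is identical whether the peer runs on the same CPU, on another card in the chassis, or across the network -- only the channel handle differs:

```c
/* Continuing the invented call set above: one code path for local
 * and remote peers alike. */
int query_peer(int chan)
{
    int request = 7, reply = 0;

    /* Blocks until the peer replies; the kernel (or its user-level
     * network message server) routes the message wherever the peer is. */
    if (msg_send(chan, &request, sizeof request, &reply, sizeof reply) < 0)
        return -1;
    return reply;
}
```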

Although such RTOSes may be the best solutions to the set of problems presented in this increasingly net-centric environment, that does not mean any of them will end up the winner. The winner is usually not the best or the fastest, but rather the one that is pretty good at most things in a wide range of environments, is relatively low cost and easy to support, and allows a developer to move quickly from concept to finished product.

At some point along the line, however, whatever RTOS wins will have to morph into something similar to what I have described.

What kind of RTOS do you think is most appropriate for the network market? What other research-oriented distributed operating systems should we look to for solutions? Do we have to look elsewhere for inspiration and solutions? Where? Which of the commercial RTOSes do you think will do the best job in this new environment, and why? Will one size fit all? Are there any common characteristics across all the various sub-segments of this market that need to be taken into account? Do you think the RTOS will go away, or take a different form? What form, and why?

This is a topic I want to continue to cover in future columns as the network evolves, and it is important that I hear from you.


Reader Feedback

I completely agree with the requirements you have set out for a microkernel on an NPU. But given the limitations of the processing elements of some of the modern NPUs (C-Port's C-5 and Intel's IXP1200 are the two I am most familiar with), I don't see how we can afford to accommodate even a small microkernel on each of these PEs. Both these NPs have less than about 10K of instruction memory per PE. Even a small microkernel (~20K or so) is much bigger than these typical instruction memory sizes.

Most of the NPs seem to provide hardware contexts and support hardware context switches within one or two cycles. Most applications I can think of for a data plane PE (networking protocols are the first that come to mind) typically will not require more than four contexts on a PE. Given this, do you really feel there is a need for a microkernel that provides extensive multiprocessing support?

Continuing with the previous thought, a multiprocessing kernel, however efficiently it is designed, will fare poorly compared with the single-instruction hardware context switches provided by these NPs. If a PE is dealing with data rates of 1 to 10 Gbit/s, do you think the use of microkernel services for multiprocessing (as opposed to the hardware multitasking offered by the PE itself) is justified?

Gaurav Vaid


Your article was quite interesting. I have a question based on the following text (culled out from the article):

"Because such kernels are so small, memory protection does not seem to affect performance."

This implies that the code required to use the MMU for memory protection is smaller in a microkernel architecture. Can we make such a general assumption? Or will it be the same amount of code in both cases? Second, memory protection will have to be enforced between processes as well, not just between the kernel and a process. Will the size of the kernel make a difference in this case?

Appreciate your insights.

T. Sridhar
Director, Engineering
Future Communications Software

Bernard Cole is site leader and editor of iApplianceWeb and an independent high technology editorial services consultant. He welcomes your feedback.



