Comparison of ATM and TCP Congestion Control


CSE588 Final Paper

Spring 1999

Steve Sjoquist and Andrew Tucker


This paper compares and contrasts the congestion control strategy of the Transmission Control Protocol (TCP) with that of Asynchronous Transfer Mode (ATM). We begin by giving a detailed overview of the implementation details for each approach, and follow up with a discussion of the similarities and differences. The final section covers observations and recommendations for applying the techniques to real situations.


1. Overview of TCP Congestion Control


Congestion control for TCP was introduced almost eight years after it had become widely used in the Internet protocol stack. The Internet was nearly collapsing from congestion problems and it was clear that something had to be done. Utilizing end-to-end mechanisms, TCP relies on almost no help from individual routers and assumes only FIFO queuing. Congestion avoidance mechanisms are not used; rather TCP deals with congestion when it occurs. Essentially, TCP tries to discover the bandwidth available and adjust its send rate to an appropriate level. This is sometimes referred to as "self-clocking" since acks are used to pace packet transmission.


There are three main components to TCP congestion control: additive increase/multiplicative decrease, slow start, and fast retransmit with fast recovery. We will discuss them separately, but TCP relies on all three together to achieve its overall congestion control.

Additive increase/multiplicative decrease is implemented in TCP by maintaining an extra window size variable that is separate from the window size advertised in the TCP header. The maximum window size then becomes the minimum of the congestion control window size and the advertised window size. As packets are successfully acked, the congestion window is incremented by

(MSS x MSS)/CongestionWindowSize

bytes where MSS is TCP's Maximum Segment Size. This effectively increases the window size by one MSS when a full window has been acked. This comparatively slow growth of the window is called "additive increase". If a packet is not successfully acked (e.g. the sender times out) the congestion window size is cut in half and the packet is resent; this is "multiplicative decrease". The window size then resumes additive increase and the process continues until all packets have been successfully transmitted.
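The increase/decrease policy above can be sketched in a few lines of Python (a simplified illustration of our own; real TCP stacks track this state per connection):

```python
MSS = 1460  # bytes per segment (a typical Ethernet-derived MSS)

def on_ack(cwnd):
    """Additive increase: each ack grows cwnd by MSS*MSS/cwnd bytes,
    i.e. about one segment per fully acked window."""
    return cwnd + (MSS * MSS) / cwnd

def on_timeout(cwnd):
    """Multiplicative decrease: halve the window, never below one segment."""
    return max(MSS, cwnd / 2)

cwnd = float(MSS)
cwnd = on_ack(cwnd)      # one ack on a one-segment window -> 2 * MSS
cwnd = on_timeout(cwnd)  # a loss halves it back to one MSS
```

Note that the per-ack increment shrinks as the window grows, which is exactly what makes the overall growth linear per round trip.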

Contrary to an intuitive interpretation of its name, slow start actually increases the expansion rate of the window size from linear to exponential. The window size starts at 1 and is then doubled each time an entire window is successfully acked without any timeouts. This continues until at least one timeout occurs, which is interpreted as a lost packet. The window size is then halved and additive increase/multiplicative decrease is resumed. This is done at the beginning of a connection so that TCP can find the approximate maximum bandwidth available and keep the pipe full as much as possible. It is called "slow start" because, before congestion control was added to TCP, the entire advertised window was sent right off the bat, which caused serious congestion problems. Slow start, on the other hand, progressively increases the window size until there is a dropped packet, which avoids the problem. Slow start is also used when a connection is "dead" waiting for a timeout.
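The doubling behavior can be sketched as follows (a toy model of our own: the network's capacity is reduced to a single hypothetical segment count, and exceeding it stands in for a drop):

```python
def simulate_slow_start(capacity_segments):
    """Double the window each round trip until the network drops a packet,
    modelled here as the window exceeding a hypothetical capacity."""
    cwnd = 1
    while cwnd <= capacity_segments:
        cwnd *= 2                 # exponential growth: 1, 2, 4, 8, ...
    return cwnd // 2              # timeout: halve, then resume additive increase

# with room for 10 segments the window grows 1, 2, 4, 8, 16 -> drop -> 8
```

The returned value is the window at which TCP hands over to additive increase/multiplicative decrease.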

Fast retransmit is a strategy that enhances TCP timeouts by sometimes resending a packet before the actual timeout has expired. If duplicate acks are received, fast retransmit assumes that a packet was dropped and retransmits it. To avoid reacting to duplicate acks that occur when no packet has actually been dropped, TCP waits for three duplicate acks before resending the packet. Fast recovery takes place after a fast retransmit: instead of cutting all the way back to a window size of 1 and using slow start, the window is cut in half and additive increase/multiplicative decrease is resumed. This means that slow start is only needed when a connection initially starts or after a genuine timeout.
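The three-duplicate-ack trigger can be sketched with a small counter (our own illustration; a real sender would also retransmit the segment and enter fast recovery when the trigger fires):

```python
DUP_ACK_THRESHOLD = 3  # TCP waits for three duplicate acks

class FastRetransmit:
    def __init__(self):
        self.last_ack = -1
        self.dup_count = 0

    def on_ack(self, ack_no):
        """Return True when the sender should fast-retransmit immediately."""
        if ack_no == self.last_ack:
            self.dup_count += 1
            if self.dup_count == DUP_ACK_THRESHOLD:
                return True       # third duplicate: fast retransmit fires
        else:
            self.last_ack = ack_no  # new ack: reset the duplicate counter
            self.dup_count = 0
        return False

fr = FastRetransmit()
acks = [5, 6, 6, 6, 6]            # the segment after 6 was presumably lost
fires = [fr.on_ack(a) for a in acks]
# fires -> [False, False, False, False, True]
```

Resetting the counter on every new ack is what makes isolated reordering (one or two duplicates) harmless.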


Although the TCP congestion control mechanisms work fairly well, there are some weaknesses. There is no ability to provide good quality of service control, which is necessary for real-time applications such as video conferencing. The requirement for accurate timeouts to maintain the congestion control rate is a burden for implementing TCP on some platforms. Also, the assumption that all dropped packets are caused by congestion causes problems with "lossy" connections such as wireless links.


2. Overview of ATM Congestion Control


ATM defines five service classes, each targeted for a specific Quality of Service (QoS) level that meets the data transmission needs of the class. One class, Available Bit Rate (ABR), is similar in spirit to TCP, in that it tries to use the available capacity of the network to transfer data. This class of service does not require a specific transfer rate to succeed, as would be the case with video or audio data, for example. Rather, ABR tries to send data at the maximum rate the network can support, modifying its transmission rate according to network conditions. To do this, ABR includes a feedback mechanism that regularly informs the sender of network conditions, allowing it to adjust its sending rate appropriately.


The ABR congestion control mechanism uses a set of parameters that are initially negotiated at connection setup. These parameters govern the sending rate of the source and the basic parameters include Minimum Cell Rate (MCR), Peak Cell Rate (PCR), and Initial Cell Rate (ICR). During transmission, the sender always transmits cells at an Allowed Cell Rate (ACR), which begins at ICR and changes according to the network conditions. The system uses a closed loop feedback mechanism to regulate the sender’s ACR, always keeping it between MCR and PCR. There are many more parameters used in ABR, some of which are described in the detailed sections that follow.
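Since ACR must always stay between MCR and PCR, every rate adjustment reduces to a clamp. A minimal sketch (the parameter values here are hypothetical, in cells per second):

```python
def update_acr(acr, delta, mcr, pcr):
    """Apply a rate change while keeping ACR inside the negotiated [MCR, PCR] band."""
    return max(mcr, min(pcr, acr + delta))

# hypothetical values negotiated at connection setup, in cells/second
MCR, PCR, ICR = 100, 10_000, 1_000

acr = ICR                                  # transmission begins at ICR
acr = update_acr(acr, +500, MCR, PCR)      # modest increase -> 1500
acr = update_acr(acr, -5_000, MCR, PCR)    # large decrease, clamped to MCR -> 100
```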

Resource Management Cells

ABR achieves closed loop control by periodically sending Resource Management (RM) cells from sender to receiver. These cells collect network condition information, and when a destination receives an RM cell, it returns it to the source node. The information each RM cell carries back allows the source to adjust its ACR. There are three important fields in an RM cell that carry network condition information back to a source. First is the Congestion Indication (CI) bit, which indicates that there is congestion present in the network. Second, each RM cell contains a No Increase (NI) bit, which, as discussed later, affects how a sender is allowed to modify its ACR value. Finally, an RM cell contains a field called Explicit Rate (ER), which also affects the allowed setting of the sender's ACR.

Network Components

There are three basic nodes in an ATM virtual network connection: the source, the destination, and the set of switches that make up the virtual connection. As described below, the congestion control system for ABR involves all of these network nodes.

Feedback Mechanism

General Approach

Initially, the ABR system operates open loop, governed only by the parameters negotiated at connection setup. The sender begins transmitting cells at the ICR, which becomes the current ACR setting. After it has sent a certain number of data cells, it inserts an RM cell, initialized to CI=FALSE, NI=FALSE, and ER=ACR. Each switch along the path from source to destination can set CI and/or NI to TRUE if congestion is present. Additionally, a switch can reduce ER if it wants to reduce the source's transmission rate. When the RM cell reaches the destination, the destination can also modify CI, NI, and ER if it knows that congestion is occurring in the virtual circuit.

A destination detects network congestion by monitoring a bit known as EFCI, which resides in each data cell header. The EFCI (Explicit Forward Congestion Indication) bit can be set by any switch in the virtual circuit that detects congestion at its node as the cell passes through. Each time the destination receives a data cell, it records the state of the EFCI bit. When an RM cell arrives, the destination infers congestion if the most recent data cell had EFCI set. The receiver then returns the RM cell to the source, which modifies its ACR setting according to the state of CI and NI, and the value of ER.

In addition to the regular RM cells that the sender forwards through the network, the destination and each switch in the VC can generate an RM cell and send it directly to the source. This allows switch nodes or the destination to reduce the source transmission rate without waiting for the next RM cell from the sender.
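One forward-and-back pass of this feedback loop can be sketched as follows (a simplified model of our own: each switch is reduced to a congested flag and a supported rate, whereas real switches apply queue-based congestion tests):

```python
from dataclasses import dataclass

@dataclass
class RMCell:
    ci: bool = False   # Congestion Indication
    ni: bool = False   # No Increase
    er: float = 0.0    # Explicit Rate

def round_trip(acr, switches, dest_saw_efci):
    """One FRM -> BRM round trip. `switches` is a list of
    (congested, supported_rate) pairs -- an assumption for illustration."""
    cell = RMCell(er=acr)                        # source initializes ER to ACR
    for congested, supported_rate in switches:   # forward path
        if congested:
            cell.ci = True                       # switch signals congestion
        cell.er = min(cell.er, supported_rate)   # switches may only lower ER
    if dest_saw_efci:                            # destination folds EFCI into CI
        cell.ci = True
    return cell                                  # BRM returned to the source

brm = round_trip(acr=1000, switches=[(False, 800), (True, 600)], dest_saw_efci=False)
# brm.ci is True and brm.er == 600: the source must slow to 600 cells/s
```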

The sections below describe in more detail the behavior of each node type in an ABR connection:

Sender Actions:

There are thirteen rules that ABR source nodes must follow:

  1. Sources must always transmit at a rate less than or equal to their current ACR. ACR itself can never exceed PCR, and the source is never forced below MCR.
  2. Initial transmit rate is ICR and the first cell that a source transmits is a forward RM cell (FRM). This gets the feedback mechanism started as quickly as possible.
  3. Sources have three different types of cells to transmit: data, forward RM (FRM), and backward RM (BRM). A source is required to send an FRM after every 31 cells (parameter Nrm) or after 100 ms (parameter Trm), whichever comes first. There must be at least two data cells in between each FRM cell, however, to ensure that some data is always able to transmit.
  4. All RM cells sent according to rules 1 through 3 are ‘in-rate’ cells and have their cell loss priority bit (CLP) set to 0. This means that the cell has the lowest priority of being dropped during transmission. A source can send other RM cells ‘out-of-rate’ too, which will have their CLP bit set to 1, indicating they have higher drop priority.
  5. The ACR for a source is valid for only about 500 ms, a value negotiated during connection setup. If the source does not transmit any FRM cells during this time period, it must set the transmission rate (ACR) back to ICR and re-sense the network by transmitting an FRM cell.
  6. If a source does not receive BRM cells in a timely manner, it must exponentially decrease ACR. This is done by counting missing backward RM cells, relative to the transmitted FRM cells. When the missing BRM count exceeds some value (CRM), the source assumes network congestion and reduces ACR by a factor of CDF (Cutoff Decrease Factor). This is done each time it sends a new FRM cell while the missing BRM count is over the limit. CRM and CDF are both parameters that are negotiated at connection setup. This rule results in a rapid exponential decrease in ACR until it reaches the MCR value.
  7. When a source sends a FRM cell, it includes its current ACR value in the transmitted cell. This allows switch nodes and the destination to know the source rate.
  8. Rules 8 and 9 govern what happens when a source receives a BRM. Basically there are three indicators the source looks at: NI, CI, and ER. The following table shows how the source responds to these indicators (RIF and RDF are the negotiated Rate Increase Factor and Rate Decrease Factor):

     NI   CI   Meaning                                    New ACR
     0    0    No congestion, use additive increase       MIN(ER, ACR + RIF*PCR, PCR)
     1    0    No increase allowed                        MIN(ER, ACR)
     -    1    Congestion, use multiplicative decrease    MIN(ER, ACR*(1-RDF))

  9. (Covered together with rule 8 above.)
  10. This rule covers initialization of the ABR parameters to their default values.
  11. A source must limit out-of-rate FRM cells to a rate below the tagged cell rate parameter (TCR). This is typically 10 cells/second.
  12. The source must set the EFCI bit to 0 on every cell sent.
  13. Sources can optionally implement additional ‘use it or lose it’ policies. (alternatives relate to rule 5)
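The source response described in rules 8 and 9 can be expressed as a single update function (a sketch of our own, not a complete ABR implementation; parameter values in the example are hypothetical):

```python
def new_acr(acr, ci, ni, er, pcr, mcr, rif, rdf):
    """Update ACR on receipt of a BRM cell, following the NI/CI indicators."""
    if ci:                        # congestion: multiplicative decrease
        acr = acr * (1 - rdf)
    elif not ni:                  # no congestion and increase allowed
        acr = acr + rif * pcr     # additive increase step
    acr = min(acr, er, pcr)       # never above the explicit rate or PCR
    return max(acr, mcr)         # never below MCR

# hypothetical setup: PCR=10000, MCR=100, RIF=1/64, RDF=1/16 (cells/second)
r = new_acr(1000, ci=False, ni=False, er=10_000, pcr=10_000, mcr=100,
            rif=1 / 64, rdf=1 / 16)
# additive increase: 1000 + 10000/64 = 1156.25
```

Note how ER acts as a ceiling in every case: a switch that lowered ER on the forward path caps the result no matter what CI and NI say.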

Destination Actions:

  1. Destinations must monitor the EFCI bits on received cells and store the value seen on the most recent data cell.
  2. Destinations must turn around FRM cells with only minimal modifications: set the direction bit to backward; set the BN bit to 0 (not generated from a switch); and if EFCI is 1, set the CI bit in the BRM and clear EFCI. If the destination has internal congestion, it may also modify CI, NI, or reduce ER as desired.
  3. Rules 3 and 4 together specify that the destination should turn around FRM cells as fast as possible. Delays may be used if the reverse ACR is low, however.
  4. A destination may wish to force the source to reduce its rate before it receives an FRM cell. In this case it may generate its own BRM cell and transmit it to the source.
  5. The destination may turn around an out-of-rate FRM cell either as an in-rate or out-of-rate cell.
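Destination rules 1 and 2 amount to copying the FRM, flipping its direction, and folding the stored EFCI state into CI. A sketch using a plain dict as a stand-in for the cell format (field names are our own shorthand):

```python
def turn_around(frm, efci_seen):
    """Turn a forward RM cell into a backward RM cell (destination rules 1-2)."""
    brm = dict(frm)              # copy; the original FRM is left untouched
    brm["dir"] = "backward"      # flip the direction bit
    brm["bn"] = 0                # BN=0: not generated by a switch
    if efci_seen:                # congestion observed on recent data cells
        brm["ci"] = True
    return brm

frm = {"dir": "forward", "bn": 0, "ci": False, "ni": False, "er": 1000}
brm = turn_around(frm, efci_seen=True)
# brm["ci"] is True and brm["dir"] == "backward"
```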


Switch Actions:

  1. A switch may set CI, NI, or reduce ER if congestion is occurring.
  2. A switch may generate a BRM cell if it is congested and sees no FRM cells from the source. In the BRM cell it generates, it can set CI or NI, or reduce the source rate via ER.
  3. A switch may transmit FRM cells out of sequence if desired. This raises the priority of FRM cells above data cells and is used when congestion occurs to provide faster feedback to the source.
  4. This rule specifies alignment with ITU-T I371 draft and ensures the integrity of the MCR field of the RM cell.
  5. Allows optional implementation of use it or lose it policy (see sender rules 5 and 13).

3. Similarities and Differences

Strengths of ATM

One important strength of the ATM congestion management system is that it provides rate control of packets sent from the source. In addition, the rate is determined, in part, by each of the switches along the ATM virtual connection. This approach leads to better resource fairness since switches can explicitly control the transmission rate of each source connected to it, limiting each source to a fair amount of queue space. This approach to switch buffer management prevents synchronization effects from locking out one or more competing sources and reduces the overall delay through the virtual circuit.

Another strength of the ATM approach is that it provides congestion avoidance as well as congestion control. This is due to the fact that any switch or destination along the virtual connection can affect the sending rate by signaling congestion. This signal may occur whenever the switch or destination decides that congestion may occur. In addition, the ATM resource management cell provides multiple types of congestion indicators. For example, a switch could set the NI (no-increment) bit in an RM cell if it is just noticing that congestion might occur. This is a mild form of congestion management, which might be enough to avoid a fully congested network condition. As congestion increases, the switches can set more serious congestion indications, setting the CI bit or an explicit rate for the source.

In addition to the periodic resource message cell feedback, ATM switches can provide immediate congestion feedback to the source if necessary. Since any switch can generate and send a resource management cell, ATM can respond quickly to an actual or anticipated congestion condition, allowing the network to recover as soon as possible.

The closed-loop mechanism allows an ATM source to optimize its transmission rate and manage congestion without the need of forcing the network continually into a congestion situation. By receiving a continuous stream of resource message cells, a source is provided enough information to set its transmission rate at an optimal level. This approach reduces transmission delay while still providing for adaptability to changing network conditions.

In general, the ATM congestion management approach provides a regulated transmission environment that is relatively fair to all sources and reduces transmission delay. This is a big advantage, especially when Quality of Service (QoS) is important. Since a source can request a minimum cell rate at connection setup, it can be guaranteed at least the needed QoS, and then use more of the virtual circuit resources if conditions allow.

Weaknesses of ATM

The ATM approach to congestion management is relatively complicated. The algorithm is detailed and involves all participants of the virtual circuit: the source node, the switches, and the destination node. In addition, there are many parameters that need to be established at connection setup and modified while a transmission is taking place. If something goes wrong in this complex system, it could lead to partial or complete failure of the congestion avoidance and control mechanism, causing poor network performance or transmission failure. Since there are many parameters, system failure could occur if one switch does not have its parameters set properly and exerts either too much or insufficient control on the congestion feedback system. One side-effect of the increased complexity of ATM is that such a system probably requires a good deal of administrative setup and maintenance to get and keep the network running smoothly.

Another weakness of the ATM approach is that it requires the switches in the network to be fairly smart and homogeneous. This is because each switch needs to be able to participate in the congestion detection and feedback system, including monitoring for congestion conditions, watching for resource management cells and responding to them, and deciding what type of congestion indications to give. Each switch must do this for the system to work, forcing homogeneity and increasing the cost of the network.

Strengths of TCP

The main strength of TCP’s congestion management system is that it is a simple end-to-end system and requires very little participation from routers. This allows it to work over very heterogeneous networks where the capability of nodes varies from powerful to extremely simple. The routers are not required to detect or avoid congestion, rather the sending node detects it when a packet loss timeout occurs.

Another strength of the TCP strategy is that it is self-pacing. As packet drops are detected, TCP's transmission rate is adjusted accordingly, so it continually adapts toward the bandwidth available on the current connection. Again, the routers are not involved in determining the transmission rate; the sending node takes care of it.

Weaknesses of TCP

TCP congestion control has several weaknesses. First of all, since it does not set up a circuit connection between routers, there is no way to guarantee bandwidth, which is required for features such as Quality of Service. This also means TCP does not enforce fairness very well, allowing one connection to hog a large portion of bandwidth and cause jitter for all other connections.

Another weakness of TCP is that it has no congestion avoidance mechanism, rather it relies on reacting to congestion when it happens. In fact, slow start will almost always cause congestion (in the form of a packet drop) while trying to detect the maximal transmission bandwidth.

A third weakness of TCP congestion control is traffic synchronization effects. Since TCP uses dropped packets to measure the congestion level, a burst of dropped packets, possibly caused by small buffers in a drop-tail router, can lead to "cycles of underutilization, window increase, followed by tail drop again" [1]. TCP traffic burstiness can also be caused if the reverse-path route for acks becomes congested and causes many timeouts and retransmissions.

Similarity and Difference Comparison

Since TCP and ATM both have congestion control mechanisms built in, there are several similarities and differences. The primary similarity is that both protocols have a congestion control mechanism built in to reduce network traffic when the protocol detects congestion. In addition, both congestion algorithms use additive increase and multiplicative decrease for controlling data transmission.

Although the increase/decrease control is similar for TCP and ATM, the entities controlled are substantially different. For TCP, the entity controlled is the size of the transmission window. TCP is allowed to transmit its entire window at one time, which could fill up a router's input buffer quickly, causing increased delay and perhaps congestion. The congestion control algorithm increases or decreases the window size according to congestion conditions. In ATM the controlled entity is the allowed cell transmission rate, which the algorithm increases or decreases directly. In ATM, then, the network can limit switch input buffer consumption directly by limiting the cell transmission rate.

The congestion detection mechanism is quite different for the two protocols. TCP infers congestion by noticing when packets are dropped. This is an indirect mechanism that is a side effect of the network condition. Sometimes, however, packets are dropped in a network for reasons other than congestion. In this case TCP may conclude that congestion is present when it is not. ATM, in contrast, uses explicit resource management cells to periodically sense the state of network congestion. This is an advantage over TCP because it avoids using packet loss for congestion detection. This advantage, however, requires more transmission overhead (RM cells), smarter switches that are homogeneous, and coupling between the source node, switch nodes, and destination node. None of these are required for TCP congestion control.

Another difference between the two protocols is that ATM includes a congestion avoidance mechanism with multiple types of congestion indicators, while TCP does not. Some of the experimental and proposed extensions to TCP do attempt to provide indicators to avoid congestion, such as Sally Floyd’s TCP Explicit Congestion Notification idea and RED gateways, but these are not in widespread usage.

Finally, ATM includes a way for a source to specify a minimum required cell transmission rate. TCP, on the other hand, has no such concept, with all transmissions being ‘best effort’. This means that ATM can support defined Quality of Service needs for network applications, while TCP can’t guarantee anything relative to QoS.

4. Conclusions and Recommendations

The ATM congestion management system provides finer-grained and fairer control of network transmissions. This, coupled with a source's ability to specify a required minimum transmission rate, makes it the best choice when a network needs a specific quality of service guarantee. The ATM approach, however, requires homogeneous and relatively complex switches. It also requires a significant amount of administration support to initially set up the network and establish the best settings for the numerous network parameters involved. These factors increase the cost of an ATM network and limit the types of network it can run on.

TCP/IP congestion control, on the other hand, will work with a wide variety of routers, since it depends only on dropped packets as the congestion indication. However, the network performance of TCP is not guaranteed, since the network is continuously driven into congestion, causing delays and unfairness to participating sources. These factors make it the preferred approach when the network is heterogeneous in terms of router type and does not need a defined QoS level. In addition, TCP/IP will be less expensive than ATM, since the routers can be simpler and the administration needs are smaller, due to having fewer parameters involved and no coupling between the source node and the routers.


[1] Prasad Bagal, Shivkumar Kalyanaraman, Bob Packer, "Comparative study of RED, ECN and TCP Rate Control," Technical Report, March 1999

[2] Explicit Rate Control of TCP Applications, ATM Forum Document Number ATM_Forum/98-0152R1, February 1998

[3] Shivkumar Kalyanaraman, "Traffic Management for the Available Bit Rate (ABR) Service in Asynchronous Transfer Mode (ATM) Networks," PhD Dissertation, The Ohio State University, August 1997, xxiv + 429

[4] Raj Jain, Shiv Kalyanaraman, Sonia Fahmy, Rohit Goyal, S. Kim, "Source Behavior for ATM ABR Traffic Management: An Explanation," IEEE Communications Magazine, Vol. 34, No. 11, November 1996, pp. 50-55

[5] R.Jain, "Congestion Control and Traffic Management in ATM Networks: Recent Advances and A Survey," Computer Networks and ISDN Systems, Vol. 28, No. 13, October 1996, pp. 1723-1738

[6] Larry L. Peterson and Bruce S. Davie, Computer Networks: A Systems Approach, Morgan Kaufmann, 1996

[7] Douglas E. Comer, Internetworking with TCP/IP, Prentice Hall, 1995