Scribe Notes for Lecture 12: Integrated Services                                            Tue, 26 August 2003
                                                               
                                                                  Name: Arindam Chakrabarty                                                   
             
Stochastic Fair Queueing (SFQ):
It uses a small number of queues at a router & hashes the flows at that router onto them. How closely this approximates fair queueing depends primarily on the number of active flows at the router relative to the number of queues, rather than on the total number of flows.
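As a rough illustration of the hashing idea (a minimal sketch assuming a 5-tuple flow key, a fixed queue count & round-robin service; these details are assumptions, not from the lecture):

    import hashlib
    from collections import deque

    NUM_QUEUES = 16                       # small, fixed number of queues (assumed value)
    queues = [deque() for _ in range(NUM_QUEUES)]
    _next = 0                             # rotating pointer for round-robin service

    def flow_key(pkt):
        # A flow is identified here by the usual 5-tuple (an assumption).
        return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

    def enqueue(pkt):
        # Hash the flow onto one of the queues; distinct flows may collide,
        # which is the "stochastic" part of the scheme.
        h = hashlib.sha1(repr(flow_key(pkt)).encode()).digest()
        queues[h[0] % NUM_QUEUES].append(pkt)

    def dequeue():
        # Serve the hash queues round robin, one packet at a time.
        global _next
        for i in range(NUM_QUEUES):
            q = queues[(_next + i) % NUM_QUEUES]
            if q:
                _next = (_next + i + 1) % NUM_QUEUES
                return q.popleft()
        return None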

Deficit Round Robin (DRR):  It implements Round Robin over all the queues in the router.
For example, say quantum size = 500 bytes.
If a flow gets to send only say 400 bytes (because its next packet is say 200 bytes, which would exceed the 500-byte quantum), then the unused 100 bytes are accrued or credited to that flow as a deficit to be used in the next round of packet scheduling at that router.
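A minimal sketch of the deficit bookkeeping described above (the 500-byte quantum matches the example; the data structures are assumptions):

    from collections import deque

    QUANTUM = 500   # bytes of credit added to each backlogged flow per round

    class Flow:
        def __init__(self):
            self.queue = deque()    # packets, each represented here by its size in bytes
            self.deficit = 0        # unused credit carried over to the next round

    def drr_round(flows):
        # One round of Deficit Round Robin over all the queues in the router.
        sent = []
        for f in flows:
            if not f.queue:
                continue
            f.deficit += QUANTUM
            # Send packets while the head packet fits within the accumulated credit;
            # e.g. after sending 400 bytes, a 200-byte head packet does not fit in the
            # remaining 100 bytes, so those 100 bytes stay as this flow's deficit.
            while f.queue and f.queue[0] <= f.deficit:
                size = f.queue.popleft()
                f.deficit -= size
                sent.append(size)
            if not f.queue:
                f.deficit = 0       # an idle flow does not hoard credit
        return sent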

Core Stateless Fair Queueing (CSFQ): Here there is no per-flow state in the core routers & only the edge routers implement fair queueing.
As an example, say total available bandwidth at an edge router A is 100 Mbps & 40 flows pass through that router
=> each flow will be allocated 2.5 Mbps.
But if a flow F sends packets at 5 Mbps, then the edge router A will decide on the packet drop rate for flow F.
The information regarding a flow's packet rate is provided by its source to the edge router. As a result, this protocol depends on the reliability of the information provided by the source of flow F to edge router A.
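A minimal sketch of the resulting drop decision, assuming the router already knows the flow's rate & the fair share (the function names & units are assumptions):

    import random

    def drop_probability(flow_rate_mbps, fair_share_mbps):
        # A flow at or below its fair share is never dropped; above it,
        # the excess fraction of the flow's packets is dropped on average.
        if flow_rate_mbps <= fair_share_mbps:
            return 0.0
        return 1.0 - fair_share_mbps / flow_rate_mbps

    def forward(pkt, flow_rate_mbps, fair_share_mbps):
        # For the example above: flow F at 5 Mbps with a 2.5 Mbps fair share
        # sees a drop probability of 1 - 2.5/5 = 0.5.
        if random.random() < drop_probability(flow_rate_mbps, fair_share_mbps):
            return None     # packet dropped
        return pkt          # packet forwarded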

Integrated Services Network:
Its advantages are:
     (1) Economies of scale: bandwidth can be distributed among services as per requirement.
     (2) Ubiquitous usage of the network: integrating all the services into one network makes its usage more widespread, i.e., ubiquitous.

The Architecture of the Integrated Services Packet Network has 4 key components:  
         Type of service provided: this refers to the nature of commitment made by the network when it promises to deliver a certain quality of service. 2 types of service commitments are identified: guaranteed & predictive.
          
         Service interface: this refers to the set of parameters passed between the source & the network, & includes both
                                      (a) the characterization of the quality of service the network will deliver, fulfilling the need of applications to know when their packets will arrive, &
                                      (b) the characterization of the source's traffic, thereby allowing the network to knowledgeably allocate resources.
        
         Packet scheduling behaviour of the network switches: this component includes both the actual scheduling algorithms to be used at the switches / routers (the paper discusses a Unified Scheduling Algorithm taking care of all types of traffic: guaranteed, predictive & the traditional datagram service), as well as the information that must be carried in the packet headers (for example, the difference between the expected arrival time & the actual arrival time of a packet at a router in the FIFO+ algorithm).
        
         Admission Criteria: Given that the total available bandwidth is fixed, the network, in order to meet the service commitments it has made to its
         clients (sources), needs to have a policy to regulate the admission of new sources. The admission criteria naturally depend, among other things, on
         factors like the available bandwidth, the number of real time flows present & the types of its service commitments to them, as well as the types of service commitments desired by the new sources intending to use the network.
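As a rough illustration only, a rate-based admission check of the kind such a policy might perform (the criterion, the utilization limit & all names below are assumptions, not the paper's actual admission test):

    def admit(requested_rate_mbps, existing_rates_mbps, link_capacity_mbps,
              utilization_limit=0.9):
        # Admit the new source only if the sum of all reserved rates, including
        # the new request, stays within a fraction of the link capacity.
        reserved = sum(existing_rates_mbps) + requested_rate_mbps
        return reserved <= utilization_limit * link_capacity_mbps

    # e.g. on a 100 Mbps link already carrying reservations of 40 & 30 Mbps,
    # a new 25 Mbps request is refused (95 > 90) while a 15 Mbps request is admitted.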

Classification of Real Time Traffic: Real time traffic can be classified along 2 lines:
        
         Tolerant & Intolerant: Certain applications, like a video conference in which one surgeon remotely assists another during an operation, are intolerant of service interruptions. But most other video applications (including video conferencing in a less time-critical setup) can tolerate some amount of interruption in service.
               
         Rigid & Adaptive: Rigid applications are those which use the a priori delay bound advertised by the network to set the playback point & keep it fixed regardless of the actual delays experienced. Naturally, this is expected to lead to under-utilization of bandwidth by real time traffic, since playback will start at a fixed point of time for each packet even if the delay encountered by the packet was low & it reached its destination much earlier than the playback point.
         In contrast, in other real time applications the receiver measures the network delay experienced by the arriving packets & then adaptively moves the playback point to the minimal delay that still produces a sufficiently low packet loss rate. Since the post-facto delay computed in this case is likely to be less than the a priori delay bound pre-computed by the network for rigid applications, the bandwidth utilization is likely to be higher. On the flip side, setting the playback point too early may result in higher packet loss, & frequent jitter in the transmitted data may lead to brief interruptions in service while the playback point is re-adjusted, thereby requiring the applications involved to be tolerant (see the sketch below).
       Normally, applications exhibit either intolerance & rigidity, or tolerance & adaptability.
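A minimal sketch of how an adaptive receiver might pick its playback point from measured delays (the percentile rule & the loss target are assumptions):

    import math

    def playback_offset(measured_delays_ms, target_loss_fraction=0.01):
        # Choose the smallest playback offset such that at most the target fraction
        # of the recently observed packets would have arrived after their playback time.
        ordered = sorted(measured_delays_ms)
        index = min(len(ordered) - 1,
                    math.ceil((1.0 - target_loss_fraction) * len(ordered)) - 1)
        return ordered[index]

    # e.g. with recent delays of [20, 22, 25, 30, 90] ms and a 1% loss target the
    # offset is 90 ms; relaxing the target to 20% lets the playback point drop to 30 ms.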

Types of service commitments:

         Guaranteed: In this case, if the network hardware functions properly & the source conforms to its traffic characterization, then the network fulfills its service commitment to the client / source. It uses the Weighted Fair Queueing (WFQ) algorithm. It is more appropriate for intolerant & rigid applications.
         One form of traffic characterization is a traffic filter scheme. Here there is a token bucket filter characterized by 2 parameters: rate r & depth b. The bucket is filled up with tokens continuously at a rate r, with b being the maximum depth of the bucket. Every time a packet is generated, p tokens are removed from the bucket, where p is the size of the packet. A traffic source conforms to a token bucket filter if there are always enough tokens in the bucket whenever a packet is generated.
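A minimal sketch of checking conformance against an (r, b) token bucket (byte-based units & an explicit clock argument are assumptions):

    class TokenBucket:
        def __init__(self, rate_bytes_per_sec, depth_bytes):
            self.r = rate_bytes_per_sec    # fill rate r
            self.b = depth_bytes           # maximum depth b
            self.tokens = depth_bytes      # start with a full bucket
            self.last = 0.0                # time of the last update, in seconds

        def conforms(self, packet_size_bytes, now):
            # Add the tokens accumulated since the last packet, capped at depth b.
            self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
            self.last = now
            # The source conforms if there are enough tokens for this packet.
            if packet_size_bytes <= self.tokens:
                self.tokens -= packet_size_bytes
                return True
            return False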
         Parekh-Gallager's result: In a network with arbitrary topology, if a flow gets the same clock rate at every switch & the sum of the clock rates of all the flows at every switch is less than or equal to the link speed, then the queueing delay experienced by that flow is bounded above by b(r) / r, where r is the clock rate allocated to that flow while b(r) is the maximum depth of the bucket corresponding to that flow (say F) & represents the level of burstiness that can be handled by a switch with respect to F.
         The significance of the above result is as follows: given bandwidth r, b(r) can be found, & then it can be checked whether b(r) / r conforms to the delay requirements of the source. If not, r can be increased, i.e. greater bandwidth can be provided to that source to meet its guaranteed delay requirements.
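A sketch of that check, using the b(r)/r bound as stated in lecture (the shape of b(r) & the set of candidate rates are assumptions):

    def delay_bound_sec(burst_bytes, rate_bytes_per_sec):
        # Queueing delay bound from the lecture: b(r) / r.
        return burst_bytes / rate_bytes_per_sec

    def smallest_rate_meeting(delay_req_sec, burst_of_rate, candidate_rates):
        # Try the candidate clock rates in increasing order and return the first
        # one whose bound meets the source's delay requirement.
        for r in sorted(candidate_rates):
            if delay_bound_sec(burst_of_rate(r), r) <= delay_req_sec:
                return r
        return None     # no candidate rate meets the requirement

    # e.g. with a fixed 125,000-byte burst, a 10 ms requirement needs r of at least
    # 12,500,000 bytes/s (100 Mbps), while r = 1,000,000 bytes/s only gives a 125 ms bound.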

         Predictive: Here the network commits that if the past is a guide to the future, then it will meet its service characterization. Secondly, the network attempts to deliver service that will allow the adaptive algorithms to minimize their playback points. Clearly, this type of service is more suited for tolerant & adaptive applications.
         This service type makes use of the FIFO algorithm, which spreads the delays (say, introduced by bursty traffic from one source) across all flows evenly. As a result, the playback point is not deferred much for any of the receivers. In contrast, had it used the WFQ algorithm (used in guaranteed service), the delay encountered by packets from one bursty flow would have significantly deferred the playback point at that flow's receiver.
FIFO is efficient for implementing sharing among the flows (required by adaptive applications) while WFQ efficiently implements isolation among the flows (required by rigid applications).
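The FIFO+ algorithm mentioned earlier under packet scheduling extends this sharing across hops: each hop records in the packet header how much more or less delay the packet has seen than the class average, & downstream hops order packets by that expected arrival time rather than the actual one. A rough sketch of the bookkeeping (the header field, the averaging gain & the data structures are all assumptions):

    import heapq

    class FifoPlusHop:
        # Rough sketch of FIFO+ at a single hop; not the paper's exact algorithm.
        def __init__(self):
            self.avg_delay = 0.0    # running average delay of the shared class here
            self.heap = []          # packets ordered by adjusted (expected) arrival time
            self.seq = 0            # tie-breaker for the heap

        def enqueue(self, pkt, arrival_time):
            # Treat the packet as if it arrived at its expected time: actual arrival
            # minus the lateness (or earliness) accumulated at upstream hops.
            adjusted = arrival_time - pkt["offset"]
            self.seq += 1
            heapq.heappush(self.heap, (adjusted, self.seq, pkt, arrival_time))

        def dequeue(self, departure_time):
            if not self.heap:
                return None
            _, _, pkt, arrival_time = heapq.heappop(self.heap)
            delay = departure_time - arrival_time
            # Update the class average (simple EWMA, gain assumed) and record how this
            # packet fared against it, so later hops can compensate for upstream luck.
            self.avg_delay += 0.1 * (delay - self.avg_delay)
            pkt["offset"] += delay - self.avg_delay
            return pkt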

         Datagram service: This refers to the traditional best-effort datagram service, which is also included in the Unified Scheduling Algorithm as the lowest priority class among the priority classes corresponding to predictive scheduling; these classes together in turn form a flow (labelled the 0th flow) scheduled alongside the guaranteed service flows.
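A very rough sketch of the overall class structure implied above: WFQ across the guaranteed flows & the aggregated 0th flow, with strict priority inside flow 0 & datagram traffic at the bottom (the class count, names & the omission of the WFQ machinery are assumptions):

    from collections import deque

    class UnifiedScheduler:
        def __init__(self, num_predictive_classes=3):
            # Guaranteed flows are kept per flow and would be served by WFQ (not shown).
            self.guaranteed = {}
            # Flow 0: one queue per predictive class plus a final, lowest-priority
            # queue for traditional datagram traffic.
            self.flow0 = [deque() for _ in range(num_predictive_classes + 1)]

        def enqueue_guaranteed(self, flow_id, pkt):
            self.guaranteed.setdefault(flow_id, deque()).append(pkt)

        def enqueue_predictive(self, class_index, pkt):
            self.flow0[class_index].append(pkt)

        def enqueue_datagram(self, pkt):
            self.flow0[-1].append(pkt)

        def next_from_flow0(self):
            # Strict priority inside flow 0: higher predictive classes first,
            # datagram service only when all predictive queues are empty.
            for q in self.flow0:
                if q:
                    return q.popleft()
            return None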