Scribe: Amit Awekar [email@example.com]
Lecture 7 : Fair Queuing
When there are too many packets in the network, performance degrades: buffers in routers overflow, leading to packet loss, and end-to-end delay increases. This condition is called congestion.
Initially, congestion control was considered a problem of avoiding buffer exhaustion. But Nagle discovered that if routers have infinite memory, congestion gets worse: by the time packets reach the front of a queue, they have already timed out and duplicates have already been sent. All these packets will still be dutifully forwarded to the next router, increasing the load all the way to the destination.
The optimal strategy for a single user (i.e., trying to use as much bandwidth as possible) may not be optimal for the network as a whole.
Fair queuing deals with the following:
1.  Allocating bandwidth fairly to users
2.  Achieving promptness
3.  Allocating buffer space properly
Of these three, the first two can be considered a "problem of scheduling" and the third a "problem of queuing".
Central Idea of Fair Queuing
*  The router maintains multiple queues on each output line, one for each user.
*  When the line becomes idle, the router scans the queues round robin, taking the first packet from the next queue.
*  In this way, with n users competing for a given output line, each user gets to send one out of every n packets.
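The round-robin scan above can be sketched as follows (a minimal simulation, with per-user queues represented as deques; the names are illustrative, not from the lecture):

```python
from collections import deque

def round_robin(queues):
    """Serve one packet from each non-empty user queue in turn.

    queues: list of deques, one per user.
    Returns the order in which packets are transmitted.
    """
    order = []
    while any(queues):
        for q in queues:
            if q:                        # skip users with nothing to send
                order.append(q.popleft())
    return order

# Three users competing for one output line.
a = deque(["a1", "a2", "a3"])
b = deque(["b1"])
c = deque(["c1", "c2"])
print(round_robin([a, b, c]))  # ['a1', 'b1', 'c1', 'a2', 'c2', 'a3']
```

Note that each pass of the loop gives every active user one transmission slot, so with n active users each gets one out of every n packets, as stated above.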
With this scheme, a user sending large packets gets more bandwidth. To avoid this, "byte-by-byte round robin" can be simulated by estimating the finish time of each packet and transmitting packets in order of increasing finish time.
A remaining problem with this scheme is that all users get the same priority, whereas file servers and other servers should get more bandwidth than clients. This can be achieved by "weighted fair queuing", where some users have higher priority.
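A simplified sketch of the finish-time idea, with weights folded in, might look like this. It assumes all packets arrive at time zero and approximates the system virtual clock by zero (real WFQ maintains a proper virtual-time function); the flow names and layout are hypothetical:

```python
import heapq

def wfq_order(flows):
    """Simplified weighted-fair-queuing sketch.

    flows: dict mapping flow id -> (weight, list of packet lengths).
    Each packet's virtual finish time is
        F = max(previous F of this flow, virtual time) + length / weight
    and packets are sent in increasing order of F. Higher weight means
    smaller finish-time increments, hence more bandwidth.
    """
    heap = []                                  # (finish_time, seq, flow, length)
    last_finish = {f: 0.0 for f in flows}
    vtime = 0.0                                # all arrivals at t = 0 in this sketch
    seq = 0
    for fid, (weight, lengths) in flows.items():
        for length in lengths:
            f = max(last_finish[fid], vtime) + length / weight
            last_finish[fid] = f
            heapq.heappush(heap, (f, seq, fid, length))
            seq += 1
    return [(fid, length) for _, _, fid, length in
            [heapq.heappop(heap) for _ in range(len(heap))]]

# "srv" has twice the weight of "cli"; all packets are 100 bytes.
print(wfq_order({"srv": (2, [100, 100, 100]), "cli": (1, [100, 100])}))
```

With equal packet sizes, the server flow's finish times grow half as fast, so it is scheduled roughly twice as often, which is exactly the priority effect weighted fair queuing is meant to provide.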
Buffer Allocation Policy
If we allocate equal buffer space to every user, some bursts cannot be buffered even though other users' buffers sit empty.
A simple solution is to let any user take the buffer space it needs and, when the shared buffer is full, drop packets from the largest queue.
When a flow does not send packets, it builds up credit for a limited time, so that a flow becoming active again receives prompt service.
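The drop-from-largest-queue policy can be sketched as below. This is a toy model of a shared buffer (the `enqueue` helper and tail-drop choice are illustrative assumptions, not verbatim from the lecture):

```python
from collections import deque

def enqueue(queues, capacity, flow, packet):
    """Shared-buffer sketch: any flow may use any free buffer slot;
    when the buffer is full, drop the newest packet of the largest queue.

    queues: dict mapping flow id -> deque of packets.
    capacity: total number of packets the shared buffer can hold.
    """
    q = queues.setdefault(flow, deque())
    total = sum(len(x) for x in queues.values())
    if total >= capacity:
        longest = max(queues.values(), key=len)
        longest.pop()              # tail-drop from the largest queue
    q.append(packet)

# Capacity 3: flow "a" fills the buffer, then "b" still gets space
# because a packet is dropped from "a", the largest queue.
queues = {}
for p in ["p1", "p2", "p3"]:
    enqueue(queues, 3, "a", p)
enqueue(queues, 3, "b", "q1")
print(dict((f, list(q)) for f, q in queues.items()))
```

The point of the example is that a misbehaving or bursty flow pays for buffer exhaustion with its own packets, while a lightly loaded flow can still get space.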
The problem with this scheme is its large computational overhead. To reduce this overhead we can use the "deficit round robin" algorithm. Also, to avoid keeping per-flow state in router memory, we can use core-stateless fair queuing.
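Deficit round robin avoids per-packet finish-time computation: each queue keeps a byte counter topped up by a fixed quantum per round, and sends head packets while they fit. A minimal sketch (packet sizes in bytes; the quantum value is illustrative):

```python
from collections import deque

def drr(queues, quantum):
    """Deficit round robin sketch.

    queues: list of deques of packet sizes (bytes).
    Each round adds `quantum` bytes of credit to an active queue's
    deficit counter; the queue sends head packets while their size
    fits in the counter. Work per packet is O(1) when the quantum is
    at least the maximum packet size.
    """
    order = []
    deficit = [0] * len(queues)
    while any(queues):
        for i, q in enumerate(queues):
            if not q:
                deficit[i] = 0        # empty queues accumulate no credit
                continue
            deficit[i] += quantum
            while q and q[0] <= deficit[i]:
                deficit[i] -= q[0]
                order.append((i, q.popleft()))
    return order

# Queue 0 sends large packets, queue 1 small ones; quantum 500 bytes.
print(drr([deque([700, 700]), deque([300, 300, 300])], 500))
```

Despite the very different packet sizes, both queues end up with roughly `quantum` bytes of service per round, which is how deficit round robin approximates byte-by-byte fairness at much lower cost.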
References:
*  John Nagle, "On packet switches with infinite storage"
*  Shreedhar and Varghese, "Efficient fair queuing using deficit round robin"