
Motivation

The number of users accessing the Internet is increasing rapidly, and it is common for popular web sites to receive more than 100 million hits a day; the netscape.com website, for example, receives more than 120 million hits a day. Since the number of users is expected to keep growing at a fast rate, any popular website faces the challenge of serving a very large number of clients with good performance. Full mirroring of web servers, i.e. replication of the web site, is one way to deal with the increasing number of requests. Many techniques exist for selecting the web server nearest to a given client. Ideally, the best server should be selected transparently, without any intervention by the user.

Many of the existing schemes do only load balancing. These schemes assume that the replicated site has all its web servers in one cluster. This is adequate for medium-sized sites, but beyond a certain amount of traffic the connectivity to this single cluster becomes a bottleneck. Large web sites therefore have multiple clusters, and it is best to have these clusters geographically distributed. The problem then becomes one of first selecting the nearest cluster and then balancing load among the servers of that cluster. Of course, if all servers in a cluster are heavily loaded, another cluster should be chosen instead. The problem is thus more complex in such an environment.
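The two-level selection just described can be sketched as follows. This is only an illustration of the general idea, not the scheme developed in this thesis: the cluster/server representation, the load threshold, and the fallback rule are all assumptions made for the example.

```python
LOAD_THRESHOLD = 0.9  # fraction of capacity treated as "heavily loaded" (assumed)

def select_server(clusters):
    """Two-level selection: nearest cluster first, then least-loaded server.

    clusters: list of dicts, each with a 'distance' (estimated distance
    from the client) and a 'servers' list; each server dict carries a
    'load' value in [0, 1].
    """
    # Level 1: try clusters in order of increasing distance from the client.
    for cluster in sorted(clusters, key=lambda c: c["distance"]):
        candidates = [s for s in cluster["servers"] if s["load"] < LOAD_THRESHOLD]
        if candidates:
            # Level 2: load balancing within the chosen cluster.
            return min(candidates, key=lambda s: s["load"])
    # Every cluster is heavily loaded: fall back to the globally
    # least-loaded server rather than refusing the request.
    all_servers = [s for c in clusters for s in c["servers"]]
    return min(all_servers, key=lambda s: s["load"])
```

With a nearby but overloaded cluster, the selector skips to the next-nearest cluster; only when every cluster is overloaded does distance stop mattering.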

Designing such a system involves deciding how the best server is selected for a request, so that the user receives the response in minimum time, and how the request is directed to that server. In most strategies a server is selected without taking any system state information into account, e.g. at random or in round-robin order. Some policies use weighted-capacity algorithms to direct a larger share of requests to more capable servers. Few strategies, however, select a server based on server state, and very few take client state information into account. There is always a tradeoff between the overhead of collecting system state information and the performance gained by using it: if too much state information (about servers or clients) is collected, the collection overhead may be high and the performance gain may not justify it. We must therefore collect only that state information which is likely to improve the performance of the system as seen by clients without incurring very high overheads.
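As an example of a stateless policy of this kind, a weighted-capacity selector can be sketched as below. The server names and capacity values are made up for illustration, and the random draw stands in for whatever weighting mechanism a real request distributor would use; no server or client state is consulted, which keeps the policy's overhead near zero.

```python
import random

def weighted_choice(servers, rng=random):
    """Pick a server name with probability proportional to its capacity.

    servers: list of (name, capacity) pairs. More capable servers
    receive a proportionally larger share of requests, but the policy
    remains stateless: no load or client information is used.
    """
    total = sum(capacity for _, capacity in servers)
    point = rng.uniform(0, total)
    upto = 0.0
    for name, capacity in servers:
        upto += capacity
        if point <= upto:
            return name
    return servers[-1][0]  # guard against floating-point edge cases
```

Over many requests, a server with capacity 3 receives roughly three times as many requests as one with capacity 1, regardless of how loaded either server actually is at the moment.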

In this thesis, we propose a new scheme based on collecting information about the load on each server, and on estimating the round-trip time between the clusters and those clients that make a large number of requests.

Studying these tradeoffs and the impact of different parameters on a web server system requires a framework that enables the performance of distributed web server systems to be evaluated and compared. The framework should make it easy to implement any scheme and to analyze the performance of the web server system under new policies.

In this thesis, we have designed and implemented a test bed that provides such a framework. We have also measured the performance of a few policies implemented in this test bed by emulating a World Wide Web scenario.


Puneet Agarwal 2001-05-12