Thursday, February 21, 2008

[Programming] Sockets programming - multiple clients - prologue

Prologue

Previously we only covered a very simple case: the server finishes processing one client's request before it accepts a connection from the next client. If each request takes a long time to process and several clients arrive at once, the later clients have to wait a long time to be served. Obviously, this is not a good design when many clients must be handled concurrently.

Several solutions are used in practice, such as:
1) Multi-process method (forked server processes)
2) Multi-thread method (threaded server processes)
3) select()
4) poll()

The first, multi-process method calls fork() to create a child process for each client. The main server blocks on accept(), waiting for a new connection. Once a new client comes in, the server calls fork(); the new child process handles that connection while the main server goes back to accept() and can serve other incoming requests.
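To make this concrete, here is a minimal sketch of a forking server loop. It is illustrative only: error handling is mostly omitted, port 8080 is an arbitrary example, and the per-client work is just an echo placeholder.

/* Minimal sketch of a forking server (illustrative only). */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <sys/socket.h>
#include <netinet/in.h>

static void handle_client(int connfd)      /* placeholder: just echo data back */
{
    char buf[1024];
    ssize_t n;
    while ((n = read(connfd, buf, sizeof(buf))) > 0)
        write(connfd, buf, n);
}

int main(void)
{
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);           /* example port */

    bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listenfd, 10);
    signal(SIGCHLD, SIG_IGN);              /* let the kernel reap finished children */

    for (;;) {
        int connfd = accept(listenfd, NULL, NULL);   /* main server blocks here */
        if (connfd < 0)
            continue;
        if (fork() == 0) {                 /* child: serve this client only */
            close(listenfd);
            handle_client(connfd);
            close(connfd);
            exit(0);
        }
        close(connfd);                     /* parent: go back to accept() */
    }
}

Note how the parent never touches the client socket beyond closing its copy; all per-client work lives in the child process.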

However, this method has the disadvantage that sharing information between the processes becomes more complex (you need IPC mechanisms such as message queues, semaphores, or shared memory), and it costs more CPU and memory to start and manage a new process for each request.

The second, multi-threaded solution follows the same per-client structure as the multi-process method, but threads are much lighter-weight than processes and naturally share the server's memory. However, a multi-threaded program is not easy to debug.
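As a rough sketch (assuming POSIX threads, and the same listening-socket setup as in the forking example above; accept_loop() is a hypothetical helper name), the accept loop looks almost the same, except each connection is handed to pthread_create() instead of fork():

/* Sketch of the threaded alternative (illustrative only). */
#include <pthread.h>
#include <unistd.h>
#include <sys/socket.h>

static void *client_thread(void *arg)
{
    int connfd = (int)(long)arg;           /* descriptor passed via the void* argument */
    char buf[1024];
    ssize_t n;
    while ((n = read(connfd, buf, sizeof(buf))) > 0)
        write(connfd, buf, n);             /* same echo placeholder as before */
    close(connfd);
    return NULL;
}

void accept_loop(int listenfd)             /* called after bind() and listen() */
{
    for (;;) {
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0)
            continue;
        pthread_t tid;
        pthread_create(&tid, NULL, client_thread, (void *)(long)connfd);
        pthread_detach(tid);               /* no join needed; thread cleans up itself */
    }
}

Because all threads live in one process, they share global data directly, which is exactly why debugging races in such a server can get tricky.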

The third and fourth approaches, select() and poll(), offer a different method: block execution of the server until a new event occurs on any of its sockets. They usually use only one process to serve multiple clients' requests, so the server does not need shared memory or semaphores for the different clients to communicate. For example, select() works by blocking until something happens on one of the descriptors it is watching, such as a new connection arriving or data becoming readable.
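Here is a minimal sketch of such a single-process select() loop (again illustrative only, with error handling omitted and an echo placeholder for the real per-client work):

/* Sketch of a single-process select() loop (illustrative only). */
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>

void select_loop(int listenfd)             /* called after bind() and listen() */
{
    fd_set allset, rset;
    int maxfd = listenfd;
    FD_ZERO(&allset);
    FD_SET(listenfd, &allset);

    for (;;) {
        rset = allset;
        select(maxfd + 1, &rset, NULL, NULL, NULL);   /* block until something happens */

        if (FD_ISSET(listenfd, &rset)) {   /* a new connection arrived */
            int connfd = accept(listenfd, NULL, NULL);
            FD_SET(connfd, &allset);
            if (connfd > maxfd)
                maxfd = connfd;
        }

        for (int fd = listenfd + 1; fd <= maxfd; fd++) {
            if (FD_ISSET(fd, &rset)) {     /* data from an existing client */
                char buf[1024];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {              /* client closed the connection */
                    close(fd);
                    FD_CLR(fd, &allset);
                } else {
                    write(fd, buf, n);     /* echo back, as a placeholder */
                }
            }
        }
    }
}

One process, one loop: all the clients' state sits in ordinary local variables, which is why no IPC is needed.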

Oh, hold on, please. You interrupted me because you are already getting lost in all this new terminology: process, thread, shared memory, semaphore. Yes, in order to understand those, we first need to learn something about processes and threads. We will cover this in the next several posts.
