The Server-Client Model

An overview of what a server and client are and how they fit together in web programming.

Because we want to make sure you understand all the nuts and bolts of what your web application is up to, it makes sense to start by clarifying what's going on with web servers. Servers, as you're well aware by now, take the browser's HTTP request for a page or asset and send back an HTTP response containing the desired resource.

In this lesson, we'll look at web terminology and what a server actually is, so you can visualize how browser requests arrive at your application in the first place. We'll also clarify the distinction between the web server portion of your web application (which handles the TCP connection and requests) and the portion in which you'll actually be coding to help figure out what to send back.

This is primarily a conceptual lesson, so feel free to skip it and come back later if you don't have time. Here we'll be diving into the guts of servers; a more practical look at actually setting up and deploying servers comes in a later section on Deployment.

Clarifying IP Addresses, Ports, Sockets and Connections

A server is basically just an infinite loop that listens to a socket, waiting until a request comes through.

The server lives at a specific IP address on the network (e.g. 127.0.0.1 for your localhost). The IP address only identifies which machine the server is on -- a port (e.g. 3000 for a typical development server) identifies the specific endpoint on that machine where the server is accepting requests.

Hmm... okay, so what's a port? Ports are like sub-addresses. The IP address sends you to the correct building and the port takes you to the right apartment. Each IP address has many ports available (numbered up to 65535), but we commonly associate certain ones with web traffic (port 80 for HTTP and port 443 for HTTPS). See Wikipedia's list of well-known port numbers if you're curious.

Each IP address + port combination uniquely identifies an endpoint on the web, which is really just a way of saying "a place you can send your requests". Each endpoint can actually have multiple sockets, though the two terms are often used interchangeably since only one socket (typically your web server) is allowed to "listen" for incoming requests at a time on a given endpoint. You get an error if you try to run multiple servers at the same endpoint.
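For example, here's a minimal Ruby sketch of that error using the standard socket library (the address and port are just placeholders):

  require 'socket'

  TCPServer.new('127.0.0.1', 3000)   # first server binds and listens happily
  TCPServer.new('127.0.0.1', 3000)   # second attempt raises Errno::EADDRINUSE
                                     # ("Address already in use")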

Using the example above, if the IP address is the building and the port is the apartment, then a socket is anyone inside that apartment. At any given time, only one of those people (your web server) is listening to the doorbell so they can open the door to receive incoming packages.

When a request finds its way to a listening port, a TCP connection is opened up. That connection is specified by the combination of the recipient's endpoint (IP address and port) and the sender's own endpoint (return IP address and port). It's a two-way street for information to flow, and it remains available for use as long as the server keeps it open (usually just for the round-trip duration of a single request).

In our example, then, to uniquely specify a connection you'd need to say "Hi, this is Mary Jane calling from 100 Main Street (San Francisco, CA), Apartment B. I'd like to talk to Jimmy Johnson from 1 Infinite Loop Way (Cupertino, CA), suite 401." When Jimmy answers, you've got a connection.
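You can see those same four values on an open connection in code. A hypothetical Ruby sketch, assuming connection is an open TCPSocket like the ones in the sketches further down (the addresses are made up):

  connection.remote_address.ip_address   # e.g. "203.0.113.10"  -- the server's IP
  connection.remote_address.ip_port      # e.g. 80              -- the server's port
  connection.local_address.ip_address    # e.g. "198.51.100.7"  -- your return IP
  connection.local_address.ip_port       # e.g. 52814           -- your return port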

So What IS a Server?

Again, the server is just a code loop that listens to a specific port waiting for a request to come through.

It's easy to create a simple server in most programming languages used for web development. It feels a lot like working with file I/O, just with different terminology. You instantiate a new server with a given IP and port and then let it accept incoming connections. You can read from and write to each connection much as you would a file you just opened; a minimal sketch in Ruby follows the list of steps below.

The basic steps from the server's perspective:

  1. Wait for someone to try to open a connection with you...
  2. Someone's here! Open a connection...
  3. Read from the connection to see what it wants...
  4. Pass the request off to the rest of the application and wait for its response...
  5. Write that response back into the connection...
  6. Close the connection and wait again for another connection...
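Here's what those steps might look like as a bare-bones Ruby server using the standard socket library. This is a minimal sketch under simple assumptions (the IP, port, and response body are just placeholders), not production code:

  require 'socket'

  server = TCPServer.new('127.0.0.1', 3000)      # bind to an endpoint and listen

  loop do
    connection = server.accept                   # 1-2. wait, then open a connection
    request_line = connection.gets               # 3. read the request's first line
    body = "You asked for: #{request_line}"      # 4. hand off to the rest of the app
    connection.write "HTTP/1.1 200 OK\r\n"       # 5. write the response back...
    connection.write "Content-Length: #{body.bytesize}\r\n"
    connection.write "\r\n"
    connection.write body
    connection.close                             # 6. close and wait for the next one
  end

If you run this and visit http://127.0.0.1:3000 in a browser, you should see your own request line echoed back.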

All the mysterious fancy stuff "real" servers use is designed to optimize and enhance this process.

We should note that this type of server doesn't actually need to face the public web. Servers like this are used all the time between different pieces of large applications (e.g. in a Service-Oriented Architecture) or on internal networks. So don't think of web servers as fundamentally different from application servers.

What is a WEB Server?

What distinguishes a web (HTTP) server is the ability to parse and process requests formatted using HTTP. Remember -- every request on the web arrives as what is basically a big string, so you need to unpack it into something your web code can handle. Think of it like opening a file -- it's just a stream of bytes that you need to be able to process. That's typically done with an HTTP parser.
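To make that concrete, here's a rough Ruby sketch of what a parser pulls out of a raw request. The request text is a made-up example, and real parsers handle far more (request bodies, malformed input, and so on):

  raw_request = "GET /posts/42 HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n\r\n"

  request_line, *header_lines = raw_request.split("\r\n").reject(&:empty?)
  method, path, version = request_line.split(" ")
  headers = header_lines.to_h { |line| line.split(": ", 2) }

  # method  => "GET",  path => "/posts/42",  version => "HTTP/1.1"
  # headers => { "Host" => "example.com", "Accept" => "text/html" }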

Server Connections Are Like Files

Requesting information from these server connections is very similar to opening and reading from files. In the case of the server, the "file" just happens to intelligently talk back to you!

The workflows of the two are quite similar (a short client-side sketch in Ruby follows the list):

  1. You need to tell your program where to find this "file" (to find a server, specify the IP address and port we're looking for)
  2. Open up the file (or the connection to the remote server)
  3. Send your request to start reading the file (or ask the server for a resource)
  4. Read the contents of the file (or read the response from the server)
  5. Close the file (or the connection to the server)
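Here is a minimal sketch of those steps from the client's side in Ruby, assuming a server like the one above is listening on 127.0.0.1:3000 (the address, port, and request line are placeholders):

  require 'socket'

  connection = TCPSocket.new('127.0.0.1', 3000)                 # 1-2. find and "open" the server
  connection.write "GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"  # 3. ask it for a resource
  puts connection.read                                          # 4. read the response
  connection.close                                              # 5. close the connection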

Where The Web Application Fits In

A web server really just wraps the rest of your application by receiving and parsing the incoming HTTP request and then sending back a properly formatted HTTP response. Think of it as the gatekeeper to the rest of your code. It's up to your application to figure out what the requester is asking for and then pass the server an appropriate response to send back.
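In the Ruby world, that hand-off follows a convention called Rack: the web server turns the parsed request into a hash and calls your application, which returns a status, headers, and body for the server to format and send back. Here's a minimal sketch (the paths and messages are placeholders; frameworks like Rails implement this interface for you):

  # config.ru -- run with the `rackup` command
  app = lambda do |env|
    if env['PATH_INFO'] == '/hello'
      [200, { 'content-type' => 'text/plain' }, ['Hello from the application!']]
    else
      [404, { 'content-type' => 'text/plain' }, ['Not found']]
    end
  end

  run app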

In the diagram below, we used a Rails app, but it's the same regardless of which stack you're using:

[Diagram: Single Server Architecture]

The server may be a separate service from the rest of your application code or it might be bundled with it. Rails ships with a web server of its own (WEBrick in older versions, Puma in current ones), or you can specify one of your own (see below).

You'll rarely touch the web server portion of your applications (other than maybe tweaking some configuration settings). In fact, most of the code you write will have nothing to do with HTTP at all. Rails handles the HTTP plumbing behind the scenes, so you don't need to think about it. You can simply focus on figuring out what the requester wants and then building a response to send back.
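For example, here's a hypothetical Rails controller action (PostsController and the Post model are made-up names) -- the framework has already parsed the request into params, and render hands the response back to the server for you:

  class PostsController < ApplicationController
    def show
      @post = Post.find(params[:id])   # figure out what the requester wants...
      render :show                     # ...and build the response to send back
    end
  end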

What a Server Looks Like

Let's look at a real-world server. Again, it'll behave basically the same regardless of which framework you use.

When you fire up a server in a terminal window, it loads up your application (which we'll cover later). It then runs as an infinite loop, waiting for requests to a local endpoint such as localhost:3000 (which is the same as 127.0.0.1:3000) or, if it binds to 0.0.0.0:3000, any of your machine's addresses on port 3000.

When a request is received, the server sends it through the rest of your application and provides certain helpful logging information along the way:

[Screenshot: Rails server logs on localhost]

Web Servers Are (Usually) Single-Threaded

One thing you might have noticed is that the process is single-threaded, meaning that a request must be fully processed before another request can be dealt with. That's one thing to be conscious of when you start thinking about the speed of your application -- that speed not only affects how long the user waits for a response but also how many requests each server instance can handle.

Luckily for you, you can have (almost) as many instances of your web application running at a given time as you want (and can afford). Typical production servers and hosting environments contain sophisticated load balancing to route requests to the least busy instance and some form of threading to take advantage of application "down time" to do other work.
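As a rough illustration, here's what a hypothetical configuration for the Puma web server (config/puma.rb) might look like -- the numbers are just placeholders:

  workers 2        # run two separate copies (processes) of the application
  threads 1, 5     # let each copy handle up to 5 requests concurrently
  port 3000        # the port to listen on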

Your only major limitation, as your traffic increases, might be how many simultaneous application instances are trying to access your database at any given time (since all instances share a single database).

Choosing a Web Server

Don't worry right now about which web server to use -- they all accomplish the same thing. As we mentioned, frameworks usually ship with their own server that you can fire up on the command line. Heroku has its own preference for which server to use.

The point is that you can manually configure your web server, but, for now, don't bother. Again, we'll cover some of this stuff in a later Deployment section.

Wrapping Up

You should now have a pretty good idea of where web servers fit into the picture of your web application. The portion of the web application that you're responsible for doesn't actually handle incoming TCP connections -- that's the job of your web server. Your job is to work with the nicely parsed requests it hands over. The frameworks we're using take care of receiving those parsed HTTP requests from the server and then returning a suitable response for it to send back.

That's one of the best aspects of using frameworks -- though it's useful (and interesting) to learn how all this HTTP stuff works, they take care of it all behind the scenes so you can just focus on writing the code that matters.


