Web server

A web server is server software, or a system of one or more computers dedicated to running this software, that can satisfy client HTTP requests on the public World Wide Web or on private LANs and WANs.[1]

The inside and front of a Dell PowerEdge server, a computer designed to be mounted in a rack.

A web server can manage client HTTP requests for web resources belonging to one or more of the websites it is configured to serve.

A web server usually receives incoming HTTP requests and sends outgoing HTTP responses (one for each processed request), along with web content, over plain and/or encrypted TCP/IP connections (see also: HTTPS) that are opened by client user agents before they send their HTTP request(s). Newer HTTP versions may also use other transport protocols; HTTP/3, for example, runs over QUIC instead of TCP.

The primary function of a web server is to store, process and deliver web pages to clients.[2] That description was accurate decades ago, but today the broader terms web content or web resources are preferable to web pages, because they cover every kind of content a web server can deliver to clients: HTML and XHTML files, images, style sheets, scripts, and other generic files that clients may download.

Multiple web servers may be used for a high-traffic website; here, Dell servers installed together are being used for the Wikimedia Foundation.

A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP and the server responds with the content of that resource or an error message if unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the web server and the website are implemented.
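
The request/response cycle just described can be sketched with Python's standard library: a minimal server maps one resource path to its content, and a user agent fetches it or receives an error status when the resource is missing. The host, port and resource name below are illustrative only.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # A toy "website": one static resource mapped to its content.
    RESOURCES = {"/path/file.html": b"<html><body>Hello</body></html>"}

    def do_GET(self):
        body = self.RESOURCES.get(self.path)
        if body is None:
            self.send_error(404)        # resource missing: error message
            return
        self.send_response(200)         # success status line
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)          # the resource content itself

    def log_message(self, *args):       # keep the demo's output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/path/file.html"
with urllib.request.urlopen(url) as resp:        # the user agent's request
    status, body_text = resp.status, resp.read().decode()
server.shutdown()
print(status, body_text)
```

Note that here, as in the text above, the "resource" is not a real file on disk but whatever the server implementation chooses to return for that path.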

While the major function is to serve content, a full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files.

Many generic web servers also support one or more server interfaces used by web applications to generate dynamic content. The implementations used may vary from server-side scripting (i.e. scripting languages) to external application programs. This means that the behaviour of the web application can be defined in separate files (scripts or programs), while the actual server software remains unchanged. Usually this function is used to generate HTML documents or other types of content dynamically ("on the fly"), as opposed to returning static documents. Dynamic generation is primarily used to retrieve or modify information in databases; static documents are typically much faster to serve and more easily cached. For performance reasons, websites with dynamic content usually also serve static content whenever possible.
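
As one concrete example of such a server interface, Python's standard WSGI gateway separates the web application (a plain function) from the server software, which stays unchanged when the application logic changes. The greeting logic below is purely illustrative.

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # The web application: generates an HTML document "on the fly"
    # from data carried by the request.
    name = environ.get("QUERY_STRING") or "world"
    body = f"<html><body>Hello, {name}!</body></html>".encode()
    start_response("200 OK", [("Content-Type", "text/html"),
                              ("Content-Length", str(len(body)))])
    return [body]

# The server software: a stock reference server that never changes
# when the application behaviour does.
server = make_server("127.0.0.1", 0, app)   # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(
        f"http://127.0.0.1:{server.server_port}/?WSGI") as resp:
    page = resp.read().decode()
server.shutdown()
print(page)   # <html><body>Hello, WSGI!</body></html>
```

The same separation underlies CGI, FastCGI, servlet containers and similar interfaces on other platforms.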

Web servers can frequently be found embedded in devices such as printers, routers and webcams, serving only a local network. The embedded web server may then be used as part of a system for monitoring or administering the device in question. This usually means that no additional software has to be installed on the client computer, since only a web browser is required (which is now included with most operating systems).

History

The world's first web server, a NeXT Computer workstation with Ethernet, 1990. The case label reads: "This machine is a server. DO NOT POWER IT DOWN!!"
Sun's Cobalt Qube 3 – a computer server appliance (2002, discontinued)

In March 1989 Sir Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system.[3][4] The project resulted in Berners-Lee writing two programs in 1990: the first web browser, called WorldWideWeb (later renamed Nexus), and the world's first web server, later known as CERN httpd, both running on a NeXT computer.

Between 1991 and 1994, the simplicity and effectiveness of early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and spread their use among scientific organizations and universities, and subsequently to the industry.

In 1994 Berners-Lee founded the World Wide Web Consortium (W3C) to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.

Basic common features

Although web server programs differ in how they are implemented, most of them offer the following basic common features.

  • HTTP: support for one or more versions of the HTTP protocol, so that HTTP responses are sent in a version compatible with that of the client's HTTP request, e.g. HTTP/1.0, HTTP/1.1, plus, if available, HTTP/2 and HTTP/3;
  • Logging: web servers usually also have the capability of logging information about client requests and server responses to log files, for security and statistical purposes.

A few other popular features (only a very short selection) are:

Path translation

Web servers are able to map the path component of a Uniform Resource Locator (URL) into:

  • A local file system resource (for static requests)
  • An internal or external program name (for dynamic requests)

For a static request the URL path specified by the client is relative to the target website's root directory.

Consider the following URL as it would be requested by a client over HTTP:

http://www.example.com/path/file.html

The client's user agent will translate it into a connection to www.example.com with the following HTTP/1.1 request:

GET /path/file.html HTTP/1.1
Host: www.example.com

The web server on www.example.com appends the given path to the root directory of the target (Host) website. On an Apache server this root is commonly a subdirectory of /home/www (on Unix machines, usually /var/www), here /home/www/www.example.com. The result is the local file system resource:

/home/www/www.example.com/path/file.html

The web server then reads the file, if it exists, and sends a response to the client's web browser. The response describes the content of the file and contains the file itself, or an error message is returned saying that the file does not exist or is unavailable.
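
The mapping above can be sketched as follows; the document root is hypothetical, and real servers add further checks (file existence, permissions, symbolic links, per-directory configuration):

```python
from posixpath import normpath
from typing import Optional

DOCUMENT_ROOT = "/home/www/www.example.com"  # hypothetical website root

def translate_path(url_path: str) -> Optional[str]:
    """Map the path component of a URL onto a local file system path."""
    if not url_path.startswith("/"):
        return None  # HTTP request paths must be absolute
    # normpath collapses "." and ".." components; for an absolute path
    # the result can never climb above "/", so the translated path
    # always stays under the document root.
    return DOCUMENT_ROOT + normpath(url_path)

print(translate_path("/path/file.html"))
# /home/www/www.example.com/path/file.html
print(translate_path("/a/../../etc/passwd"))
# /home/www/www.example.com/etc/passwd  (the ".." cannot escape the root)
```

Normalizing before joining is the essential step: appending the raw client path to the root would let "../" sequences reach files outside the website.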

Kernel-mode and user-mode web servers

Web server software can either be incorporated into the OS kernel or run in user space (like other regular applications).

Web servers that run in kernel mode can have direct access to kernel resources and so can, in theory, be faster than those running in user mode. There are, however, disadvantages to running a web server in kernel mode, e.g. greater difficulty in developing and debugging the software, and the fact that run-time critical errors may lead to serious problems in the OS kernel.

Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they are not always satisfied, because the system reserves resources for its own usage and has the responsibility of sharing hardware resources with all the other running applications. Executing in user mode can also mean redundant buffer copies (between user space and kernel space), which are a further limitation for user-mode web servers.

Nowadays almost all web server software executes in user mode, because most of the above small disadvantages have been overcome by faster hardware, new OS versions and new web server software. See also the comparison of web server software to discover which of them run in kernel mode and which in user mode (also referred to as kernel space and user space).

Performance

To improve user experience, web servers should reply to client requests as quickly as possible; unless the response is deliberately throttled by configuration for some types of files (e.g. big files), returned content should also be sent at the highest possible transfer speed.

For web server software, the main key performance statistics (measured under a varying load of clients and requests per client) are:

  • maximum number of requests per second (RPS, similar to QPS; depending on HTTP version and configuration, type of HTTP requests, etc.);
  • network latency plus response time (usually in milliseconds) for each new client request;
  • throughput in bytes per second (depending on file size, cached or non-cached content, available network bandwidth, type of HTTP protocol used, etc.).

The above three performance numbers may vary noticeably depending on the number of active TCP connections, so a fourth statistic is the concurrency level supported by a web server under a specific configuration, OS type and available hardware resources.

Last but not least, the specific server model used to implement a web server program can bias the performance and scalability level that can be reached under heavy load or when using high end hardware (many CPUs, disks, etc.).

The performance of a web server is typically benchmarked by using one or more of the available automated load testing tools.
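
A minimal load test in the spirit of those tools can be sketched as follows: fire N requests at a server and report mean latency and requests per second, the two statistics defined above. A real benchmark would use many concurrent clients; this sequential version only illustrates what is being measured.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # A trivial server to benchmark: every request gets a tiny response.
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):       # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

N = 50
latencies = []
start = time.perf_counter()
for _ in range(N):
    t0 = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    latencies.append(time.perf_counter() - t0)   # per-request latency
elapsed = time.perf_counter() - start
server.shutdown()

rps = N / elapsed                                # requests per second
mean_latency_ms = 1000 * sum(latencies) / N
print(f"{rps:.0f} requests/s, mean latency {mean_latency_ms:.2f} ms")
```

Measuring against localhost removes network bandwidth from the picture, which is why real tools also report throughput from remote clients.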

Load limits

A web server (program installation) usually has pre-defined load limits, because it can handle only a limited number of concurrent client connections (usually between 1 and several tens of thousands for each active web server process, see also the C10k problem and C10M problem) and it can serve only a certain maximum number of requests per second depending on:

  • its own settings,
  • the average HTTP request type,
  • whether the requested content is static or dynamic,
  • whether the content is cached, or compressed,
  • the average network speed between clients and web server,
  • the number of active TCP connections,
  • the hardware and software limitations or settings of the OS of the computer(s) on which the web server runs.

When a web server is near to or over its load limits, it becomes overloaded and may become unresponsive.

Causes of overload

At any time web servers can be overloaded due to:

  • Excess legitimate web traffic. Thousands or even millions of clients connecting to the website in a short interval, e.g., Slashdot effect;
  • Distributed Denial of Service attacks. A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer or network resource unavailable to its intended users;
  • Computer worms, which sometimes cause abnormal traffic because of millions of infected computers (not coordinated among them);
  • XSS worms, which can cause high traffic because of millions of infected browsers or web servers;
  • Internet bot traffic that is not filtered/limited on large websites with very few resources (bandwidth, etc.);
  • Internet (network) slowdowns (due to packet losses, etc.) so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
  • Partial unavailability of web servers (computers). This can happen because of required or urgent maintenance or upgrades, hardware or software failures, back-end (e.g., database) failures, etc.; in these cases the remaining web servers may get too much traffic and become overloaded.

Symptoms of overload

The symptoms of an overloaded web server are:

  • Requests are served with (possibly long) delays (from 1 second to a few hundred seconds).
  • The web server returns an HTTP error code, such as 500, 502,[6] 503,[7] 504,[8] 408, or even 404, which is inappropriate for an overload condition.[9]
  • The web server refuses or resets (interrupts) TCP connections before it returns any content.
  • In very rare cases, the web server returns only a part of the requested content. This behavior can be considered a bug, even if it usually arises as a symptom of overload.

Anti-overload techniques

To partially overcome these load limits and to prevent overload, most popular websites use common techniques such as:

  • Managing network traffic, by using:
    • Firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
    • HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP patterns;
    • Bandwidth management and traffic shaping, in order to smooth down peaks in network usage;
  • Deploying web cache techniques.
  • Using different domain names or IP addresses to serve different (static and dynamic) content by separate web servers, e.g.:
    • http://images.example.com
    • http://example.com
  • Using different domain names or computers to separate big files from small and medium-sized files; the idea is to be able to fully cache small and medium-sized files and to efficiently serve big or huge (over 10 – 1000 MB) files by using different settings.
  • Using many web servers (programs) per computer, each one bound to its own network card and IP address.
  • Using many web servers (computers) that are grouped together behind a load balancer so that they act or are seen as one big web server.
  • Adding more hardware resources (i.e. RAM, disks) to each computer.
  • Tuning OS parameters for hardware capabilities and usage.
  • Using more efficient computer programs for web servers, etc.
  • Using other programming workarounds, especially if dynamic content is involved.
  • Using the latest efficient versions of HTTP (e.g. enabling HTTP/2 and, once server support for it is reliable, HTTP/3 alongside the common HTTP/1.1), in order to greatly reduce the number of TCP/IP connections opened by each client and the amount of data exchanged (thanks to more compact HTTP header representations, data compression, etc.); note that even though newer HTTP protocols usually need fewer OS resources, they may sometimes require more RAM and CPU in the web server software (because of encryption, on-the-fly data compression and other implementation details).
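
The load-balancing technique in the list above can be sketched in a few lines: a front end distributes incoming requests across a pool of web servers, here with a simple round-robin policy. The backend addresses are hypothetical, and real load balancers add health checks, weighting and session affinity.

```python
from itertools import cycle

BACKENDS = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]  # hypothetical pool

class RoundRobinBalancer:
    def __init__(self, backends):
        self._pool = cycle(backends)   # endless rotation over the pool

    def pick(self) -> str:
        # Each incoming request goes to the next server in rotation,
        # spreading the load evenly across the pool.
        return next(self._pool)

balancer = RoundRobinBalancer(BACKENDS)
picks = [balancer.pick() for _ in range(6)]
print(picks)   # each backend is chosen twice, in rotation
```

From the clients' point of view, the pool behind such a balancer acts, and is seen, as one big web server.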

Market share

The LAMP software bundle (here additionally with Squid), composed entirely of free and open-source software, is a high-performance, high-availability, heavy-duty solution for a hostile environment
Chart:
Market share of all sites of major web servers 2005–2018

February 2019

Below are the latest statistics of the market share of all sites of the top web servers on the Internet, according to W3Techs' Usage of Web Servers for Websites.

Product                Vendor                   Percent
Apache                 Apache                   44.3%
nginx                  NGINX, Inc.              41.0%
IIS                    Microsoft                8.9%
LiteSpeed Web Server   LiteSpeed Technologies   3.9%
GWS                    Google                   0.9%

All other web servers are used by less than 1% of the websites.

July 2018

Below are the statistics of the market share of all sites of the top web servers on the Internet in July 2018, according to W3Techs' Usage of Web Servers for Websites.

Product                Vendor                   Percent
Apache                 Apache                   45.9%
nginx                  NGINX, Inc.              39.0%
IIS                    Microsoft                9.5%
LiteSpeed Web Server   LiteSpeed Technologies   3.4%
GWS                    Google                   1.0%

All other web servers are used by less than 1% of the websites.

February 2017

Below are the statistics of the market share of all sites of the top web servers on the Internet, according to Netcraft's February 2017 Web Server Survey.

Product   Vendor        January 2017   Percent   February 2017   Percent   Change   Chart color
IIS       Microsoft     821,905,283    45.66%    773,552,454     43.16%    −2.50    red
Apache    Apache        387,211,503    21.51%    374,297,080     20.89%    −0.63    black
nginx     NGINX, Inc.   317,398,317    17.63%    348,025,788     19.42%    1.79     green
GWS       Google        17,933,762     1.00%     18,438,702      1.03%     0.03     blue

February 2016

Below are the statistics of the market share of all sites of the top web servers on the Internet, according to Netcraft's February 2016 Web Server Survey.

Product   Vendor        January 2016   Percent   February 2016   Percent   Change   Chart color
Apache    Apache        304,271,061    33.56%    306,292,557     32.80%    0.76     black
IIS       Microsoft     262,471,886    28.95%    278,593,041     29.83%    0.88     red
nginx     NGINX, Inc.   141,443,630    15.60%    137,459,391     16.61%    −0.88    green
GWS       Google        20,799,087     2.29%     20,640,058      2.21%     −0.08    blue

Apache, IIS and Nginx are the most used web servers on the World Wide Web.[10][11]

See also

References

  1. Nancy J. Yeager; Robert E. McGrath (1996). Web Server Technology. Google Books. ISBN 1-55860-376-X. Retrieved 22 January 2021.
  2. Patrick, Killelea (2002). Web performance tuning (2nd ed.). Beijing: O'Reilly. p. 264. ISBN 059600172X. OCLC 49502686.
  3. Zolfagharifard, Ellie (24 November 2018). "'Father of the web' Sir Tim Berners-Lee on his plan to fight fake news". The Telegraph. ISSN 0307-1235. Retrieved 1 February 2019.
  4. "History of Computers and Computing, Internet, Birth, The World Wide Web of Tim Berners-Lee". history-computer.com. Retrieved 1 February 2019.
  5. Macaulay, Tom. "What are the best open source web servers?". ComputerworldUK. Retrieved 1 February 2019.
  6. Fisher, Tim; Lifewire. "Getting a 502 Bad Gateway Error? Here's What to Do". Lifewire. Retrieved 1 February 2019.
  7. Fisher, Tim; Lifewire. "Getting a 503 Service Unavailable Error? Here's What to Do". Lifewire. Retrieved 1 February 2019.
  8. "What is a 502 bad gateway and how do you fix it?". IT PRO. Retrieved 1 February 2019.
  9. Handbook of digital forensics and investigation. Casey, Eoghan., Altheide, Cory. Burlington, Mass.: Academic Press. 2010. p. 451. ISBN 9780080921471. OCLC 649907705.CS1 maint: others (link)
  10. Vaughan-Nichols, Steven J. "Apache and IIS' Web server rival NGINX is growing fast". ZDNet. Retrieved 1 February 2019.
  11. Hadi, Nahari (2011). Web commerce security: design and development. Krutz, Ronald L. Indianapolis: Wiley Pub. ISBN 9781118098899. OCLC 757394142.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.