Friday, April 21, 2006

AJAX hammers servers?

I read a question posed on James Governor's blog and Tim Bray's always insightful response to it, yet was left feeling unfulfilled. It led me to this blog, which provided some further information. Perhaps it is the jetlag, or perhaps I just haven't abused enough caffeine today, but I feel compelled to write more.

The author does not define several of his terms, which makes his statements difficult to confirm. First of all, there is no such thing as a spec for Web 2.0, so his claim that RIAs are a crucial aspect of it is a flawed assumption from a pragmatic standpoint, although that is somewhat orthogonal to the question asked.

So let's look at what AJAX does and why it is favored. In the past, if you wanted a webpage that displayed up-to-date information, you had to force the page to refresh itself every few minutes or prompt the user to do so. Since HTTP is stateless, that meant firing off an HTTP GET request every few minutes to retrieve the entire page. The easiest way to do this was to use the meta refresh element of HTML 4.0 Transitional. Since HTTP GET is idempotent, the server dutifully returns the full page each time it gets the request. Tim and I had a conversation about this back in 1999-2000, when he called it TAXI (the father of AJAX). The question posed was "wouldn't it be much more efficient to only refresh parts of a page rather than the whole page?" Out of that necessity, AJAX was born. It was actually Microsoft that cemented it by putting the XMLHttpRequest object into IE.
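
To make that contrast concrete, here is a minimal sketch (TypeScript for the browser) of refreshing just one fragment instead of the whole page; the /stock-quote URL and the "quote" element id are hypothetical stand-ins for whatever the page actually serves:

    // The pre-AJAX alternative reloaded everything, e.g.
    //   <meta http-equiv="refresh" content="300">
    // The AJAX approach fetches and replaces only the fragment that changed.
    function refreshQuote(): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/stock-quote", true); // asynchronous GET for one small fragment
      xhr.onreadystatechange = () => {
        if (xhr.readyState === 4 && xhr.status === 200) {
          // Replace only the region of the page that changed, not the whole document.
          const target = document.getElementById("quote");
          if (target) {
            target.innerHTML = xhr.responseText;
          }
        }
      };
      xhr.send();
    }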

Now let's separate business needs from technology. If you have a business requirement to provide current, up-to-date information to your end users via the internet, you are going to do it, regardless of the underlying technology and the cost to your servers. From a purely pragmatic standpoint, one should want to do this in the most efficient manner possible. There are several models available.

The first model is that the server "pushes" information to clients when some event happens (perhaps a stock price changes). The problem with this is that the server often has to dispatch a large number of concurrent messages when that event occurs, even if the clients themselves did not ask for them. Suboptimal - this can cause server overload, scare small children and bruise fruit.
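
A rough sketch of that fan-out, with a hypothetical Client interface and connectedClients list standing in for whatever transport the server really uses:

    // Hypothetical push fan-out: every price change triggers one message
    // per connected client, all at roughly the same instant.
    interface Client {
      send(message: string): void; // assumed transport-specific send helper
    }

    const connectedClients: Client[] = []; // populated elsewhere in a real server

    function onPriceChange(symbol: string, price: number): void {
      const message = JSON.stringify({ symbol, price });
      // The burst: one outbound message per client, whether they wanted it or not.
      for (const client of connectedClients) {
        client.send(message);
      }
    }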

The second model is the client-side pull. This usually makes more sense, given that the client controls the request frequency, although the service's architects determine the content size and the policies for requests. It is much easier on the server to balance its outbound messages, given that not all clients are likely to make their requests at exactly the same time.
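
A minimal pull sketch, reusing refreshQuote() from the earlier sketch; the 30-second interval and the small random jitter are arbitrary illustrative choices, not a recommendation:

    // Client-side pull: each browser asks on its own schedule.
    declare function refreshQuote(): void; // defined in the earlier sketch

    const basePollMs = 30000;
    const jitterMs = Math.floor(Math.random() * 5000); // spreads clients out a little

    // Because every client runs its own timer, requests arrive spread over time
    // rather than in one synchronized burst.
    setInterval(refreshQuote, basePollMs + jitterMs);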

Reloading only part of a page, or even just the data for that part of the page, is much more efficient than having to load the entire page, or the data and presentation aspects of a component of the page. If, for example, a full page weighs roughly 60 KB while the changed fragment is only a couple of kilobytes, each refresh moves a small fraction of the bytes. Therefore, I would state that AJAX (or other AJAXian methodologies) is probably the most efficient way to handle the business requirements placed on the web today.

If you were to ban AJAX from the web, we would have to revert to full-page reloads, which would certainly be more bandwidth- and processor-intensive than what AJAX enables.

My opinion - online gamers, porn surfers and MP3 downloaders are all more likely to cause scalability problems than AJAX developers are. Carefully thought-out architecture should be used where possible. AJAX solves more problems than it creates.

Duane

1 comment:

  1. Not to be contrarian...but...

    Sometimes the largest hit on a server is in the connection setup, rather than the actual transfer of information. An AJAX application will make many smaller requests synchronously or asynchronously as opposed to monolithic requests for an entire data set. If there is an extremely large number of connections at once, but not necessarily considerable throughput...the server could appear "hammered", perhaps because there are too many threads open at once, or worse, forked processes.

    Some sites will use PHP in CGI wrap mode, which causes a new process to be spawned for each connection, and if there is process throttling in effect (common in shared web hosting environments) ... a server could become totally unresponsive.
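
    A minimal sketch of one mitigation for the connection pile-up described here, again assuming the hypothetical /stock-quote endpoint from the sketches above: each client refuses to open a new connection while one is still in flight, so slow responses cannot multiply into ever more open threads or processes.

        // Guarded polling: never start a new request until the previous one finished.
        let requestInFlight = false;

        function guardedPoll(): void {
          if (requestInFlight) {
            return; // previous connection still open; skip this tick
          }
          requestInFlight = true;
          const xhr = new XMLHttpRequest();
          xhr.open("GET", "/stock-quote", true);
          xhr.onreadystatechange = () => {
            if (xhr.readyState === 4) {
              requestInFlight = false; // connection closed, safe to poll again
            }
          };
          xhr.send();
        }

        setInterval(guardedPoll, 30000); // illustrative interval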

