The World Wide Web environment is the illusion of a single system produced by several cooperating applications, held together by three simple mechanisms. A minimal client-server protocol, HTTP, lets clients fetch data from servers, either directly or by activating remote processes; a simple markup language, HTML, collects data into pages (or documents) containing several types of formatting instructions, hypertext links, and form-like interactive objects; and a world-wide naming scheme, URI, together with its better-known subset, URL, provides unique global identification of data collections and other network resources. The protocol governing client-server interactions is so simple, unsafe, and inefficient that it is often necessary to introduce a proxy, an intermediate filter between WWW clients and servers that collects all HTTP requests from the clients, forwards them to the servers, and delivers the results back to the clients. Proxies are used to cache frequent responses or for security reasons.
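The minimal, stateless client-server exchange described above can be sketched in a few lines of Python. The toy server, port, path, and page body below are illustrative assumptions, not part of any real deployment; the point is only the shape of the protocol: one textual request, one textual response, then the connection closes.

```python
# Minimal sketch of an HTTP/1.0-style exchange over raw sockets.
# A toy in-process server answers a single GET with a fixed HTML page.
import socket
import threading

def serve_once(sock):
    """Accept one connection and answer it with a fixed HTML page."""
    conn, _ = sock.accept()
    conn.recv(1024)  # the request line and headers, e.g. "GET /index.html HTTP/1.0"
    body = "<html><body>Hello, Web</body></html>"
    conn.sendall(
        ("HTTP/1.0 200 OK\r\n"
         "Content-Type: text/html\r\n"
         f"Content-Length: {len(body)}\r\n"
         "\r\n" + body).encode("ascii"))
    conn.close()

# Bind the toy server to an ephemeral local port and run it in a thread.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client side: send one request, read the response until the server
# closes the connection -- the stateless pattern the text describes.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /index.html HTTP/1.0\r\nHost: example\r\n\r\n")
chunks = []
while True:
    data = client.recv(4096)
    if not data:
        break
    chunks.append(data)
client.close()

response = b"".join(chunks).decode("ascii")
headers, _, page = response.partition("\r\n\r\n")
status_line = headers.splitlines()[0]
```

A proxy, in these terms, is simply a process that plays the client role toward the origin server and the server role toward the original client, which is what lets it cache or filter the traffic passing through it.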
Essentially, two problems underlie the limits of the current World Wide Web implementation:
Ad-hoc extensions are being studied to address these problems. In our opinion, however, a more general software architecture could open the way for the World Wide Web to offer scalable support to innovative applications.