LINK rel=subresource

What is LINK prefetching?

Link prefetching is a browser mechanism for downloading resources ahead of need.  The server provides hints to the browser, and the browser can consult its cache and take action based on those hints.  "Link rel=prefetch" has been implemented in Mozilla; a writeup of that implementation can be found in the Mozilla Link Prefetching FAQ.

The existing link prefetching mechanism uses a standard HTTP Link header and defines semantics for the link relation type "prefetch".  An updated definition of the Link header has been proposed in RFC 5988 (Web Linking).


A link element embedded within HTML content:

        <link rel="prefetch" href="jquery.js">

A Link header at the HTTP protocol level:

        Link: <jquery.js>; rel=prefetch
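Both forms above carry the same information.  As an illustrative sketch (not part of any spec or library), a server-side helper that formats the HTTP-level form might look like:

```python
# Illustrative helper (hypothetical, not from any real library) that
# formats a Link header value in the RFC 5988 style shown above.
def link_header(url: str, rel: str) -> str:
    """Format one Link header value: <url>; rel=relation."""
    return f"<{url}>; rel={rel}"

# Matches the HTTP-level example above:
print(link_header("jquery.js", "prefetch"))   # <jquery.js>; rel=prefetch
```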

What is LINK rel=subresource?

"LINK rel=subresource" provides a new link relation type with different semantics from LINK rel=prefetch.   While rel=prefetch provides a low-priority download of resources to be used on subsequent pages, rel=subresource enables early loading of resources within the current page.  Because the resource is intended for use within the current page, it must be loaded at high priority in order to be useful.

What are the semantics?

When a browser discovers a rel=subresource link, it may immediately check its cache for that resource and request it from the server if the resource needs to be loaded.  The subresource should be loaded with the same semantics as a resource discovered within the page: the same HTTP header construction and the same priority it would otherwise use.

Once the rel=subresource request has been made, the browser may later discover the same resource as it is used within the page.  The browser must recognize that the resource has already been requested and must not request it twice.

Example Response:
 1   HTTP/1.1 200 OK
 2   Cache-Control: private, max-age=0
 3   Content-Encoding: gzip
 4   Content-Length: 13847
 5   Content-Type: text/html; charset=UTF-8
 6   Date: Thu, 13 Jan 2011 17:47:12 GMT
 7   Expires: -1
 8   Server: FastServer1.0
 9   Link: <logic.js>; rel=subresource
10
11   <html>
12   <body>
13     [ lots of content here ]
14     <script src="logic.js" type="text/javascript"></script>
15   </body>
16   </html>

In the above example, line 9 informs the browser of the subresource, and at that point the browser can request it.  Meanwhile, the main resource continues downloading, creating a race between the browser discovering the resource on line 14 and the completion of the subresource request initiated by line 9.  The browser should recognize that the rel=subresource hint has already initiated the request and must not issue a duplicate.
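The de-duplication rule above can be sketched as a map of in-flight requests keyed by URL; a later in-page discovery joins the existing request rather than starting a second one.  This is a simplified illustration (real browsers also key on credentials, request headers, and more):

```python
# Simplified sketch of the de-duplication rule: once a rel=subresource
# fetch has started, a later in-page discovery of the same URL must
# reuse the existing request, never start a second one.
class ResourceLoader:
    def __init__(self):
        self.in_flight = {}   # url -> request handle
        self.fetch_count = 0  # number of actual network requests issued

    def request(self, url):
        if url in self.in_flight:          # already requested via the hint
            return self.in_flight[url]     # reuse it; do not fetch twice
        self.fetch_count += 1
        handle = object()                  # stand-in for a network request
        self.in_flight[url] = handle
        return handle

loader = ResourceLoader()
hint = loader.request("logic.js")      # triggered by Link: rel=subresource
parsed = loader.request("logic.js")    # triggered by <script src="logic.js">
assert hint is parsed and loader.fetch_count == 1
```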

Further, unlike rel=prefetch links, a link rel=subresource load may be cancelled if the user cancels the page before it finishes loading.  Link rel=prefetch resources, by contrast, intentionally load resources for future pages; cancelling those requests on page cancellation would defeat the purpose of link rel=prefetch.

Why bother loading resources early?

Web applications often have long server think times: generating the result of a page can easily take over 100ms.  For example, Google searches often report latencies of 150-250ms.  Because of HTML's serial loading process, even static, always-required resources often cannot be discovered until the dynamic portion of the page has been returned.  If that dynamic portion refers to static elements, the loading of those elements is delayed by the server think time.

Can't Web Applications Accomplish this through other means?

To some degree, yes.  With complex JavaScript, web pages can do this today.  Server push, for instance, can be approximated by converting images to base64 and sending them to the client as data URLs.  However, this is complicated for the application developer and also incurs a 4/3 bandwidth expansion for the image.  Similarly, application developers could rewrite their pages to move all resources to the top and then reassemble the pages through JavaScript, but this is a difficult optimization for humans to implement.
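The 4/3 expansion follows directly from how base64 works: every 3 input bytes become 4 ASCII characters, so an inlined data URL costs roughly a third more bytes than sending the image raw.  A quick check:

```python
import base64

# Base64 encodes every 3 input bytes as 4 ASCII characters, so a
# data: URL carrying an image costs ~4/3 the raw image size.
raw = bytes(3000)                      # a stand-in 3000-byte "image"
encoded = base64.b64encode(raw)
assert len(encoded) == 4000            # 3000 * 4/3, the expansion above
data_url = "data:image/png;base64," + encoded.decode("ascii")
```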

However, link header support for rel=subresource also enables a new class of optimizations.  With link header support, servers can learn about the content they serve and automatically insert link rel=subresource headers dynamically at low cost.  By leveraging rel=subresource this way, application developers are freed from manually optimizing these resource loads.
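As a hedged sketch of that "server learns its own content" idea, a hypothetical middleware (the class and method names here are invented for illustration) could scan responses it has served for script references and, on later requests for the same page, emit matching Link headers automatically:

```python
import re

# Hypothetical middleware sketch: learn the subresources of each page
# by scanning served HTML for <script src="..."> references, then emit
# Link: rel=subresource headers for that page on later requests.
class SubresourceHintMiddleware:
    SCRIPT_SRC = re.compile(r'<script[^>]+src="([^"]+)"')

    def __init__(self):
        self.learned = {}  # page path -> list of subresource URLs

    def observe(self, path, html):
        """Record the script subresources seen in a served page."""
        self.learned[path] = self.SCRIPT_SRC.findall(html)

    def headers_for(self, path):
        """Headers to attach to the next response for this page."""
        return [("Link", f"<{url}>; rel=subresource")
                for url in self.learned.get(path, [])]

mw = SubresourceHintMiddleware()
mw.observe("/app", '<html><body><script src="logic.js"></script></body></html>')
assert mw.headers_for("/app") == [("Link", "<logic.js>; rel=subresource")]
```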

What about Cookies?  Couldn't they affect the load of the resource?

If the fetching of a subresource depends on a cookie set within the content, servers should not use link rel=subresource.  Note that this problem exists in browsers today: browsers already implement page pre-scanning, quickly scanning forward through a page to find JS files to load even though intervening content could alter the cookies.

Are there any benchmarks?

Since this feature has not yet been implemented in any browser, benchmarks are forthcoming.  In 2009, Google researched a nearly identical concept called "X-Subresource"; in those experiments, Google observed 10-25% gains on some pages.  More formal study will be conducted when possible.