The output-cache

Day Communique offers another level of caching for content: the so-called output-cache. It caches already rendered pages and stores them in the CQ instance itself. The big advantage over the dispatcher-cache mentioned earlier is that the output-cache knows about the dependencies between handles, because it is part of CQ and has access to this information.

You can combine the dispatcher-cache and the output-cache. As long as no invalidation happens, the dispatcher-cache answers most requests for static content and doesn’t bother CQ with them. An invalidation may flush a larger part of the dispatcher-cache, but many of the requests that are then forwarded to CQ can still be answered by the output-cache. So using both caches to cache all content sounds quite attractive, and in some cases it is.

But the output-cache has some drawbacks:

  • All dependencies are kept in memory (in the Java heap). If you have a huge amount of content, the dependency tracking can eat up your memory. There is a hotfix available for CQ 4.2 which limits the cache size, but it has some problems.
  • The rendered pages are kept on disk, so besides the content itself you need disk space for the rendered output.
  • Probably the most important point: the output-cache makes CQ use only one CPU! Apparently there is a mutex in the output-cache code which serializes all rendering requests whenever the results are to be cached. I cannot say that for sure, but it is what I observed on CQ instances using the output-cache. After we disabled the output-cache — which means adding a “deny all” rule to its configuration — the phenomenon was gone and CQ used all available CPUs.
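The “deny all” rule mentioned in the last point might look roughly like this. This is only a sketch in the `.any` rule notation familiar from the dispatcher; the exact file location, section name, and property names for the output-cache are assumptions, so verify them against the configuration of your own CQ instance:

```
# Hypothetical output-cache rule section (syntax assumed to mirror
# the dispatcher's .any rule format -- verify on your instance).
/rules
  {
  # Deny everything: effectively disables the output-cache.
  /0000
    {
    /glob "*"
    /type "deny"
    }
  }
```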

Of course you don’t need to configure the output-cache to cache all rendered pages. You may want to cache only those pages which are requested often and frequently invalidated in the dispatcher-cache, even though they haven’t actually changed.
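Such a selective setup could deny caching by default and allow only the hot, rarely changing sections. Again, this is a sketch in assumed dispatcher-style `.any` rule syntax, and the paths are made-up examples:

```
# Hypothetical selective output-cache rules (syntax and paths assumed).
/rules
  {
  # Deny by default ...
  /0000 { /glob "*"                /type "deny"  }
  # ... but cache the frequently requested, rarely changing sections.
  /0001 { /glob "/content/news/*"  /type "allow" }
  /0002 { /glob "/content/home*"   /type "allow" }
  }
```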

