Resource path vs URL and rewriting links

Today I want to discuss an aspect of an AEM application which is rarely considered during development, but which usually becomes very important right before a go-live: the path element of a URL, and how it is constructed (either in its full version or in a shortened one).

Newcomers to the AEM world sometimes ask how the public URLs are determined and maintained; from their experience with older or other CMS systems, pages have an ID and this ID has to be mapped somehow to a URL.
Within AEM the situation is different, because the author creates a page directly in the tree structure of a site, and the name of the page can be mapped directly to a URL. So if an author creates a page /content/mysite/en/news/happy-new-year-2016, this page can be reached via https://HOST/content/mysite/en/news/happy-new-year-2016.html (in the simplest form).

From a technical point of view, the resource path is mapped to the path element of a URL. In many cases this is a 1:1 mapping (that means that the full resource path is taken as the path of the URL). Often "many" means "in development environments", because in production environments these kinds of URLs are long and contain redundant information, which is something you should avoid. A URL also contains a domain part, and this domain part often already carries information, so it isn't needed in the path anymore.
So instead of the full resource path we rather prefer a shortened URL path, and map only a subset of the resource path.

When mapping the resource path to a URL path you must be careful, because the other direction (the mapping of the URL path to the resource path) has to work as well, and the mapping must be unambiguous in both directions.

Such mappings (I often call the mapping "resource path to URL path" a forward mapping and the mapping "URL path to resource path" a reverse mapping) can be created using the /etc/map mechanism. In a web application you need both mappings:

  1. When a request is received, the URL path has to be mapped to a resource, so that the Sling resource processing can start.
  2. When the rendered page contains links to other pages, the resource paths of these pages have to be provided as URL paths.

(1) is done automatically by Sling if the correct ruleset is provided. (2) is much more problematic, because all references to resources provided by AEM have to be rewritten. All references? Generally speaking yes; I will discuss this later on.
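A minimal /etc/map ruleset for both directions could look like this (a sketch; the hostname and the content path are made up for illustration):

```
/etc/map
  + http                        [sling:Folder]
    + www.mysite.com            [sling:Mapping]
      - sling:internalRedirect = "/content/mysite/en"
```

With this entry a request for http://www.mysite.com/news/happy-new-year-2016.html is resolved to /content/mysite/en/news/happy-new-year-2016 (reverse mapping); because sling:internalRedirect contains a plain path (no regular expression), Sling can also apply the same entry backwards as a forward mapping and shorten resource paths when links are rewritten.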

This mapping can be done through the two corresponding API methods of the ResourceResolver: map() (the forward mapping) and resolve() (the reverse mapping).
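As a sketch of how these two ResourceResolver methods are used (the methods are real Sling API; the surrounding class is just an illustration and not runnable outside a Sling container):

```java
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;

public class MappingExample {

    // Forward mapping: resource path -> URL path, applying the /etc/map rules.
    public String toUrlPath(ResourceResolver resolver, String resourcePath) {
        // e.g. "/content/mysite/en/news.html" might become "/news.html",
        // depending on your ruleset
        return resolver.map(resourcePath);
    }

    // Reverse mapping: URL path -> resource, applying the same rules backwards.
    public Resource toResource(ResourceResolver resolver, String urlPath) {
        return resolver.resolve(urlPath);
    }
}
```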

You might wonder why you never use these two methods in your own code, even though I wrote above that all the links to other pages need to be rewritten. Basically you don't have to do this, because the HTML created by the rendering pipeline (including all filters) is streamed through the Sling Output Rewriting pipeline. This chain contains a rewriter rule which scans through the HTML and tries to apply a forward mapping to all links.

But it only runs on HTML output, and there are other elements of a site which contain references to content stored in AEM as well, for example JavaScript or CSS files. References contained in these files are not rewritten, but delivered as they are stored in the repository. In many cases the setup is designed in a way that a 1:1 mapping still works; but that's not always possible (or wanted).

So please take this as advice: do not hardcode a path in CSS or JavaScript files if there's a chance that these paths need to be mapped.
(Rewriting formats other than HTML is not part of AEM itself; of course you can extend the defaults and provide a rewriting capability for JavaScript and CSS as well, but that's not an easy task.)

The question is whether you really have to rewrite all resource paths at all. In many cases it is OK just to have the URLs of the HTML pages looking nice (because these are the only URLs which are displayed prominently). All the other resources (e.g. assets, CSS and JavaScript files) don't need to be mapped at all; for them the default 1:1 mapping can be used. Then you're fine, because you only have to do the mapping once in /etc/map and that's it.

The Apache mod_rewrite module also offers very flexible ways to do the reverse mapping, but it lacks a way to apply a forward mapping to the HTML pages (as the Sling Output Rewriter does). So mod_rewrite is a cool tool, but it is not sufficient to completely cover all aspects of resource mapping.

How can I avoid Oak write/merge conflicts?

Sandeep asked in a comment to the previous posting:

Even if your sessions are short, and you have made a call to session.refresh(true), it is possible that someone made a change before you did a save(), right? So, what is the best practice in dealing with such a scenario?
Keep refreshing in a loop, until your save() is successful or until you hit an assumed maximum number of attempts?
Or is there any other recommended best practice?

That's a good question, but also a question with no fully satisfying answer. Whenever you want to modify nodes in the repository, there's a chance that the same nodes are changed in parallel, even if you change only a single node or property. In reality, this rarely happens. Most features in AEM are built in a way that each step (or each workflow instance, each Sling job, each replication event, etc.) has its own nodes to operate upon. So concurrency does not normally provoke a situation where multiple concurrent operations compete for writes on a single node.
So from a coding perspective it should be possible to avoid such situations. Not only because of this kind of problem, but also for performance and debugging reasons.

Something you cannot deal with in this way are author changes. If two authors decide to change a page at the same time, it's likely that they screw up the page. You can hardly avoid that just with code. But if you cannot guarantee from a work-organization point of view that no two persons work on the same page at the same time, teach your authors to use the "lock" feature. It basically prevents other authors from making changes temporarily. According to the Oak documentation it isn't suited for short-lived locks (in a database sense), but rather for longer-lived locks (an author locks a page to prevent other authors from editing it).

So, to come to a conclusion regarding Sandeep's question: it depends. If you designed your application carefully, you should rarely get into a situation where multiple threads compete for a node. Whenever it occurs it should be considered a bug, analyzed and then fixed.
But there can be other cases where a retry approach makes sense. In any case I would retry a few times (e.g. 10) and then abort the operation with a meaningful log message. But I don't think that it's good to retry indefinitely.
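Such a bounded retry could be sketched like this (a generic helper, not AEM API; in real code the operation would be "refresh the session, apply the change, save()", and the caught exception would be an InvalidItemStateException):

```java
/** Hypothetical helper: retries an operation (e.g. a JCR save) a bounded
 *  number of times instead of looping indefinitely. */
public class BoundedRetry {

    /** Stand-in for "refresh session, apply change, call save()". */
    public interface RepositoryWrite {
        void run() throws Exception;
    }

    /**
     * Runs the write, retrying up to maxAttempts times.
     * Returns the number of attempts used; rethrows the last
     * exception if all attempts fail.
     */
    public static int retry(RepositoryWrite write, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                write.run();
                return attempt;
            } catch (Exception e) { // real code: catch InvalidItemStateException
                last = e;
                // real code: session.refresh(false) to clean the transient space,
                // then log the conflict before the next attempt
            }
        }
        throw last; // abort with a meaningful message instead of looping forever
    }

    public static void main(String[] args) throws Exception {
        // Simulated write that fails twice before succeeding.
        final int[] calls = {0};
        int attempts = retry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new IllegalStateException("merge conflict");
        }, 10);
        System.out.println(attempts); // prints 3
    }
}
```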

AEM anti pattern: Long running sessions

AEM 6.x comes with Apache Oak, which is built on the MVCC principle. MVCC (multi-version concurrency control) gives you a view on a certain state of the repository. This state does not change; it can be considered immutable. If you want to perform a change on the repository, the change is performed against this state and then applied (merged) to the HEAD state (which is the most current state within the repository). This merge is normally not a problem if the state of the session doesn't differ too much from the HEAD state; in case the merge fails, you get an OakMerge exception.

Note that this is a change compared to Jackrabbit 2.x and CRX 2.x, where the state of a session was always updated, and where these merge exceptions never happened. This also means that you might need to change your code to make it work well with Oak!

If you have long-running sessions, the probability of such an OakMerge exception gets higher and higher. This is due to other changes happening in the repository, which could also affect the areas where your session wants to perform its changes. This is a problem especially in cases where you run a service which opens a session in the activate() method, closes it in deactivate(), and uses it in between to save data to the repository. These are rare cases (because they have been discouraged for years), but they still exist.

The problem is that if a save() operation fails due to such an OakMerge exception, the temporary space of that session is polluted. The temporary space of a session is heap memory where all the changes are stored which are about to get saved. A successful save() cleans that space afterwards, but if an exception happens this space is not cleaned. And if a save() fails because of such an OakMerge exception, any subsequent save() on that session will fail as well.

Such an exception could look like this (relevant parts only):

Caused by: javax.jcr.InvalidItemStateException: OakState0001: Unresolved conflicts in /content/geometrixx/en/services/jcr:content
at org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(
at org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(
at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(
at org.apache.jackrabbit.oak.jcr.session.ItemImpl$4.perform(
at org.apache.jackrabbit.oak.jcr.session.ItemImpl$4.perform(
at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.perform(
at org.apache.jackrabbit.oak.jcr.session.ItemImpl.perform(

There are 2 ways to mitigate this problem:

  • Avoid long-running sessions and replace them by a number of short-lived sessions. This is the way to go and in most cases the easiest solution to implement. It also avoids the problems coming with shared sessions.
  • Add code to call session.refresh(true) before you do your changes. This refreshes the session state to the HEAD state, and exceptions are less likely then. If you run into a RepositoryException you should explicitly clean up your transient space using session.refresh(false); then you lose your changes, but the next save() will not fail because of them. This is the path you should choose when you cannot create new sessions.
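The second option could be sketched like this (JCR API; the change itself is a placeholder, and this cannot run outside a repository):

```java
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class RefreshBeforeSave {

    /** Applies a change on a (long-living) session, refreshing before the
     *  change and cleaning the session's temporary space if the save fails. */
    public void writeSafely(Session session) throws RepositoryException {
        // bring the session up to the current HEAD state, keeping pending changes
        session.refresh(true);
        try {
            // ... perform your changes here, e.g. setProperty() on some node ...
            session.save();
        } catch (RepositoryException e) {
            // drop the polluted temporary space so later save() calls can succeed
            session.refresh(false);
            throw e;
        }
    }
}
```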

Changed Sling bundles in AEM 6.0 Servicepack 3

Servicepack 3 for AEM 6.0 is now available (releasenotes).

Here’s the complete list of sling bundles in stock AEM 6.0 and the various levels of servicepacks. Bundles which are not available with a specific version are listed as “-“, version numbers marked in red appeared first in this servicepack. Where possible I added the links to the changes in servicepack 3.

The most notable change for SP3 from a Sling perspective is the switch to the fsclassloader (see 6D's blogpost for it) for all scripting languages. So the compiled JSPs no longer reside inside the repository (/var/classes), but are now placed in the filesystem.

[Table: Sling bundle symbolic names with their versions in AEM 6.0, SP1, SP2 and SP3; version numbers that first appeared in a given service pack were highlighted, with changelog links where available. The symbolic-name column did not survive extraction, so the version columns are omitted here.]

The problems of multi-tenancy: tenant separation and "friendly tenants"

In the last articles (1, 2) I covered some aspects of multi-tenancy which are very likely to occur in AEM projects (but are not restricted to such projects). I stressed that there are a lot of aspects which have the potential to cause trouble on a non-technical level. But you cannot draw a clear line between the business/political aspects and the technical aspects, because they often fuel each other. Implementing multi-tenancy is a political decision which implies design, implementation and operational decisions, and these are not for free; the costs in turn heat up the business discussion about the platform. And then the call goes back to the architect not to implement the full stack, but only a reduced one, which can cause trouble on the business side again... There are a lot of these stories, and they only prove that you can hardly make a decision in one domain without impacting the other.

But let's focus now on the technical level and how it is influenced by multi-tenancy. In any multi-tenant system the full and clean separation of the tenants is the ultimate goal. That means: no shared resources beyond the ones which are intentionally shared. At the least, the usage of the shared resources must be restricted in a way that one tenant cannot negatively influence the other tenants, or that the influence of any single tenant on the others is marginal and always manageable. On the other hand it should be cost-effective: a multi-tenant system for N tenants must be cheaper than N non-multi-tenant systems (a single system for each tenant).

(If you reach this point it might make sense to evaluate whether the additional cost of making a system capable of operating multiple tenants outweighs the cost and complexity of managing more systems. If that's the case, stop here, create a single-tenant application and deploy it to multiple systems.)

The simplest approach to multi-tenancy is to host all tenants (or as many as possible) on a single system. As all these tenants now live within the boundaries of a single instance (a single JVM, a single hardware/virtual machine), they share all the hardware resources (CPU, memory, I/O), but also the software resources (threads, queues, caches, "the application"). This sharing means foremost that the maximum performance of each tenant is limited, under the assumption that other tenants need resources at the same time too.
This scenario (let's call it "friendly tenants") is often encountered in enterprises where multiple brands, divisions or countries are hosted on a single platform. But it has some implications:

  1. All tenants share the same application.
  2. Downtime for platform upgrades/maintenance/bugfixes affects all tenants.
  3. Platform failures affect all tenants.

These limitations can be quite heavy. While limitations 2 and 3 are accepted in most cases (given that the platform is otherwise stable and performant), the limitation of the development scope is often considered a problem. It enforces that all changes a tenant demands go into the platform; thus the requirements of all tenants are prioritized from a platform perspective ("which features bring the most benefit for all tenants?"), so the priorities of a single tenant don't carry much weight.
Of course you can allow custom development for individual tenants (maybe even by multiple development parties), but then the application must be designed and implemented carefully to avoid "friendly fire" (changes for one tenant affecting other tenants as well).

This "friendly tenant" scenario is likely to have the lowest costs, as the usage of resources is low compared to the number of tenants, and the individual requirements of single tenants are often given lower priority than the requirements shared by a set of tenants. With AEM you can implement such a scenario quite well using ACLs. The MSM gives you a good tool when the tenants also share content.

Dispatcher and shared content

In the September session of the Ask the Expert series (passcode: "Dispatch") I talked about problems arising out of the requirement to deal with multiple sites, each site having its own domain, where a Sling mapping is used to map the long repository paths to shorter URLs (like mapping /content/geometrixx/en/services.html to a much shorter URL). I already tried to deal with this question in the Q&A part of the session, but I will cover it here in more depth.

In the session on AEM dispatcher setups there was a question how to deal with shared content. If you do a straightforward configuration of the dispatcher and map a shared content path (be it assets or pages) into the site structure of a site, the content is cached at this location in the dispatcher cache, but the invalidation happens only once, at the "original" path. So the content within the mapped paths in the site structure is not invalidated at all.
This is a problem, but you can look at it from more than one angle.

The first question is whether you really need to share this content at all. I am not an SEO expert, but from what I've heard, having duplicate content on multiple domains gives you a negative score for your page rank. Also, from my point of view, at some point the need arises to customize this shared content per tenant, which often leads to copying a shared page into the site and customizing it there, essentially not using the shared content anymore. If there's a risk of running into this problem, you should think of using the MSM to avoid this "copy-and-adapt" workflow and make it manageable. In that case you have true local copies and you don't need to map the pages into the site content structure, avoiding the caching and invalidation problem completely.

The second question is whether it makes sense to offload all this shared content onto a dedicated "shared content" domain, which is used by all sites; in that case the need to duplicate is avoided as well.

These are two suggestions to avoid some of the problems of the "shared content" approach. If you cannot use them, you have to go the way of duplicate content at the dispatcher level, with all the implications it has, mainly:

  • potential SEO problems because of duplicate content
  • increased disk consumption on dispatcher level

To deal with the problem of duplicate content and invalidation, you have to create a custom invalidation logic which is aware of your special setup and which does the invalidation accordingly. See the dispatcher documentation regarding this topic.
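The core of such a custom logic is issuing additional invalidation requests, one per location where the shared content is mapped into a site structure. A sketch of a single request (assuming the standard dispatcher invalidation endpoint /dispatcher/invalidate.cache; host name and trigger mechanism are up to your setup, and in a real implementation this would be driven by a replication event):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class CustomInvalidation {

    /**
     * Sends an invalidation request for a given (mapped) path to a dispatcher.
     * Call this once per mapped location of the shared content.
     */
    public static int invalidate(String dispatcherHost, String mappedPath) throws Exception {
        URL url = new URL("http://" + dispatcherHost + "/dispatcher/invalidate.cache");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("CQ-Action", "Activate");   // invalidation action
        conn.setRequestProperty("CQ-Handle", mappedPath);   // path to invalidate
        conn.setRequestProperty("Content-Length", "0");
        try (OutputStream out = conn.getOutputStream()) {
            // empty body; the headers carry the invalidation information
        }
        return conn.getResponseCode();
    }
}
```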

1000 nodes per folder and Oak orderable nodes

Every now and then the question comes up how many child nodes are supported in JCR. While the technically correct answer is "there is no limit", in practice there are some limitations.

In CRX 2.x nodes are always ordered; even unordered nodes are treated as if they were ordered, which makes the difference nearly non-existent. [Thanks Justin for making this clear!] This means that the order needs to be maintained on all operations, including add and remove of sibling nodes. The more child nodes a node has, the more time it takes to maintain this list.

So what about this "1000 child nodes" limit? First of all, this number is arbitrary :-) But when you use CRXDE Lite, browsing a node with lots of child nodes gets really slow, mostly because of the time it takes the JavaScript to render them. And of course the performance of add and remove operations degrades linearly. Also, you hardly ever have cases where you would need more than 1000 child nodes.

For reading nodes, on the other hand, there is no impact on performance. So it is not a problem to have 6000 nodes in /libs/wcm/core/i18n/en, because you only read these nodes, but you don't change them.

Nevertheless this "limit" can be cumbersome, especially if you don't need the feature of ordered child nodes. And it is not a hard threshold: the cost of maintaining the order is there (at a lower level) with fewer child nodes already.

With Apache Oak this has changed. With Oak, child nodes are not ordered unless their parent has a node type which supports ordering.

To illustrate the difference between sling:Folder and sling:OrderedFolder, I did a small test. I wrote a small benchmark which creates 5000 nodes, then adds more nodes, does random reads and deletes them afterwards. For every operation a single node is created or deleted, followed by a save(). (Sourcecode)

Operation              | sling:Folder | sling:OrderedFolder
Create 5000 nodes      | 6124 ms      | 17129 ms
Random read 500 nodes  | 2 ms         | 9 ms
Add 500 nodes          | 112 ms       | 564 ms

This small benchmark (executed on a 2014 MacBook Pro with SSD, AEM 6.0, TarMK, Oak 1.0.0) shows:

  • Adding lots of child nodes to a node is much faster when you use a non-ordering node type.
  • Random reads are also faster; obviously Oak can use more efficient data structures than a list if it doesn't need to maintain the ordering.

The factor of 3-4 is quite significant. Of course the benefit is smaller if you have fewer child nodes.
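The core of such a benchmark could be sketched like this (JCR API; the parent path and node names are made up, error handling and cleanup are omitted, and it needs a running repository to execute):

```java
import javax.jcr.Node;
import javax.jcr.Session;

public class ChildNodeBenchmark {

    /**
     * Creates `count` child nodes below a new parent of the given folder type
     * (e.g. "sling:Folder" vs "sling:OrderedFolder") and returns the elapsed
     * time in milliseconds.
     */
    public static long createChildren(Session session, String parentPath,
                                      String folderType, int count) throws Exception {
        Node root = session.getNode(parentPath);
        Node parent = root.addNode("benchmark", folderType);
        session.save();

        long start = System.currentTimeMillis();
        for (int i = 0; i < count; i++) {
            parent.addNode("node-" + i, "nt:unstructured");
            session.save(); // one save per created node, as in the test above
        }
        return System.currentTimeMillis() - start;
    }
}
```

Running this once with "sling:Folder" and once with "sling:OrderedFolder" (and analogous loops for random reads and deletes) reproduces the comparison in the table above.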