Interview with Cedric Hüsler in the AEM podcast

Peter and Joey from the AEM podcast recently published their interview with Cedric Hüsler. Check out part 1, part 2 and part 3; there are a lot of interesting statements in there.

Cedric Hüsler is the director of product management for AEM at Adobe and a veteran of the WCM market. Although such a title sometimes suggests something different, Cedric is still very technical (at least sometimes he is, first-hand experience :-)), and this interview is a must-listen for everyone who is interested in the way AEM went and will go.

Thanks Peter and Joey for this podcast!

Creating the content architecture with AEM

In the last post I tried to describe the difference between information architecture and content architecture; and from an architectural point of view the content architecture is quite important, because your application design will emerge based on it. But how can you get to a stable and well-thought-out content structure?

Well, there’s no bullet-proof approach for it. When you design the content architecture for an AEM-based application, it’s best to have some experience with the hierarchical structure offered by the repository. I will try to outline a process which might help you get there.
It’s not a definite guideline and I will never guarantee that it will work for you, as it is just based on my experience with the projects I did. But I hope that it will give you some input and can act as a kind of checklist for you. My colleague Alex Klimetschek did a presentation about it at the adaptTo() conference 2012.

The tree

But before we start, I want to remind you of the fact that everything you do has to fit into the JCR tree. This tree is typically a big help, because we often think in trees (think of decision trees, divide-and-conquer algorithms, etc.), and URLs are organized in a tree-ish way as well. Many people in IT are familiar with the hierarchical way filesystems are organized, so it’s both a comfortable and an easy-to-explain approach.

Of course there are cases where it makes things hard to model; if you hit that problem, you should try to choose a different approach. Building any n:m relation in the AEM content tree is counter-intuitive, hard to implement and typically not really performant.

Start with the navigation

Coming from the information architecture you typically have some idea of how the navigation of the site should look. In a typical AEM-based site, the navigation is based on the content tree; that means that traversing the first 2-3 levels of your site tree will create the navigation (tree). If you map it the other way around, you can get from the navigation to the site tree as well.
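
To illustrate this, here is a minimal sketch (not taken from any particular project) of how a navigation component could derive the first navigation level simply by listing the children of the site root page; the class name and the site root are made up for this example:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import com.day.cq.wcm.api.Page;
import com.day.cq.wcm.api.PageFilter;

public class NavigationBuilder {

  /**
   * Collects the pages forming the first navigation level by listing the
   * children of the site root page (e.g. /content/mysite, a made-up path).
   * The default PageFilter skips invalid and hidden pages.
   */
  public List<Page> buildTopNavigation(Page siteRoot) {
    List<Page> navigation = new ArrayList<>();
    Iterator<Page> children = siteRoot.listChildren(new PageFilter());
    while (children.hasNext()) {
      navigation.add(children.next());
    }
    return navigation;
  }
}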

This definition definitely has an impact on your site, as the navigation is now tied to your content structure; changing one without the other is hard. So make your decision carefully.

Consider content-reuse

As the next step, consider the parts of the website which have to be identical, e.g. header and footer. You should organize your content in a way that these central parts are maintained once for the whole site, and that any change to them can be inherited down the content tree. When you choose this approach, it’s also very easy to implement a feature which allows you to change that content at any level and inherit the changed content down the tree, effectively breaking the inheritance at this point.
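
A minimal sketch of how such an inheritance could be implemented, using the InheritanceValueMap from the WCM commons API; the property name "footerText" is made up for this example:

import org.apache.sling.api.resource.Resource;

import com.day.cq.commons.inherit.HierarchyNodeInheritanceValueMap;
import com.day.cq.commons.inherit.InheritanceValueMap;

public class FooterTextProvider {

  /**
   * Reads the "footerText" property from the page's content resource; if it
   * is not set there, the lookup walks up the content tree until a page
   * defines it. Authoring the property again further down the tree
   * effectively breaks the inheritance at that point.
   */
  public String getFooterText(Resource pageContentResource) {
    InheritanceValueMap properties = new HierarchyNodeInheritanceValueMap(pageContentResource);
    return properties.getInherited("footerText", String.class);
  }
}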

If you are at this level, also consider dispatcher invalidation. Whenever you change such “centralized” content, it should be easily possible to purge the dispatcher cache; in the best case the activation of the changed content will trigger the invalidation of all affected pages (and not more!), assuming that you have your /statfileslevel parameter set correctly.

Consider access control

As a third step, let’s consider the already existing structure under the aspect of access control, which you will need on the authoring environment.
On smaller sites this topic isn’t that important, because you have only a single content team which maintains all the pages. But especially in larger organizations you have multiple teams, and each team is responsible for dedicated parts of the site.

When you design your content structure, overlay the content structure with these authoring teams, and make sure that you avoid any situation where a principal has write access to a page, but not to any of its child pages. While this is not always possible, try to follow these guidelines regarding access control (a small sketch follows after the list):

  • When looking from the root node of the tree down to a node on a lower level, always add more privileges, but do not remove them.
  • Every author for that site should have read access to the whole site.
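
To make the guidelines a bit more tangible, here is a minimal sketch using Jackrabbit’s AccessControlUtils; group names and paths are made up, and in a real project you would typically provision ACLs via your deployment tooling rather than code:

import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.security.Privilege;

import org.apache.jackrabbit.commons.jcr.AccessControlUtils;

public class SiteAclSetup {

  /**
   * Grants the whole authoring group read access to the complete site, and
   * only adds (never removes) write access for a team further down the tree.
   */
  public void setupAcls(Session session) throws RepositoryException {
    // every author can read the whole site
    AccessControlUtils.allow(session.getNode("/content/mysite"),
        "mysite-authors", Privilege.JCR_READ);

    // the press team additionally gets write access to its own subtree
    AccessControlUtils.allow(session.getNode("/content/mysite/en/press"),
        "mysite-press-authors", Privilege.JCR_READ, Privilege.JCR_WRITE);

    session.save();
  }
}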

If you have a very complicated ACL setup (and you’ve already failed to make it simpler), consider changing your content structure at this point, and give the ACL setup a higher priority than, for example, the navigation.

My advice at this point: try to keep your ACL setup very simple; the more complex it gets, the more time you will spend debugging your group and permission setup to find out what’s going on in a certain situation, and the harder it will be to explain it to your authors.

Multi-Site with MSM

Now that you have gone through these 3 steps, you already have some idea how your final content structure needs to look. There is another layer of complexity if you need to maintain multiple sites using the multi-site manager (MSM). The MSM allows you to inherit content and content structure to another site, which is typically located in a parallel subtree of the overall content tree. Choosing the MSM will keep your content structures consistent, which also means that you need to plan and set up your content master (in MSM terms called the blueprint) in a way that the resulting structure is well-suited for all copies of it (in MSM terms: live copies).

And on top of the MSM you can add more specifics, features and requirements, which also influence the content structure of your site. But let’s finish here for the moment.

When you are done with all these exercises, you already have a solid basis and have considered a lot of relevant aspects. Nevertheless you should still ask others for a second opinion. Scrutiny really pays off here, because you are likely to live with this structure for a while.

Information architecture & content architecture

Recently I had a discussion in the AEM forums about how to reuse content. During this discussion I was reminded again of the importance of the way you structure content in your repository.

For this, the term “information architecture” is often used, but from my point of view that’s not 100% correct. Information architecture covers the various aspects of how your website itself is structured (in terms of navigation, layout, but also content). Its most important aspect is the efficient navigation and consumption of the content on the website by end users (see the Wikipedia article on it). But it doesn’t care about aspects like content reuse (“where do I maintain the footer navigation?”), relations between websites (“how can I reduce the work to maintain similar sites?”), translations or access control for the editors of these systems.

Therefore I want to introduce the term “content architecture”, which deals with questions like these. The information architecture has a lot of influence on it, but it is solely focused on the resulting website; the content architecture focusses on the way such sites can be created and maintained efficiently.

In the AEM world the difference can be made visible very easily: you can see the information architecture on the website, while you can see the content architecture within CRXDE Lite. Omitting any details: the information architecture is the webpage, the content architecture is the repository tree.
If you have some experience with AEM you know that the structure of the website typically matches some subtree below /content. But in the repository tree you don’t find a “header” node at the top of the subtree below every page’s “jcr:content” node, and the same is true for the footer. This piece of the resulting rendered website is taken from elsewhere and is not maintained as part of every page, although the information architecture mandates that every page has a header and a footer.

Besides that, the repository also holds a lot of other supporting “content” which is important for the information architecture but not directly mandated by it. You have certain configurations which control the rendering of a page; for example, a configuration might control which contact email address is displayed in the page footer. From an information architecture point of view it’s not important where it is stored; but from a content architecture point of view it is very important, because you might have the chance to control it at a single location, which then takes effect for all pages. Or at multiple locations, which results in changing it for individual pages. Or in a per-subtree configuration, where all pages below a certain page are affected. Depending on the requirement this will result in different content architectures.
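
As a small illustration of the “per-subtree configuration” case, consider this sketch, which walks up the content tree and picks up the closest configuration resource; the resource name "siteconfig" and the property "contactEmail" are made up:

import org.apache.sling.api.resource.Resource;

public class ContactEmailResolver {

  /**
   * Walks up the content tree starting at the given page resource and returns
   * the "contactEmail" property of the closest "siteconfig" child resource.
   * A single configuration thus takes effect for a whole subtree, while
   * deeper pages can still override it with their own "siteconfig".
   */
  public String resolveContactEmail(Resource page) {
    for (Resource current = page; current != null; current = current.getParent()) {
      Resource config = current.getChild("siteconfig");
      if (config != null) {
        String email = config.getValueMap().get("contactEmail", String.class);
        if (email != null) {
          return email;
        }
      }
    }
    return null; // no configuration found up to the root
  }
}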

Your information architecture will influence your content architecture (in some areas it will even be a 1:1 relation), but the content architecture goes way beyond it and deals with other “*bilities” like “manageability”, “evolvability” (how future-proof is the content if there are changes to the information architecture?) or “customizability” (how flexible is my content architecture in terms of individualization per page/subsite?).

You can see that it’s important to be aware of the content architecture, because it will have a huge influence on your application. Your application typically has a lot of built-in assumptions about the way content is structured. For example: “The child nodes below the content root node form the first-level navigation of the site”. Or “the homepage of the site uses a template called ‘homepage’” (which is, by the way, also not covered by any information architecture, but is an essential part of the content architecture).

In the JCR world there is the second rule of David’s model: “Drive the content hierarchy, don’t let it happen”. That’s the rule I quote most often, and even though it’s 10 years old, it’s still very true. Because it focusses on the aspect of managing the content tree (= content architecture), and on deciding it carefully, considering the consequences.

And rest assured: It’s easier to change your application than to change the content tree! (At least if it’s designed properly. If it isn’t, … It’s even hard to change them both.)

AEM coding pattern: Run-mode specific code

It is very typical to have code which is not supposed to run on all environments, but only on some. For example you might have code which is supposed to import data only on the authoring instance.

The code then often looks like this:

if (isAuthoring()) {
  // import data
}

boolean isAuthoring() {
  // SlingSettingsService provides the run modes of the running instance
  return slingSettingsService.getRunModes().contains("author");
}

This code does what it’s supposed to do. But there can be a problem: maybe you want to run this code not only on author instances, but (for whatever reason) also on publish. Or you don’t want to run the code on UAT authors.
In such cases this code does not work anymore, because it’s not flexible enough (the runmode is hardcoded); any change requires a code change and a deployment.

A better way is to reformulate the requirement a bit: “The data import will only run if the ‘importActive’ flag is set to true”.

If you design this “importActive” flag as an OSGI configuration property and combine it with runmode-dependent configuration, then you can achieve the same behaviour as above, but in a much more flexible way. You can even disable it (if only for a certain time).

The code could then look like this:

@Property(boolValue = true)
private static final String IMPORT_ACTIVE_PROP = "importActive";

private boolean importActive;

@Activate
protected void activate(ComponentContext ctx) {
  importActive = PropertiesUtil.toBoolean(ctx.getProperties().get(IMPORT_ACTIVE_PROP), true);
}

if (importActive) {
  // import data
}

Now you have translated the requirement “imports should only happen on authoring” into a configuration decision, and it’s no longer hardcoded. And that’s the reason why I will be picky about it in code reviews.

Do not write to the repository in event handling

Repository writes are always resource-intensive operations which come with a cost. First of all, a write operation holds a number of locks, which limits the concurrency of write operations in total. Secondly, the write itself can take some time, especially if the I/O is loaded or you are running in a cluster with MongoDB as backend; in this case it’s the latency of the network connection plus the latency of MongoDB itself. And third, every write operation causes async index updates, triggers JCR observation and Sling resource events, etc. That’s the reason why you shouldn’t take write operations too lightly.

But don’t be scared to write to the repo for performance reasons, no way! Instead, try to avoid unnecessary writes: either batch them and collapse multiple write operations into a single transaction, if the business case allows it, or avoid the repository writes altogether, especially if the information is not required to be persisted.

Recently I came across a pattern of write operations which really stresses the repository: write amplification. A Sling resource event listener was listening for certain write events to the repository; when one of these was received (which happened quite often), a Sling job was created to handle the event. And the job implementation just did another small change to the repository (write!) and finished.

In that case a single write operation resulted in:

  • A write operation to persist the Sling Job
  • A write operation performed by the Job implementation
  • A write operation to remove the Sling Job

So each of these “regular” write operations caused 3 subsequent writes to the repository, which is a great way to kill your write performance completely. Luckily none of these 3 additional write operations caused the event listener to create a new Sling job again … That would have had the same effect as “out of office” notifications in the early days of Microsoft Exchange (which didn’t detect these and would have sent an “out-of-office” reply to the “out-of-office” sender): a very effective way of DOSing yourself!

a flood of writes and the remains of performance

But even if that was not the case, it resulted in a very loaded environment, reducing the subjective performance a lot; thread dumps indicated massive lock contention on write operations. When these 3 additional writes were optimized (effectively removed, as it was possible to collect the information in memory and batch-write it after 30 seconds), the situation improved a lot, and the lock contention was gone.

The learning you should take away from this scenario: avoid writing to the repository in JCR observation handlers or Sling event listeners! Collect the data and write it outside of these handlers in a separate thread.
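
A minimal sketch of this approach, assuming OSGi declarative services and the Sling scheduler; the component, topic and subservice names are made up, and the actual change applied per resource is left out:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.sling.api.SlingConstants;
import org.apache.sling.api.resource.LoginException;
import org.apache.sling.api.resource.PersistenceException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

/**
 * Listens for resource-added events, but does not write in the handler itself.
 * The affected paths are only collected in memory; the scheduled run() method
 * flushes them in a single batched save outside of the event handling thread.
 */
@Component(
    service = { EventHandler.class, Runnable.class },
    property = {
        EventConstants.EVENT_TOPIC + "=" + SlingConstants.TOPIC_RESOURCE_ADDED,
        "scheduler.period:Long=30" // flush every 30 seconds via the Sling scheduler
    })
public class BatchingResourceListener implements EventHandler, Runnable {

  private final Queue<String> pendingPaths = new ConcurrentLinkedQueue<>();

  @Reference
  private ResourceResolverFactory resolverFactory;

  @Override
  public void handleEvent(Event event) {
    // cheap and non-blocking: just remember the path
    pendingPaths.add((String) event.getProperty(SlingConstants.PROPERTY_PATH));
  }

  @Override
  public void run() {
    List<String> batch = new ArrayList<>();
    String path;
    while ((path = pendingPaths.poll()) != null) {
      batch.add(path);
    }
    if (batch.isEmpty()) {
      return;
    }
    // "batch-writer" is a made-up service user mapping for this sketch
    try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(
        Collections.singletonMap(ResourceResolverFactory.SUBSERVICE, (Object) "batch-writer"))) {
      for (String p : batch) {
        Resource resource = resolver.getResource(p);
        if (resource != null) {
          // apply the small change here
        }
      }
      resolver.commit(); // one single write for the whole batch
    } catch (LoginException | PersistenceException e) {
      // log and retry on the next run in a real implementation
    }
  }
}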

PS: An interesting side effect of Sling event listeners taking a long time is that these handlers are blacklisted after they take more than 5 seconds to process (e.g. because they were blocked on writing). Then they are not fired again (until you restart AEM), unless you explicitly whitelist them or turn off this feature completely.

Design pattern: Configuration of OSGI services

When you are an AEM backend developer, this pattern is very familiar: whenever you need to provide configuration data to a service, you collect this data in the activate() method (by good tradition that’s the name of the method annotated with the “@Activate” annotation). I use this pattern often and normally it does not cause any problems.

But recently in my current project we ran into an issue which caused headaches. We needed to provide an API key which is supposed to change every now and then, and therefore it is not configured via an OSGI property, but instead stored inside the repository, so it can be authored.

We deployed the code, entered the API key, and … guess what? It was not working at all. The API key was read in the activate() method, but at that time the key was not yet present. And the only chance to make it work was to restart the service/bundle/instance. And besides the initial provisioning, it would have required a restart every time the key changed.

That’s not a nice situation when you try to automate your deployment (or try not to break your automated deployment). We had to rewrite our logic in a way that the API key is read periodically (every minute) from the repository. Of course the optimal way would have been to use JCR observation or a Sling event handler to detect any changes to the API key node/resource immediately …

So whenever you have such “dynamic” configuration data, you should design your code in a way that it can cope with the situation that this configuration is not there (yet) or changes. The last thing you want to do is to restart your instance(s) because such a configuration change has happened.

Let’s formulate this as a pattern: do not read from the repository in the “activate” method of a service! The content you read might change during runtime, and you need to react to it.
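
A minimal sketch of that pattern, reading the value lazily with a short-lived cache instead of in activate(); the path, property and subservice names are made up, and a JCR observation based variant would avoid even the periodic re-read:

import java.util.Collections;
import java.util.concurrent.TimeUnit;

import org.apache.sling.api.resource.LoginException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

/**
 * Reads the API key lazily from the repository instead of in activate().
 * The value is cached for a minute, so a changed key is picked up without
 * restarting the service, and a missing key simply results in null.
 */
@Component(service = ApiKeyProvider.class)
public class ApiKeyProvider {

  private static final String KEY_PATH = "/content/mysite/config/jcr:content"; // made-up path
  private static final String KEY_PROPERTY = "apiKey";
  private static final long CACHE_MILLIS = TimeUnit.MINUTES.toMillis(1);

  @Reference
  private ResourceResolverFactory resolverFactory;

  private volatile String cachedKey;
  private volatile long lastRead;

  public String getApiKey() {
    long now = System.currentTimeMillis();
    if (cachedKey == null || now - lastRead > CACHE_MILLIS) {
      cachedKey = readKeyFromRepository();
      lastRead = now;
    }
    return cachedKey; // may still be null if the key has not been authored yet
  }

  private String readKeyFromRepository() {
    // "config-reader" is a made-up service user mapping
    try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(
        Collections.singletonMap(ResourceResolverFactory.SUBSERVICE, (Object) "config-reader"))) {
      Resource configResource = resolver.getResource(KEY_PATH);
      return configResource == null
          ? null
          : configResource.getValueMap().get(KEY_PROPERTY, String.class);
    } catch (LoginException e) {
      return null; // log this in a real implementation
    }
  }
}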

AEM transaction size or “do a save every 1000 nodes”

An old rule of thumb, going back to earlier versions of CQ5, is: “when you do large repository operations, do a session.save() every 1000 nodes”. The typical justification is that this is the default of the package manager, and therefore it’s a kind of recommended approach. And to be honest, I don’t know the real reason for it, even though I have worked in the Day/Adobe ecosystem for quite some time.

Nevertheless, with Oak the situation has changed a bit. Limits are much more explicit, and this rule of “do a save every 1000 nodes” can still be considered a true statement. But let me give you some background on why it exists at all, and then let’s find out if this rule is still safe to use.

In the JCR specification there is the concept of the transient space. The transient space holds all changes performed on a session until an implicit or explicit save() of that session. So the transient space holds all temporary data of a transaction, and the save() is comparable to the final commit of a transaction.

This transient space is typically held inside the Java heap, so dealing with it is fast.

But by definition this transaction is not bounded in terms of size. So technically you should be able to create a session which modifies every node and every property of a repository of 2 TB size; obviously such a transient space does not fit into a heap of standard size (say: 12 GB) any more. In order to support this behavior nevertheless, Oak starts to move the transient space entirely into the storage (TarMK, MongoDB) if it gets too large (in DocumentNodeStore language this is called a “persistent branch”, see the documentation of the DocumentNodeStore for some details on branches); then the size of the transaction is only limited by the amount of free storage on the persistence, but no longer by the size of the Java heap.

The limit is called update.limit and is by default 10k (up to and including Oak 1.4/AEM 6.2; 100k starting with Oak 1.6/AEM 6.3, see OAK-3036). But of course you can change this value using “-Doak.update.limit=40000”.

This value describes the number of changes the transient space of a single session can hold before it is moved to the persistence. A change is any change to a property or a node (adding/removing/modifying/reordering/…).

OK, that’s the theory, but what does this mean for you?

First, if the transient space is swapped to the persistence, the final session.save() will take much longer compared to a transient space held in memory, because to do the save, the transient space needs to be read back from the persistence first (which typically involves at least disk I/O, and in the case of MongoDB network I/O, which is even slower).

And second, when you add or change nodes, you typically deal with properties as well. So if you are on AEM 6.2 or older, you should check that you don’t do too many changes within a session, so you don’t hit this “10'000 changes” limit and get the performance penalty. If you have a reasonable content structure, the above-mentioned rule of thumb of “do a save every 1000 nodes” goes in the very right direction.
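
For illustration, a minimal JCR sketch of such a batched import; node names and the batch size are just examples:

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class BatchImporter {

  private static final int BATCH_SIZE = 1000;

  /**
   * Creates a number of nodes below the given parent and saves the session
   * every 1000 nodes, so the transient space stays well below the
   * update.limit (especially relevant on AEM 6.2 / Oak 1.4 and older).
   */
  public void importNodes(Session session, String parentPath, int count)
      throws RepositoryException {
    Node parent = session.getNode(parentPath);
    for (int i = 0; i < count; i++) {
      Node node = parent.addNode("node-" + i, "nt:unstructured");
      node.setProperty("title", "Node " + i);
      if ((i + 1) % BATCH_SIZE == 0) {
        session.save(); // flush the transient space regularly
      }
    }
    session.save(); // persist the remainder
  }
}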

But that’s often not good enough, because the updates of the synchronous Oak indexes count towards the 10'000 changes as well. You might know that the synchronous indexes mirror the JCR tree, thus adding 1000 JCR nodes will also add 1000 Oak nodes for the nodetype index. And that’s not the only synchronous index …

Thus increasing the update.limit to a higher number makes a lot of sense just to be on the safe side. But there is a drawback when you have such large limits: the size of the transient space. Imagine you upload 1000 assets (1 MB each) into your repository in a single session, and you have the update.limit set to 100'000. It’s unlikely that the number of changes will reach the update.limit, but your transient space will consume at least 1 GB of heap! Is your system designed and set up to handle this? Do you have enough free JVM heap?

Let’s conclude: the rule of thumb “do a save every 1000 nodes” might be a bit too optimistic on AEM 6.2 and older (with default values), but is fine on AEM 6.3. But always keep the amount of transient space in mind; it can overflow your heap, and debugging out-of-memory situations is not nice.

If you are interested in the inner workings of Oak, have a look at this great piece of documentation. It covers a lot of low-level concepts which are useful to know when you deal with the repository more often.