Author Archives: Jörg

Do not write to the repository in event handling

Repository writes are always resource-intensive operations and come with a cost. First of all, a write operation holds a number of locks, which limits the overall concurrency of write operations. Secondly, the write itself can take some time, especially if the I/O is loaded or you are running in a cluster with MongoDB as backend; in that case it’s the latency of the network connection plus the latency of MongoDB itself. And third, every write operation causes asynchronous index updates, triggers JCR observation and Sling resource events, etc. That’s the reason why you shouldn’t take write operations too lightly.

But don’t be scared to write to the repo for performance reasons, no way! Instead, try to avoid unnecessary writes: either batch them and collapse multiple write operations into a single transaction, if the business case allows it, or avoid the repository writes altogether, especially if the information does not need to be persisted.

Recently I came across a very troubling pattern of write operations: write amplification. A Sling resource event listener was listening for certain write events to the repository; when one of these was received (which happened quite often), a Sling job was created to handle the event. And the job implementation just did another small change to the repository (a write!) and finished.

In that case a single write operation resulted in:

  • A write operation to persist the Sling Job
  • A write operation performed by the Job implementation
  • A write operation to remove the Sling Job

So each of these “regular” write operations caused 3 subsequent writes to the repository, which is a great way to kill your write performance completely. Luckily none of these 3 additional write operations caused the event listener to create a new Sling job again … That would have had the same effect as “out of office” notifications in the early days of Microsoft Exchange (which didn’t detect these and would have sent an “out-of-office” reply to the “out-of-office” sender): a very effective way of DOSing yourself!

a flood of writes and the remains of performance

But even though that was not the case, it resulted in a very loaded environment and reduced the subjective performance a lot; thread dumps indicated massive lock contention on write operations. When these 3 additional writes were optimized away (effectively removed, as collecting the information in memory and batch-writing it every 30 seconds was possible), the situation improved a lot, and the lock contention was gone.

The lesson you should take away from this scenario: avoid writing to the repository in JCR observation or Sling event listeners! Collect the data and write it outside of these handlers in a separate thread.
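Such a collector can be sketched in plain Java (the class and method names here are made up for illustration; in a real service, writeBatch() would open a service session, apply all changes, and call save() once):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: the event handler only records what needs to be written,
// while a single background thread flushes the collected changes in one batch.
public class BatchingWriteCollector {

    private final ConcurrentLinkedQueue<String> pendingPaths = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public BatchingWriteCollector() {
        // flush every 30 seconds, as described in the text
        scheduler.scheduleAtFixedRate(this::flush, 30, 30, TimeUnit.SECONDS);
    }

    // called from the event listener: cheap, no repository write
    public void recordChange(String path) {
        pendingPaths.add(path);
    }

    // drain the queue and perform one batched write
    public void flush() {
        List<String> batch = new ArrayList<>();
        String path;
        while ((path = pendingPaths.poll()) != null) {
            batch.add(path);
        }
        if (!batch.isEmpty()) {
            writeBatch(batch); // one session, one save() for all collected changes
        }
    }

    protected void writeBatch(List<String> batch) {
        // in a real service: open a session, apply all changes, save once
        System.out.println("writing batch of " + batch.size() + " changes");
    }

    public void shutdown() {
        scheduler.shutdown();
        flush(); // write whatever is still pending
    }
}
```

The important property is that recordChange() never touches the repository, so the event handler returns immediately and the lock contention moves to a single writer thread.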

PS: An interesting side effect of Sling event listeners taking a long time is that these handlers are blacklisted once they take more than 5 seconds to process (e.g. because they were blocked on writing). Then they are not fired again (until you restart AEM), unless you explicitly whitelist them or turn off this feature completely.

Design pattern: Configuration of OSGI services

When you are an AEM backend developer, this pattern is very familiar: whenever you need to provide configuration data to a service, you collect this data in the activate() method (by good tradition that’s the name of the method annotated with the “@Activate” annotation). I use this pattern often, and normally it does not cause any problems.

But recently in my current project we ran into an issue which caused headaches. We needed to provide an API key which is supposed to change every now and then, and which is therefore not configured via an OSGI property, but instead stored inside the repository, so it can be authored.

We deployed the code, entered the API key, and … guess what? It was not working at all. The API key was read in the activate() method, but at that time the key was not yet present. And the only way to make it work was to restart the service/bundle/instance. Besides the initial provisioning, it would have required a restart every time the key is changed.

That’s not a nice situation when you try to automate your deployment (or try not to break your automated deployment). We had to rewrite our logic in a way that the API key is read periodically (every minute) from the repository. Of course the optimal way would have been to use JCR observation or a Sling event handler to detect any changes on the API key node/resource immediately …

So whenever you have such “dynamic” configuration data, you should design your code in a way that it can cope with situations where this configuration is not there (yet) or changes. The last thing you want to do is restart your instance(s) because such a configuration change has happened.

Let’s formulate this as a pattern: do not read from the repository in the activate() method of a service! The content you read might change during runtime, and you need to react to it.
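A minimal sketch of this pattern in plain Java (the class name is made up, and the Supplier stands in for the actual repository lookup): cache the value with a time-to-live, so both a changed key and a late initial provisioning are picked up without a restart.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Sketch: instead of reading a repository-stored value once in activate(),
// cache it with a time-to-live so changes are picked up at runtime.
public class RefreshingConfigValue<T> {

    private final Supplier<T> loader;   // e.g. reads the API key resource from the repo
    private final long ttlMillis;
    private final AtomicReference<T> cached = new AtomicReference<>();
    private volatile long lastLoaded = 0L;

    public RefreshingConfigValue(Supplier<T> loader, long ttlMillis) {
        this.loader = loader;
        this.ttlMillis = ttlMillis;
    }

    public T get() {
        long now = System.currentTimeMillis();
        if (cached.get() == null || now - lastLoaded > ttlMillis) {
            cached.set(loader.get());   // may still return null if not configured yet!
            lastLoaded = now;
        }
        return cached.get();            // callers must handle null ("not there yet")
    }
}
```

A JCR-observation-based variant would invalidate the cache on change events instead of relying on a TTL; the caller-facing contract (a get() that may return null until the content exists) stays the same.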

AEM transaction size or “do a save every 1000 nodes”

An old rule of thumb, dating back to earlier versions of CQ5, is “when you do large repository operations, do a save every 1000 nodes”. The typical justification is that this is the default of the package manager, and therefore a kind of recommended approach. And to be honest, I don’t know the real reason for it, even though I have worked in the Day/Adobe ecosystem for quite some time.

But nevertheless, with Oak the situation has changed a bit. Limits are much more explicit, and this rule of “do a save every 1000 nodes” can still be considered a true statement. But let me give you some background on why it exists at all, and then let’s find out if this rule is still safe to use.

In the JCR specification there is the concept of the transient space. It holds all changes performed on a session until an implicit or explicit save() of the session. So the transient space holds all temporary data of a transaction, and the save() is comparable to the final commit of a transaction.

This transient space is typically held inside the Java heap, so dealing with it is fast.

But by definition this transaction is not bounded in size. So technically you should be able to create a session which modifies every node and every property of a repository of 2 TB size. Such a transient space does not fit into a heap of standard size (say 12 GB) any more. In order to support this behavior nevertheless, Oak moves the transient space entirely into the storage (TarMK, MongoDB) if it grows too large (in DocumentNodeStore terms this is called a “persistent branch”; see the documentation of the DocumentNodeStore for some details on branches). Then the size of the transaction is only limited by the amount of free storage on the persistence, but no longer by the size of the Java heap.

This limit is called update.limit and defaults to 10k changes (up to and including Oak 1.4/AEM 6.2; it is 100k starting with Oak 1.6/AEM 6.3, see OAK-3036). But of course you can change this value using “-Doak.update.limit=40000”.

This value describes the number of changes the transient space of a single session can hold before it is moved into the persistence. A change is any change to a property or a node (adding/removing/modifying/reordering/…).

OK, that’s the theory, but what does this mean for you?

First, if the transient space has been swapped to the persistence, the final save() will take much longer compared to a transient space held in memory, because to perform the save, the transient space needs to be read back from the persistence first (which typically involves at least disk I/O, and in the case of MongoDB network I/O, which is even slower).

And second, when you add or change nodes, you typically deal with properties as well. So if you are on AEM 6.2 or older, you should make sure that you don’t do too many changes within a session, so you don’t hit this “10’000 changes” limit and pay the performance penalty. If you have a reasonable content structure, the above-mentioned rule of thumb of “do a save every 1000 nodes” goes in the very right direction.
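The rule of thumb itself is a simple counting pattern. A sketch in plain Java (the Session interface here is a minimal hypothetical stand-in for javax.jcr.Session, so the example stays self-contained; a real implementation would call save() on the JCR session and handle RepositoryException):

```java
import java.util.List;

// Sketch of the "save every N nodes" pattern for a large batch operation.
public class BatchedImporter {

    // minimal stand-in for javax.jcr.Session (hypothetical, for illustration only)
    interface Session {
        void createNode(String path);
        void save();
    }

    private final int batchSize;

    public BatchedImporter(int batchSize) {
        this.batchSize = batchSize;
    }

    // creates all nodes, saving every batchSize changes; returns the number of save() calls
    public int importNodes(Session session, List<String> paths) {
        int saves = 0;
        int pending = 0;
        for (String path : paths) {
            session.createNode(path);
            pending++;
            if (pending >= batchSize) {   // keep the transient space small
                session.save();
                saves++;
                pending = 0;
            }
        }
        if (pending > 0) {                // persist the remainder
            session.save();
            saves++;
        }
        return saves;
    }
}
```

Keep in mind that each created node usually brings several property changes with it, so the effective number of changes per save is a multiple of the node count.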

But that’s often not good enough, because the updates of the synchronous Oak indexes count towards the 10’000 changes as well. You might know that the synchronous indexes mirror the JCR tree; thus adding 1000 JCR nodes will also add 1000 Oak nodes for the nodetype index. And that’s not the only synchronous index …

Thus increasing the update.limit to a higher number makes a lot of sense, just to be on the safe side. But there is a drawback when you have such large limits: the size of the transient space. Imagine you upload 1000 assets (1 MB each) into your repository in a single session, with update.limit set to 100’000. The number of changes will most likely not reach the update.limit, but your transient space will consume at least 1 GB of heap! Is your system designed and set up to handle this? Do you have enough free JVM heap?

Let’s conclude: the rule of thumb “do a save every 1000 nodes” might be a bit too optimistic on AEM 6.2 and older (with default values), but it is OK on AEM 6.3. But always keep the size of the transient space in mind; it can overflow your heap, and debugging out-of-memory situations is not nice.

If you are interested in the inner workings of Oak, have a look at this great piece of documentation. It covers a lot of low-level concepts which are useful to know when you deal with the repository more often.

Sling healthchecks – what to monitor

The ability to know what’s going on inside an application is a major asset whenever you need to operate it. IT operations doesn’t need (or want) to know the details, but it needs to know whether the application works “OK” by some definition. Typically, IT operations deploys a monitoring setup alongside the application which allows them to state whether the application is OK or not.

In the AEM world such statements are typically made via Sling healthchecks. As a developer you can write assertions against the correct and expected behaviour of your application and expose the result. While this technical aspect is understood quite easily, the more important question is: “What should I monitor?”

In the last years and projects we typically used healthchecks both for deployment validation (if the application reported “OK” after a deployment, we continued with the next instance) and for loadbalancer checks (if the application is “OK”, traffic is routed to this instance). This results in these requirements:

  • The status must not fluctuate, but rather be stable. This normally means that temporary problems (which might affect only a single request) must not influence the result of the healthcheck. But if the error rate exceeds a certain threshold, it should be reported (via healthchecks).
  • The healthcheck execution time shouldn’t be excessive. I would not recommend performing JCR queries or other repository operations in a healthcheck; rather consume data which is already there, to keep the execution time low.
  • A lot of the infrastructure of an AEM instance can be monitored implicitly. If you expose your healthcheck results via a page (/status.html) and this page returns status “200 OK”, then you know that the JVM process is running, the repository is up, and Sling script resolution and execution are working properly.
  • When you want your loadbalancer to determine the health status of your application from the results of Sling healthchecks, you must be quite sure that every single healthcheck involved works in the right way, and that an “ERROR” in a healthcheck really means that the application is not working and cannot respond to requests properly. In other words: if such a healthcheck goes to ERROR on all publish instances at the same time, your application is no longer reachable.

OK, but what functionality should you monitor via healthchecks? Which parts of your application should you monitor? Some recommendations:

  1. Monitor prerequisites. If your application constantly needs connectivity to a backend system and is not working properly without it, implement a healthcheck for it. If the configuration for accessing this system is missing or even the initial connection fails, let your healthcheck report “ERROR”, because then the system cannot be used.
  2. Monitor individual errors: for example, if such a backend connection throws an error, report it as WARN (remember, you depend on it!). And implement a threshold for errors; if this threshold is reached, report it as ERROR.
  3. Implement a healthcheck for every stateful OSGI service which knows about “successful” or “failed” operations.
  4. Try to avoid the case that a single error is reported via multiple healthchecks. On the other hand, try to be as specific as possible when reporting. So instead of reporting “more than 2% of all requests were answered with an HTTP status code 5xx” via healthcheck 1, you should report “connection refused from LDAP server” via healthcheck 2. In many cases fundamental problems will trigger a lot of different symptoms (and therefore cause many healthchecks to fail), and it is very hard to change this behaviour. In that case you need to explicitly document how to react in such situations and how to find the relevant healthcheck quickly.

Regarding the reporting itself, you can either report every problem/exception/failure or work with thresholds.

  • Report every single problem if the operation runs rarely. If you have a scheduled task with a daily interval, the healthcheck should report immediately if it fails, and report “ok” again once it works. The time between runs should give you enough time to react.
  • If your operation runs very often (e.g. as part of every request), implement a threshold and report a warning/error only if this threshold is currently exceeded. But try to avoid constantly switching between “ok” and “not ok”.
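Such a threshold could be tracked with a small helper like this (a hand-rolled sketch, not part of the Sling healthcheck API; a real healthcheck would call isOk() and translate the result into its status, and the window handling is deliberately simple):

```java
// Sketch: a threshold-based status source for a frequently running operation.
// Only reports "not ok" once the failure ratio in the current window
// exceeds the configured threshold; single failures do not flip the status.
public class ThresholdHealthState {

    private final double maxFailureRatio;
    private long successes = 0;
    private long failures = 0;

    public ThresholdHealthState(double maxFailureRatio) {
        this.maxFailureRatio = maxFailureRatio;
    }

    public synchronized void reportSuccess() { successes++; }
    public synchronized void reportFailure() { failures++; }

    // called by the healthcheck; resets the counting window afterwards
    public synchronized boolean isOk() {
        long total = successes + failures;
        boolean ok = total == 0 || (double) failures / total <= maxFailureRatio;
        successes = 0;
        failures = 0;
        return ok;
    }
}
```

Note that the counters only live in memory, in line with the recommendation above to avoid repository operations inside the healthcheck itself.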

I hope this article gives you some ideas on how to work with healthchecks. I see them as a great tool which is very easy to implement. They are ideal for one-time validations (e.g. as smoke tests after a deployment) and also for continuous observation of an instance (by feeding healthcheck data to loadbalancers). Of course they can only observe the things you defined up front, and they do not replace testing. But they really speed up processes.

AEM and docker – a question of state

The containerization of the IT world continues. What started with virtualization in the early 2000s has, with Docker, reached a state where it’s again a hype topic.

Therefore it’s natural that people also started to play with AEM in Docker.

Of course I was challenged with the requirement to run AEM in Docker too, with customers and partners asking how to run AEM in Docker, and whether I can provide Dockerfiles, etc. I am hesitant to do it, because for me Docker and AEM are not a really good fit (right now, with AEM 6.3 in 2017).

Some background first: Docker containers should be stateless. Only if the application within the container does not hold any persistent state can you shut it down (which means deleting all the files created by the application in the container itself), start it up, replace it by a different container holding a new version of the application, etc. The whole idea is to make the persistent state somebody else’s problem (typically a database’s). Deployments should be as easy as starting new Docker containers (from pre-tested and validated Docker images) and shutting down the old ones. No more working and testing in production.

So, how does that collide with AEM? AEM is not only an application; the application is closely tied to a repository, which holds state. Typically the application is stored within the repository, next to the “user data” (= content). This means that you cannot just replace an AEM instance inside Docker with a new instance without losing this content (or resetting it to the state shipped with the Docker image). Losing content is of course not acceptable.

So the typical Docker rollout approach for new application versions (bringing new instances live based on a new Docker image and shutting down the old ones) does not work with AEM; the content sitting in the repository is the problem.

People then came up with the idea that the repository can be stored outside of the Docker image, so it isn’t lost on restart/replacement of the image. Docker calls this a “host directory as data volume”.

Storing the repo as data volume on the host filesystem

That idea sounds neat, and of course it works. But then we have a different problem: when you start a new Docker image and mount the data volume containing the repository state, your AEM still runs the “old” version of your application. Starting the repository from a different Docker image doesn’t bring any benefit then.

Docker image version 2 still starts application version 1.0

When you want to update your AEM application inside the repository, you would still need to perform an installation of your application into a running repository, working in a production environment. And that’s not why you want to use Docker; with Docker we just wanted to start the new images and stop the old ones.

Therefore I do not recommend using Docker with AEM; there is rarely any value in it, and it makes the setup more complicated without any real benefit.

The only exceptions I would accept are really short-lived instances, where hosting the repository inside the Docker container isn’t a problem and purging the repository on shutdown is even a feature. Typically these are short-lived development instances (e.g. triggered by a continuous integration pipeline, where you automatically create dedicated Docker instances for feature branches). But that’s it.

And as a side note: this does not only affect TarMK-based AEM instances. If you have MongoDB-based instances, the application is also stored within the (Mongo) repository. Just running AEM in a new Docker image doesn’t magically update the application.

To repeat myself: this describes the current state. I know that AEM engineering is perfectly aware of this fact, and I am sure that they try to address it. Let’s wait for the future 🙂

What’s new in Sling with AEM 6.3

AEM 6.3 is out. Besides the various product enhancements and new features, it also includes updated versions of many open source libraries. So it’s your chance to have a closer look at the version numbers and find out what has changed. And again, quite a lot.

For your convenience I have collected the version information of the Sling bundles which are part of the initially released version (GA). Please note that service packs and cumulative hotfixes might introduce new bundle versions.
If a version number is marked red, it has changed compared to the previous AEM version. Bundles which are not present in a given version are marked with a dash (-).

For the versions 5.6 to 6.1 please see this older posting of mine.

[Table of Sling bundle versions for AEM 6.1 (GA), AEM 6.2 (GA) and AEM 6.3 (GA), with changelog links per bundle; the symbolic bundle names did not survive extraction, so only the version numbers remain and the table is omitted here.]

AEM 6.3: set admin password on initial startup (Update)

With AEM 6.3 you have the chance to set the admin password already on the initial start: by default the quickstart asks you for the password if you start it directly. That’s a great feature and shortens quite a few deployment instructions, but it doesn’t always work.

For example, if you first unpack your AEM instance and then use the start script, you’ll never get asked for the new admin password. The same applies if you work in an application server setup. And if you do automated installations, you don’t want to get asked at all.

But I found that even in these scenarios you can set the admin password as part of the initial installation. There are 2 different ways:

  1. Set the system property “admin.password” to your desired password, and it will be used (for example, add “-Dadmin.password=mypassword” to the JVM parameters).
  2. Or set the system property “admin.password.file”, passing the path to a file as its value; if this file is accessible by the AEM instance and contains the line “admin.password=myAdminPassword“, this value will be used as the admin password.

Please note that this only works on the initial startup. On all subsequent startups these system properties are ignored; you should probably remove them, or at least purge the file in case of (2).

Update: Ruben Reusser mentioned that the OSGI web console admin password is not changed (it is used in case the repository is not running). So you still need to take care of that.