Category Archives: best practice

Try-with-resources or “I will never forget to close a resource resolver”

In Java 7 the “try-with-resources” idiom was introduced to the Java world. It helps you to never forget to close a resource. And since Sling 9 (roughly 2016) the ResourceResolver interface extends the AutoCloseable interface, so the try-with-resources idiom can be used.

That means that you can and should use this approach:

try (ResourceResolver resolver = resourceResolverFactory.getServiceResourceResolver(…)) {
    // do something with the resolver
    // no need to close it explicitly, it's closed automatically
}

With this approach you omit the otherwise obligatory finally block to close the resource resolver (which is easily forgotten …).
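For comparison, the pre-Java-7 pattern looks roughly like this. This is a sketch with a hypothetical stand-in class (a real ResourceResolver would come from a ResourceResolverFactory); only the try/finally shape matters here:

```java
// Stand-in for org.apache.sling.api.resource.ResourceResolver,
// so this sketch is self-contained and compilable on its own.
class StandInResolver implements AutoCloseable {
    boolean closed = false;

    @Override
    public void close() {
        closed = true;
    }
}

class FinallyPattern {
    static StandInResolver work() {
        StandInResolver resolver = new StandInResolver();
        try {
            // do something with the resolver
        } finally {
            // this is the step which can be forgotten;
            // try-with-resources makes it implicit
            resolver.close();
        }
        return resolver;
    }
}
```

The finally block guarantees the close even if the body throws, but nothing forces you to write it; that is exactly the gap try-with-resources closes.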

This approach helps to reduce boilerplate code and eliminates some potential for errors. If you are developing for AEM 6.2 or newer, you should be able to use it.

ResourceResolvers and Sessions — “you open it, you close it”

I have already written about how to use resource resolvers and JCR sessions; the basic pattern to remember is always “you open it; you close it” (2nd rule).

While this mantra seems to be common sense, the question always is: when is a session or a resource resolver actually opened/created? Which API calls are responsible for it? Let me outline this today.

API calls which open a JCR Session:

API calls which create a Sling ResourceResolver:

These are the only API calls which open a JCR Session or a Sling ResourceResolver. And whenever you use one of these, you are responsible for closing them as well.

And as a corollary to this rule: if you have other methods or APIs which return a ResourceResolver or Session, do not close these.

Some examples:

Session jcrSession = resourceResolver.adaptTo(Session.class);

This just exposes the internal JCR Session of the ResourceResolver, and because it’s not obtained via one of the above APIs: do not close this session! It’s closed automatically when you close the resource resolver.

Session adminSession = slingRepository.loginAdministrative(null);
Map<String, Object> authInfo = new HashMap<>();
authInfo.put(org.apache.sling.jcr.resource.api.JcrResourceConstants.AUTHENTICATION_INFO_SESSION, adminSession);
ResourceResolver adminResourceResolver = resolverFactory.getResourceResolver(authInfo);

This code creates a resource resolver which wraps an already existing JCR Session. You have to close both adminSession and adminResourceResolver, because you created both of them using the above-mentioned API calls.
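This ownership rule can be sketched with hypothetical stand-in classes (the real types are javax.jcr.Session and org.apache.sling.api.resource.ResourceResolver): closing the wrapping resolver does not log out the externally provided session, so both need their own cleanup.

```java
// Stand-in for a JCR session; tracks whether logout() was called.
class StandInSession {
    boolean live = true;

    void logout() {
        live = false;
    }
}

// Stand-in for a ResourceResolver wrapping an externally created session;
// mirroring Sling's behavior, close() does NOT log out the wrapped session.
class WrappingResolver implements AutoCloseable {
    private final StandInSession session;

    WrappingResolver(StandInSession session) {
        this.session = session;
    }

    @Override
    public void close() {
        // releases only resolver-level state, not the wrapped session
    }
}

class CloseBoth {
    static StandInSession demo() {
        StandInSession adminSession = new StandInSession();
        // close the resolver first, as it may still use the session ...
        try (WrappingResolver adminResourceResolver = new WrappingResolver(adminSession)) {
            // work with adminResourceResolver ...
        } finally {
            // ... and then log out the session you created yourself
            adminSession.logout();
        }
        return adminSession;
    }
}
```

The safe order is to close the resolver before logging out the session it wraps, since the resolver may still need the session while it is being closed.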

Validating AEM content-packages

A typical task when you run AEM as a platform is deployment. As the platform team you own the platform, and you own the admin passwords. It’s your job to deploy the packages delivered to you by the various teams. And it’s also your job to keep the platform reliable and stable.

With every deployment you have the chance to break something, and not only the part of the platform which belongs to the team whose code you deploy. It’s not a problem if their (incorrect) code breaks their own parts. But you can break the system of other tenants, which are not involved in the deployment at all.

Preventing this is one of the most important tasks for you as platform owner. A single tenant must not break other tenants! Never! The problem is just that this is nearly impossible to guarantee. You typically rely on trust towards the development teams, and on them earning that trust.

To help you a little bit with this, I created a simple Maven plugin which can validate content-packages against a ruleset. In this ruleset you can define that a content-package delivered by tenant A may only contain content paths which are valid for tenant A. But the validation should fail if the content-package would override client libraries of tenant B. Or if it would introduce new overlays in /apps/cq. Or if it introduces a new OSGi configuration with a non-project PID. Or anything else which can be part of a content-package.

Check out the github repo and the README for its usage.

As already noted above, it can help you as a platform owner to ensure a certain quality of the packages you are supposed to install. On the other hand, it can help you as a project team to establish a set of rules which you want to follow. For example, you can verify a “we don’t use overlays” policy with this plugin as part of the build.

Of course the plugin is not perfect, and you can still easily bypass the checks, because it does not parse the .content.xml files inside, but just checks the file system structure. And of course I cannot check bundles and the content which comes with them. But we should all assume that no team wants to break the complete system when deployment packages are being created (there are much easier ways to do so); we just want to avoid the usual errors which happen when being under stress. If we catch a few of them upfront for the cost of configuring a ruleset once, it’s worth the effort 🙂

Detecting JCR session leaks

A problem I encounter every now and then is leaking JCR sessions: sessions which are opened but never closed, just abandoned. Like files, JCR sessions need to be closed, otherwise their memory is not freed and they cannot be garbage collected by the JVM. Depending on the number of sessions you leave in that state, this can lead to serious memory problems, ultimately leading to a crash of the JVM because of an OutOfMemory situation.

(And just to be on the safe side: In AEM ootb all ResourceResolvers use a JCR session internally; that means whatever I just said about JCR sessions applies the same way to Sling ResourceResolvers.)

I have dealt with this topic a few times already (and always recommended to close the JCR sessions), but today I want to focus on how you can easily find out if you are affected by this problem.

We use the fact that for every open session an mbean is registered. Whenever you see such a statement in your log:

14.08.2018 00:00:05.107 *INFO* [oak-repository-executor-1] com.adobe.granite.repository Service [80622, [org.apache.jackrabbit.oak.api.jmx.SessionMBean]] ServiceEvent REGISTERED

That says that an mbean service is registered for a JCR session; thus a JCR session has been opened. And of course there’s a corresponding message for unregistering:

14.08.2018 12:02:54.379 *INFO* [Apache Sling Resource Resolver Finalizer Thread] com.adobe.granite.repository Service [239851, [org.apache.jackrabbit.oak.api.jmx.SessionMBean]] ServiceEvent UNREGISTERING

So it’s very easy to find out if you have a memory leak caused by leaking JCR sessions: the number of log statements for the registration of these mbeans must match the number of log statements for their unregistration.

In many cases you probably don’t have exact matches. But that’s not a big problem if you consider:

  • On AEM startup a lot of sessions are opened, and JCR observation listeners are registered to them. That means that in a logfile which covers AEM starts and stops (where the number of starts does not match the number of stops) it’s very likely that these numbers do not match. Not a problem.
  • The registration (and also the unregistration) of these mbeans often happens in batches; if this happens during logfile rotation, you might have an imbalance, too. Again, not per se a problem.

It becomes a problem if the number of sessions opened is consistently bigger than the number of sessions closed over the course of a few days.

$ grep 'org.apache.jackrabbit.oak.api.jmx.SessionMBean' error.log | grep "ServiceEvent REGISTERED" | wc -l
212123
$ grep 'org.apache.jackrabbit.oak.api.jmx.SessionMBean' error.log | grep "ServiceEvent UNREGISTERING" | wc -l
1610
$

Here I just have the log data of a single day, and it’s very obvious that there is a problem, as around 210k sessions were opened but never closed. On a single day!

To estimate the effect of this, we need to consider that for each of these log statements the following objects are retained:

  • A JCR session (plus the objects it reaches; depending on the activities happening in this session, this might also include pending changes which are never going to be persisted)
  • An mbean (referencing this session)

So if we assume that 1 kB of memory is associated with every leaking session (and that’s probably a very optimistic assumption), this would mean that the system above loses around 210 MB of heap memory every day. This system probably requires a restart every few days.

How can we find out what is causing this memory leak? Here it helps that Oak stores the stack trace of the session opening as part of the session object. Since around Oak 1.4 this is only done if the number of open sessions exceeds 1000; you can tune this value with the system property “oak.sessionStats.initStackTraceThreshold”; set it to an appropriate value. This is a great help to find out where the session is opened.
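As a sketch of the latter: the property name comes from Oak, while the value 0 is my assumption for “record the trace for any open session”. In practice you would add -Doak.sessionStats.initStackTraceThreshold=0 to the JVM options of the AEM start script; programmatically the same thing looks like this:

```java
class LowerThreshold {
    // must be set before Oak initializes its session statistics,
    // so in a real setup prefer the -D flag on the JVM command line
    static void configure() {
        System.setProperty("oak.sessionStats.initStackTraceThreshold", "0");
    }
}
```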

And then go to /system/console/jmx, check for the “SessionStatistics” mbeans (typically quite at the bottom of the list) and select the most recent ones (they have the opening date already in the name).

session information in the mbean view

And then you can find in the “initStackTrace” the trace where this session has been opened:

Stacktrace of an open JCR session

With the information at hand where the session has been opened, it should be easy for you to find the right spot to close the session.
If you spot a place where a session is opened in AEM product code but never closed, please raise that with Adobe support. But be aware that during system startup sessions are opened which will stay open while the system is running. That’s not a problem at all, so please do not report them!

It’s only a problem if you have at least a few hundred sessions open with the very same stack trace; that’s a good indication of such a “leaking session” problem.

There is a good followup reading on the AEM HelpX pages with some details on how you can fix it.

Referencing runmodes in Java

There was a question at this year’s adaptTo why there is no Java annotation to limit the scope of a component (imagine a servlet) to a specific runmode. This would allow you to specify in Java code that a servlet is only supposed to run on author.

Technically it is easily possible to implement such an annotation. But it’s not done, for a good reason: runmodes have been developed as a deployment vehicle to ship configuration. That means your deployment artefact can contain multiple configurations for the same component, and the decision which one to use is based on the runmode.
Runmodes are not meant to be used as a differentiator so that code can operate differently based on the runmode. I would go so far as to say that the use of slingSettings.getRunModes() should be considered bad practice in AEM project code.

But of course the question remains how one would implement the requirement that something must only be active on authoring (or any other environment which can be expressed by runmodes). For that I would like to reference an earlier posting of mine: you still leverage runmodes, but this time via the indirection of OSGi configuration. This avoids hardcoding the runmode information in the Java code.
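The indirection can be sketched in plain Java (hypothetical names, no Sling dependencies): the component reads an “enabled” flag from its own configuration, and runmode-specific OSGi configurations (e.g. one under config.author/, one under config.publish/) supply different values; the Java code itself never touches the runmodes.

```java
import java.util.Map;

// Sketch of a runmode-agnostic component: instead of branching on
// slingSettings.getRunModes(), the behavior is driven by a configuration
// property, which per-runmode OSGi configs would set differently.
class AuthorOnlyServlet {
    private final boolean enabled;

    AuthorOnlyServlet(Map<String, Object> config) {
        // "enabled" is a hypothetical property name for this sketch
        this.enabled = Boolean.TRUE.equals(config.get("enabled"));
    }

    int service() {
        // behave as if the servlet does not exist when disabled
        return enabled ? 200 : 404;
    }
}
```

With this pattern a change of environment behavior is a pure configuration change, and the same code artifact is deployed everywhere.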

HTL – a wrong solution to the problem?

(in response to Dan’s great posting “A Retrospective on HTL: The Wrong Solution for the Problem”)

Dan writes that with JSP there is a language out there which is powerful and useful, and that there is hardly a good reason for an experienced developer to switch to another language (well, besides the default XSS handling of HTL).

Well, I think I can agree on that. An experienced Java web developer knows the limits of JSP and JSP scriptlets and is capable of developing maintainable code. And to be fair, the code created by these developers is hardly a problem.

And Dan continues:

I am more productive writing JSP code and I believe most developers would be as well, as long as they avoid Scriptlet and leverage the Sling JSP Taglib and Sling Models.

And here I see the problem. Everything works quite well as long as you can keep up the discipline. In my experience this works

  • As long as you have experienced developers who make the right decisions and
  • As long as you have time to fix things in the right way.

The problems begin when the first fix is done in the JSP instead of the underlying model. Or logic is created in the JSP instead of a dedicated model. Having such examples in your codebase can be seen as the beginning of something called the broken window theory: they will act as an example of how things can be done (and gotten away with) unless you start fixing them right away.

It requires a good amount of experience as a developer, discipline, and assertiveness towards your project lead to avoid the quick fix and do it right instead, as that typically takes more time. If you live such a culture, it’s great! Congratulations!

If you don’t have the chance to work in such a team — you might need to work with less capable developers, or you have a high fluctuation in your team(s) — you cannot trust each individual to make the right decision in 99 percent of all cases. Instead you need a number of rules which do not require too much disambiguation and discussion to apply correctly. Rules such as

  • Use HTL (because then you cannot implement logic in the template)
  • Always build a model class (even if you could get away without)

It might not be the most efficient way to develop code, but in the end you can be sure that certain types of errors do not occur (such as missing XSS protection, large JSP scriptlets, et cetera). In many cases this outweighs the drawbacks of using HTL by far.

In the end, using HTL is just a best practice. And as always, you can deliberately violate best practices if you know exactly what you are doing and why following them prevents you from reaching your goal.

So my conclusion is the same as Dan’s:

Ultimately, your choice in templating language really boils down to what your team is most comfortable with.

If my team has no proven track record of delivering good JSPs in the past (the teams I worked with in the last years have not), or I don’t know the team very well, I will definitely recommend HTL. Despite all the drawbacks. Because then I know what I get.

4 problems in projects on performance tests

Last week I was in contact with a colleague of mine who was interested in my experience with the performance tests we do in AEM projects. In the last 12 years I have worked with a lot of customers and implemented smaller and larger sites, but all of them had at least one of the following problems.

(1) Lack of time

In all project plans time is allocated for performance tests, and resources are even assigned to them. But due to the many unexpected problems in a project there are delays, which are very hard to compensate for when the golive or release date is set and already announced to or by senior management. You typically try to go live with the best you have, and when timelines cannot be met, quality assurance and performance tests are the typical candidates which are cut down first. Sometimes you are still able to do performance tests before the golive, and sometimes they are skipped and postponed until after golive.

(2) Lack of KPIs

When you get the chance to do performance tests, you need KPIs. Just testing random functions of your website is simply not sufficient if you don’t know whether these functions are used at all. You might test the functionality which is the least important one and miss the important ones. And if you don’t have KPIs, you don’t know if your anticipated load is realistic. Are 500 concurrent users good enough, or should it rather be 50’000? Or just 50?

(3) Lack of knowledge and tools

Conducting performance tests requires good tooling: starting from the right environment (hopefully with sizing comparable to production, comparable code etc.) to the right tool (no, you should not use curl or invent your own tool!) and an environment where you execute the load generators. Not to forget proper monitoring for the whole setup. You want to know if you provide enough CPU to your AEM instances, don’t you? So you should monitor it!

I have seen projects where all that was provided, even licenses for LoadRunner (an enterprise-grade tool for performance tests), but in the end the project was not able to use it because no one knew how to define test cases and run them in LoadRunner. We had to fall back to other tooling, or the project management dropped performance testing altogether.

(4) Lack of feedback

You conducted performance tests, you defined KPIs and you were able to execute tests and get results out of them. You went live with it. Great!
But does the application behave as you predicted? Do you have the same good performance results in PROD as in your performance test environment? Having such feedback will help you refine your performance tests, challenging your KPIs and assumptions. Feedback helps you to improve the performance tests and get better confidence in the results.

Conclusion

If you haven’t encountered these issues in your projects, you did a great job avoiding them. Consider yourself a performance test professional. Or part of a project dedicated to good testing. Or your site is so small that you were able to ignore performance tests altogether. Or you just deploy increments so small that you can validate the performance of each increment in production and redeploy a fixed version if you run into a problem.

Have you experienced different issues? Please share them in the comments.