What I check on code reviews

On several occasions in the last months I did code reviews on AEM projects. I don’t do that exercise very often, so I don’t have a real routine or checklist for what to look at. But in the past I learned some lessons about how to write code for AEM, so I hope I check the relevant pieces. Feedback appreciated.

So let’s start with my top 10 items I look for:

  1. The use of “System.out.println()”, “System.err.println()” and “e.printStackTrace()” statements. Logging is cheap and easy, but obviously not easy enough, because I still find these statements. They should be replaced, because they do not provide relevant metadata like time and class. And to be honest, people tend to look only at the error.log file, but not at stdout.log.
  2. Servlets bound to fixed paths. In most cases this should be replaced by binding either to a selector or to a resource type. The Sling documentation explains it quite well.
  3. The creation of dedicated sessions/ResourceResolvers (either admin sessions or service user sessions) and whether these sessions are properly closed. Although this should be common knowledge to AEM developers, there’s still code out there which doesn’t close resource resolvers or log out sessions, causing memory leaks.
  4. The existence of long-running sessions. You shouldn’t write services which open a session on activate() and close it on deactivate() (see this blog post for the explanation). The only exception to this rule: JCR observation handlers.
  5. adaptTo() calls without proper null checks. adaptTo() is allowed to return null. There are cases where the check can be neglected (in reality you’ll never have all occurrences of it checked), but in most cases it has to be done to avoid a NullPointerException.
  6. Log hygiene part 1: the excessive use of LOG.error() when a LOG.info() is sufficient. Or even worse: LOG.error()/LOG.info() instead of LOG.debug().
  7. Log hygiene part 2: log messages without a meaningful description. A log message has to contain relevant information: with what parameters does this exception happen? At what node? In which request? Consider that some parts are always logged implicitly (e.g. the thread name, which in case of a request contains the requested path), but you need to provide every other piece of information which can be useful to understand the problem when it is found in the logs.
  8. The mixed usage of the JCR and Sling APIs. Choose either one, but then stick to it. You should not have a method which takes both a Session and a ResourceResolver as parameters (or other objects from these two domains).
  9. Performing string operations on paths. I already blogged about it.
  10. JCR queries. Are they properly used, or can they be replaced by a short tree traversal?
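To illustrate points 3 and 5, here is a minimal sketch of the pattern I expect to see (the subservice name “my-service”, the class name and the path handling are invented for illustration; the subservice must be mapped via a service user mapping in your deployment):

```java
import java.util.Collections;
import java.util.Map;

import org.apache.sling.api.resource.LoginException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.apache.sling.api.resource.ValueMap;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Component(service = TitleReader.class)
public class TitleReader {

    private static final Logger LOG = LoggerFactory.getLogger(TitleReader.class);

    @Reference
    private ResourceResolverFactory resolverFactory;

    public String readTitle(String path) {
        Map<String, Object> authInfo = Collections.singletonMap(
                ResourceResolverFactory.SUBSERVICE, "my-service");

        // try-with-resources guarantees the resolver is closed (point 3)
        try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(authInfo)) {
            Resource resource = resolver.getResource(path);
            if (resource == null) {
                LOG.warn("resource not found at {}", path);
                return null;
            }
            // adaptTo() may return null, so check it (point 5)
            ValueMap properties = resource.adaptTo(ValueMap.class);
            if (properties == null) {
                LOG.warn("cannot adapt {} to a ValueMap", path);
                return null;
            }
            return properties.get("jcr:title", String.class);
        } catch (LoginException e) {
            LOG.error("cannot obtain service resource resolver while reading {}", path, e);
            return null;
        }
    }
}
```

Note that the log messages also carry the path, in the spirit of point 7.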

So when you get through all of these quite generic items, your code is already in quite good shape. And if you don’t have a specific problem which I should investigate, I will likely stop here, because then you have already proved that you understand AEM and how to operate it quite well, so I wouldn’t expect major issues anymore.

I am sure that one’s personal background influences the intuitive approach to code reviews a lot. Therefore I am interested in your checklists and how they differ from mine. Just leave a comment, drop me a mail or tweet me 🙂

Let’s try to compile a list which we all can use to improve our code.

9 thoughts on “What I check on code reviews”

  1. Ben Peter ⚾ (@ben_peter)

    nice list 🙂

    on point 3, I’d argue that it’s pretty hard to make a valid case for a JCR Observation handler, so even that is usually out. point 8 is similar — there’s often no valid case behind using JCR APIs where I encounter them in the wild, and I’d include AEM APIs in the list (although you can’t just stick to that and never use the Sling API, which is OK).

    what I like to check on as well are things like
    – thread safety (although arguably that is not an AEM thing, but it’s often done wrong)
    – whether Sling is properly understood and the code falls in line with how Sling works (a quick indicator is looking at whether resource inclusions are preferred to script inclusions)
    – custom code that duplicates API functionality (like if you *do* choose to work with paths as Strings, at least use org.apache.jackrabbit.util.Text)
    – XSS (a quick one if it’s just bread-and-butter components using Sightly^h^h^h^h^h^h^hHTL).
    – is author-only code prevented from running on publishers (you pointed out the best-practice pattern for that earlier)
    – whether time and place of code execution, caching and invalidation is understood and consistent

    1. Jörg Post author

      Hi Ben,

      of course there are a lot of other things to check, I just concentrate on these obvious things in the first run. Not knowing the Sling and AEM APIs is of course another big issue; I often see a lot of code which could be replaced by simple API calls.
      I agree, the hardest part is often the question which API to use 🙂

    2. Steven Goossens

      +1 for the remark about point 8. Higher-level APIs should always be preferred.

      A very big mistake that keeps popping up is stateful OSGi services. I assume this is a +1 for the thread-safety remark in the previous reply.

  2. Bots & AEM corner (@BotsAemCorner)

    Will add my top things (except the ones already mentioned) to look at:
    * general things like test coverage, usage of scriptlets etc.;
    * usage of the JavaScript Use-API is the second item on the big NO-NO-NO list. Does anybody know how to test or debug it?
    * prefer Sling Models over the Java Use-API (I prefer the fancy annotations in Sling over getSlingScriptHelper().get….; testing also feels easier);
    * business logic should never be placed in a servlet or model, and data access logic should never be mixed up with business logic;
    * and even more, keep the AEM stack away from the business logic;
    * need to extract 10 params from the request? Write a Sling Model adapter and adapt the request to your model;
    * if you can use Resource (so the Sling stack), then you should not use Node (the JCR stack). Low-level APIs should be used only when you really need them;
    * whether a new code/feature needs a separate logger;
    * usage of Mockito (or similar libs) when Sling Mocks / AEM Mocks can be used;
    * are there dependencies added to the pom which are not provided by AEM (and if yes, has the correct version been specified)?
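    The request-parameter bullet above could be sketched as a Sling Model like this (class and parameter names are invented; as far as I know there is no standard Sling Models injector for plain request parameters, so this sketch reads them in a @PostConstruct method):

    ```java
    import javax.annotation.PostConstruct;

    import org.apache.sling.api.SlingHttpServletRequest;
    import org.apache.sling.models.annotations.Model;
    import org.apache.sling.models.annotations.injectorspecific.Self;

    // Adaptable from the request, so a servlet can simply do:
    //   SearchParams params = request.adaptTo(SearchParams.class);
    @Model(adaptables = SlingHttpServletRequest.class)
    public class SearchParams {

        @Self
        private SlingHttpServletRequest request;

        private String query;
        private int offset;

        @PostConstruct
        protected void init() {
            // all parameter parsing and defaulting lives here, not in the servlet
            query = request.getParameter("q");
            String rawOffset = request.getParameter("offset");
            // error handling for non-numeric input is omitted for brevity
            offset = rawOffset != null ? Integer.parseInt(rawOffset) : 0;
        }

        public String getQuery() { return query; }
        public int getOffset() { return offset; }
    }
    ```

    The servlet then only calls the getters instead of touching the parameter map, and the parsing logic becomes testable on its own.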

    1. Shubham

      For the point “* need to extract 10 params from the request? Write a Sling Model adapter and adapt the request to your model;”: can someone provide an example? If I have multiple params sent in a request to a servlet, how might a Sling Model adapter help instead of doing request.getRequestParameterMap().get(“param”)?

      Thanks

  3. adobebrendan

    I also look out for over-use of overlays. Those break upgrades faster than anything else. Where possible, overlays should cover the smallest feasible part of the node structure, not the whole product feature being modified. Also, for customizations in the author UI, it’s sometimes much better to inject buttons etc. with client libraries and the DOM than to use overlays. Those kinds of changes also survive upgrades more effectively if done right.

    A note about resource resolvers/sessions: always close them in the same method where they are opened. Create the resolver/session inside a try block and close it in the finally block to ensure there are no cases where they are left open.
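    A minimal sketch of that open-and-close-in-one-method shape (the class and method names are made up; on newer Sling API versions ResourceResolver is also AutoCloseable, so try-with-resources achieves the same):

    ```java
    import java.util.Map;

    import org.apache.sling.api.resource.LoginException;
    import org.apache.sling.api.resource.ResourceResolver;
    import org.apache.sling.api.resource.ResourceResolverFactory;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class ResolverLifecycleExample {

        private static final Logger LOG = LoggerFactory.getLogger(ResolverLifecycleExample.class);

        public void doWork(ResourceResolverFactory factory, Map<String, Object> authInfo) {
            ResourceResolver resolver = null;
            try {
                // opened here ...
                resolver = factory.getServiceResourceResolver(authInfo);
                // ... work with the resolver ...
            } catch (LoginException e) {
                LOG.error("could not obtain resource resolver", e);
            } finally {
                // ... and closed in the very same method, no matter what happened
                if (resolver != null) {
                    resolver.close();
                }
            }
        }
    }
    ```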

    If you have more than a few lines of Java in a JSP, you’re doing it wrong. Code quality tools don’t generally check JSPs very effectively, so there is no way to perform quality metrics on that stuff. Move Java logic to support classes, or if the JSP is mostly Java, then it should have been a servlet in the first place.

    If you have more than a few lines of JavaScript or CSS in a JSP or Sightly file, then you’re doing it wrong. In addition to quality checks, it just makes everything a big mess and harder for UX folks to update the styling and front-end code. Use client libraries. If possible and it is a small amount of code, use a site-wide client library to start with. It loads a lot faster for browsers when they only have to make one JS request for one client library.

    Finally, if you don’t use quality metrics in your build, or at least in your CI server, then you should expect fragility and regressions over time. Use Checkstyle, FindBugs, and ESLint (or similar) to ensure that developers are following good practices. If you want to be really fancy about it, set up a SonarQube instance to keep an eye on things. Don’t wait, because the longer you put it off, the more messes you’ll have to clean up later.

  4. Nono Junang

    I would like to add the following:
    1- Excessive use of data-sly-resource. Sometimes just having the HTML defined as a data-sly-template and used through data-sly-call is enough. Remember that each time data-sly-resource is used, the execution goes through the Sling resource resolution mechanism, which is quite heavy compared to data-sly-call. Think about whether you really need it.
    2- Defining everything as a cq:Component. Sometimes just having a Sling script is enough; it doesn’t always have to be a cq:Component.
