(N)one click deployment

Last week I attended the AEMHub conference in London and I really loved it. Lots of nice people, interesting talks (and chats) and inspiring presentations. And Cognifide did a great job organizing it. Thanks folks, especially to Juliette and Kimberly!

I also held a presentation called “(N)one click deployment”. Its point was that IT operations staff should not be held responsible for the automation of operations processes (for many reasons, like insufficient time, insufficient skills and sometimes even a lack of motivation). Developers, on the other hand, are by nature creators of automation, because programming is just automating the steps needed to perform a task.

Additionally, the features you might consider natural tools for automating CQ maintenance or deployment procedures are just building blocks, not tools. When you use curl to automate such processes, you have to take care of error handling and reporting yourself. That can get pretty complicated when you have to parse server responses to determine the right status and your only tool is the Unix shell.

So in the end you’re better off using a programming language, which offers more features than the shell and makes things easier to build, test and debug. And if you are an operations person focussed on automating AEM deployments and maintenance tasks, don’t put too much effort into handling everything externally; instead, motivate the developers (and probably also the vendor :-)) to include more sophisticated building blocks in the application or the product itself, so your job gets easier.

You can find my slide deck on the official AEMHub slideshare page.

Meta: New blog layout

When I started this blog back in December 2008, I really didn’t care that much about its design, and I simply took the Kubrick theme, a very simple and straight style with a deep blue header. Now we are in 2014, and the times have changed, and so have I. It’s time for some cleanup and adjustments. So today I changed the style of this blog to something more modern and also added a Twitter image to the sidebar. And the comment function is now at the top of a posting and no longer at the bottom. But that should be all.

If you are reading this blog through a feed reader, you probably won’t see any change at all. But that’s fine then :-)

Using curl to install CQ packages — and why it isn’t a good idea

CQ5 has a very good tradition of exposing APIs which are accessible via plain HTTP (in some cases even RESTful APIs). This allows you to access data and processes from the outside and is also a good starting point for automation.

Especially the Package Manager API is well documented, so most attempts to automate deployment steps start here. A typical shell script to automate package installation might look like this:

ZIP=directory/to/the/package-1.0.zip
FILENAME=`basename $ZIP`
CURL="curl -u admin:admin"
HOST=http://localhost:4502/crx/packmgr/service/.json

# upload the package (force=true overwrites an already existing one)
$CURL -s -F package=@$ZIP -F force=true "$HOST?cmd=upload"
if test $? -ne 0; then
  echo "failed on upload"
fi
# install the previously uploaded package
$CURL -X POST -s "$HOST/etc/packages/mypackages/$FILENAME?cmd=install"
if test $? -ne 0; then
  echo "failed on install"
fi

As you see, it lacks any kind of sophisticated error handling. Introducing good error handling is not trivial, as curl doesn’t return the HTTP status as its exit code (the exit code only tells you whether the request itself could be performed), so you have to parse the complete output to decide whether the server returned an HTTP 200 or something else. Any non-seasoned shell script developer will probably just omit this part and hope for the best …
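
curl can at least print the HTTP status code for you, so you can compare it yourself. A minimal sketch of that approach, reusing the variables from the script above (an illustration, not a complete solution):

STATUS=`$CURL -s -o /dev/null -w '%{http_code}' -F package=@$ZIP -F force=true "$HOST?cmd=upload"`
if test "$STATUS" -ne 200; then
  # -o /dev/null discards the response body, so this only covers the HTTP status
  echo "upload failed with HTTP status $STATUS"
fi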

And even then: when your package installation throws an error during deserialization of a node (maybe you have a typo in one of your hand-crafted .content.xml files), the system still returns an HTTP 200 code. Which of course it shouldn’t.
(The reason for the 200 is that the code streams the installation progress for each node, so the decision about the status code has to be made before all nodes are imported into the repository. Hence the need for an internal status, which is written in one of the last lines of the result. Thanks Toby for this missing piece!)
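
If you want your script to catch such cases, you therefore have to capture the response body and look for that internal status near its end. A sketch of the idea (the exact wording of the status line is an assumption here and may differ between CQ versions and between the JSON and HTML flavours of the service):

RESPONSE=`$CURL -X POST -s "$HOST/etc/packages/mypackages/$FILENAME?cmd=install"`
# the real status is reported in one of the last lines of the streamed output
echo "$RESPONSE" | tail -5 | grep -q '"success":true'
if test $? -ne 0; then
  echo "failed on install (server-side status)"
fi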

And of course we still lack checks whether embedded bundles start up correctly …

So whenever you build such a light-weight deployment automation, be aware of its limits. Good error handling, especially when the errors are inlined in some output, has never been a primary focus of shell scripting, and most of the automation scripts I’ve seen in the wild (and written myself, to be honest) never really cared about it.
But if you want to have it automated, it must be reliable, so you can focus on your other work and not on checking deployment logs for obvious and non-obvious errors.

At AEMHub I will talk about the importance of such tools and why developers should care about such operations topics. And I hope that I can present the foundations of a small project aimed at proper CQ deployment automation.

Rewrapping CQ quickstart to include your own packages

CQ quickstart is a cool technology to ease the setup of CQ installations; although it’s not a perfect tool for server installations, it’s perfect for developers to re-install a local CQ development environment or for any kind of demo installation. But an out-of-the-box installation is still an out-of-the-box installation: it doesn’t contain hotfixes or any application bundles, it’s just a raw CQ. In this posting of 2010 I described a way to leverage the install directory of CRX to deploy packages and how to package it for distribution. It’s a bit clumsy, as it requires manual work or extra steps to automate it.

In this post I want to show you a way to rebuild a CQ quickstart installation that includes extra packages. And on top of that, you can do it as part of your maven build process, which anyone can execute!

The basic idea is to put all artifacts into a maven repository (e.g. Nexus), so we can address them via maven, and then use the maven-assembly-plugin to repackage the quickstart file.

Required Steps:

  • Put your CQ quickstart into your maven repository, so you can reference it. You can freely choose the name; for our example let’s use groupId=com.adobe.aem.quickstart, artifactId=aem-quickstart, version=5.6.1, packaging=zip. For this example you can also just put the file into your local m2 repository, ~/.m2/repository/com/adobe/aem/quickstart/aem-quickstart/5.6.1/aem-quickstart-5.6.1.zip, for example with the command shown below.
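    If you don’t have a Nexus at hand, the standard maven-install-plugin can do this for you (a sketch; the path to your quickstart file is of course an assumption):

    mvn install:install-file -Dfile=aem-quickstart-5.6.1.zip \
        -DgroupId=com.adobe.aem.quickstart -DartifactId=aem-quickstart \
        -Dversion=5.6.1 -Dpackaging=zip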
  • Update your pom.xml file: add the maven-assembly-plugin and dependencies to both the aem-quickstart artifact and an additional content package you create during your build (e.g. com.yourcompany.cq5.contentpackage:your-contentpackage).
  • Extend your pom.xml by this plugin definition:
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-assembly-plugin</artifactId>
        <executions>
          <execution>
            <id>quickstart-repackage</id>
            <configuration>
              <finalName>quickstart-repackaged</finalName>
              <descriptors>
                <descriptor>src/assembly/quickstart-repackaged.xml</descriptor>
              </descriptors>
              <appendAssemblyId>false</appendAssemblyId>
            </configuration>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>

    The magic lies in the descriptor file, which I placed in src/assembly (which is just as good as any other location …).

  • This file src/assembly/quickstart-repackaged.xml can look like this:
    <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">

      <id>bin</id>
      <formats>
        <format>jar</format>
      </formats>

      <includeBaseDirectory>false</includeBaseDirectory>
      <dependencySets>
        <!-- unpack the quickstart itself into the root of the new archive -->
        <dependencySet>
          <outputDirectory>/</outputDirectory>
          <unpack>true</unpack>
          <includes>
            <include>com.adobe.aem.quickstart:aem-quickstart</include>
          </includes>
        </dependencySet>
        <!-- add the content package, so it gets bootstrapped on startup -->
        <dependencySet>
          <outputDirectory>/static/install</outputDirectory>
          <includes>
            <include>com.yourcompany.cq5.contentpackage:your-contentpackage</include>
          </includes>
        </dependencySet>
      </dependencySets>
    </assembly>

    This descriptor tells the plugin to unpack the quickstart file and then add your-contentpackage to the static/install folder; CQ bootstraps packages from this folder during startup. Afterwards everything is repackaged as a jar file with the name “quickstart-repackaged” (taken from the pom.xml above).

  • Invoke maven with the package goal, as shown below.
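    With the configuration above, a plain maven invocation is all that’s needed; the repackaged quickstart then shows up in the target folder:

    mvn clean package
    # produces target/quickstart-repackaged.jar (the finalName from the pom.xml)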

If you take this route, you’ll have a fantastic way to automatically build your own flavour of quickstart files. Just download the latest version from your Jenkins server, double-click it and — voilà — you have a full-fledged, up-to-date demonstration or testing instance up and running. And as soon as you have all required ingredients in your Nexus, everyone can build such quickstart variants, as it only requires a working maven setup.

Deployment automation

Good developers are lazy. They are among the laziest creatures in the universe, because they tend never to do anything twice. They even created the acronym DRY because they’re too lazy to write “Don’t repeat yourself!”. But they are of course in a unique position: developers are able to automate work which they would otherwise do over and over. They create libraries and frameworks, they write tests which can be executed by the computer. And they automate their build system, so every product version they created can be recreated at will.

At AemHub 2014 I will give a talk and discuss the chances of applying this mindset also to tools for application deployments. I will do a short demonstration which illustrates the benefit of effective laziness in reducing the required effort for CQ deployments.

I am looking forward to seeing you there.

TarPM low-level write performance analysis

The Tar Persistence Manager (TarPM) is the default persistence mechanism in CQ. It stores all data in the filesystem and is quite robust and performant.
But there are situations where you would like to know what actually happens in the repository:

  • When your repository grows and grows
  • When you suffer from a huge number of JCR events
  • When you do performance optimization
  • When you’re just interested in what happens under the hood

In such situations you can use the “CRX Change history” page in the OSGi console; if you follow the “details” link of the most recent tar file, it will show you the details of each transaction: the names of the changed nodes and a timestamp.

CRX Change History preview

In this little screenshot you can see that first some changes to the image node have been written; immediately afterwards the corresponding audit event has been stored in the repository.

I use this tool especially when I need to check the data which is written to the repository. Especially when multiple operations run concurrently and I need to monitor the behavior of some application code I don’t know very well, this screen is of huge help. I can find out whether some process really writes as much data as anticipated. Also, the number of nodes written in a single transaction shows whether batch saves are used or whether the developer preferred lots of individual saves (which come with a performance penalty).
And you can check whether an overflowing JCR event queue is caused by many write operations or by slow observation listeners.

So it’s a good tool whenever you suspect that the writes to your repository should be quicker than they are.

CQ development anti-pattern: Single session, multiple threads

The ease of using the relevant features of OSGi, combined with the power of Declarative Services (SCR), often leads to a very simple design of services. You can just define the service interface as a pure Java interface and then implement your service as a single class with just a few annotations. You can handle its complete life cycle in this single class.

@Component(immediate=true)
@Service
public class MyServiceImpl implements MyService {

  @Activate
  protected void activate() {
    // ...
  }

  @Deactivate
  protected void deactivate() {
    // ...
  }

}

And that’s it: a very simple approach, which satisfies the requirements of most services.

But it can also lead to an anti-pattern: it’s just too easy to acquire resources in activate() and release them again in deactivate(). That’s especially true for JCR sessions.

@Reference
SlingRepository repo;

Session adminSession = null;

@Activate
protected void activate(ComponentContext ctx) {
  try {
    adminSession = repo.loginAdministrative(null);
  } catch (RepositoryException e) {
    // ...
  }
}

@Deactivate
protected void deactivate(ComponentContext ctx) {
  if (adminSession != null && adminSession.isLive()) {
    adminSession.logout();
  }
}

So during the whole lifetime of this service you have a JCR (admin) session which is just available, and you don’t need any special handling for it. It’s just there, and it’s supposed to be open and alive (unless something really weird happens). Your service methods can now be as simple as this:

public void doSomething() {
  // work with the adminSession
}

So, what’s the problem then? Why do I call it an anti-pattern?

Basically, it’s the use of a single session. Your service might be called by multiple threads in parallel, and you might suppose that these calls are processed in parallel. Well, that isn’t the case. In the current implementation of Jackrabbit 2.x the JCR session has an internal lock, which prevents multiple threads from working in parallel on the very same session (for both reads and writes)! So by this design you limit your application’s scalability, because the threads queue up and wait until they get hold of that internal lock. And this is something we really should avoid.

So, what’s the best way to avoid this? That’s quite simple: use a new session for each call. Sessions are cheap and don’t require relevant startup time. I already presented this pattern in the context of preventing memory leaks, and I want to repeat it here:

public void doSomething() {
  Session adminSession = null;
  try {
    adminSession = repo.loginAdministrative(null);
    // do something useful here
  } catch (RepositoryException e) {
    // error handling
  } finally {
    if (adminSession != null && adminSession.isLive()) {
      adminSession.logout();
    }
  }
}

Here you have a dedicated session for each call (and implicitly for each thread calling this method), so you never need to bother with this kind of locking issue.

A hint for finding such bottlenecks: set the log level of the class org.apache.jackrabbit.core.session.SessionState to DEBUG; if the same session is accessed concurrently from multiple threads, it will write a statement to the log.