Building a Full Domino Image for JUnit Tests

Sun Jan 23 15:57:51 EST 2022

Tags: docker domino
  1. Tinkering With Testcontainers for Domino-based Web Apps
  2. Adding Selenium Browser Tests to My Testcontainers Setup
  3. Building a Full Domino Image for JUnit Tests

Last year, I wrote about how I built images to use Testcontainers to run tests against a Liberty app that uses a Domino runtime. In that situation, I used the Domino Docker image from Flexnet, but then pulled out the program files and stock data and mixed them with pre-configured server support files from the repository.

Recently, though, I've had a need for a similar setup, but altered in four ways:

  1. It should fully run Domino, not just use the data and program files for the runtime in Liberty
  2. It should not require pre-populating a server ID, notes.ini, and names.nsf from outside - that is, it should be self-configuring
  3. It should also have an extra component installed, one that must be installed after Domino is configured but before the image is fully built
  4. On the next launch, I need a post-install agent to run for final configuration, and the tests need to wait on that

Additionally, it should still be runnable with the same basic tools - the image should be built and the container started and destroyed by automated tests. This stricture comes into play in weird ways.

One-Touch Setup

The second requirement - that the container be self-configuring - is handled adroitly by One-Touch Setup, the feature in Domino 12 and above that lets you specify a configuration JSON file. I used it here in basically the same way as I described in that post: the configuration sets up and certifies a throwaway domain with a known admin username and password, and also deploys a few NTFs on first proper launch. Since this server intentionally doesn't communicate with the outside world, I don't need to provide any external files like a certificate authority or server ID.
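For illustration, the skeleton of such a configuration looks roughly like this - a sketch with throwaway names and passwords, with the field names based on my understanding of the One-Touch Setup JSON schema:

{
    "serverSetup": {
        "server": {
            "type": "first",
            "name": "testserver",
            "domainName": "TestDomain"
        },
        "network": {
            "hostName": "testserver.local"
        },
        "org": {
            "orgName": "TestOrg",
            "certifierPassword": "testpass123"
        },
        "admin": {
            "firstName": "Test",
            "lastName": "Admin",
            "password": "testpass123"
        }
    },
    "autoConfigPreferences": {
        "startServerAfterConfiguration": false
    }
}

That last preference matters later on: it's what keeps the configuration run usable as a build step rather than leaving a live server holding the build open.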

Switching the Baseline

Initially, I continued to use the official image from Flexnet. However, I had previously run into some trouble with multi-stage builds using it. I lack enough Docker knowledge to be sure, but my suspicion is that the image declares /local/notesdata as an expected externally-provided volume. On its own that's fine, but it interacts oddly with automated container building. The trouble comes in when you do something like this:

FROM domino-docker:V1201_11222021prod
ENV SetupAutoConfigure "1"
ENV SetupAutoConfigureParams "/local/domino-config.json"
COPY --chown=notes:wheel domino-config.json /local/
RUN /local/start.sh

RUN /some/script/that/uses/notesdata

By the time the automated build mechanism hits that second RUN line, /local/notesdata is depopulated. I'm not sure why this differs from a command-line docker build, but it is what it is.

Fortunately, the "community" version of the image builder from https://github.com/IBM/domino-docker doesn't exhibit this behavior. I had been considering switching over already, so this made the decision all the easier.
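Building that base image is a one-time, out-of-band step. If I recall the project's conventions correctly, it goes along these lines - the exact invocation may differ, and the project's README is the authority here:

git clone https://github.com/IBM/domino-docker.git
cd domino-docker
./build.sh domino

That should yield the hclcom/domino image referenced in the Dockerfile below.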

Breaking Up Setup Into Stages

With this change in hand, I was able to break the setup up more properly into multiple stages. This is important both for requirement #3 above and because it allows Docker to cache intermediate results. Though I want the server to auto-configure itself during the build, I don't need the results of that to differ from run to run, and thus I can save tons of time in subsequent launches if I handle these caches well.

So my Dockerfile started to look something like this:

FROM hclcom/domino:12.0.1
# Relay Domino output to the container logs
ENV DOMINO_DOCKER_STDOUT "yes"
ENV SetupAutoConfigure "1"
ENV SetupAutoConfigureParams "/local/DominoAutoConfig.json"

COPY --chown=notes:wheel domino-config.json /local/DominoAutoConfig.json
COPY --chown=notes:wheel notesdata/* /local/notesdatatemp/

RUN /domino_docker_entrypoint.sh

# Copy the app executable and support scripts
USER root
COPY appinstall /local/appinstall
RUN chmod +x /local/appinstall/install.sh && /local/appinstall/install.sh

# Back to notes user for the Domino entrypoint
USER notes

But here I hit another distinction between docker build and the automated mechanisms: that RUN /domino_docker_entrypoint.sh line would execute and get to the point where it emits "Application configuration completed successfully", but then would not actually exit. Again, I'm not sure why: the JSON file tells the server not to launch after configuration, and indeed it doesn't, but the script never relinquishes control when run this way - though it does when built from the command line.

So I rolled up my sleeves and wrote a wrapper script that watches for the completion message and exits once setup is known to be done:

#!/usr/bin/env bash
# Run the setup entrypoint in the background, logging to a temp file
/domino_docker_entrypoint.sh > /tmp/domsetup &
# Watch that log for the known completion message, then exit
until tail -f /tmp/domsetup | grep -q "Application configuration completed successfully"; do sleep 1; done

This runs the setup process in the background, redirecting its output to a temp file, then watches that file for the known ending text and exits when it's observed. A little janky in multiple ways, but it works. That allows the image build to progress normally to the next step in all environments, caching the results of the initial server setup. I replaced the RUN /domino_docker_entrypoint.sh line above with copying in and executing this script, and all was well.
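In the Dockerfile, that replacement looks something like this, assuming the wrapper is the firstrun.sh resource that shows up in the container class later in this post:

COPY --chown=notes:wheel firstrun.sh /local/firstrun.sh
RUN chmod +x /local/firstrun.sh && /local/firstrun.sh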

Post-Install Agent

After the "appinstall" step, I have the peculiar need to run code in a Notes context to fiddle with some components that aren't configured earlier. For now, I've settled on writing an agent that runs on server start, then signing and enabling it in the domino-config.json file:

{
    "action": "create",
    "filePath": "postinstall.nsf",
    "title": "Post Install",
    "templatePath": "/local/notesdatatemp/postinstall.ntf",
    "signUsingAdminp": true,
    "agents": [
        {
            "name": "PostInstall",
            "action": "sign"
        },
        {
            "name": "PostInstall",
            "action": "enable"
        }
    ]
}

Originally, I had this agent emit the text "postinstall done" when it finished, so that the Testcontainers runtime could watch for that message to know when it was safe to execute tests. However, this added a good while to the launch stage: launching the container had to wait on Domino's final post-install tasks, then on adminp signing the DB, and then on the agent actually executing. That amounted to about a minute of extra test pre-run time, and thus was a prime target for further caching.
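For reference, that original wait amounted to something like this in the container class's constructor - a sketch using Testcontainers' stock Wait.forLogMessage strategy (from org.testcontainers.containers.wait.strategy.Wait), in place of the HTTP strategy shown later:

// Sketch of the original approach: hold the tests until the
// agent's marker text appears once in the server console output
waitingFor(Wait.forLogMessage(".*postinstall done.*", 1));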

So I altered the agent to check whether it actually needs to do its work and, if so, to shut down the server when it's done:

Session session = getSession();

// On first launch, do the remaining configuration and then quit the
// server so the build step can complete; on later launches, no-op
if(needToDoWork()) {
    doWork();
    session.sendConsoleCommand(session.getServerName(), "q");
}

Then, I amended the end of my Dockerfile:

# ...snip
# Back to notes user for the Domino entrypoint
USER notes

# Run the server once to wait for postinstall to execute, then shut down
WORKDIR /local/notesdata
RUN /opt/hcl/domino/bin/server

# Now let the entrypoint launch again, without requiring further configuration

Again: janky, but it works. Now, all of the setup stages are cached on subsequent runs.

Building the Image in Testcontainers

In my original post, I showed using dockerfile-maven-plugin to build the image just before executing the tests. This works, but it complicated the pom.xml a bit and meant that running the tests in an IDE first required a Maven build. Not the end of the world, but not ideal.

Fortunately, Testcontainers can also build images. That meant that, rather than building the image in Maven and then referencing the same-named image in Java, I could do it all Java-side. To do this, I created a subclass of GenericContainer to centralize the configuration of the container:

package example;

import java.io.IOException;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.HttpWaitStrategy;
import org.testcontainers.images.builder.ImageFromDockerfile;

public class DominoAppContainer extends GenericContainer<DominoAppContainer> {
    public static class DominoAppImage extends ImageFromDockerfile {
        public DominoAppImage() {
            super("example-app-it-testcontainers:1.0.0", false);
            withFileFromClasspath("Dockerfile", "/docker/Dockerfile");
            withFileFromClasspath("domino-config.json", "/docker/domino-config.json");
            withFileFromClasspath("firstrun.sh", "/docker/firstrun.sh");
            withFileFromClasspath("notesdata/exampledata.ntf", "/docker/notesdata/exampledata.ntf");
            withFileFromClasspath("notesdata/postinstall.ntf", "/docker/notesdata/postinstall.ntf");
            withFileFromClasspath("appinstall/install.sh", "/docker/appinstall/install.sh");
            withFileFromClasspath("appinstall/appinstall.jar", "/docker/appinstall/appinstall.jar");
        }
    }

    public DominoAppContainer() {
        super(new DominoAppImage());

        withImagePullPolicy(imageName -> false);
        withExposedPorts(80);
        waitingFor(
            new HttpWaitStrategy()
            .forPort(80)
            .forStatusCodeMatching(code -> code >= 200 && code < 400)
        );
        withLogConsumer(frame -> {
            switch (frame.getType()) {
                case STDERR:
                    try {
                        System.err.write(frame.getBytes());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    break;
                case STDOUT:
                    try {
                        System.out.write(frame.getBytes());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    break;
                default:
                case END:
                    break;
            }
        });
    }
}

It's a little fiddlier in that I now need to enumerate each classpath resource to copy in, but that also makes it all the more portable. I removed the dockerfile-maven-plugin execution from the pom.xml and switched to this instead. Since I name the image and tell it not to delete itself on completion, this retained the desired caching behavior.
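Consuming the container from a test then looks about like this - a hypothetical JUnit 5 sketch, with the test class and assertion purely illustrative:

package example;

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
public class DominoAppIT {
    // Started once and shared across this class's tests
    @Container
    private static final DominoAppContainer container = new DominoAppContainer();

    @Test
    public void testServerIsUp() {
        // By the time the wait strategy passes, port 80 is mapped and serving
        String url = "http://" + container.getHost() + ":" + container.getMappedPort(80) + "/";
        assertTrue(container.isRunning(), "Expected a running server at " + url);
    }
}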

Conclusion

Overall, this whole process brought the test pre-launch time (after the first run) down from 3-5 minutes to about 20 seconds, while also reducing the split between the Maven config and the Java code. That's much more bearable when making small tweaks and re-running the suite, and it makes what's going on a bit easier to follow.
