Logs

Updated: November 15, 2024

Nuxeo logging is compatible with the common Java logging frameworks: Log4j, Log4j2, JCL, SLF4J, and JUL.

Logging in Nuxeo is configured through a Log4j2 XML file: $NUXEO_HOME/lib/log4j2.xml

By editing that file, you can set the logging rules (log level per file and per component, file rotation, etc.).

Log4J log levels are: ERROR, WARN, INFO, DEBUG and TRACE.

You can increase or decrease the log verbosity for specific services. Here are some useful loggers (see the configuration sketch after this list):

  • org.nuxeo.runtime.deployment.preprocessor.DeploymentPreprocessor logs the pre-processing order
  • org.nuxeo.osgi.application.loader.FrameworkLoader writes the configuration summary
  • org.nuxeo.runtime.osgi.OSGiRuntimeService provides the final startup summary
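
For example, a minimal <Loggers> entry in log4j2.xml to raise one of these loggers to DEBUG could look like the sketch below (the CONSOLE appender name is an assumption based on the default configuration; point the reference at an appender actually defined in your file):

<Logger name="org.nuxeo.runtime.osgi.OSGiRuntimeService" level="debug" additivity="false">
    <AppenderRef ref="CONSOLE" />
</Logger>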

Log Files

The location of the log files depends on the nuxeo.log.dir parameter. By default, Nuxeo generates the following log files (also available from the platform Admin tab and the Control Panel):

  • Launcher logs
    • nuxeoctl.log - Logs activity from NuxeoCtl
    • console.log - Logs output written to the console (stdout)
  • Server logs
    • server.log - Server logs
    • nuxeo-error.log - Gathers errors raised to the user
    • stderr.log - Gathers errors written to the console (stderr)
    • classloader.log - Gathers class loading errors from Catalina
    • tomcat.log - Gathers Tomcat logs
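
For example, to relocate all of these files, you can set the parameter in nuxeo.conf (the path below is purely illustrative):

nuxeo.log.dir=/var/log/nuxeo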

Tomcat Specific

The following information is intended for debugging purposes; it may impact performance and log file size.

$NUXEO_HOME/conf/logging.properties has no effect; you can safely remove it.

Because Tomcat uses java.util.logging, which Log4j is not aware of, Tomcat logs (org.apache.catalina.*) have to be redirected to Apache Commons Logging (which is compatible with Log4j).

Nuxeo provides a bridge for that redirection (in org.nuxeo.common.logging.JavaUtilLoggingHelper) with a default threshold set at the INFO level. You can customize it by adding the following to nuxeo.conf:

JAVA_OPTS=$JAVA_OPTS -Dorg.nuxeo.runtime.redirectJUL.threshold=CONFIG

Note that this redirection only becomes active after the Nuxeo Framework has initialized, whereas most of the startup logs from org.apache.catalina are emitted before that point (when the redirection is not yet in place).

Note also that FINER and FINEST logs will be ignored anyway.

JSON Format

By editing your log4j2.xml, you can output logs in JSON format. You can have a look at this JSON contribution for an example.
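
As an illustration only, a minimal file appender using Log4j2's built-in JsonLayout could look like the sketch below (the JSON appender name and server.json file name are assumptions, and the JSON contribution mentioned above may rely on a different layout; JsonLayout also needs Jackson on the classpath):

<File name="JSON" fileName="${sys:nuxeo.log.dir}/server.json" append="true">
    <JsonLayout compact="true" eventEol="true" />
</File>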

Garbage Collector

You can activate the JVM garbage collector logs with the following option in your nuxeo.conf:

JAVA_OPTS=$JAVA_OPTS -Xlog:gc*:file=${nuxeo.log.dir}/gc.log
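
If you also want timestamps and file rotation, the JVM unified logging syntax (Java 9 and later) accepts decorators and output options; the values below are only an example:

JAVA_OPTS=$JAVA_OPTS -Xlog:gc*:file=${nuxeo.log.dir}/gc.log:time,uptime:filecount=5,filesize=10m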

Slow NXQL Queries

It is possible to easily pinpoint slow NXQL queries. For example, to trace all the queries that take more than 100 ms:

  1. Add the following line in your nuxeo.conf:

    • For VCS:

      org.nuxeo.vcs.query.log_min_duration_ms=100
      
      
    • For DBS:

      org.nuxeo.dbs.query.log_min_duration_ms=100
      
      
  2. Activate the logger for the session type you use (VCS or DBS) at the INFO level. Here is an example lib/log4j2.xml configuration to write these queries to a separate file:

    • For VCS:

      <Logger name="org.nuxeo.ecm.core.storage.sql.SessionImpl" level="info" additivity="false">
          <AppenderRef ref="SLOW" />
      </Logger>
      <File name="SLOW" fileName="${sys:nuxeo.log.dir}/slow-query.log" append="false">
          <PatternLayout pattern="%d{ISO8601} %-5p [%t] [%c] %m%X%n" />
      </File>
      
    • For DBS:

      <Logger name="org.nuxeo.ecm.core.storage.dbs.DBSSession" level="info" additivity="false">
          <AppenderRef ref="SLOW" />
      </Logger>
      <File name="SLOW" fileName="${sys:nuxeo.log.dir}/slow-query.log" append="false">
          <PatternLayout pattern="%d{ISO8601} %-5p [%t] [%c] %m%X%n" />
      </File>
      

To get the slowest queries:

$ grep duration_ms log/slow-query.log | sed -e 's/^[^\t]*\t//g' -e 's/{.*$//g' | sort -nr | head

802.89  QueryFilter(principal=system, limit=0, offset=0)        query   SELECT * FROM Document WHERE ....
12.87   QueryFilter(principal=bob, limit=0, offset=0)        queryAndFetch   Select DISTINCT ecm:uuid...
10.12   QueryFilter(principal=system, limit=50, offset=0) count total results UNLIMITED query   SELECT * FROM Document WHERE ...
5.35    QueryFilter(principal=Administrator, limit=20, offset=0) count total results up to 20   query   SELECT * FROM ...

When the log level is set to TRACE, a debug stack trace is also written to the log, so you can see the code path that issued the query.
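
For example, to get these stack traces for VCS, the logger from step 2 can simply be switched to TRACE:

<Logger name="org.nuxeo.ecm.core.storage.sql.SessionImpl" level="trace" additivity="false">
    <AppenderRef ref="SLOW" />
</Logger>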

Logging Transactional Events

Configure Log4j in $NUXEO_HOME/lib/log4j2.xml by adding the following conversion patterns to your appender pattern layout: %t for the thread name and %X for the logging context map:

<PatternLayout pattern="%d{ISO8601} %-5p [%t] [%c] %m%X%n" />

You should also add a new Logger if you want the traces to be enabled:

<Logger name="org.nuxeo.ecm.core.management.jtajca" level="trace" additivity="false" />

At this stage, once a transaction is started or a connection is opened, their identifiers are put in the logger's context map. By adding the %X pattern, you request that they be printed each time a message is logged. The transactions and connections themselves are also logged. You should add additional log statements at the DEBUG or TRACE level around the code you want to monitor.
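
Putting both pieces together, a minimal sketch that routes these traces to a dedicated file could look like this (the TX appender name and transactions.log file name are illustrative):

<File name="TX" fileName="${sys:nuxeo.log.dir}/transactions.log" append="true">
    <PatternLayout pattern="%d{ISO8601} %-5p [%t] [%c] %m%X%n" />
</File>
<Logger name="org.nuxeo.ecm.core.management.jtajca" level="trace" additivity="false">
    <AppenderRef ref="TX" />
</Logger>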

Important Logs to Monitor

Any log at ERROR level should be investigated.

Some important log messages are described below, with their level, source message, and cause:

  • INFO - "Component Loading Status: Pending: 0 / Missing: 0 / Unstarted: 0 / Total: 813" - A Nuxeo node has started.
  • ERROR - "Terminate computation: XXX due to previous failure" - A computation failed after retries; manual intervention is needed, refer to Nuxeo Stream.
  • ERROR - "Exception in processLoop: XXX" - Low-level failure in a computation thread; manual intervention is needed, refer to Nuxeo Stream.
  • WARN - "Submitted index command: XXX to index the entire default repository" - A Bulk Command to re-index the repository has been submitted.
  • WARN - "Index command: XXX completed: XXX docs in XXXs (wait: XXXs, scroll: XXXs) rate: XXX docs/s" - A Bulk Command to re-index the repository has completed.
  • ERROR - "[Consumer clientId=XXX, groupId=XXX] Offset commit failed on partition XXX at offset XXX: The request timed out." - A KafkaConsumer retry under network congestion; it can be ignored (see KAFKA-7791).