The Structure of Log Messages in Mendix

david medragh
Jun 18, 2022

Log messages follow a certain pattern

Log messages are split into two types of fields: structured and unstructured. The first few fields of a log message are structured, while the last field has no predefined structure.

The log template used in Mendix and the description of the different log fields can be found below:

log template used in Mendix
  • Timestamp: The moment the log message was generated by the logging component.
  • Source: The specific instance of the app that generated the log message. This is only applicable once an app is deployed to the cloud.
  • Log level: The importance that your app attaches to the message.
  • Log node: The component in your app that generated the log message.
  • Log message: The message detailing what happened.

The first four fields are structured and can be used by log analysis tools to aggregate records when analyzing your log files. The Log message field is intended to give the reader detailed information about a noteworthy event or about an error that occurred. Because this field is free-form, it lets a developer convey all relevant information in whatever way makes sense for a particular message.

Since the Log message field is not bound to a specific structure, it is important to read the message carefully. The reason for logging a message also affects its length: messages that report what is happening will generally be short, while messages that report a problem, especially an unexpected one, will generally be detailed and verbose.
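To make the split between structured and unstructured fields concrete, here is a small Python sketch that separates the four structured fields from the free-form message. The sample line, its exact formatting, and the separator are assumptions for illustration, not the literal format Mendix emits.

```python
import re

# A hypothetical log line following the template described above:
# timestamp, source, log level, log node, then the free-form message.
# The exact formatting varies per deployment; this is an illustrative sketch.
LINE = "2022-06-18 09:15:02.123 app-container-0 INFO MicroflowEngine - Executed microflow 'ACT_CreateOrder'"

# The first four fields are structured, so a fixed pattern can capture them;
# everything after the separator is the unstructured message.
PATTERN = re.compile(
    r"^(?P<timestamp>\S+ \S+) "
    r"(?P<source>\S+) "
    r"(?P<level>\S+) "
    r"(?P<node>\S+) - "
    r"(?P<message>.*)$"
)

def parse_log_line(line: str) -> dict:
    match = PATTERN.match(line)
    if match is None:
        raise ValueError("line does not match the expected template")
    return match.groupdict()

record = parse_log_line(LINE)
print(record["level"], record["node"])  # structured fields, easy to aggregate
print(record["message"])                # free-form, read it carefully
```

A log analysis tool can index on the four captured fields while leaving the message untouched.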

Log levels are ordered based on severity

The concept of log levels was introduced in the 1980s when the Sendmail project (the most popular mail server around the turn of the century) started a new logging tool called syslog. This tool introduced the concept of Severity levels, a list of descriptive words like “Error”, “Warning” and “Informational” with an associated description. This concept of severity levels was later adopted as log levels by many application logging frameworks.

Log levels are ordered by severity, and they are presented below from most severe to least severe. The order matters because enabling a given level also includes the messages from all the more severe levels above it.

  • Critical: This level is used for the most severe messages, indicating an error that may prevent the application from functioning correctly. When a component logs a message at this level, the message is displayed as white text on a red background. When you encounter log messages at this level, you should investigate immediately. They are displayed in the Alerts section of your app under Critical Logs, and when e-mail alerts are set up, they will trigger an e-mail.
  • Error: This level is used for messages that indicate that the application encountered a problem that prevents the application from performing some function. The application will be able to continue running but it should be investigated.
  • Warning: This level indicates that something unexpected happened or that some problem will prevent the application from running in the near future. For instance, the application is running out of disk space. The application will keep working normally if this message is shown.
  • Info: This level reports on noteworthy events during normal operation. These messages are generally informative, like an indication that the runtime has started or what type of license was applied at startup. No action is needed when encountering messages at this level.
  • Debug: This level is used when diagnosing problems. Messages at this level give insight into the steps that your app is taking during normal operation. This is helpful when your app is generating log messages at the Error or Warning levels and the information in those messages is not clear enough to diagnose the issue.
  • Trace: This is the most verbose, or detailed, log level available. It adds even more information than Debug and can be used to analyze problems that are difficult to understand when the Debug level doesn’t provide enough information. It’s best to use this level sparingly, as the amount of information it generates can be overwhelming.

Log levels are an important part of the context of a log message. They tell you how much importance you should attach to the message, and that can change based on the context in which you’re reading it. For instance, during normal operation of your application, you do not have to pay much attention to log messages from the most detailed level. Better yet, you should not have your log level set to the most detailed level during normal operation. If you do, the system will generate so much log information that it’s difficult to see the important log messages. In the worst case, your app will generate so much logging information that it fills up your storage space.
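The same severity ordering appears in most logging frameworks. As an illustration, the sketch below uses Python’s standard logging module (which has no built-in Trace level) to show how setting a threshold keeps messages at that severity and above while dropping everything below it; the logger name and messages are invented for the example.

```python
import logging

captured = []

class ListHandler(logging.Handler):
    """Collects emitted records so we can inspect what passed the level filter."""
    def emit(self, record):
        captured.append((record.levelname, record.getMessage()))

logger = logging.getLogger("level-demo")
logger.addHandler(ListHandler())
logger.propagate = False  # keep the demo's output confined to our handler

# Set the threshold to WARNING: Warning, Error and Critical pass through,
# while the less severe Info and Debug messages are dropped.
logger.setLevel(logging.WARNING)

logger.debug("step details")        # filtered out
logger.info("runtime started")      # filtered out
logger.warning("disk almost full")  # kept
logger.error("could not save")      # kept
logger.critical("runtime halting")  # kept

print(captured)
```

Only the three messages at Warning severity or above survive the filter, which is exactly the “each level includes the levels above it” behavior described here.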

Mendix Log Nodes

Log nodes indicate which part of the application generated the log message, and give you valuable context on how to interpret it. Imagine, for instance, that a log message alerts you to the fact that your application failed to create an object. If that log message was generated by a microflow, the log node will direct you to look at your microflows. Had the log message been generated by a data grid, you would approach your investigation quite differently. Getting to know the most important log nodes in Mendix will help you interpret log messages by knowing what log level to set and which area of your application to investigate.

Log nodes in Mendix are registered dynamically in the central logging component. This offers great flexibility, as log nodes can be added without updating the whole runtime, which is useful when you want to create your own. The downside is that there is no definitive list of all log nodes that you might encounter in a Mendix app. There is a list of log nodes used by the Mendix runtime, which you can find in the documentation. A few of the most used log nodes are described below.

Default Mendix Log Nodes

The following log nodes are used by Mendix when writing log messages. This list is currently incomplete and is being updated.

Microflow Engine

Messages reported on this log node all relate to your microflows, including log messages for errors that your microflows might run into. If this log node is set to the log level Trace, you can see every step of your microflow being executed. This can be ideal when debugging an issue that only exists in production, as you can see which action was executed before the error occurred. But remember that the Trace log level can quickly overwhelm your app environment, so use it wisely.

ConnectionBus

This log node reports general information on your application’s connection to the database. To let you analyze specific parts of that connection, there are related log nodes, each dealing with one part of it. You can recognize them because their names also start with ConnectionBus, followed by an underscore and a word describing what that log node reports on:

  • ConnectionBus_Mapping: Used by the translator between XPath and SQL
  • ConnectionBus_Retrieve: Used by the data retrieval component
  • ConnectionBus_Security: Used by the security engine when accessing data
  • ConnectionBus_Update: Used by the data updating component
  • ConnectionBus_Validation: Used by the component that manages your entities in the database

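Because the database-related nodes share the ConnectionBus prefix, a log analysis step can group them with a simple prefix check. The sketch below uses the node names from the list above; the records themselves are invented for illustration.

```python
# Hypothetical (node, message) records; the node names come from the list
# above, the messages are made up for the example.
records = [
    ("ConnectionBus_Retrieve", "retrieved 120 objects"),
    ("ConnectionBus_Update", "committed 3 objects"),
    ("REST_Consume", "GET /orders returned 200"),
    ("ConnectionBus", "connection pool resized"),
]

# The shared prefix lets us collect everything database-related in one pass.
database_related = [msg for node, msg in records if node.startswith("ConnectionBus")]
print(len(database_related))  # 3 of the 4 records concern the database layer
```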
REST_Publish and REST_Consume

These two log nodes report on the communication your app does with REST APIs. Publish, as you might imagine, reports on anything you expose, while Consume focuses on data that you pull into your app through REST. A nice feature is that you can inspect the incoming data if you set the REST_Consume log node to Trace. This lets you verify that you’re receiving the data correctly and can be a great help in debugging.

Now that you have a better understanding of log nodes, let’s see how they can be used by log analysis tools. Imagine that you want to monitor whether your microflows are running correctly. It would be very useful to track the log messages that have the log level Error. If you monitored all errors, you would capture errors in your microflows, but all other errors as well. By setting Error on only the MicroflowEngine log node, you will be able to capture only the errors that relate to microflows. This allows you to set up an alert that is triggered specifically when one of your microflows misbehaves.
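As a rough analogy, Python’s standard logging module supports the same idea of attaching an Error-level threshold to a single named logger. The logger names below mirror the Mendix log nodes discussed in this article, but the handler and messages are invented for illustration, not part of Mendix.

```python
import logging

microflow_errors = []

class AlertHandler(logging.Handler):
    """Stands in for an alerting integration: records every message it handles."""
    def emit(self, record):
        microflow_errors.append(record.getMessage())

# Named loggers playing the role of log nodes (names mirror the article).
microflow = logging.getLogger("MicroflowEngine")
rest = logging.getLogger("REST_Consume")
microflow.propagate = False  # keep the demo's output confined to our handler
rest.propagate = False
microflow.setLevel(logging.WARNING)

# Attach the alert handler only to the MicroflowEngine node, at Error level,
# so warnings on that node and errors on other nodes do not trigger it.
microflow.addHandler(AlertHandler(level=logging.ERROR))

microflow.error("microflow 'ACT_CreateOrder' failed")  # triggers the alert
microflow.warning("slow microflow")                    # below Error: ignored
rest.error("timeout calling external API")             # other node: ignored

print(microflow_errors)
```

Only the microflow error reaches the alert handler, which is the per-node filtering the paragraph above describes.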
