Good logging practices are crucial for monitoring and troubleshooting your Node.js servers. They help you track errors in the application, discover performance optimization opportunities, and carry out different kinds of analysis on the system (such as in the case of outages or security issues) to make critical product decisions.
Even though logging is an essential aspect of building robust web applications, it's often ignored or glossed over in discussions about development best practices. Knowing what and how to log is tricky because it's often difficult to understand what information you’ll need during troubleshooting.
Since logging is so essential, we need to practice good logging habits: accumulating logs that make it easier to diagnose the root cause of problems and solve potential issues before they impact end-users.
This article will outline some best practices to follow when writing logs in a Node.js application.
1. Use a Node.js Logging Library
Node.js developers tend to rely on the runtime's console methods (such as `console.log()`) to log events, since they are built into the runtime and offer an API familiar from the JavaScript console mechanism provided by web browsers.
While `console.log()` has its uses, it is not an adequate solution for implementing logging in a production application. It lacks support for features and configuration options that are considered essential to a good logging setup. For example, console methods do not support log levels like `warn`, `error`, or `debug`, despite the provision of methods like `console.warn()`, `console.error()`, and `console.debug()`, respectively. Those are simply functions that print to the standard output or standard error without indicating log severity.
A good logging library provides a robust feature set that makes it easier to centralize, format, and distribute the logs to fit your needs. For example, a typical logging framework will provide various options for where to output log data (such as the terminal, filesystem, or a database), while also supporting the ability to send the logs over HTTP if you want to transport your log entries to a log management service.
There are three major concerns for choosing a suitable logging library: recording, formatting, and storing messages. You need to make sure that your library of choice addresses all three concerns in a satisfactory manner. Another critical consideration for selecting a logging library is performance. Since the logger will be used a lot throughout the codebase, it can harm your application's runtime performance. Therefore, you should also investigate the performance characteristics of a library, and see how it compares to alternatives.
In the Node.js ecosystem, there are several popular options to consider. Most of them offer similar features, but they also have their differences — so it's vital that you try them out yourself and see which one caters to your use case the best:
- Winston — The most popular logging library, with support for multiple transports. This allows you to easily configure your preferred storage location for your logs.
- Pino — Pino's biggest attraction is its speed. It claims to be up to five times faster than alternatives, in many cases.
- Bunyan — Another feature-rich logging framework that outputs in JSON by default and provides a CLI tool for viewing your logs.
- Roarr — Roarr is a different kind of logger that works in Node.js and the browser.
For the rest of the article, we'll use `winston` to demonstrate the different concepts. This is not necessarily an endorsement of `winston`; it's only being used here because it's the most popular logging framework for Node.js. You can follow along by installing `winston` in your project through `npm`:
```sh
npm install winston
```
2. Use the Correct Log Levels
If you've spent a reasonable amount of time in the software industry, especially supporting production applications, you may have observed the use of log levels, at least in passing. They provide a way to differentiate between the types of events in a system and add context to how important each event is. If you correctly utilize log levels in your application, it will be easy to distinguish between critical events that need to be immediately addressed versus purely informative events.
Although logging systems give different names to severity levels, the concepts remain largely the same. Here are the most common log levels that you are likely to encounter, regardless of the logging framework you choose (in decreasing order of severity):
- FATAL: Used to represent a catastrophic situation — your application cannot recover. Logging at this level usually signifies the end of the program.
- ERROR: Represents an error condition that halts a specific operation, but not the overall system. You can log at this level when a third-party API is returning errors.
- WARN: Indicates runtime conditions that are undesirable or unusual, but not necessarily errors. An example could be using a backup data source when the primary source is unavailable.
- INFO: Info messages are purely informative. Events that are user-driven or application-specific may be logged at this level. A common use of this level is to log interesting runtime events, such as the startup or shutdown of a service.
- DEBUG: Used to represent diagnostic information that may be needed for troubleshooting.
- TRACE: Captures every possible detail about an application's behavior during development.
The `winston` library uses the following log levels by default, with `error` being the most severe and `silly` being the least:
```json
{
  "error": 0,
  "warn": 1,
  "info": 2,
  "http": 3,
  "verbose": 4,
  "debug": 5,
  "silly": 6
}
```
If the defaults do not suit your needs, you can change them while initializing a custom logger. For example, you can instead use the log levels discussed above.
```js
const { createLogger, format, transports } = require("winston");

const logLevels = {
  fatal: 0,
  error: 1,
  warn: 2,
  info: 3,
  debug: 4,
  trace: 5,
};

const logger = createLogger({
  levels: logLevels,
  transports: [new transports.Console()],
});
```
When you want to log a message, you can reference the desired level directly on the custom logger, as shown below:
```js
logger.info("System launch"); // {"message":"System launch","level":"info"}
logger.fatal("A critical failure!"); // {"message":"A critical failure!","level":"fatal"}
```
Winston also allows you to define a `level` property on the logger and on each transport, which specifies the maximum level of messages that should be logged. For example, you can run your programs at the `info` level by default, then switch to the `debug` or `trace` level when you need to troubleshoot an issue or deploy to a testing environment. You should control this setting through an environment variable.
```js
const logger = winston.createLogger({
  level: "warn",
  levels: logLevels,
  transports: [new transports.Console({ level: "info" })],
});
```
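Since the cut-off level typically varies per environment, one way to wire it up is to resolve it from an environment variable before creating the logger. Below is a minimal sketch of such a helper; `LOG_LEVEL` is an assumed variable name, not a winston convention, and the allowed names mirror the custom levels from earlier in this section.

```js
// A sketch: resolve the active log level from an environment variable,
// falling back to "info" for unknown or missing values.
// LOG_LEVEL is an assumed variable name (not a winston convention).
function resolveLogLevel(env = process.env) {
  const allowed = ["fatal", "error", "warn", "info", "debug", "trace"];
  const requested = (env.LOG_LEVEL || "info").toLowerCase();
  return allowed.includes(requested) ? requested : "info";
}

// The result can then be passed to winston, for example:
// const logger = createLogger({ level: resolveLogLevel(), levels: logLevels, ... });
```

Validating the value before handing it to the logger keeps a typo in the deployment configuration from silently changing what gets logged.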
3. Use Structured Logging
When defining how your log messages look, the priority should be to make your log entries easy to read for both humans and machines.
One of the primary goals of logging is to enable post-mortem debugging, and this will involve reading log entries to reconstruct the steps that led to an event in the system. Having human-readable log entries will make this thankless task much easier for developers and sysadmins. It's also important to use a structured format that is easy to parse by machines. This allows for some automated processing on the logs (such as for alerting or auditing purposes).
JSON is a universal favorite for structured log entries because it is ubiquitous and easily readable by humans. It is also highly machine-readable and easily converted to other formats, even when working with other programming languages. When logging in JSON, it's necessary to use a standard schema so that the semantics of each field is clearly defined. This also makes it easy to find what you're looking for when analyzing log entries.
Winston outputs a JSON string by default with two fields: `message` and `level`. The former contains the text that is being logged, while the latter signifies the log level. Customizing the output is straightforward through `winston.format`, which uses logform to implement its different formats. For example, if you want to add a timestamp to each log entry, you can do so by combining the `timestamp` and `json` formats as shown below:
```js
const { createLogger, format, transports } = require("winston");

const logger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  transports: [new transports.Console({})],
});
```
This will produce log entries in the following format:
```
{"message":"Connected to DB!","level":"info","timestamp":"2021-07-28T22:35:27.758Z"}
{"message":"Payment received","level":"info","timestamp":"2021-07-28T22:45:27.758Z"}
```
Ensure you read the logform docs to learn about the predefined formats and how to create your own custom formats.
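Beyond the predefined formats, logform also lets you define a fully custom template. As a sketch, here is a plain function that renders a human-readable line from the fields winston places on each log entry; with winston, the same function can be passed to `format.printf` and combined with `format.timestamp()`.

```js
// A sketch of a custom, human-readable line format. The destructured
// fields mirror winston's default info object plus the timestamp format.
function formatLogLine({ timestamp, level, message }) {
  return `${timestamp} [${level.toUpperCase()}]: ${message}`;
}

// With winston, this template can be plugged in via format.printf:
// const logger = createLogger({
//   format: format.combine(format.timestamp(), format.printf(formatLogLine)),
//   transports: [new transports.Console()],
// });
```

A format like this is handy for local development consoles, while JSON remains the better choice for anything a machine will parse.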
4. Write Descriptive Messages
Log entries should adequately describe the events that they represent. Each message should be unique to the situation and should clearly explain the event that occurred at that point. In the event of an emergency, your log entries may be the only source of information to help you understand what happened, so it's important to get this aspect of logging right!
Here's an example of an inadequate log entry being used to communicate a request failure:
```
Request failed, will retry.
```
The message above doesn't provide any insights into:
- The specific request that failed
- The reason for its failure
- The length of time before the request is retried
We can probably find out the answers to some of these questions by looking at other places such as other log entries or even the code itself. However, it is better to make the log entry more valuable on its own, through a more descriptive message:
```
"POST" request to "https://example.com/api" failed. Response code: "429", response message: "too many requests". Retrying after "60" seconds.
```
This second message is much better because it provides adequate information about the failed request, including the status code and response message, and also notes that the request will be retried in 60 seconds. If all your messages are just as descriptive, you'll have a more pleasant time when attempting to make sense of your logs. Other examples of good log messages include the following:
```
Status of task id "1234" changed from "IN_PROGRESS" to "COMPLETED".
SomeMethod() processed "100" records in "35ms".
User registration failed: field "email" is not a valid email address; field "password" is below the minimum 8 characters.
```
When composing log messages, include all the relevant details pertaining to the event without being unnecessarily verbose. This will prevent other log readers (which could include your future self) from being overwhelmed with too much information to sift through. Your log messages should also be able to stand on their own; don't rely on a previous message's content to provide the context for a later entry.
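One way to keep such messages consistent is to build them from the data you already have at the logging point. The sketch below composes the retry message shown earlier from a plain object; the field names are illustrative, not from any particular library.

```js
// A sketch: compose a descriptive, self-contained message for a failed
// HTTP request. The parameter names here are illustrative examples.
function describeFailedRequest({ method, url, status, statusText, retryAfterSeconds }) {
  return (
    `"${method}" request to "${url}" failed. ` +
    `Response code: "${status}", response message: "${statusText}". ` +
    `Retrying after "${retryAfterSeconds}" seconds.`
  );
}
```

Centralizing the template in a helper like this also guarantees that every retry failure in the codebase is logged with the same fields in the same order, which makes the entries much easier to grep.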
5. Add the Right Amount of Context to Your Logs
Besides writing a descriptive log message, you also need to include the right amount of context in the log entry. Context makes it possible to quickly reconstruct the actions leading up to an event. Add basic information to the log, such as the timestamp of the event and the method where it occurred (or a stack trace, in the case of errors). You should also add data points relevant to the flow of the operation that triggered the event. These data points may be generated at different parts of the operation flow and aggregated at the point of logging.
In the context of a billing service, log entries generated by the system could include several data points, including:
- Session identifiers
- Usernames and IDs
- Product or transaction identifiers
- The current page that the user is on
You can use each of the above data points to track a user's flow through an entire checkout process. If an important event occurs, the available data will be automatically appended to the log output, and it will be possible to identify:
- the circumstances that led to the event (such as the user who experienced the event)
- the page where it occurred
- the transaction and product id that triggered the event.
These data points also let you filter log entries based on a common identifier such as the user id or product id.
Winston provides the ability to add global metadata (such as the component or service where an event occurred) to every generated log entry. In a complex application, this information in your logs is helpful for troubleshooting issues because it immediately directs you to the point of failure.
You can configure this when creating the logger for the component or service:
```js
const logger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  defaultMeta: {
    service: "billing-service",
  },
  transports: [new transports.Console({})],
});
```
A `service` field will be included in all logs created by the `logger` object:
```
{"message":"Order \"1234\" was processed successfully","level":"info","service":"billing-service","timestamp":"2021-07-29T10:56:14.651Z"}
```
To add metadata to individual entries, you need to create a context or metadata object that you can pass around throughout the flow of an operation so that the data is accessible at logging points. You can also utilize the concept of child loggers to add metadata at logging points:
```js
const ctx = {
  userId: "090121",
  productId: "creme-de-la-creme",
};

logger.child({ context: ctx }).info('Order "1234" was processed successfully');
// {"context":{"userId":"090121","productId":"creme-de-la-creme"},"message":"Order \"1234\" was processed successfully","level":"info","service":"billing-service","timestamp":"2021-07-29T12:20:13.249Z"}
```
6. Avoid Logging Sensitive Information
Regardless of whether you're in an industry with strict rules around compliance (such as healthcare or finance) or not, it's important to avoid including sensitive information in your logs.
Sensitive information includes social security numbers, addresses, passwords, credit card details, access tokens, and similar data types. Since log messages are often stored in plain text, such data will be exposed if the logs fall into the wrong hands. You also have to ensure you are not falling afoul of regulations that apply to countries where your product is operational (such as GDPR) by recording certain pieces of information.
You can avoid accidentally leaking sensitive data in your logs by minimizing which parts of the system work with that data. For instance, credit card details should only be seen by the billing component of your system, and sensitive data should be kept out of URLs and redacted wherever possible.
Although this isn't a foolproof solution, you can also use a blocklist to prevent specific fields from ever making it into the logs.
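As a sketch of the blocklist approach, here is a plain function that replaces a few example field names before an entry is written; the field names are illustrative, and with winston the same logic can be wrapped in logform's `format()` helper so it runs on every entry.

```js
// A sketch of blocklist-based redaction. The field names are examples;
// extend the list to match the sensitive fields in your own system.
const BLOCKLIST = ["password", "creditCard", "ssn"];

function redactFields(entry) {
  const safe = { ...entry };
  for (const key of BLOCKLIST) {
    if (key in safe) safe[key] = "[REDACTED]";
  }
  return safe;
}

// With winston, this could be applied to every entry via logform:
// const redact = format((info) => redactFields(info));
// createLogger({ format: format.combine(redact(), format.json()), ... });
```

Remember that a blocklist only catches the fields you anticipated; it is a safety net, not a substitute for keeping sensitive data away from the logging path in the first place.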
7. Log for Auditing and Profiling Reasons
We primarily use logs to diagnose issues and find the root cause of bugs. However, logs can also prove invaluable when auditing or profiling a system, or perhaps to generate interesting statistics about system behavior.
For example, you can log details of what users are doing on the system (like user sign-ins, the resources they created or accessed, etc.). In the absence of specialized tools, you can also use logging to profile your code by reporting how long an operation took or how many times a function was executed. The insights gleaned from this data can help you improve the performance of your application's hotspots.
Winston provides a simple profiling mechanism that you can take advantage of:
```js
logger.profile("test", { level: "info" });

// Returns an object corresponding to a specific timing
const profiler = logger.startTimer();

setTimeout(() => {
  // End the timer and log the duration
  profiler.done({ message: "Logging message" });
}, 1000);
```
A `durationMs` field will be included in the output:

```
{"message":"Logging message","level":"info","durationMs":1001,"timestamp":"2021-07-29T14:17:55.097Z"}
```
8. Automatically Log Uncaught Exceptions and Unhandled Promise Rejections
When you encounter an uncaught exception or unhandled promise rejection, it is always considered good practice to crash the program. Use a process manager like PM2 to automatically restart the process and restore the program to a clean state.
To understand why such an event has occurred, it's also necessary to log the details of the exception or promise rejection before exiting. Winston provides handlers for both situations, which may be configured on a `logger` instance:
```js
const logger = createLogger({
  transports: [new transports.File({ filename: "file.log" })],
  exceptionHandlers: [new transports.File({ filename: "exceptions.log" })],
  rejectionHandlers: [new transports.File({ filename: "rejections.log" })],
});
```
In the example above, uncaught exceptions will be logged into an `exceptions.log` file, while unhandled rejections will be logged into a `rejections.log` file. Entries will automatically include the full stack trace, as well as information about process arguments and memory usage associated with the exception, giving you all the details you need to find the root of the problem.
Learn how to configure Winston's unhandled exception handler.
9. Centralize and Monitor Your Logs
Once you have implemented logging in your Node.js server, the system will rapidly create new log entries. Depending on how much traffic your application receives, this can yield gigabytes of data in a relatively short space of time.
You can prevent your log files from getting too big by employing a log rotation solution that also handles cleaning up old logs. Even with this measure in place, it may be challenging to sift through the logs when you need to troubleshoot an issue (due to the sheer number of log files and entries and the distributed nature of most applications).
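One common option for rotation is the third-party `winston-daily-rotate-file` package, which provides a rotating file transport for winston. The sketch below assumes that package is installed separately; the filename pattern, size cap, and retention window are example values, not defaults.

```js
const { createLogger, format, transports } = require("winston");
// winston-daily-rotate-file is a third-party package; install it separately.
const DailyRotateFile = require("winston-daily-rotate-file");

const logger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  transports: [
    // Rotate to a new file each day, cap individual files at 20 MB,
    // and delete logs older than 14 days (example values).
    new DailyRotateFile({
      filename: "application-%DATE%.log",
      datePattern: "YYYY-MM-DD",
      maxSize: "20m",
      maxFiles: "14d",
    }),
  ],
});
```

With a setup like this, old logs are cleaned up automatically, so disk usage stays bounded even under heavy traffic.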
A more modern approach to this problem is to utilize a log management solution that lets you centralize, filter, and gather insights from all your logs in one place and parse or visualize them in various ways. With a centralized logging solution, you'll be able to configure alerts for critical events or when certain thresholds are surpassed so that you can address critical problems promptly. Winston provides several transport options that enable the easy transmission of log data from your application to an external system.
Node.js Logging Tips: Next Steps
In this article, we've covered some invaluable practical tips to help you write more useful log entries. Next, you should add logging to your Node.js server and configure it while keeping the above best practices in mind.
If you have any additional tips regarding logging in Node.js that were not discussed in this post, feel free to share them with me on Twitter.
Thanks for reading, and happy coding!
P.S. If you liked this post, subscribe to our JavaScript Sorcery list for a monthly deep dive into more magical JavaScript tips and tricks.
P.P.S. If you need an APM for your Node.js app, go and check out the AppSignal APM for Node.js.