As part of normal operation, MongoDB maintains a running log of events, including entries such as incoming connections, commands run, and issues encountered. Generally, log messages are useful for diagnosing issues, monitoring your deployment, and tuning performance. Starting in MongoDB 4.4, mongod and mongos output all log messages in structured JSON format. The following is an example log message in JSON format as it would appear in the MongoDB log file:
JSON log entries can be pretty-printed for readability. Here is the same log entry pretty-printed:
Structured logging with key-value pairs allows for efficient parsing by automated tools or log ingestion services, and makes programmatic search and analysis of log messages easier to perform. Examples of analyzing structured log messages can be found in the Parsing Structured Log Messages section. Starting in MongoDB 4.4, all log output is in JSON format, including output sent to:
Each log entry is output as a self-contained JSON object that follows the Relaxed Extended JSON v2.0 specification and contains the following fields, in order:
t: the timestamp of the log message, as a Datetime.
s: the short severity code of the log message (for example, I for Informational).
c: the full component string of the log message (for example, NETWORK).
id: the unique identifier of the log statement.
ctx: the name of the thread that caused the log statement.
msg: the log output message.
attr (optional): additional attributes of the message, as key-value pairs.
tags (optional): strings representing any tags applicable to the log statement.
truncated (optional): information about any truncated attributes in the entry.
size (optional): the original size of truncated attributes.
The message and attributes fields will escape control characters as necessary according to the Relaxed Extended JSON v2.0 specification:
Control characters not listed above are escaped with their \uXXXX representation, where XXXX is the character's Unicode code point in hexadecimal. An example of message escaping is provided in the examples section. Any attributes that exceed the maximum size defined with the maxLogSizeKB server parameter (10 KB by default) are truncated. Here is an example of a log entry with a truncated attribute:
In this case, the oversized attribute is cut off at the size limit, and the entry records which attribute was truncated.
Log entries containing one or more truncated attributes include a truncated object that describes, for each truncated attribute, the sub-object that was truncated and its original size.
Log entries with truncated attributes may also include an additional size field that reports the original size of the attribute before truncation.
When output to the file or the syslog log destinations, padding is added after the severity, context, and id fields to increase readability when viewed with a fixed-width font. The following MongoDB log file excerpt demonstrates this padding:
When working with MongoDB structured logging, the third-party jq command-line utility is a useful tool that allows for easy pretty-printing of log entries, and powerful key-based matching and filtering.
You can use jq to pretty-print a log file, for example: cat mongod.log | jq '.'
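If jq is not available, the same pretty-printing can be sketched with Python's standard json module. The log line below is an illustrative sample, not output from a real deployment:

```python
import json

# One structured log line as emitted by MongoDB 4.4+ (illustrative values).
line = ('{"t":{"$date":"2020-05-01T15:16:17.180+00:00"},"s":"I","c":"NETWORK",'
        '"id":22900,"ctx":"listener","msg":"connection accepted"}')

entry = json.loads(line)              # each log line is one JSON object
pretty = json.dumps(entry, indent=2)  # equivalent of piping through `jq '.'`
print(pretty)
```

Because every line is a self-contained JSON object, a log file can be processed one line at a time without loading it whole.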
More examples of working with MongoDB structured logs are available in the Parsing Structured Log Messages section. MongoDB log messages can be output to file, syslog, or stdout (standard output). To configure the log output destination, use the systemLog.destination setting in the configuration file, or the --logpath (file) or --syslog command-line options.
Not specifying either file or syslog sends all logging output to stdout. For the full list of logging settings and options, see the systemLog configuration file settings and the mongod and mongos command-line options.
Note: Some error messages, such as fatal errors encountered during startup, are sent to stderr regardless of the configured log destination.
The timestamp field indicates the precise date and time at which the logged event occurred.
When logging to file or to syslog [1], the default format for the timestamp is iso8601-local. See Filtering by Date Range for log parsing examples that filter on the timestamp field.
Note: Starting in MongoDB 4.4, the ctime timestamp format is no longer supported.
The severity field type indicates the severity level associated with the logged event.
Severity levels range from "Fatal" (most severe) to "Debug" (least severe):
F: Fatal
E: Error
W: Warning
I: Informational, shown at verbosity level 0
D1 - D5: Debug, shown at verbosity levels 1 through 5
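The relationship between severity codes and verbosity levels can be sketched as a small Python helper. This is a simplification for illustration; the D1-D5 mapping follows the list above:

```python
def emitted(code: str, verbosity: int) -> bool:
    """Return True if a message of the given severity code appears at the
    given component verbosity level."""
    if code in ("F", "E", "W"):
        return True                    # Fatal/Error/Warning are always shown
    if code == "I":
        return verbosity >= 0          # Informational maps to verbosity 0
    return int(code[1:]) <= verbosity  # D1-D5 map to verbosity levels 1-5

print(emitted("D2", 1))  # a Debug-2 message is hidden at verbosity 1
```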
You can specify the verbosity level of various components to determine the amount of Informational and Debug messages MongoDB outputs. Severity categories above these levels are always shown. [2] To set verbosity levels, see Configure Log Verbosity Levels. The component field type indicates the category a logged event is a member of, such as NETWORK or COMMAND.
Each component is individually configurable via its own verbosity filter. The available components are as follows: ACCESS Messages related to access control, such as
authentication. To specify the log level for COMMAND Messages related to database commands, such as CONTROL Messages related to control activities, such as initialization. To specify the log level for ELECTION Messages related specifically to replica set elections. To specify the log level for
FTDC Messages related to the diagnostic data collection mechanism, such as server statistics and status messages. To specify the log level for
GEO Messages related to the parsing of geospatial shapes, such as verifying the GeoJSON shapes. To specify the log level for INDEX Messages related to indexing operations, such as creating indexes. To specify the
log level for INITSYNC Messages related to initial sync operation. To specify the log level for
JOURNAL Messages related specifically to storage journaling activities. To specify the log level for
NETWORK Messages related to network activities, such as accepting connections. To specify the log level for QUERY Messages related to queries, including query planner activities. To
specify the log level for RECOVERY Messages related to storage recovery activities. To specify the log level for
REPL Messages related to replica sets, such as initial sync, heartbeats, steady state replication, and rollback. [2] To
specify the log level for
REPL_HB Messages related specifically to replica set heartbeats. To specify the log level for
ROLLBACK Messages related to
rollback operations. To specify the log level for
SHARDING Messages related to sharding activities, such as the startup of the
STORAGE Messages related to storage activities, such as processes involved in the
TXN New in version 4.0.2. Messages related to multi-document transactions. To specify the log level for
WRITE Messages related to write operations, such as WT New in version 5.3. Messages related to the
WiredTiger storage engine. To specify the log level for WTBACKUP New in version 5.3. Messages related to backup operations performed by the WiredTiger storage engine. To specify the log level
for the WTCHKPT New in version 5.3. Messages related to checkpoint operations performed by the WiredTiger storage engine. To specify the log level for
WTCMPCT New in version 5.3. Messages related to compaction operations performed by the WiredTiger storage engine. To specify the log level for
WTEVICT New in version 5.3. Messages related to eviction operations performed by the WiredTiger storage engine. To specify the log level for
WTHS New in version 5.3. Messages related to the history store of the WiredTiger storage engine. To specify the log level for
WTRECOV New in version 5.3. Messages related to recovery operations performed by the WiredTiger storage engine. To specify the log level for
WTRTS New in version 5.3. Messages related to rollback to stable (RTS) operations performed by the WiredTiger storage engine. To specify the log level for
WTSLVG New in version 5.3. Messages related to salvage operations performed by the WiredTiger storage engine. To specify the log level for
WTTIER New in version 5.3. Messages related to tiered storage operations performed by the WiredTiger storage engine. To specify the log level for
WTTS New in version 5.3. Messages related to timestamps used by the WiredTiger storage engine. To specify the log level for
WTTXN New in version 5.3. Messages related to transactions performed by the WiredTiger storage engine. To specify the log level for
WTVRFY New in version 5.3. Messages related to verification operations performed by the WiredTiger storage engine. To specify the log level for WTWRTLOG New in version 5.3. Messages related to
log write operations performed by the WiredTiger storage engine. To specify the log level for - Messages not associated with a named component. Unnamed components
have the default log level specified in the See Filtering by Component for log parsing examples that filter on the component field. MongoDB drivers </> and client applications (including :binary:`~bin.mongosh) have the ability to send identifying information at the time of connection to the server. After the connection is established, the client does not send the identifying information again unless the connection is dropped and reestablished. This identifying information is contained in the attributes field of the log entry. The exact information included varies by client. Below is a sample log message containing
the client data document as transmitted from a
When secondary members of a replica set initiate a connection to a primary, they send similar data. A sample log message containing this initiation connection might appear as follows, with the client data contained in the attributes field:
See the examples section for a pretty-printed example showing client data. For a complete description of client information and required fields, see the MongoDB Handshake specification. You can specify the logging verbosity level to increase or decrease the amount of log messages MongoDB outputs. Verbosity levels can be adjusted for all components together, or for specific named components individually. Verbosity affects log entries in the severity categories Informational and Debug only. Severity categories above these levels are always shown. You might set verbosity levels to a high value to show detailed logging for debugging or development, or to a low value to minimize writes to the log on a vetted production deployment. [2] To view the current verbosity levels, use the db.getLogComponents() method:
Your output might resemble the following:
The initial verbosity entry in this output is the parent verbosity level that applies to all components by default. A value of -1 for an individual component means the component inherits the verbosity level of its parent.
You can configure the verbosity level using: the systemLog.verbosity and systemLog.component.<name>.verbosity settings, the logComponentVerbosity parameter, or the db.setLogLevel() method. To configure the default log level for all components, use the systemLog.verbosity setting. For example, the following configuration sets the default verbosity along with verbosity levels for specific components:
You would set these values in the configuration file or on the command line for your mongod or mongos instance.
All components not specified explicitly in the configuration have a verbosity level of -1, meaning they inherit the default. To set verbosity levels at runtime, pass the logComponentVerbosity parameter to the setParameter command. For example, the following sets the verbosity of specific components at runtime:
You would set these values from mongosh. Use the db.setLogLevel() method to set the log level for a single named component.
You would set this value from mongosh. Client operations (such as queries) appear in the log if their duration exceeds the slow operation threshold or when the log verbosity level is 1 or higher. [2] These log entries include the full command object associated with the operation. Starting in MongoDB 4.2, the profiler entries and the diagnostic log messages (i.e. mongod/mongos log messages) for read/write operations include:
Starting in MongoDB 5.0, slow operation log messages include a remote field that specifies the client IP address. The following example output includes information about a slow aggregation operation:
See the examples section for a pretty-printed version of this log entry.
New in version 5.0.
To determine if a merge operation or a shard issue is causing a slow query, compare the operation durations (durationMillis) reported by the mongos and by the individual shards.
Log parsing is the act of programmatically searching through and analyzing log files, often in an automated manner. With the introduction of structured logging in MongoDB 4.4, log parsing is made simpler and more powerful. For example:
The following examples demonstrate common log parsing workflows when working with MongoDB JSON log output.
These examples use jq. The following example shows the top 10 unique message values in a given log file, sorted by frequency:
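An equivalent of this tally can be sketched in Python, assuming the log is read line by line. The sample entries below are illustrative:

```python
import json
from collections import Counter

def top_messages(lines, n=10):
    """Tally the msg field of each JSON log line, most frequent first."""
    return Counter(json.loads(l)["msg"] for l in lines if l.strip()).most_common(n)

sample = [
    '{"s":"I","c":"NETWORK","id":22943,"ctx":"listener","msg":"Connection accepted"}',
    '{"s":"I","c":"NETWORK","id":22944,"ctx":"conn1","msg":"Connection ended"}',
    '{"s":"I","c":"NETWORK","id":22943,"ctx":"listener","msg":"Connection accepted"}',
]
print(top_messages(sample))
```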
Remote client connections are shown in the log under the "remote" key in the attribute object. The following counts all unique connections over the course of the log file and presents them in descending order by number of occurrences:
Note that connections from the same IP address, but connecting over different ports, are treated as different connections by this command. You could limit output to consider IP addresses only, with the following change:
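Both variants, counting by host and port versus by IP address alone, can be sketched in Python. The remote values below are illustrative samples:

```python
import json
from collections import Counter

def connection_counts(lines, ip_only=False):
    """Count occurrences of attr.remote; ip_only collapses ports to one IP."""
    counts = Counter()
    for l in lines:
        remote = json.loads(l).get("attr", {}).get("remote")
        if remote is None:
            continue  # skip entries without a remote attribute
        if ip_only:
            remote = remote.rsplit(":", 1)[0]  # strip the port
        counts[remote] += 1
    return counts.most_common()

sample = [
    '{"s":"I","c":"NETWORK","id":22943,"msg":"Connection accepted","attr":{"remote":"127.0.0.1:51298"}}',
    '{"s":"I","c":"NETWORK","id":22943,"msg":"Connection accepted","attr":{"remote":"127.0.0.1:51299"}}',
]
print(connection_counts(sample))
print(connection_counts(sample, ip_only=True))
```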
The following example counts all remote MongoDB driver connections, and presents each driver type and version in descending order by number of occurrences:
The following example analyzes the reported client data of remote MongoDB driver connections and client applications, tallying the operating system type reported by each client:
The string "Darwin", as reported in this log field, represents a macOS client. With slow operation logging enabled, the following returns only the slow operations that took longer than 2000 milliseconds, for further analysis:
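A Python sketch of the same filter, assuming slow operations carry a durationMillis attribute. The sample entries and the 51803 log ID are illustrative:

```python
import json

def slow_ops(lines, threshold_ms=2000):
    """Return entries whose attr.durationMillis exceeds the threshold."""
    out = []
    for l in lines:
        e = json.loads(l)
        if e.get("attr", {}).get("durationMillis", 0) > threshold_ms:
            out.append(e)
    return out

sample = [
    '{"s":"I","c":"COMMAND","id":51803,"msg":"Slow query","attr":{"durationMillis":3011}}',
    '{"s":"I","c":"COMMAND","id":51803,"msg":"Slow query","attr":{"durationMillis":120}}',
]
print(slow_ops(sample))
```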
Consult the jq documentation for more information on the jq filters used in this example.
Log components (the third field in the JSON log output format) indicate the general category a given log message falls under. Filtering by component is often a great starting place when parsing log messages for relevant events. The following example prints only the log messages of component type REPL:
The following example prints all log messages except those of component type REPL:
The following example prints log messages of component type REPL or STORAGE:
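The component filters above, include-only, exclude, and multiple components, can be sketched in Python as a single generator. The sample entries are illustrative:

```python
import json

def by_component(lines, include=None, exclude=None):
    """Yield entries whose c (component) field matches the filters."""
    for l in lines:
        e = json.loads(l)
        c = e.get("c")
        if include is not None and c not in include:
            continue
        if exclude is not None and c in exclude:
            continue
        yield e

sample = [
    '{"s":"I","c":"REPL","id":1,"msg":"example repl message"}',
    '{"s":"I","c":"STORAGE","id":2,"msg":"example storage message"}',
    '{"s":"I","c":"NETWORK","id":3,"msg":"example network message"}',
]
print([e["c"] for e in by_component(sample, include={"REPL", "STORAGE"})])
```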
Consult the jq documentation for more information on the jq filters used in this example.
Log IDs (the fifth field in the JSON log output format) map to specific log events and can be relied upon to remain stable over successive MongoDB releases. As an example, you might be interested in the following two log events, showing a client connection followed by a disconnection:
The log IDs for these two entries are 22943 (connection accepted) and 22944 (connection ended), respectively.
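A Python sketch of filtering by log ID. The IDs 22943 and 22944 are taken from the connection example above and should be verified against your server's output; the sample entries are illustrative:

```python
import json

# Connection accepted / connection ended, per the example above.
CONNECTION_IDS = {22943, 22944}

def by_id(lines, ids):
    """Yield entries whose id field is in the given set of log IDs."""
    for l in lines:
        e = json.loads(l)
        if e.get("id") in ids:
            yield e

sample = [
    '{"s":"I","c":"NETWORK","id":22943,"msg":"Connection accepted"}',
    '{"s":"I","c":"COMMAND","id":51803,"msg":"Slow query"}',
    '{"s":"I","c":"NETWORK","id":22944,"msg":"Connection ended"}',
]
print([e["id"] for e in by_id(sample, CONNECTION_IDS)])
```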
Consult the jq documentation for more information on the jq filters used in this example.
Log output can be further refined by filtering on the timestamp field, limiting log entries returned to a specific date range. For example, the following returns all log entries that occurred on April 15th, 2020:
Note that this syntax includes the full timestamp, including milliseconds but excluding the timezone offset. Filtering by date range can be combined with any of the examples above, creating weekly reports or yearly summaries for example. The following syntax expands the "Monitoring Connections" example from earlier to limit results to the month of May, 2020:
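Date-range filtering can be sketched in Python by parsing the iso8601 timestamp in the t.$date field. The sample entries are illustrative:

```python
import json
from datetime import datetime, timezone

def in_range(lines, start, end):
    """Yield entries whose t.$date timestamp falls within [start, end]."""
    for l in lines:
        e = json.loads(l)
        when = datetime.fromisoformat(e["t"]["$date"])  # offset-aware
        if start <= when <= end:
            yield e

sample = [
    '{"t":{"$date":"2020-05-01T15:16:17.180+00:00"},"s":"I","c":"NETWORK","id":22943,"msg":"Connection accepted"}',
    '{"t":{"$date":"2020-06-12T09:00:00.000+00:00"},"s":"I","c":"NETWORK","id":22944,"msg":"Connection ended"}',
]
may = list(in_range(sample,
                    datetime(2020, 5, 1, tzinfo=timezone.utc),
                    datetime(2020, 5, 31, 23, 59, 59, tzinfo=timezone.utc)))
print([e["id"] for e in may])
```

Combining this with the component or message filters above reproduces the combined jq reports the text describes.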
Consult the jq documentation for more information on the jq filters used in this example.
Log ingestion services are third-party products that intake and aggregate log files, usually from a distributed cluster of systems, and provide ongoing analysis of that data in a central location. The JSON log format, introduced with MongoDB 4.4, allows for more flexibility when working with log ingestion and analysis services. Whereas plaintext logs generally require some manner of transformation before being eligible for use with these products, JSON files can often be consumed out of the box, depending on the service. Further, JSON-formatted logs offer more control when performing filtering for these services, as the key-value structure offers the ability to import only the fields of interest while omitting the rest. Consult the documentation for your chosen third-party log ingestion service for more information. The following examples show log messages in JSON output format. These log messages are presented in pretty-printed format for convenience. This example shows a startup warning:
This example shows a client connection that includes client data:
This example shows a slow operation message:
This example demonstrates character escaping, as shown in the msg field:
Starting in MongoDB 5.0, log messages for slow queries on views include a resolvedViews field that contains the view details:
The following example demonstrates the resolvedViews field. Assume a slow query is run on a view:
Starting in MongoDB 5.0, log messages for slow queries include a remote field that specifies the client IP address: