Metrics reference
You should use caution when interpreting unfamiliar metrics. Reading the Performance section is recommended to better understand them.
Types of metrics
Neo4j has the following types of metrics:
- Global — covers the whole Neo4j DBMS.
- Per database — covers an individual database.
The metrics fall into one of the following categories:
- Gauge — shows an instantaneous reading of a particular value.
- Counter — shows an accumulated value.
- Histogram — shows the distribution of values.
Neo4j supports several ways of exposing metrics. For more details, refer to the page Expose metrics.
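If you expose metrics through the Prometheus endpoint described on that page, a monitoring script can scrape and inspect them directly. The sketch below is a minimal Python example of doing so; the host, port, and parsing assumptions (plain-text exposition format, no timestamps) are illustrative and should be adjusted to match your actual Expose metrics configuration.

```python
# Minimal sketch: scrape a Neo4j Prometheus metrics endpoint and print the values.
# The URL below is an illustrative assumption; see the Expose metrics page for the
# settings that actually enable and locate the endpoint.
from urllib.request import urlopen

def scrape(url="http://localhost:2004/metrics"):
    metrics = {}
    with urlopen(url) as response:
        for raw in response.read().decode("utf-8").splitlines():
            line = raw.strip()
            if not line or line.startswith("#"):   # skip comments and type hints
                continue
            name, _, value = line.rpartition(" ")  # "metric_name value"
            try:
                metrics[name] = float(value)
            except ValueError:
                pass                               # ignore anything non-numeric
    return metrics

if __name__ == "__main__":
    for name, value in sorted(scrape().items()):
        print(f"{name} = {value}")
```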
Global metrics
Global metrics cover the whole database management system and represent the system’s status as a whole.
Global metrics have the following name format:
- <user-configured-prefix>.dbms.<metric-name>, where the <user-configured-prefix> can be configured with the server.metrics.prefix configuration setting.
Metrics of this type are reported as soon as the database management system is available.
For example, all JVM-related metrics are global.
In particular, the neo4j.dbms.vm.thread.count metric has the default user-configured prefix neo4j, and the global metric name is vm.thread.count.
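As a small illustration of this naming scheme, the sketch below is a hypothetical Python helper (not part of Neo4j) that assembles a global metric name from the configured prefix and the metric name.

```python
# Minimal sketch: assemble a global metric name from prefix and metric name.
# "neo4j" is the default value of server.metrics.prefix, as described above.
def global_metric_name(metric, prefix="neo4j"):
    return f"{prefix}.dbms.{metric}"

print(global_metric_name("vm.thread.count"))  # -> neo4j.dbms.vm.thread.count
```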
By default, global metrics include:
- Thread metrics
- Web Server metrics
Database metrics
Each database metric is reported for a particular database only. Database metrics are only available during the lifetime of the database. When a database becomes unavailable, all of its metrics become unavailable also.
Database metrics have the following name format:
- <user-configured-prefix>.database.<database-name>.<metric-name>, where the <user-configured-prefix> can be configured with the server.metrics.prefix configuration setting.
For example, any transaction metric is a database metric.
In particular, the neo4j.database.mydb.transaction.started metric has the default user-configured prefix neo4j and is a metric for the mydb database.
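The two name formats can also be read in the opposite direction. The sketch below is a hypothetical Python helper that splits a fully qualified metric name back into its parts; it assumes the default prefix and the two formats shown above.

```python
# Minimal sketch: split a fully qualified metric name into its parts, based on the
# global and per-database formats described above. Assumes the default prefix;
# database names containing dots are not handled here.
def parse_metric_name(full_name, prefix="neo4j"):
    rest = full_name.removeprefix(prefix + ".")
    if rest.startswith("dbms."):
        return {"scope": "global", "metric": rest.removeprefix("dbms.")}
    if rest.startswith("database."):
        database, _, metric = rest.removeprefix("database.").partition(".")
        return {"scope": "database", "database": database, "metric": metric}
    raise ValueError(f"unrecognized metric name: {full_name}")

print(parse_metric_name("neo4j.database.mydb.transaction.started"))
# -> {'scope': 'database', 'database': 'mydb', 'metric': 'transaction.started'}
```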
By default, database metrics include:
General-purpose metrics
Bolt metrics
Name | Description |
---|---|
|
The total number of Bolt connections opened since startup. This includes both succeeded and failed connections. Useful for monitoring load via the Bolt drivers in combination with other metrics. (counter) |
|
The total number of Bolt connections closed since startup. This includes both properly and abnormally ended connections. Useful for monitoring load via Bolt drivers in combination with other metrics. (counter) |
|
The total number of Bolt connections that are currently executing Cypher and returning results. Useful to track the overall load on Bolt connections. This is limited to the number of Bolt worker threads that have been configured via |
|
The total number of Bolt connections that are not currently executing Cypher or returning results. (gauge) |
|
The total number of messages received via Bolt since startup. Useful to track general message activity in combination with other metrics. (counter) |
|
The total number of messages that have started processing since being received. A received message may not begin processing until a Bolt worker thread becomes available. A large gap observed between |
|
The total number of Bolt messages that have completed processing whether successfully or unsuccessfully. Useful for tracking overall load. (counter) |
|
The total number of messages that have failed while processing. A high number of failures may indicate an issue with the server and further investigation of the logs is recommended. (counter) |
|
(unsupported feature) When |
|
The total amount of time in milliseconds that worker threads have been processing messages. Useful for monitoring load via Bolt drivers in combination with other metrics. (counter) |
|
Introduced in 5.21The amount of time in milliseconds that worker threads spent bound to a given connection. (histogram) |
|
(unsupported feature) When |
|
(unsupported feature) When |
|
(unsupported feature) When |
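As a rough way to act on the received/started counters in the table above, the sketch below estimates how many Bolt messages are still waiting for a worker thread. The metric key names and the sample values are illustrative assumptions; substitute the names as they appear in your metrics output.

```python
# Minimal sketch: estimate the Bolt message backlog from two counters in the
# table above (messages received vs. messages that have started processing).
# Key names below are illustrative assumptions, not exact Neo4j metric names.
def bolt_backlog(metrics,
                 received_key="bolt.messages_received",
                 started_key="bolt.messages_started"):
    backlog = metrics.get(received_key, 0) - metrics.get(started_key, 0)
    if backlog > 0:
        print(f"{backlog} Bolt messages are waiting for a worker thread")
    return backlog

bolt_backlog({"bolt.messages_received": 1200, "bolt.messages_started": 1180})
```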
Bolt Driver metrics
Name | Description |
---|---|
|
The total number of managed transaction function calls. (counter) |
|
The total number of unmanaged transaction function calls. (counter) |
|
The total number of implicit transaction function calls. (counter) |
|
The total number of driver-level execute function calls. (counter) |
Database checkpointing metrics
Name | Description |
---|---|
|
The total number of checkpoint events executed so far. (counter) |
|
The total time, in milliseconds, spent in checkpointing so far. (counter) |
|
The duration, in milliseconds, of the last checkpoint event. Checkpoints should generally take several seconds to several minutes. Long checkpoints can be an issue, as these are invoked when the database stops, when a hot backup is taken, and periodically as well. Values over |
|
Introduced in 5.10The accumulated number of bytes flushed during all checkpoint events combined. (counter) |
|
The number of milliseconds the checkpoint was paused by the IO limiter. (gauge) |
|
The number of times the checkpoint was paused by the IO limiter. (gauge) |
|
The number of pages that were flushed during the last checkpoint event. (gauge) |
|
The number of IOs, from the Neo4j perspective, performed during the last checkpoint event. (gauge) |
|
The IO limit used during the last checkpoint event. (gauge) |
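The two checkpoint counters above (total events and total time) can be combined into an average checkpoint duration, which is often easier to alert on than either counter alone. The sketch below uses illustrative key names.

```python
# Minimal sketch: derive the average checkpoint duration from the two counters
# in the table above. Key names are illustrative assumptions.
def mean_checkpoint_ms(metrics,
                       total_time_key="check_point.total_time",
                       events_key="check_point.events"):
    events = metrics.get(events_key, 0)
    if events == 0:
        return None  # no checkpoints have run yet
    return metrics.get(total_time_key, 0) / events

print(mean_checkpoint_ms({"check_point.total_time": 90_000, "check_point.events": 30}))
# -> 3000.0 milliseconds per checkpoint on average
```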
Cypher metrics
Name | Description |
---|---|
|
The total number of times Cypher has decided to re-plan a query. Neo4j caches 1000 plans by default. Seeing sustained replanning events or large spikes could indicate an issue that needs to be investigated. (counter) |
|
The total number of seconds waited between query replans. (counter) |
Database data count metrics
Name | Description |
---|---|
|
The total number of relationships in the database. (gauge) |
|
The total number of nodes in the database. This is a rough metric of how big your graph is, and if you are running a bulk insert operation, you can see it tick up. (gauge) |
|
Introduced in 5.15 The total number of internally generated IDs for the different relationship types stored in the database. These IDs do not reflect changes in the actual data. Informational, not an indication of any issue. (gauge) |
Database neo4j pools metrics
Name | Description |
---|---|
|
Used or reserved heap memory in bytes. (gauge) |
|
Used or reserved native memory in bytes. (gauge) |
|
Sum total used heap and native memory in bytes. (gauge) |
|
Sum total size of capacity of the heap and/or native memory pool. (gauge) |
|
Available unused memory in the pool, in bytes. (gauge) |
Database operation count metrics
Name | Description |
---|---|
|
Count of successful database create operations. (counter) |
|
Count of successful database start operations. (counter) |
|
Count of successful database stop operations. (counter) |
|
Count of successful database drop operations. (counter) |
|
Count of failed database operations. (counter) |
|
Count of database operations that failed previously but have recovered. (counter) |
Database state count metrics
Name | Description |
---|---|
|
Databases hosted on this server. Databases in states |
|
Databases in a failed state on this server. (gauge) |
|
Databases that desire to be started on this server. (gauge) |
Database data metrics
Deprecated in 5.15
Name | Description |
---|---|
|
The total number of internally generated IDs for the different relationship types stored in the database. These IDs do not reflect changes in the actual data. Informational, not an indication of any issue. (gauge) |
|
The total number of internally generated IDs for the different property names stored in the database. These IDs do not reflect changes in the actual data. Informational, not an indication of any issue. (gauge) |
|
The total number of internally generated reusable IDs for the relationships stored in the database. These IDs do not reflect changes in the actual data. If you want to have a rough metric of how big your graph is, use |
|
The total number of internally generated reusable IDs for the nodes stored in the database. These IDs do not reflect changes in the actual data. If you want to have a rough metric of how big your graph is, use |
Global neo4j pools metrics
Name | Description |
---|---|
|
Used or reserved heap memory in bytes. (gauge) |
|
Used or reserved native memory in bytes. (gauge) |
|
Sum total used heap and native memory in bytes. (gauge) |
|
Sum total size of the capacity of the heap and/or native memory pool. (gauge) |
|
Available unused memory in the pool, in bytes. (gauge) |
Database page cache metrics
Name | Description |
---|---|
|
The total number of exceptions seen during the eviction process in the page cache. (counter) |
|
The total number of page flushes executed by the page cache. (counter) |
|
The total number of page merges executed by the page cache. (counter) |
|
The total number of page unpins executed by the page cache. (counter) |
|
The total number of page pins executed by the page cache. (counter) |
|
The total number of page evictions executed by the page cache. (counter) |
|
The total number of cooperative page evictions executed by the page cache due to low available pages. (counter) |
|
Introduced in 5.17The total number of pages flushed by page eviction. (counter) |
|
Introduced in 5.17The total number of pages flushed by cooperative page eviction. (counter) |
|
The total number of page faults in the page cache. If this count keeps increasing over time, it may indicate that more page cache is required. However, note that when Neo4j Enterprise starts up, all page cache warmup activities result in page faults. Therefore, it is normal to observe a significant page fault count immediately after startup. (counter) |
|
The total number of failed page faults that have happened in the page cache. (counter) |
|
The total number of cancelled page faults that have happened in the page cache. (counter) |
|
The total number of vectored page faults that have happened in the page cache. (counter) |
|
The total number of failed vectored page faults that have happened in the page cache. (counter) |
|
The total number of page faults that are not caused by page pins that have happened in the page cache. These represent pages loaded by vectored faults. (counter) |
|
The total number of page hits that have happened in the page cache. (counter) |
|
The ratio of hits to the total number of lookups in the page cache. Performance relies on efficiently using the page cache, so this metric should be in the 98-100% range consistently. If it is much lower than that, then the database is going to disk too often. (gauge) |
|
The ratio of the number of used pages to the total number of available pages. This metric shows what percentage of the allocated page cache is actually being used. If it is 100%, then it is likely that the hit ratio will start dropping, and you should consider allocating more RAM to the page cache. (gauge) |
|
The total number of bytes read by the page cache. (counter) |
|
The total number of bytes written by the page cache. (counter) |
|
The total number of IO operations performed by the page cache. (counter) |
|
The total number of times the page cache flush IO limiter was throttled during ongoing IO operations. (counter) |
|
The total number of milliseconds the page cache flush IO limiter was throttled during ongoing IO operations. (counter) |
|
The total number of page copies that have happened in the page cache. (counter) |
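Because the hits and faults values are counters accumulated since startup, the hit ratio is usually more meaningful when computed over a sampling window from two consecutive scrapes. The sketch below does that; the key names and sample values are illustrative assumptions.

```python
# Minimal sketch: compute the page cache hit ratio over a sampling window from
# the hits and faults counters in the table above (lookups = hits + faults).
def hit_ratio(prev, curr, hits_key="page_cache.hits", faults_key="page_cache.faults"):
    hits = curr[hits_key] - prev[hits_key]
    faults = curr[faults_key] - prev[faults_key]
    lookups = hits + faults
    return hits / lookups if lookups else 1.0

prev = {"page_cache.hits": 1_000_000, "page_cache.faults": 5_000}
curr = {"page_cache.hits": 1_050_000, "page_cache.faults": 5_200}
print(f"hit ratio over window: {hit_ratio(prev, curr):.2%}")  # alert if consistently below ~98%
```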
Query execution metrics
Name | Description |
---|---|
|
Count of successful queries executed. Server-side routed queries contribute to this count on the server where they eventually land and are executed, not on the intermediate, routing server. (counter) |
|
Count of failed queries executed. Server-side routed queries contribute to this count on the server where they eventually land and are executed, not on the intermediate, routing server. (counter) |
|
Execution time in milliseconds of queries executed successfully. (histogram) |
|
Count of successful queries executed by the parallel runtime. Server-side routed queries contribute to this count on the server where they eventually land and are executed, not on the intermediate, routing server. (counter) |
|
Count of failed queries executed by the parallel runtime. Server-side routed queries contribute to this count on the server where they eventually land and are executed, not on the intermediate, routing server. (counter) |
|
Execution time in milliseconds of queries executed successfully in parallel runtime. (histogram) |
|
Count of successful queries executed by the pipelined runtime. Server-side routed queries contribute to this count on the server where they eventually land and are executed, not on the intermediate, routing server. (counter) |
|
Count of failed queries executed by the pipelined runtime. Server-side routed queries contribute to this count on the server where they eventually land and are executed, not on the intermediate, routing server. (counter) |
|
Execution time in milliseconds of queries executed successfully in pipelined runtime. (histogram) |
|
Count of successful queries executed by the slotted runtime. Server-side routed queries contribute to this count on the server where they eventually land and are executed, not on the intermediate, routing server. (counter) |
|
Count of failed queries executed by the slotted runtime. Server-side routed queries contribute to this count on the server where they eventually land and are executed, not on the intermediate, routing server. (counter) |
|
Execution time in milliseconds of queries executed successfully in slotted runtime. (histogram) |
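A simple derived signal from the table above is the query failure rate over a sampling window, computed from the success and failure counters. The sketch below uses illustrative key names and sample values.

```python
# Minimal sketch: derive a query failure rate over a window from the success and
# failure counters in the table above. Key names are illustrative assumptions.
def query_failure_rate(prev, curr,
                       ok_key="db.query.execution.success",
                       fail_key="db.query.execution.failure"):
    ok = curr[ok_key] - prev[ok_key]
    failed = curr[fail_key] - prev[fail_key]
    total = ok + failed
    return failed / total if total else 0.0

prev = {"db.query.execution.success": 10_000, "db.query.execution.failure": 12}
curr = {"db.query.execution.success": 10_500, "db.query.execution.failure": 20}
print(f"failure rate over window: {query_failure_rate(prev, curr):.2%}")
```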
Query routing metrics
Name | Description |
---|---|
|
The total number of queries executed locally. (counter) |
|
The total number of queries routed over to another member of the same cluster. (counter) |
|
The total number of queries routed over to a server outside the cluster. (counter) |
Database store size metrics
Name | Description |
---|---|
|
The total size of the database and transaction logs, in bytes. The total size of the database helps determine how much page cache is required. It also helps compare the total disk space used by the data store and how much is available. (gauge) |
|
The size of the database, in bytes. The total size of the database helps determine how much page cache is required. It also helps compare the total disk space used by the data store and how much is available. (gauge) |
|
Introduced in 5.21An estimate of reserved but available space in the database, in bytes. At least this much space is potentially reusable when writing new data. (gauge) |
Database transaction log metrics
Name | Description |
---|---|
|
The total number of transaction log rotations executed so far. (counter) |
|
The total time, in milliseconds, spent in rotating transaction logs so far. (counter) |
|
The duration, in milliseconds, of the last log rotation event. (gauge) |
|
The total number of bytes appended to the transaction log. (counter) |
|
The total number of transaction log flushes. (counter) |
|
The size of the last transaction append batch. (gauge) |
Database transaction metrics
Name | Description |
---|---|
|
The total number of started transactions. (counter) |
|
The peak number of concurrent transactions. This is a useful value to understand, as it can help you design for the highest-load scenarios and decide whether the Bolt thread settings should be altered. (counter) |
|
The number of currently active transactions. Informational, not an indication of any issue. Spikes or large increases could indicate large data loads or just high read load. (gauge) |
|
The number of currently active read transactions. (gauge) |
|
The number of currently active write transactions. (gauge) |
|
The total number of committed transactions. Informational, not an indication of any issue. Spikes or large increases indicate large data loads or just high read load. (counter) |
|
The total number of committed read transactions. Informational, not an indication of any issue. Spikes or large increases indicate high read load. (counter) |
|
The total number of committed write transactions. Informational, not an indication of any issue. Spikes or large increases indicate large data loads, which could correspond with some behavior you are investigating. (counter) |
|
The total number of rolled back transactions. (counter) |
|
The total number of rolled back read transactions. (counter) |
|
The total number of rolled back write transactions. Seeing a lot of writes rolled back may indicate various issues with locking, transaction timeouts, etc. (counter) |
|
The total number of terminated transactions. (counter) |
|
The total number of terminated read transactions. (counter) |
|
The total number of terminated write transactions. (counter) |
|
The ID of the last committed transaction. Track this for each instance. In a cluster, track it for each primary and each secondary, possibly in separate charts. Each member should show a single, ever-increasing line; if one of the lines levels off or falls behind, that member is no longer replicating data, and action is needed to rectify the situation. A sketch of such a cross-member comparison follows this table. (counter) |
|
The ID of the last closed transaction. (counter) |
|
The transactions' size on heap in bytes. (histogram) |
|
The transactions' size in native memory in bytes. (histogram) |
|
The total number of multi version transaction validation failures. (counter) |
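As suggested for the last committed transaction ID above, comparing that value across cluster members is a quick way to spot a member that has stopped replicating. The sketch below assumes the per-member samples have already been collected (for example, by scraping each member); the server names, values, and alert threshold are illustrative.

```python
# Minimal sketch: compare the last committed transaction ID across cluster members
# to find members that are falling behind, as described in the table above.
def replication_lag(last_committed_by_server):
    newest = max(last_committed_by_server.values())
    return {server: newest - tx_id
            for server, tx_id in last_committed_by_server.items()}

samples = {"server-1": 120_450, "server-2": 120_450, "server-3": 118_900}
for server, lag in replication_lag(samples).items():
    if lag > 1_000:  # illustrative threshold
        print(f"{server} is {lag} transactions behind the most up-to-date member")
```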
Database index metrics
Name | Description |
---|---|
|
The total number of times fulltext indexes have been queried. (counter) |
|
The total number of fulltext index population jobs that have been completed. (counter) |
|
The total number of times lookup indexes have been queried. (counter) |
|
The total number of lookup index population jobs that have been completed. (counter) |
|
The total number of times text indexes have been queried. (counter) |
|
The total number of text index population jobs that have been completed. (counter) |
|
The total number of times range indexes have been queried. (counter) |
|
The total number of range index population jobs that have been completed. (counter) |
|
The total number of times point indexes have been queried. (counter) |
|
The total number of point index population jobs that have been completed. (counter) |
|
The total number of times vector indexes have been queried. (counter) |
|
The total number of vector index population jobs that have been completed. (counter) |
Metrics specific to clustering
Catch-up metrics
Name | Description |
---|---|
|
TX pull requests received from other cluster members. (counter) |
Discovery metrics v1
Name | Description |
---|---|
|
Size of replicated data structures. (gauge) |
|
Discovery cluster member size. (gauge) |
|
Discovery cluster unreachable size. (gauge) |
|
Discovery cluster convergence. (gauge) |
|
Discovery restart count. (gauge) |
|
Discovery restart failed count. (gauge) |
Discovery metrics v2
Name | Description |
---|---|
|
Number of members in alive or suspected state. (gauge) |
|
Number of unreachable cluster members. (gauge) |
Raft core metrics
Deprecated in 5.0
Name | Description |
---|---|
|
The append index of the Raft log. Each index represents a write transaction (possibly internal) proposed for commitment. The values mostly increase, but sometimes they can decrease as a consequence of leader changes. The append index should always be bigger than or equal to the commit index. (gauge) |
|
The commit index of the Raft log. Represents the commitment of previously appended entries. Its value increases monotonically if you do not unbind the cluster state. The commit index should always be less than or equal to the append index and bigger than or equal to the applied index. (gauge) |
|
The applied index of the Raft log. Represents the application of the committed Raft log entries to the database and internal state. The applied index should always be less than or equal to the commit index. The difference between this and the commit index can be used to monitor how up-to-date the follower database is. (gauge) |
|
The Raft Term of this server. It increases monotonically if you do not unbind the cluster state. (gauge) |
|
Transaction retries. (counter) |
|
Is this server the leader? Track this for each Core cluster member. It will report 0 if it is not the leader and 1 if it is the leader. The sum of all of these should always be 1. However, there will be transient periods in which the sum can be more than 1 because more than one member thinks it is the leader. Action may be needed if the metric shows 0 for more than 30 seconds. (gauge) |
|
In-flight cache total bytes. (gauge) |
|
In-flight cache max bytes. (gauge) |
|
In-flight cache element count. (gauge) |
|
In-flight cache maximum elements. (gauge) |
|
In-flight cache hits. (counter) |
|
In-flight cache misses. (counter) |
|
Raft Log Entry Prefetch Lag. (gauge) |
|
Raft Log Entry Prefetch total bytes. (gauge) |
|
Raft Log Entry Prefetch buffer size. (gauge) |
|
Raft Log Entry Prefetch buffer async puts. (gauge) |
|
Raft Log Entry Prefetch buffer sync puts. (gauge) |
|
Delay between Raft message receive and process. (gauge) |
|
Timer for Raft message processing. (counter, histogram) |
|
The total number of Raft replication requests. It increases with write transactions (possibly internal) activity. (counter) |
|
The total number of Raft replication request attempts. It is greater than or equal to the number of replication requests. (counter) |
|
The total number of Raft replication attempts that have failed. (counter) |
|
Raft Replication maybe count. (counter) |
|
The total number of Raft replication requests that have succeeded. (counter) |
|
The time elapsed since the last message from a leader in milliseconds. Should reset periodically. (gauge) |
Metrics specific to Causal Clustering are deprecated, as the previous table shows. The deprecated Raft core metrics are replaced accordingly by the Raft metrics in the following table.
Raft metrics
Name | Description |
---|---|
|
The append index of the Raft log. Each index represents a write transaction (possibly internal) proposed for commitment. The values mostly increase, but sometimes they can decrease as a consequence of leader changes. The append index should always be bigger than or equal to the commit index. (gauge) |
|
The commit index of the Raft log. Represents the commitment of previously appended entries. Its value increases monotonically if you do not unbind the cluster state. The commit index should always be less than or equal to the append index and bigger than or equal to the applied index. (gauge) |
|
The applied index of the Raft log. Represents the application of the committed Raft log entries to the database and internal state. The applied index should always be less than or equal to the commit index. The difference between this and the commit index can be used to monitor how up-to-date the follower database is. (gauge) |
|
Introduced in 5.25 The head index of the Raft log. Represents the oldest Raft index that exists in the log. A prune event will increase this value. This can be used to track how much history of Raft logs the member has. (gauge) |
|
The Raft Term of this server. It increases monotonically if you do not unbind the cluster state. (gauge) |
|
Transaction retries. (counter) |
|
Is this server the leader? Track this for each rafted primary database in the cluster. It reports |
|
In-flight cache total bytes. (gauge) |
|
In-flight cache max bytes. (gauge) |
|
In-flight cache element count. (gauge) |
|
In-flight cache maximum elements. (gauge) |
|
In-flight cache hits. (counter) |
|
In-flight cache misses. (counter) |
|
Raft Log Entry Prefetch Lag. (gauge) |
|
Raft Log Entry Prefetch total bytes. (gauge) |
|
Raft Log Entry Prefetch buffer size. (gauge) |
|
Raft Log Entry Prefetch buffer async puts. (gauge) |
|
Raft Log Entry Prefetch buffer sync puts. (gauge) |
|
Delay between Raft message receive and process. (gauge) |
|
Timer for Raft message processing. (counter, histogram) |
|
The total number of Raft replication requests. It increases with write transactions (possibly internal) activity. (counter) |
|
The total number of Raft replication request attempts. It is greater than or equal to the number of replication requests. (counter) |
|
The total number of Raft replication attempts that have failed. (counter) |
|
Raft Replication maybe count. (counter) |
|
The total number of Raft replication requests that have succeeded. (counter) |
|
The time elapsed since the last message from a leader in milliseconds. Should reset periodically. (gauge) |
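Two checks follow naturally from the Raft metrics above: how far the applied index trails the commit index on a member, and whether exactly one member currently reports itself as leader. The sketch below uses illustrative key names and sample values.

```python
# Minimal sketch of two checks suggested by the table above: applied-index lag
# behind the commit index, and the number of members reporting themselves leader.
def raft_apply_lag(metrics, commit_key="raft.commit_index", applied_key="raft.applied_index"):
    return metrics[commit_key] - metrics[applied_key]

def leader_count(is_leader_by_server):
    return sum(is_leader_by_server.values())

print(raft_apply_lag({"raft.commit_index": 5_200, "raft.applied_index": 5_190}))  # -> 10
leaders = leader_count({"server-1": 1, "server-2": 0, "server-3": 0})
if leaders != 1:
    print(f"unexpected number of leaders: {leaders}")  # transiently >1 is possible
```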
Read Replica metrics
Deprecated in 5.0
Name | Description |
---|---|
|
The total number of pull requests made by this instance. (counter) |
|
The highest transaction id requested in a pull update by this instance. (counter) |
|
The highest transaction id that has been pulled in the last pull updates by this instance. (counter) |
Metrics specific to Causal Clustering are deprecated, as the previous table shows. The deprecated Read Replica metrics are replaced accordingly by the Store copy metrics in the following table.
Store copy metrics
Name | Description |
---|---|
|
The total number of pull requests made by this instance. (counter) |
|
The highest transaction id requested in a pull update by this instance. (counter) |
|
The highest transaction id that has been pulled in the last pull updates by this instance. (counter) |
Java Virtual Machine Metrics
The JVM metrics show information about garbage collections (for example, the number of events and time spent collecting), memory pools and buffers, and the number of active threads running.
They are environment-dependent and, therefore, may vary on different hardware and with different JVM configurations.
The metrics about the JVM’s memory usage expose values that are provided by the MemoryPoolMXBeans and BufferPoolMXBeans.
The memory pools are memory managed by the JVM, for example, neo4j.dbms.vm.memory.pool.g1_survivor_space.
Therefore, if necessary, you can tune them using the JVM settings.
The buffer pools are space outside of the memory managed by the garbage collector.
Neo4j allocates buffers in those pools as it needs them.
You can limit this memory using JVM settings, but there is never any good reason to do so.
JVM file descriptor metrics
Name | Description |
---|---|
|
The current number of open file descriptors. (gauge) |
|
(OS setting) The maximum number of open file descriptors. It is recommended to set this to 40K file handles because of the native and Lucene indexing Neo4j uses. If this metric gets close to the limit, you should consider raising it. (gauge) |
GC metrics
Name | Description |
---|---|
|
Accumulated garbage collection time in milliseconds. Long GCs can be an indication of performance issues or potential instability. If this approaches the heartbeat timeout in a cluster, it may cause unwanted leader switches. (counter) |
|
Total number of garbage collections. (counter) |
JVM Heap metrics
Name | Description |
---|---|
|
Amount of memory (in bytes) guaranteed to be available for use by the JVM. (gauge) |
|
Amount of memory (in bytes) currently used. This is the amount of heap space currently used at a given point in time. Monitor this to identify if you are maxing out consistently, in which case, you should increase the initial and max heap size, or if you are underutilizing, you should decrease the initial and max heap sizes. (gauge) |
|
Maximum amount of heap memory (in bytes) that can be used. Monitor this together with the used heap to identify whether you are consistently maxing out, in which case you should increase the initial and max heap sizes, or whether you are underutilizing, in which case you should decrease them. (gauge) |
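A common way to watch the heap gauges above is as a utilization percentage of the configured maximum. The sketch below uses illustrative key names and sample values.

```python
# Minimal sketch: heap utilization as a fraction of the maximum, from the used
# and max gauges in the table above. Key names are illustrative assumptions.
def heap_utilization(metrics, used_key="vm.heap.used", max_key="vm.heap.max"):
    return metrics[used_key] / metrics[max_key]

util = heap_utilization({"vm.heap.used": 6 * 1024**3, "vm.heap.max": 8 * 1024**3})
print(f"heap utilization: {util:.0%}")  # consistently near 100% suggests a larger heap
```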
JVM memory buffers metrics
Name | Description |
---|---|
|
Estimated number of buffers in the pool. (gauge) |
|
Estimated amount of memory used by the pool. (gauge) |
|
Estimated total capacity of buffers in the pool. (gauge) |
JVM memory pools metrics
Name | Description |
---|---|
|
Estimated amount of memory in bytes used by the pool. (gauge) |