version 2.0.57
localdomain;localhost.localdomain:diskstats_latency.loop3.graph_title Average latency for /dev/loop3
localdomain;localhost.localdomain:diskstats_latency.loop3.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.loop3.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.loop3.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.loop3.graph_info This graph shows the average waiting time/latency for different categories of disk operations. The times that include queue time indicate how busy your system is. If the waiting time reaches 1 second, your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.loop3.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times, etc.)
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwrwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwrwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwait.info Average wait time for an I/O from request start to finish (includes queue times, etc.)
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwait.label IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop3.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop3.avgrdwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop3.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times, etc.)
localdomain;localhost.localdomain:diskstats_latency.loop3.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop3.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop3.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop3.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop3.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop3.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop3.svctm.info Average time an I/O takes on the block device, not including any queue times; just the round-trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.loop3.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop3.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop3.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.loop3.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop3.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.loop3.svctm.update_rate 300
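
The latency fields above are per-interval averages. A minimal sketch of how an average such as avgrdwait can be derived from two /proc/diskstats samples; the column positions are assumed from the kernel's iostats documentation, and this is illustrative rather than the plugin's actual code:

    import time

    def read_stats(device):
        # Assumed /proc/diskstats layout: after major, minor and the device
        # name come "reads completed" (col 4) and "ms spent reading" (col 7).
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == device:
                    return int(p[3]), int(p[6])
        raise ValueError(device)

    reads1, ms1 = read_stats("sda")
    time.sleep(5)                  # the plugin's real interval is update_rate, 300 s
    reads2, ms2 = read_stats("sda")
    ios = reads2 - reads1
    # Mean wait per read I/O over the interval, in seconds (the graph_vlabel unit)
    avgrdwait = (ms2 - ms1) / ios / 1000.0 if ios else 0.0
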
localdomain;localhost.localdomain:diskstats_throughput.loop7.graph_title Disk throughput for /dev/loop7
localdomain;localhost.localdomain:diskstats_throughput.loop7.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.loop7.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.loop7.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.loop7.graph_info This graph shows disk throughput in bytes per ${graph_period}.  The graph base is 1024, so KB means kibibytes, and so on.
localdomain;localhost.localdomain:diskstats_throughput.loop7.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.loop7.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.loop7.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop7.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop7.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop7.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop7.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop7.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop7.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop7.wrbytes.negative rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop7.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.loop7.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop7.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop7.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop7.wrbytes.type GAUGE
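
The "graph base is 1024" note above controls how byte counts are scaled into K/M/G units, while the iops graphs further down use base 1000. A small sketch of the difference, illustrative only and not munin's own formatting code:

    def human(n, base):
        # Scale n into prefixed units using the given graph base.
        for prefix in ("", "K", "M", "G", "T"):
            if abs(n) < base:
                return f"{n:.1f} {prefix}B"
            n /= base
        return f"{n:.1f} PB"

    human(2_000_000, 1024)  # '1.9 MB' (kibibyte-based, as on the throughput graphs)
    human(2_000_000, 1000)  # '2.0 MB' (1000-based, as on the iops graphs)
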
localdomain;localhost.localdomain:cpuspeed.graph_title CPU frequency scaling
localdomain;localhost.localdomain:cpuspeed.graph_args --base 1000
localdomain;localhost.localdomain:cpuspeed.graph_category system
localdomain;localhost.localdomain:cpuspeed.graph_vlabel Hz
localdomain;localhost.localdomain:cpuspeed.graph_info This graph shows the current speed of the CPU at the time of the data retrieval (not its average). This is a limitation of the 'intel_pstate' driver.
localdomain;localhost.localdomain:cpuspeed.graph_order cpu0 cpu1 cpu2 cpu3
localdomain;localhost.localdomain:cpuspeed.cpu1.graph_data_size normal
localdomain;localhost.localdomain:cpuspeed.cpu1.type GAUGE
localdomain;localhost.localdomain:cpuspeed.cpu1.min 800000
localdomain;localhost.localdomain:cpuspeed.cpu1.update_rate 300
localdomain;localhost.localdomain:cpuspeed.cpu1.label CPU 1
localdomain;localhost.localdomain:cpuspeed.cpu1.max 4070000
localdomain;localhost.localdomain:cpuspeed.cpu1.cdef cpu1,1000,*
localdomain;localhost.localdomain:cpuspeed.cpu2.update_rate 300
localdomain;localhost.localdomain:cpuspeed.cpu2.type GAUGE
localdomain;localhost.localdomain:cpuspeed.cpu2.min 800000
localdomain;localhost.localdomain:cpuspeed.cpu2.cdef cpu2,1000,*
localdomain;localhost.localdomain:cpuspeed.cpu2.max 4070000
localdomain;localhost.localdomain:cpuspeed.cpu2.label CPU 2
localdomain;localhost.localdomain:cpuspeed.cpu2.graph_data_size normal
localdomain;localhost.localdomain:cpuspeed.cpu3.graph_data_size normal
localdomain;localhost.localdomain:cpuspeed.cpu3.label CPU 3
localdomain;localhost.localdomain:cpuspeed.cpu3.cdef cpu3,1000,*
localdomain;localhost.localdomain:cpuspeed.cpu3.max 4070000
localdomain;localhost.localdomain:cpuspeed.cpu3.update_rate 300
localdomain;localhost.localdomain:cpuspeed.cpu3.min 800000
localdomain;localhost.localdomain:cpuspeed.cpu3.type GAUGE
localdomain;localhost.localdomain:cpuspeed.cpu0.graph_data_size normal
localdomain;localhost.localdomain:cpuspeed.cpu0.label CPU 0
localdomain;localhost.localdomain:cpuspeed.cpu0.max 4070000
localdomain;localhost.localdomain:cpuspeed.cpu0.cdef cpu0,1000,*
localdomain;localhost.localdomain:cpuspeed.cpu0.update_rate 300
localdomain;localhost.localdomain:cpuspeed.cpu0.min 800000
localdomain;localhost.localdomain:cpuspeed.cpu0.type GAUGE
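
The cdef entries above are RRDtool RPN expressions: "cpu0,1000,*" pushes the stored sample (in kHz, which is why min is 800000) and multiplies it by 1000 so the graph can be labelled in plain Hz. A toy evaluator covering just the operator used here, as a sketch:

    def eval_cdef(expr, env):
        # Minimal RPN evaluator for the "name,1000,*" pattern only.
        stack = []
        for tok in expr.split(","):
            if tok == "*":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif tok in env:
                stack.append(env[tok])
            else:
                stack.append(float(tok))
        return stack[-1]

    eval_cdef("cpu0,1000,*", {"cpu0": 800_000.0})  # -> 800000000.0 Hz, i.e. 0.8 GHz
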
localdomain;localhost.localdomain:threads.graph_title Number of threads
localdomain;localhost.localdomain:threads.graph_vlabel number of threads
localdomain;localhost.localdomain:threads.graph_category processes
localdomain;localhost.localdomain:threads.graph_info This graph shows the number of threads.
localdomain;localhost.localdomain:threads.graph_order threads
localdomain;localhost.localdomain:threads.threads.label threads
localdomain;localhost.localdomain:threads.threads.info The current number of threads.
localdomain;localhost.localdomain:threads.threads.graph_data_size normal
localdomain;localhost.localdomain:threads.threads.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop6.graph_title IOs for /dev/loop6
localdomain;localhost.localdomain:diskstats_iops.loop6.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.loop6.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.loop6.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.loop6.graph_info This graph shows the number of I/O operations per second and the average size of these requests.  Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph).  Please note that starting with munin-node 2.0, the divisor for K is 1000 instead of the 1024 used prior to 2.0 beta 3.  This is because the base for this graph is 1000, not 1024.
localdomain;localhost.localdomain:diskstats_iops.loop6.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.loop6.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop6.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.loop6.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop6.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop6.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop6.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop6.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.loop6.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop6.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop6.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop6.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.loop6.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop6.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop6.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.loop6.avgwrrqsz.info Average request size in kilobytes (1000-based)
localdomain;localhost.localdomain:diskstats_iops.loop6.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop6.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop6.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop6.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop6.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop6.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop6.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop6.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop6.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop6.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.loop6.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop6.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop6.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop6.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_latency.loop0.graph_title Average latency for /dev/loop0
localdomain;localhost.localdomain:diskstats_latency.loop0.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.loop0.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.loop0.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.loop0.graph_info This graph shows the average waiting time/latency for different categories of disk operations. The times that include queue time indicate how busy your system is. If the waiting time reaches 1 second, your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.loop0.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.loop0.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop0.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.loop0.svctm.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop0.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop0.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.loop0.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop0.svctm.info Average time an I/O takes on the block device, not including any queue times; just the round-trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.loop0.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop0.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times, etc.)
localdomain;localhost.localdomain:diskstats_latency.loop0.avgrdwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop0.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop0.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop0.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop0.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop0.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwait.info Average wait time for an I/O from request start to finish (includes queue times, etc.)
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwait.label IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwrwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwrwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop0.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times, etc.)
localdomain;localhost.localdomain:diskstats_iops.loop5.graph_title IOs for /dev/loop5
localdomain;localhost.localdomain:diskstats_iops.loop5.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.loop5.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.loop5.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.loop5.graph_info This graph shows the number of I/O operations per second and the average size of these requests.  Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph).  Please note that starting with munin-node 2.0, the divisor for K is 1000 instead of the 1024 used prior to 2.0 beta 3.  This is because the base for this graph is 1000, not 1024.
localdomain;localhost.localdomain:diskstats_iops.loop5.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.loop5.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop5.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.loop5.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop5.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop5.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop5.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop5.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop5.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop5.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop5.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop5.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop5.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop5.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop5.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop5.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop5.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop5.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop5.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop5.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.loop5.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop5.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.loop5.avgwrrqsz.info Average request size in kilobytes (1000-based)
localdomain;localhost.localdomain:diskstats_iops.loop5.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop5.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.loop5.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.loop5.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop5.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop5.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop5.wrio.min 0
localdomain;localhost.localdomain:diskstats_latency.loop6.graph_title Average latency for /dev/loop6
localdomain;localhost.localdomain:diskstats_latency.loop6.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.loop6.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.loop6.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.loop6.graph_info This graph shows the average waiting time/latency for different categories of disk operations. The times that include queue time indicate how busy your system is. If the waiting time reaches 1 second, your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.loop6.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwrwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times, etc.)
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwrwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop6.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop6.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times, etc.)
localdomain;localhost.localdomain:diskstats_latency.loop6.avgrdwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop6.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop6.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop6.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop6.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop6.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop6.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop6.svctm.info Average time an I/O takes on the block device, not including any queue times; just the round-trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.loop6.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop6.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.loop6.svctm.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop6.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop6.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwait.label IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop6.avgwait.info Average wait time for an I/O from request start to finish (includes queue times, etc.)
localdomain;localhost.localdomain:diskstats_throughput.loop2.graph_title Disk throughput for /dev/loop2
localdomain;localhost.localdomain:diskstats_throughput.loop2.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.loop2.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.loop2.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.loop2.graph_info This graph shows disk throughput in bytes per ${graph_period}.  The graph base is 1024, so KB means kibibytes, and so on.
localdomain;localhost.localdomain:diskstats_throughput.loop2.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.loop2.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.loop2.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop2.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop2.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop2.wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop2.wrbytes.negative rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop2.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop2.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop2.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop2.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop2.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop2.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop2.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.loop2.rdbytes.graph no
localdomain;localhost.localdomain:forks.graph_title Fork rate
localdomain;localhost.localdomain:forks.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:forks.graph_vlabel forks / ${graph_period}
localdomain;localhost.localdomain:forks.graph_category processes
localdomain;localhost.localdomain:forks.graph_info This graph shows the number of forks (new processes started) per second.
localdomain;localhost.localdomain:forks.graph_order forks
localdomain;localhost.localdomain:forks.forks.graph_data_size normal
localdomain;localhost.localdomain:forks.forks.info The number of forks per second.
localdomain;localhost.localdomain:forks.forks.min 0
localdomain;localhost.localdomain:forks.forks.type DERIVE
localdomain;localhost.localdomain:forks.forks.update_rate 300
localdomain;localhost.localdomain:forks.forks.max 100000
localdomain;localhost.localdomain:forks.forks.label forks
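
The forks field is type DERIVE: rrdtool stores the per-second derivative of an ever-growing fork counter, and the min 0 / max 100000 bounds turn implausible rates (for example the negative spike after a counter reset) into unknowns. A sketch of that arithmetic, not rrdtool's actual implementation:

    def derive_rate(prev, cur, seconds=300, lo=0, hi=100_000):
        # Per-second rate between two counter samples, range-checked
        # the way min/max are; out-of-range rates become unknown (None).
        rate = (cur - prev) / seconds
        return rate if lo <= rate <= hi else None

    derive_rate(1_000_000, 1_030_000)  # -> 100.0 forks per second
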
localdomain;localhost.localdomain:diskstats_utilization.sdd.graph_title Disk utilization for /dev/sdd
localdomain;localhost.localdomain:diskstats_utilization.sdd.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.sdd.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.sdd.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.sdd.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.sdd.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.sdd.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.sdd.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.sdd.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.sdd.util.label Utilization
localdomain;localhost.localdomain:diskstats_utilization.sdd.util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.sdd.util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.sdd.util.info Utilization of the device in percent. If the time spent on I/O is close to 1000 ms for a given second, the device is nearly 100% saturated.
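
The util.info text above is effectively the whole computation: a device that accumulates close to 1000 ms of I/O time per elapsed second is saturated. A minimal sketch, assuming the "ms spent doing I/O" counter from /proc/diskstats is sampled at each update:

    def utilization(busy_ms_prev, busy_ms_cur, seconds=300):
        # Percent of wall-clock time the device was busy during the interval.
        return (busy_ms_cur - busy_ms_prev) / (seconds * 1000.0) * 100.0

    utilization(0, 300_000)  # busy for the entire 300 s interval -> 100.0
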
localdomain;localhost.localdomain:if_err_eno1.graph_order rcvd trans rxdrop txdrop collisions
localdomain;localhost.localdomain:if_err_eno1.graph_title eno1 errors
localdomain;localhost.localdomain:if_err_eno1.graph_args --base 1000
localdomain;localhost.localdomain:if_err_eno1.graph_vlabel packets in (-) / out (+) per ${graph_period}
localdomain;localhost.localdomain:if_err_eno1.graph_category network
localdomain;localhost.localdomain:if_err_eno1.graph_info This graph shows the number of errors, packet drops, and collisions on the eno1 network interface.
localdomain;localhost.localdomain:if_err_eno1.rxdrop.graph no
localdomain;localhost.localdomain:if_err_eno1.rxdrop.label drops
localdomain;localhost.localdomain:if_err_eno1.rxdrop.type COUNTER
localdomain;localhost.localdomain:if_err_eno1.rxdrop.update_rate 300
localdomain;localhost.localdomain:if_err_eno1.rxdrop.graph_data_size normal
localdomain;localhost.localdomain:if_err_eno1.txdrop.negative rxdrop
localdomain;localhost.localdomain:if_err_eno1.txdrop.graph_data_size normal
localdomain;localhost.localdomain:if_err_eno1.txdrop.label drops
localdomain;localhost.localdomain:if_err_eno1.txdrop.update_rate 300
localdomain;localhost.localdomain:if_err_eno1.txdrop.type COUNTER
localdomain;localhost.localdomain:if_err_eno1.collisions.label collisions
localdomain;localhost.localdomain:if_err_eno1.collisions.graph_data_size normal
localdomain;localhost.localdomain:if_err_eno1.collisions.type COUNTER
localdomain;localhost.localdomain:if_err_eno1.collisions.update_rate 300
localdomain;localhost.localdomain:if_err_eno1.trans.negative rcvd
localdomain;localhost.localdomain:if_err_eno1.trans.graph_data_size normal
localdomain;localhost.localdomain:if_err_eno1.trans.warning 1
localdomain;localhost.localdomain:if_err_eno1.trans.type COUNTER
localdomain;localhost.localdomain:if_err_eno1.trans.update_rate 300
localdomain;localhost.localdomain:if_err_eno1.trans.label errors
localdomain;localhost.localdomain:if_err_eno1.rcvd.warning 1
localdomain;localhost.localdomain:if_err_eno1.rcvd.graph_data_size normal
localdomain;localhost.localdomain:if_err_eno1.rcvd.label errors
localdomain;localhost.localdomain:if_err_eno1.rcvd.graph no
localdomain;localhost.localdomain:if_err_eno1.rcvd.update_rate 300
localdomain;localhost.localdomain:if_err_eno1.rcvd.type COUNTER
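
The error and drop fields above use type COUNTER rather than DERIVE: a decrease between samples is treated as a 32- or 64-bit counter wrap and corrected for, instead of being thrown away. A sketch of that convention as rrdtool documents it:

    def counter_rate(prev, cur, seconds=300):
        delta = cur - prev
        if delta < 0:
            delta += 2**32             # assume a 32-bit counter wrap first,
        if delta < 0:
            delta += 2**64 - 2**32     # otherwise treat it as a 64-bit wrap
        return delta / seconds
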
localdomain;localhost.localdomain:diskstats_iops.loop4.graph_title IOs for /dev/loop4
localdomain;localhost.localdomain:diskstats_iops.loop4.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.loop4.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.loop4.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.loop4.graph_info This graph shows the number of I/O operations per second and the average size of these requests.  Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph).  Please note that starting with munin-node 2.0, the divisor for K is 1000 instead of the 1024 used prior to 2.0 beta 3.  This is because the base for this graph is 1000, not 1024.
localdomain;localhost.localdomain:diskstats_iops.loop4.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.loop4.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop4.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop4.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop4.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop4.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop4.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.loop4.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop4.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop4.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop4.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop4.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop4.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop4.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop4.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop4.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop4.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.loop4.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop4.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop4.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop4.avgwrrqsz.info Average request size in kilobytes (1000-based)
localdomain;localhost.localdomain:diskstats_iops.loop4.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop4.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.loop4.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop4.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.loop4.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop4.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop4.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop4.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop4.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_throughput.loop4.graph_title Disk throughput for /dev/loop4
localdomain;localhost.localdomain:diskstats_throughput.loop4.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.loop4.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.loop4.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.loop4.graph_info This graph shows disk throughput in bytes per ${graph_period}.  The graph base is 1024, so KB means kibibytes, and so on.
localdomain;localhost.localdomain:diskstats_throughput.loop4.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.loop4.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop4.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop4.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop4.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.loop4.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop4.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop4.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop4.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop4.wrbytes.negative rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop4.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop4.wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop4.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop4.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.loop4.wrbytes.draw LINE1
localdomain;localhost.localdomain:uptime.graph_title Uptime
localdomain;localhost.localdomain:uptime.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:uptime.graph_scale no
localdomain;localhost.localdomain:uptime.graph_vlabel uptime in days
localdomain;localhost.localdomain:uptime.graph_category system
localdomain;localhost.localdomain:uptime.graph_order uptime
localdomain;localhost.localdomain:uptime.uptime.draw AREA
localdomain;localhost.localdomain:uptime.uptime.label uptime
localdomain;localhost.localdomain:uptime.uptime.graph_data_size normal
localdomain;localhost.localdomain:uptime.uptime.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sdb.graph_title Disk throughput for /dev/sdb
localdomain;localhost.localdomain:diskstats_throughput.sdb.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.sdb.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.sdb.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.sdb.graph_info This graph shows disk throughput in bytes per ${graph_period}.  The graph base is 1024, so KB means kibibytes, and so on.
localdomain;localhost.localdomain:diskstats_throughput.sdb.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.sdb.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdb.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdb.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sdb.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdb.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.sdb.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.sdb.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdb.wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdb.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdb.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sdb.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdb.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.sdb.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdb.wrbytes.negative rdbytes
localdomain;localhost.localdomain:acpi.graph_title ACPI Thermal zone temperatures
localdomain;localhost.localdomain:acpi.graph_vlabel Celsius
localdomain;localhost.localdomain:acpi.graph_category sensors
localdomain;localhost.localdomain:acpi.graph_info This graph shows the temperature in the different ACPI thermal zones.  If there is only one, it will usually be the case temperature.
localdomain;localhost.localdomain:acpi.graph_order thermal_zone0 thermal_zone1 thermal_zone2 thermal_zone3
localdomain;localhost.localdomain:acpi.thermal_zone0.graph_data_size normal
localdomain;localhost.localdomain:acpi.thermal_zone0.update_rate 300
localdomain;localhost.localdomain:acpi.thermal_zone0.label acpitz
localdomain;localhost.localdomain:acpi.thermal_zone2.update_rate 300
localdomain;localhost.localdomain:acpi.thermal_zone2.graph_data_size normal
localdomain;localhost.localdomain:acpi.thermal_zone2.label pch_skylake
localdomain;localhost.localdomain:acpi.thermal_zone3.label x86_pkg_temp
localdomain;localhost.localdomain:acpi.thermal_zone3.graph_data_size normal
localdomain;localhost.localdomain:acpi.thermal_zone3.update_rate 300
localdomain;localhost.localdomain:acpi.thermal_zone1.update_rate 300
localdomain;localhost.localdomain:acpi.thermal_zone1.graph_data_size normal
localdomain;localhost.localdomain:acpi.thermal_zone1.label acpitz
localdomain;localhost.localdomain:df.graph_title Disk usage in percent
localdomain;localhost.localdomain:df.graph_args --upper-limit 100 -l 0
localdomain;localhost.localdomain:df.graph_vlabel %
localdomain;localhost.localdomain:df.graph_scale no
localdomain;localhost.localdomain:df.graph_category disk
localdomain;localhost.localdomain:df.graph_order _dev_sda3 _dev_shm _run _run_lock _dev_sda2 data_varlib data_mail data_www data_logs data data_backups _dev_loop4
localdomain;localhost.localdomain:df.data_mail.label /var/mail
localdomain;localhost.localdomain:df.data_mail.update_rate 300
localdomain;localhost.localdomain:df.data_mail.critical 98
localdomain;localhost.localdomain:df.data_mail.warning 92
localdomain;localhost.localdomain:df.data_mail.graph_data_size normal
localdomain;localhost.localdomain:df.data_varlib.update_rate 300
localdomain;localhost.localdomain:df.data_varlib.label /var/lib
localdomain;localhost.localdomain:df.data_varlib.graph_data_size normal
localdomain;localhost.localdomain:df.data_varlib.warning 92
localdomain;localhost.localdomain:df.data_varlib.critical 98
localdomain;localhost.localdomain:df.data_www.critical 98
localdomain;localhost.localdomain:df.data_www.warning 92
localdomain;localhost.localdomain:df.data_www.graph_data_size normal
localdomain;localhost.localdomain:df.data_www.label /www
localdomain;localhost.localdomain:df.data_www.update_rate 300
localdomain;localhost.localdomain:df._dev_loop4.critical 98
localdomain;localhost.localdomain:df._dev_loop4.warning 92
localdomain;localhost.localdomain:df._dev_loop4.graph_data_size normal
localdomain;localhost.localdomain:df._dev_loop4.label /mnt
localdomain;localhost.localdomain:df._dev_loop4.update_rate 300
localdomain;localhost.localdomain:df._run_lock.critical 98
localdomain;localhost.localdomain:df._run_lock.warning 92
localdomain;localhost.localdomain:df._run_lock.graph_data_size normal
localdomain;localhost.localdomain:df._run_lock.label /run/lock
localdomain;localhost.localdomain:df._run_lock.update_rate 300
localdomain;localhost.localdomain:df.data_logs.update_rate 300
localdomain;localhost.localdomain:df.data_logs.label /var/log
localdomain;localhost.localdomain:df.data_logs.graph_data_size normal
localdomain;localhost.localdomain:df.data_logs.critical 98
localdomain;localhost.localdomain:df.data_logs.warning 92
localdomain;localhost.localdomain:df._run.graph_data_size normal
localdomain;localhost.localdomain:df._run.critical 98
localdomain;localhost.localdomain:df._run.warning 92
localdomain;localhost.localdomain:df._run.update_rate 300
localdomain;localhost.localdomain:df._run.label /run
localdomain;localhost.localdomain:df._dev_sda2.update_rate 300
localdomain;localhost.localdomain:df._dev_sda2.label /boot
localdomain;localhost.localdomain:df._dev_sda2.graph_data_size normal
localdomain;localhost.localdomain:df._dev_sda2.critical 98
localdomain;localhost.localdomain:df._dev_sda2.warning 92
localdomain;localhost.localdomain:df.data.update_rate 300
localdomain;localhost.localdomain:df.data.label /data
localdomain;localhost.localdomain:df.data.graph_data_size normal
localdomain;localhost.localdomain:df.data.warning 92
localdomain;localhost.localdomain:df.data.critical 98
localdomain;localhost.localdomain:df.data_backups.critical 98
localdomain;localhost.localdomain:df.data_backups.warning 92
localdomain;localhost.localdomain:df.data_backups.graph_data_size normal
localdomain;localhost.localdomain:df.data_backups.label /backups
localdomain;localhost.localdomain:df.data_backups.update_rate 300
localdomain;localhost.localdomain:df._dev_sda3.warning 92
localdomain;localhost.localdomain:df._dev_sda3.critical 98
localdomain;localhost.localdomain:df._dev_sda3.graph_data_size normal
localdomain;localhost.localdomain:df._dev_sda3.label /
localdomain;localhost.localdomain:df._dev_sda3.update_rate 300
localdomain;localhost.localdomain:df._dev_shm.warning 92
localdomain;localhost.localdomain:df._dev_shm.critical 98
localdomain;localhost.localdomain:df._dev_shm.graph_data_size normal
localdomain;localhost.localdomain:df._dev_shm.label /dev/shm
localdomain;localhost.localdomain:df._dev_shm.update_rate 300
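
Every mount above carries warning 92 and critical 98 thresholds: munin flags the field when the sampled percentage leaves the allowed range. A bare number acts as an upper bound, while a "lo:hi" pair (like the "0:3" on the latency fields) bounds both ends. A hedged sketch of that check, not munin-limits' actual code:

    def check(value, warning="92", critical="98"):
        def outside(rng):
            lo, _, hi = rng.partition(":") if ":" in rng else ("", ":", rng)
            if lo and value < float(lo):
                return True
            return bool(hi) and value > float(hi)
        if outside(critical):
            return "CRITICAL"
        if outside(warning):
            return "WARNING"
        return "OK"

    check(95.0)  # -> 'WARNING': above 92 but not above 98
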
localdomain;localhost.localdomain:diskstats_utilization.loop7.graph_title Disk utilization for /dev/loop7
localdomain;localhost.localdomain:diskstats_utilization.loop7.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.loop7.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.loop7.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.loop7.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.loop7.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.loop7.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop7.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop7.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop7.util.label Utilization
localdomain;localhost.localdomain:diskstats_utilization.loop7.util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop7.util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop7.util.info Utilization of the device in percent. If the time spent on I/O is close to 1000 ms for a given second, the device is nearly 100% saturated.
localdomain;localhost.localdomain:diskstats_utilization.sdb.graph_title Disk utilization for /dev/sdb
localdomain;localhost.localdomain:diskstats_utilization.sdb.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.sdb.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.sdb.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.sdb.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.sdb.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.sdb.util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.sdb.util.label Utilization
localdomain;localhost.localdomain:diskstats_utilization.sdb.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.sdb.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.sdb.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.sdb.util.info Utilization of the device in percent. If the time spent on I/O is close to 1000 ms for a given second, the device is nearly 100% saturated.
localdomain;localhost.localdomain:diskstats_utilization.sdb.util.graph_data_size normal
localdomain;localhost.localdomain:users.graph_title Logged in users
localdomain;localhost.localdomain:users.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:users.graph_vlabel Users
localdomain;localhost.localdomain:users.graph_scale no
localdomain;localhost.localdomain:users.graph_category system
localdomain;localhost.localdomain:users.graph_printf %3.0lf
localdomain;localhost.localdomain:users.graph_order tty pty pts X other
localdomain;localhost.localdomain:users.other.label Other users
localdomain;localhost.localdomain:users.other.update_rate 300
localdomain;localhost.localdomain:users.other.colour FF0000
localdomain;localhost.localdomain:users.other.info Users logged in by an indeterminate method
localdomain;localhost.localdomain:users.other.graph_data_size normal
localdomain;localhost.localdomain:users.pty.draw AREASTACK
localdomain;localhost.localdomain:users.pty.label pty
localdomain;localhost.localdomain:users.pty.colour 0000FF
localdomain;localhost.localdomain:users.pty.update_rate 300
localdomain;localhost.localdomain:users.pty.graph_data_size normal
localdomain;localhost.localdomain:users.tty.colour 00FF00
localdomain;localhost.localdomain:users.tty.update_rate 300
localdomain;localhost.localdomain:users.tty.label tty
localdomain;localhost.localdomain:users.tty.draw AREASTACK
localdomain;localhost.localdomain:users.tty.graph_data_size normal
localdomain;localhost.localdomain:users.X.update_rate 300
localdomain;localhost.localdomain:users.X.colour 000000
localdomain;localhost.localdomain:users.X.draw AREASTACK
localdomain;localhost.localdomain:users.X.label X displays
localdomain;localhost.localdomain:users.X.graph_data_size normal
localdomain;localhost.localdomain:users.X.info Users logged in on an X display
localdomain;localhost.localdomain:users.pts.graph_data_size normal
localdomain;localhost.localdomain:users.pts.colour 00FFFF
localdomain;localhost.localdomain:users.pts.update_rate 300
localdomain;localhost.localdomain:users.pts.label pts
localdomain;localhost.localdomain:users.pts.draw AREASTACK
localdomain;localhost.localdomain:diskstats_iops.sda.graph_title IOs for /dev/sda
localdomain;localhost.localdomain:diskstats_iops.sda.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.sda.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.sda.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.sda.graph_info This graph shows the number of I/O operations per second and the average size of these requests.  Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph).  Please note that starting with munin-node 2.0, the divisor for K is 1000 instead of the 1024 used prior to 2.0 beta 3.  This is because the base for this graph is 1000, not 1024.
localdomain;localhost.localdomain:diskstats_iops.sda.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.sda.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sda.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.sda.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.sda.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sda.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sda.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.sda.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sda.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sda.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.sda.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sda.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sda.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.sda.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sda.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.sda.avgwrrqsz.info Average request size in kilobytes (1000-based)
localdomain;localhost.localdomain:diskstats_iops.sda.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.sda.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sda.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sda.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.sda.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sda.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.sda.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sda.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sda.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.sda.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sda.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sda.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sda.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.sda.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_utilization.loop0.graph_title Disk utilization for /dev/loop0
localdomain;localhost.localdomain:diskstats_utilization.loop0.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.loop0.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.loop0.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.loop0.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.loop0.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.loop0.util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop0.util.info Utilization of the device in percent. If the time spent on I/O is close to 1000 ms for a given second, the device is nearly 100% saturated.
localdomain;localhost.localdomain:diskstats_utilization.loop0.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop0.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop0.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop0.util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop0.util.label Utilization
localdomain;localhost.localdomain:diskstats_utilization.sda.graph_title Disk utilization for /dev/sda
localdomain;localhost.localdomain:diskstats_utilization.sda.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.sda.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.sda.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.sda.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.sda.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.sda.util.label Utilization
localdomain;localhost.localdomain:diskstats_utilization.sda.util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.sda.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.sda.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.sda.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.sda.util.info Utilization of the device in percent. If the time spent on I/O is close to 1000 ms for a given second, the device is nearly 100% saturated.
localdomain;localhost.localdomain:diskstats_utilization.sda.util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sda.graph_title Disk throughput for /dev/sda
localdomain;localhost.localdomain:diskstats_throughput.sda.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.sda.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.sda.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.sda.graph_info This graph shows disk throughput in bytes per ${graph_period}.  The graph base is 1024, so KB means kibibytes, and so on.
localdomain;localhost.localdomain:diskstats_throughput.sda.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.sda.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.sda.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sda.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sda.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sda.wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sda.wrbytes.negative rdbytes
localdomain;localhost.localdomain:diskstats_throughput.sda.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sda.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sda.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.sda.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.sda.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sda.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sda.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sda.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:df_inode.graph_title Inode usage in percent
localdomain;localhost.localdomain:df_inode.graph_args --upper-limit 100 -l 0
localdomain;localhost.localdomain:df_inode.graph_vlabel %
localdomain;localhost.localdomain:df_inode.graph_scale no
localdomain;localhost.localdomain:df_inode.graph_category disk
localdomain;localhost.localdomain:df_inode.graph_order _dev_sda3 _dev_shm _run _run_lock _dev_sda2 data_varlib data_mail data_www data_logs data data_backups _dev_loop4
localdomain;localhost.localdomain:df_inode.data_backups.critical 98
localdomain;localhost.localdomain:df_inode.data_backups.warning 92
localdomain;localhost.localdomain:df_inode.data_backups.graph_data_size normal
localdomain;localhost.localdomain:df_inode.data_backups.label /backups
localdomain;localhost.localdomain:df_inode.data_backups.update_rate 300
localdomain;localhost.localdomain:df_inode._dev_sda3.label /
localdomain;localhost.localdomain:df_inode._dev_sda3.update_rate 300
localdomain;localhost.localdomain:df_inode._dev_sda3.critical 98
localdomain;localhost.localdomain:df_inode._dev_sda3.warning 92
localdomain;localhost.localdomain:df_inode._dev_sda3.graph_data_size normal
localdomain;localhost.localdomain:df_inode.data.update_rate 300
localdomain;localhost.localdomain:df_inode.data.label /data
localdomain;localhost.localdomain:df_inode.data.graph_data_size normal
localdomain;localhost.localdomain:df_inode.data.warning 92
localdomain;localhost.localdomain:df_inode.data.critical 98
localdomain;localhost.localdomain:df_inode._dev_shm.label /dev/shm
localdomain;localhost.localdomain:df_inode._dev_shm.update_rate 300
localdomain;localhost.localdomain:df_inode._dev_shm.critical 98
localdomain;localhost.localdomain:df_inode._dev_shm.warning 92
localdomain;localhost.localdomain:df_inode._dev_shm.graph_data_size normal
localdomain;localhost.localdomain:df_inode._dev_loop4.label /mnt
localdomain;localhost.localdomain:df_inode._dev_loop4.update_rate 300
localdomain;localhost.localdomain:df_inode._dev_loop4.critical 98
localdomain;localhost.localdomain:df_inode._dev_loop4.warning 92
localdomain;localhost.localdomain:df_inode._dev_loop4.graph_data_size normal
localdomain;localhost.localdomain:df_inode.data_www.update_rate 300
localdomain;localhost.localdomain:df_inode.data_www.label /www
localdomain;localhost.localdomain:df_inode.data_www.graph_data_size normal
localdomain;localhost.localdomain:df_inode.data_www.critical 98
localdomain;localhost.localdomain:df_inode.data_www.warning 92
localdomain;localhost.localdomain:df_inode.data_varlib.label /var/lib
localdomain;localhost.localdomain:df_inode.data_varlib.update_rate 300
localdomain;localhost.localdomain:df_inode.data_varlib.warning 92
localdomain;localhost.localdomain:df_inode.data_varlib.critical 98
localdomain;localhost.localdomain:df_inode.data_varlib.graph_data_size normal
localdomain;localhost.localdomain:df_inode.data_mail.graph_data_size normal
localdomain;localhost.localdomain:df_inode.data_mail.critical 98
localdomain;localhost.localdomain:df_inode.data_mail.warning 92
localdomain;localhost.localdomain:df_inode.data_mail.update_rate 300
localdomain;localhost.localdomain:df_inode.data_mail.label /var/mail
localdomain;localhost.localdomain:df_inode._dev_sda2.update_rate 300
localdomain;localhost.localdomain:df_inode._dev_sda2.label /boot
localdomain;localhost.localdomain:df_inode._dev_sda2.graph_data_size normal
localdomain;localhost.localdomain:df_inode._dev_sda2.warning 92
localdomain;localhost.localdomain:df_inode._dev_sda2.critical 98
localdomain;localhost.localdomain:df_inode._run.label /run
localdomain;localhost.localdomain:df_inode._run.update_rate 300
localdomain;localhost.localdomain:df_inode._run.critical 98
localdomain;localhost.localdomain:df_inode._run.warning 92
localdomain;localhost.localdomain:df_inode._run.graph_data_size normal
localdomain;localhost.localdomain:df_inode.data_logs.graph_data_size normal
localdomain;localhost.localdomain:df_inode.data_logs.warning 92
localdomain;localhost.localdomain:df_inode.data_logs.critical 98
localdomain;localhost.localdomain:df_inode.data_logs.update_rate 300
localdomain;localhost.localdomain:df_inode.data_logs.label /var/log
localdomain;localhost.localdomain:df_inode._run_lock.critical 98
localdomain;localhost.localdomain:df_inode._run_lock.warning 92
localdomain;localhost.localdomain:df_inode._run_lock.graph_data_size normal
localdomain;localhost.localdomain:df_inode._run_lock.label /run/lock
localdomain;localhost.localdomain:df_inode._run_lock.update_rate 300
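
The df_inode fields above all share the same thresholds: warning at 92% and critical at 98% of inodes used on each mount point. A minimal sketch of how such a percentage can be sampled with os.statvfs; the mount points are taken from the labels above, and this is an illustration rather than the plugin's own code:

    import os

    def inode_usage_percent(mountpoint):
        st = os.statvfs(mountpoint)
        if st.f_files == 0:          # some virtual filesystems expose no inodes
            return 0.0
        return 100.0 * (st.f_files - st.f_ffree) / st.f_files

    for mnt in ("/", "/boot", "/run", "/data", "/backups"):
        pct = inode_usage_percent(mnt)
        state = "critical" if pct >= 98 else "warning" if pct >= 92 else "ok"
        print(f"{mnt}: {pct:.1f}% inodes used ({state})")
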
localdomain;localhost.localdomain:hddtemp_smartctl.graph_title HDD temperature
localdomain;localhost.localdomain:hddtemp_smartctl.graph_vlabel Degrees Celsius
localdomain;localhost.localdomain:hddtemp_smartctl.graph_category sensors
localdomain;localhost.localdomain:hddtemp_smartctl.graph_info This graph shows the temperature in degrees Celsius of the hard drives in the machine.
localdomain;localhost.localdomain:hddtemp_smartctl.graph_order sda sdb sdc sdd
localdomain;localhost.localdomain:hddtemp_smartctl.sda.critical 60
localdomain;localhost.localdomain:hddtemp_smartctl.sda.warning 57
localdomain;localhost.localdomain:hddtemp_smartctl.sda.graph_data_size normal
localdomain;localhost.localdomain:hddtemp_smartctl.sda.max 100
localdomain;localhost.localdomain:hddtemp_smartctl.sda.label sda
localdomain;localhost.localdomain:hddtemp_smartctl.sda.update_rate 300
localdomain;localhost.localdomain:hddtemp_smartctl.sdb.graph_data_size normal
localdomain;localhost.localdomain:hddtemp_smartctl.sdb.critical 60
localdomain;localhost.localdomain:hddtemp_smartctl.sdb.warning 57
localdomain;localhost.localdomain:hddtemp_smartctl.sdb.update_rate 300
localdomain;localhost.localdomain:hddtemp_smartctl.sdb.label sdb
localdomain;localhost.localdomain:hddtemp_smartctl.sdb.max 100
localdomain;localhost.localdomain:hddtemp_smartctl.sdc.label sdc
localdomain;localhost.localdomain:hddtemp_smartctl.sdc.max 100
localdomain;localhost.localdomain:hddtemp_smartctl.sdc.update_rate 300
localdomain;localhost.localdomain:hddtemp_smartctl.sdc.critical 60
localdomain;localhost.localdomain:hddtemp_smartctl.sdc.warning 57
localdomain;localhost.localdomain:hddtemp_smartctl.sdc.graph_data_size normal
localdomain;localhost.localdomain:hddtemp_smartctl.sdd.label sdd
localdomain;localhost.localdomain:hddtemp_smartctl.sdd.max 100
localdomain;localhost.localdomain:hddtemp_smartctl.sdd.update_rate 300
localdomain;localhost.localdomain:hddtemp_smartctl.sdd.warning 57
localdomain;localhost.localdomain:hddtemp_smartctl.sdd.critical 60
localdomain;localhost.localdomain:hddtemp_smartctl.sdd.graph_data_size normal
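
Each hddtemp_smartctl field warns at 57 degrees and goes critical at 60. A rough sketch of taking such a reading with smartctl; the parsing assumes the classic ATA attribute table and attribute 194 (Temperature_Celsius), so treat the column positions as illustrative:

    import subprocess

    def drive_temp(dev):
        out = subprocess.run(["smartctl", "-A", f"/dev/{dev}"],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            cols = line.split()
            if cols and cols[0] == "194":      # Temperature_Celsius
                return int(cols[9])            # RAW_VALUE column
        return None

    for dev in ("sda", "sdb", "sdc", "sdd"):
        t = drive_temp(dev)
        if t is not None:
            print(f"{dev}: {t} C (warning 57, critical 60)")
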
localdomain;localhost.localdomain:proc_pri.graph_title Processes priority
localdomain;localhost.localdomain:proc_pri.graph_order low high locked
localdomain;localhost.localdomain:proc_pri.graph_category processes
localdomain;localhost.localdomain:proc_pri.graph_info This graph shows the number of processes at each priority.
localdomain;localhost.localdomain:proc_pri.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:proc_pri.graph_vlabel Number of processes
localdomain;localhost.localdomain:proc_pri.locked.graph_data_size normal
localdomain;localhost.localdomain:proc_pri.locked.info The number of processes that have pages locked into memory (for real-time and custom IO)
localdomain;localhost.localdomain:proc_pri.locked.update_rate 300
localdomain;localhost.localdomain:proc_pri.locked.draw STACK
localdomain;localhost.localdomain:proc_pri.locked.label locked in memory
localdomain;localhost.localdomain:proc_pri.high.label high priority
localdomain;localhost.localdomain:proc_pri.high.draw STACK
localdomain;localhost.localdomain:proc_pri.high.update_rate 300
localdomain;localhost.localdomain:proc_pri.high.info The number of high-priority processes (tasks)
localdomain;localhost.localdomain:proc_pri.high.graph_data_size normal
localdomain;localhost.localdomain:proc_pri.low.info The number of low-priority processes (tasks)
localdomain;localhost.localdomain:proc_pri.low.graph_data_size normal
localdomain;localhost.localdomain:proc_pri.low.label low priority
localdomain;localhost.localdomain:proc_pri.low.draw AREA
localdomain;localhost.localdomain:proc_pri.low.update_rate 300
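
The three proc_pri fields stack: low priority is drawn as an AREA and high/locked are STACKed on top of it. A rough sketch of comparable counts using procps ps; the split on nice value 0 is an assumption for illustration, while the "L" STAT flag genuinely marks processes with pages locked into memory:

    import subprocess

    out = subprocess.run(["ps", "-e", "-o", "ni=,stat="],
                         capture_output=True, text=True).stdout
    high = low = locked = 0
    for line in out.splitlines():
        parts = line.split()
        if len(parts) < 2:
            continue
        nice, stat = parts[0], parts[1]
        if "L" in stat:                  # pages locked into memory
            locked += 1
        try:
            n = int(nice)
        except ValueError:               # skip anything non-numeric
            continue
        if n < 0:
            high += 1
        elif n > 0:
            low += 1

    print(f"high.value {high}\nlow.value {low}\nlocked.value {locked}")
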
localdomain;localhost.localdomain:diskstats_latency.loop5.graph_title Average latency for /dev/loop5
localdomain;localhost.localdomain:diskstats_latency.loop5.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.loop5.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.loop5.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.loop5.graph_info This graph shows average waiting time/latency for different categories of disk operations.   The times that include the queue times indicate how busy your system is.  If the waiting time hits 1 second then your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.loop5.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.loop5.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop5.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.loop5.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop5.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.loop5.svctm.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop5.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop5.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.loop5.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop5.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop5.avgrdwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop5.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop5.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop5.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop5.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop5.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwait.label IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwrwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop5.avgwrwait.type GAUGE
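
The avg*wait figures above come from /proc/diskstats: the delta of a time-spent counter (milliseconds) divided by the delta of completed I/Os between two samples, converted to the seconds shown on the vertical label. A minimal sketch, with field positions per the kernel's iostats documentation:

    import time

    def waits(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    # reads completed, ms reading, writes completed, ms writing
                    return int(p[3]), int(p[6]), int(p[7]), int(p[10])
        raise KeyError(dev)

    r0, rt0, w0, wt0 = waits("loop5")
    time.sleep(5)
    r1, rt1, w1, wt1 = waits("loop5")

    avgrdwait = (rt1 - rt0) / max(r1 - r0, 1) / 1000.0   # seconds
    avgwrwait = (wt1 - wt0) / max(w1 - w0, 1) / 1000.0
    print(f"avgrdwait.value {avgrdwait:.6f}")
    print(f"avgwrwait.value {avgwrwait:.6f}")
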
localdomain;localhost.localdomain:nginx_status.graph_title Nginx status
localdomain;localhost.localdomain:nginx_status.graph_args --base 1000
localdomain;localhost.localdomain:nginx_status.graph_category nginx
localdomain;localhost.localdomain:nginx_status.graph_vlabel Connections
localdomain;localhost.localdomain:nginx_status.graph_order total reading writing waiting
localdomain;localhost.localdomain:nginx_status.reading.update_rate 300
localdomain;localhost.localdomain:nginx_status.reading.label Reading
localdomain;localhost.localdomain:nginx_status.reading.draw LINE
localdomain;localhost.localdomain:nginx_status.reading.graph_data_size normal
localdomain;localhost.localdomain:nginx_status.reading.info Reading
localdomain;localhost.localdomain:nginx_status.total.info Active connections
localdomain;localhost.localdomain:nginx_status.total.graph_data_size normal
localdomain;localhost.localdomain:nginx_status.total.label Active connections
localdomain;localhost.localdomain:nginx_status.total.draw LINE
localdomain;localhost.localdomain:nginx_status.total.update_rate 300
localdomain;localhost.localdomain:nginx_status.writing.graph_data_size normal
localdomain;localhost.localdomain:nginx_status.writing.info Writing
localdomain;localhost.localdomain:nginx_status.writing.update_rate 300
localdomain;localhost.localdomain:nginx_status.writing.draw LINE
localdomain;localhost.localdomain:nginx_status.writing.label Writing
localdomain;localhost.localdomain:nginx_status.waiting.info Waiting
localdomain;localhost.localdomain:nginx_status.waiting.graph_data_size normal
localdomain;localhost.localdomain:nginx_status.waiting.label Waiting
localdomain;localhost.localdomain:nginx_status.waiting.draw LINE
localdomain;localhost.localdomain:nginx_status.waiting.update_rate 300
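
The four nginx_status fields map one-to-one onto nginx's stub_status page (ngx_http_stub_status_module). A sketch of the scrape; the URL is an assumption, since the plugin is pointed at whatever location exposes stub_status:

    import re
    import urllib.request

    with urllib.request.urlopen("http://localhost/nginx_status") as resp:
        text = resp.read().decode()

    total = int(re.search(r"Active connections:\s+(\d+)", text).group(1))
    reading, writing, waiting = map(
        int, re.search(r"Reading: (\d+) Writing: (\d+) Waiting: (\d+)",
                       text).groups())
    print(f"total.value {total}\nreading.value {reading}\n"
          f"writing.value {writing}\nwaiting.value {waiting}")
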
localdomain;localhost.localdomain:diskstats_iops.graph_title Disk IOs per device
localdomain;localhost.localdomain:diskstats_iops.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.graph_vlabel IOs/${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.graph_width 400
localdomain;localhost.localdomain:diskstats_iops.graph_order loop0_rdio loop0_wrio loop1_rdio loop1_wrio loop2_rdio loop2_wrio loop3_rdio loop3_wrio loop4_rdio loop4_wrio loop5_rdio loop5_wrio loop6_rdio loop6_wrio loop7_rdio loop7_wrio loop8_rdio loop8_wrio sda_rdio sda_wrio sdb_rdio sdb_wrio sdc_rdio sdc_wrio sdd_rdio sdd_wrio
localdomain;localhost.localdomain:diskstats_iops.sdc_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdc_wrio.negative sdc_rdio
localdomain;localhost.localdomain:diskstats_iops.sdc_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdc_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.sdc_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdc_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdc_wrio.label sdc
localdomain;localhost.localdomain:diskstats_iops.loop8_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop8_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop8_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop8_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop8_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop8_rdio.label loop8
localdomain;localhost.localdomain:diskstats_iops.loop8_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop2_rdio.label loop2
localdomain;localhost.localdomain:diskstats_iops.loop2_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop2_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop2_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop2_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop2_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop2_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop4_wrio.negative loop4_rdio
localdomain;localhost.localdomain:diskstats_iops.loop4_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop4_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop4_wrio.label loop4
localdomain;localhost.localdomain:diskstats_iops.loop4_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop4_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop4_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop6_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop6_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop6_rdio.label loop6
localdomain;localhost.localdomain:diskstats_iops.loop6_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop6_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop6_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop6_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdd_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdd_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdd_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.sdd_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdd_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.sdd_rdio.label sdd
localdomain;localhost.localdomain:diskstats_iops.sdd_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop6_wrio.label loop6
localdomain;localhost.localdomain:diskstats_iops.loop6_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop6_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop6_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop6_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop6_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop6_wrio.negative loop6_rdio
localdomain;localhost.localdomain:diskstats_iops.sda_wrio.negative sda_rdio
localdomain;localhost.localdomain:diskstats_iops.sda_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sda_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sda_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.sda_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sda_wrio.label sda
localdomain;localhost.localdomain:diskstats_iops.sda_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdb_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdb_rdio.label sdb
localdomain;localhost.localdomain:diskstats_iops.sdb_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.sdb_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdb_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdb_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.sdb_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop4_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop4_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop4_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop4_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop4_rdio.label loop4
localdomain;localhost.localdomain:diskstats_iops.loop4_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop4_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop2_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop2_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop2_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop2_wrio.label loop2
localdomain;localhost.localdomain:diskstats_iops.loop2_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop2_wrio.negative loop2_rdio
localdomain;localhost.localdomain:diskstats_iops.loop2_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop8_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop8_wrio.negative loop8_rdio
localdomain;localhost.localdomain:diskstats_iops.loop8_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop8_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop8_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop8_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop8_wrio.label loop8
localdomain;localhost.localdomain:diskstats_iops.sdd_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdd_wrio.negative sdd_rdio
localdomain;localhost.localdomain:diskstats_iops.sdd_wrio.label sdd
localdomain;localhost.localdomain:diskstats_iops.sdd_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdd_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdd_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdd_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop5_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop5_wrio.label loop5
localdomain;localhost.localdomain:diskstats_iops.loop5_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop5_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop5_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop5_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop5_wrio.negative loop5_rdio
localdomain;localhost.localdomain:diskstats_iops.sdb_wrio.label sdb
localdomain;localhost.localdomain:diskstats_iops.sdb_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdb_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdb_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.sdb_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdb_wrio.negative sdb_rdio
localdomain;localhost.localdomain:diskstats_iops.sdb_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sda_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sda_rdio.label sda
localdomain;localhost.localdomain:diskstats_iops.sda_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.sda_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sda_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sda_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.sda_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop3_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop3_wrio.negative loop3_rdio
localdomain;localhost.localdomain:diskstats_iops.loop3_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop3_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop3_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop3_wrio.label loop3
localdomain;localhost.localdomain:diskstats_iops.loop3_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop0_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop0_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop0_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop0_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop0_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop0_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop0_rdio.label loop0
localdomain;localhost.localdomain:diskstats_iops.loop7_wrio.negative loop7_rdio
localdomain;localhost.localdomain:diskstats_iops.loop7_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop7_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop7_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop7_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop7_wrio.label loop7
localdomain;localhost.localdomain:diskstats_iops.loop7_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop1_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop1_wrio.label loop1
localdomain;localhost.localdomain:diskstats_iops.loop1_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop1_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop1_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop1_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop1_wrio.negative loop1_rdio
localdomain;localhost.localdomain:diskstats_iops.sdc_rdio.label sdc
localdomain;localhost.localdomain:diskstats_iops.sdc_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.sdc_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdc_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdc_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.sdc_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdc_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop7_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop7_rdio.label loop7
localdomain;localhost.localdomain:diskstats_iops.loop7_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop7_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop7_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop7_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop7_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop1_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop1_rdio.label loop1
localdomain;localhost.localdomain:diskstats_iops.loop1_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop1_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop1_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop1_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop1_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop0_wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop0_wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop0_wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop0_wrio.label loop0
localdomain;localhost.localdomain:diskstats_iops.loop0_wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop0_wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop0_wrio.negative loop0_rdio
localdomain;localhost.localdomain:diskstats_iops.loop3_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop3_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop3_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop3_rdio.label loop3
localdomain;localhost.localdomain:diskstats_iops.loop3_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop3_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop3_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop5_rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop5_rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop5_rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop5_rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop5_rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop5_rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop5_rdio.label loop5
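
Throughout this graph each *_wrio field names its *_rdio partner via "negative", which is why the vertical label reads read (-) / write (+): the grapher mirrors the read series below zero. The counts themselves are the reads-completed and writes-completed columns of /proc/diskstats; a minimal sampling sketch:

    import time

    def io_counts():
        counts = {}
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                counts[p[2]] = (int(p[3]), int(p[7]))  # reads, writes completed
        return counts

    a = io_counts()
    time.sleep(5)
    b = io_counts()
    for dev in ("sda", "sdb", "sdc", "sdd"):
        rd = (b[dev][0] - a[dev][0]) / 5.0
        wr = (b[dev][1] - a[dev][1]) / 5.0
        print(f"{dev}: {rd:.1f} reads/s (drawn negative), {wr:.1f} writes/s")
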
localdomain;localhost.localdomain:nginx_request.graph_title Nginx requests
localdomain;localhost.localdomain:nginx_request.graph_args --base 1000
localdomain;localhost.localdomain:nginx_request.graph_category nginx
localdomain;localhost.localdomain:nginx_request.graph_vlabel Requests per ${graph_period}
localdomain;localhost.localdomain:nginx_request.graph_order request
localdomain;localhost.localdomain:nginx_request.request.graph_data_size normal
localdomain;localhost.localdomain:nginx_request.request.draw LINE
localdomain;localhost.localdomain:nginx_request.request.label requests
localdomain;localhost.localdomain:nginx_request.request.min 0
localdomain;localhost.localdomain:nginx_request.request.type DERIVE
localdomain;localhost.localdomain:nginx_request.request.update_rate 300
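
The request field is a DERIVE with min 0: the plugin hands over nginx's raw requests-handled counter, and the RRD layer differentiates successive samples into the per-${graph_period} rate, the floor of 0 discarding apparent negative rates after a counter reset (an nginx restart, say). The derivation, sketched:

    def derive_rate(prev, cur, interval=300, floor=0):
        """Per-second rate of a monotonically increasing counter."""
        return max((cur - prev) / interval, floor)

    print(derive_rate(1_000_000, 1_090_000))   # 300.0 requests/s
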
localdomain;localhost.localdomain:diskstats_latency.loop1.graph_title Average latency for /dev/loop1
localdomain;localhost.localdomain:diskstats_latency.loop1.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.loop1.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.loop1.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.loop1.graph_info This graph shows average waiting time/latency for different categories of disk operations.   The times that include the queue times indicate how busy your system is.  If the waiting time hits 1 second then your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.loop1.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.loop1.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop1.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop1.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop1.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop1.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop1.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop1.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop1.avgrdwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop1.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.loop1.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop1.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop1.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.loop1.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop1.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.loop1.svctm.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwait.label IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwrwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop1.avgwrwait.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdb.graph_title IOs for /dev/sdb
localdomain;localhost.localdomain:diskstats_iops.sdb.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.sdb.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.sdb.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.sdb.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000; prior to 2.0 beta 3 it was 1024. This is because the base for this graph is 1000, not 1024.
localdomain;localhost.localdomain:diskstats_iops.sdb.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.sdb.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdb.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.sdb.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.sdb.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.sdb.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdb.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdb.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdb.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.sdb.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdb.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
localdomain;localhost.localdomain:diskstats_iops.sdb.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdb.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.sdb.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdb.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdb.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.sdb.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.sdb.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdb.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdb.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.sdb.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdb.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.sdb.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdb.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdb.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.sdb.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.sdb.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdb.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdb.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdb.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop3.graph_title IOs for /dev/loop3
localdomain;localhost.localdomain:diskstats_iops.loop3.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.loop3.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.loop3.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.loop3.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000; prior to 2.0 beta 3 it was 1024. This is because the base for this graph is 1000, not 1024.
localdomain;localhost.localdomain:diskstats_iops.loop3.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.loop3.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop3.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop3.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop3.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop3.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop3.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop3.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop3.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop3.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop3.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop3.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop3.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.loop3.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.loop3.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop3.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.loop3.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop3.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
localdomain;localhost.localdomain:diskstats_iops.loop3.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop3.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop3.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop3.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop3.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.loop3.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop3.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop3.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop3.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.loop3.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop3.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop3.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:entropy.graph_title Available entropy
localdomain;localhost.localdomain:entropy.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:entropy.graph_vlabel entropy (bytes)
localdomain;localhost.localdomain:entropy.graph_scale no
localdomain;localhost.localdomain:entropy.graph_category system
localdomain;localhost.localdomain:entropy.graph_info This graph shows the amount of entropy available in the system.
localdomain;localhost.localdomain:entropy.graph_order entropy
localdomain;localhost.localdomain:entropy.entropy.update_rate 300
localdomain;localhost.localdomain:entropy.entropy.graph_data_size normal
localdomain;localhost.localdomain:entropy.entropy.info The number of random bytes available. This is typically used by cryptographic applications.
localdomain;localhost.localdomain:entropy.entropy.label entropy
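
The entropy plugin simply reads the kernel's pool estimate. Note that the kernel reports /proc/sys/kernel/random/entropy_avail in bits, even though this plugin's long-standing labels say bytes:

    with open("/proc/sys/kernel/random/entropy_avail") as f:
        print(f"entropy.value {f.read().strip()}")
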
localdomain;localhost.localdomain:http_loadtime.graph_title HTTP loadtime of a page
localdomain;localhost.localdomain:http_loadtime.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:http_loadtime.graph_vlabel Load time in seconds
localdomain;localhost.localdomain:http_loadtime.graph_category network
localdomain;localhost.localdomain:http_loadtime.graph_info This graph shows the page load time in seconds.
localdomain;localhost.localdomain:http_loadtime.graph_order http___localhost_
localdomain;localhost.localdomain:http_loadtime.http___localhost_.label http://localhost/
localdomain;localhost.localdomain:http_loadtime.http___localhost_.info page load time
localdomain;localhost.localdomain:http_loadtime.http___localhost_.graph_data_size normal
localdomain;localhost.localdomain:http_loadtime.http___localhost_.update_rate 300
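
A sketch of the measurement this graph records: time a full fetch of the page body. The real plugin shells out to an HTTP client, so take this stdlib version as illustrative only:

    import time
    import urllib.request

    start = time.monotonic()
    with urllib.request.urlopen("http://localhost/") as resp:
        resp.read()                      # include body transfer in the timing
    print(f"http___localhost_.value {time.monotonic() - start:.3f}")
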
localdomain;localhost.localdomain:if_eno1.graph_order down up
localdomain;localhost.localdomain:if_eno1.graph_title eno1 traffic
localdomain;localhost.localdomain:if_eno1.graph_args --base 1000
localdomain;localhost.localdomain:if_eno1.graph_vlabel bits in (-) / out (+) per ${graph_period}
localdomain;localhost.localdomain:if_eno1.graph_category network
localdomain;localhost.localdomain:if_eno1.graph_info This graph shows the traffic of the eno1 network interface. Please note that the traffic is shown in bits per second, not bytes. IMPORTANT: On 32-bit systems the data source for this plugin uses 32-bit counters, which makes the plugin unreliable and unsuitable for most 100-Mb/s (or faster) interfaces, where traffic is expected to exceed 50 Mb/s over a 5-minute period. This means the plugin is unsuitable for most 32-bit production environments. To avoid this problem, use the ip_ plugin instead. There should be no problems on 64-bit systems running 64-bit kernels.
localdomain;localhost.localdomain:if_eno1.up.info Traffic of the eno1 interface. Maximum speed is 1000 Mb/s.
localdomain;localhost.localdomain:if_eno1.up.graph_data_size normal
localdomain;localhost.localdomain:if_eno1.up.negative down
localdomain;localhost.localdomain:if_eno1.up.label bps
localdomain;localhost.localdomain:if_eno1.up.cdef up,8,*
localdomain;localhost.localdomain:if_eno1.up.max 1000000000
localdomain;localhost.localdomain:if_eno1.up.min 0
localdomain;localhost.localdomain:if_eno1.up.type DERIVE
localdomain;localhost.localdomain:if_eno1.up.update_rate 300
localdomain;localhost.localdomain:if_eno1.down.graph_data_size normal
localdomain;localhost.localdomain:if_eno1.down.label received
localdomain;localhost.localdomain:if_eno1.down.graph no
localdomain;localhost.localdomain:if_eno1.down.max 1000000000
localdomain;localhost.localdomain:if_eno1.down.cdef down,8,*
localdomain;localhost.localdomain:if_eno1.down.type DERIVE
localdomain;localhost.localdomain:if_eno1.down.min 0
localdomain;localhost.localdomain:if_eno1.down.update_rate 300
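
Both fields are DERIVEs over the kernel's byte counters, and the cdef "up,8,*" is RRDtool RPN (push the value, push 8, multiply), converting bytes to the bits shown on the vertical label at graph time. A sketch of the same arithmetic against the sysfs counters this plugin family reads:

    import time

    def tx_bytes(iface="eno1"):
        with open(f"/sys/class/net/{iface}/statistics/tx_bytes") as f:
            return int(f.read())

    b0 = tx_bytes()
    time.sleep(5)
    b1 = tx_bytes()
    bps = (b1 - b0) / 5.0 * 8            # the "up,8,*" step: bytes -> bits
    print(f"eno1 out: {bps:.0f} bit/s")
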
localdomain;localhost.localdomain:diskstats_latency.loop7.graph_title Average latency for /dev/loop7
localdomain;localhost.localdomain:diskstats_latency.loop7.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.loop7.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.loop7.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.loop7.graph_info This graph shows average waiting time/latency for different categories of disk operations.   The times that include the queue times indicate how busy your system is.  If the waiting time hits 1 second then your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.loop7.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwrwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwrwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop7.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop7.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop7.avgrdwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop7.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop7.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop7.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop7.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop7.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop7.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.loop7.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop7.svctm.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop7.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop7.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.loop7.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.loop7.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwait.label IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop7.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_throughput.loop5.graph_title Disk throughput for /dev/loop5
localdomain;localhost.localdomain:diskstats_throughput.loop5.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.loop5.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.loop5.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.loop5.graph_info This graph shows disk throughput in bytes per ${graph_period}. The graph base is 1024, so KB means kibibytes, and so on.
localdomain;localhost.localdomain:diskstats_throughput.loop5.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.loop5.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop5.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop5.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop5.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop5.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop5.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.loop5.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop5.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.loop5.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop5.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop5.wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop5.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop5.wrbytes.negative rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop5.wrbytes.graph_data_size normal
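
The rdbytes/wrbytes pair is derived from the sectors-read and sectors-written columns of /proc/diskstats, where a sector is always a 512-byte unit regardless of the device's logical block size. A sampling sketch:

    import time

    def sectors(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    return int(p[5]), int(p[9])   # sectors read, written
        raise KeyError(dev)

    r0, w0 = sectors("loop5")
    time.sleep(5)
    r1, w1 = sectors("loop5")
    print(f"read  {(r1 - r0) * 512 / 5:.0f} B/s")
    print(f"write {(w1 - w0) * 512 / 5:.0f} B/s")
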
localdomain;localhost.localdomain:diskstats_iops.sdc.graph_title IOs for /dev/sdc
localdomain;localhost.localdomain:diskstats_iops.sdc.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.sdc.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.sdc.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.sdc.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000; prior to 2.0 beta 3 it was 1024. This is because the base for this graph is 1000, not 1024.
localdomain;localhost.localdomain:diskstats_iops.sdc.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.sdc.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdc.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdc.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.sdc.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.sdc.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdc.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.sdc.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdc.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
localdomain;localhost.localdomain:diskstats_iops.sdc.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdc.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.sdc.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.sdc.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdc.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdc.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdc.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.sdc.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdc.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.sdc.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdc.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdc.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdc.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.sdc.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.sdc.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.sdc.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.sdc.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdc.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdc.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdc.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.sdc.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.graph_title S.M.A.R.T values for drive sda
localdomain;localhost.localdomain:smart_sda.graph_vlabel Attribute S.M.A.R.T value
localdomain;localhost.localdomain:smart_sda.graph_args --base 1000 --lower-limit 0
localdomain;localhost.localdomain:smart_sda.graph_category disk
localdomain;localhost.localdomain:smart_sda.graph_info This graph shows the value of all S.M.A.R.T attributes of drive sda (Samsung SSD 850 PRO 256GB). smartctl_exit_status is the return value of smartctl. A non-zero return value indicates an error, a potential error, or a fault on the drive.
localdomain;localhost.localdomain:smart_sda.graph_order Reallocated_Sector_Ct Power_On_Hours Power_Cycle_Count Wear_Leveling_Count Used_Rsvd_Blk_Cnt_Tot Program_Fail_Cnt_Total Erase_Fail_Count_Total Runtime_Bad_Block Uncorrectable_Error_Cnt Airflow_Temperature_Cel ECC_Error_Rate CRC_Error_Count POR_Recovery_Count Total_LBAs_Written smartctl_exit_status
localdomain;localhost.localdomain:smart_sda.Wear_Leveling_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.Wear_Leveling_Count.update_rate 300
localdomain;localhost.localdomain:smart_sda.Wear_Leveling_Count.critical 000:
localdomain;localhost.localdomain:smart_sda.Wear_Leveling_Count.label Wear_Leveling_Count
localdomain;localhost.localdomain:smart_sda.smartctl_exit_status.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.smartctl_exit_status.update_rate 300
localdomain;localhost.localdomain:smart_sda.smartctl_exit_status.warning 1
localdomain;localhost.localdomain:smart_sda.smartctl_exit_status.label smartctl_exit_status
localdomain;localhost.localdomain:smart_sda.Total_LBAs_Written.critical 000:
localdomain;localhost.localdomain:smart_sda.Total_LBAs_Written.label Total_LBAs_Written
localdomain;localhost.localdomain:smart_sda.Total_LBAs_Written.update_rate 300
localdomain;localhost.localdomain:smart_sda.Total_LBAs_Written.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.Reallocated_Sector_Ct.update_rate 300
localdomain;localhost.localdomain:smart_sda.Reallocated_Sector_Ct.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.Reallocated_Sector_Ct.critical 010:
localdomain;localhost.localdomain:smart_sda.Reallocated_Sector_Ct.label Reallocated_Sector_Ct
localdomain;localhost.localdomain:smart_sda.Airflow_Temperature_Cel.update_rate 300
localdomain;localhost.localdomain:smart_sda.Airflow_Temperature_Cel.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.Airflow_Temperature_Cel.label Airflow_Temperature_Cel
localdomain;localhost.localdomain:smart_sda.Airflow_Temperature_Cel.critical 000:
localdomain;localhost.localdomain:smart_sda.POR_Recovery_Count.label POR_Recovery_Count
localdomain;localhost.localdomain:smart_sda.POR_Recovery_Count.critical 000:
localdomain;localhost.localdomain:smart_sda.POR_Recovery_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.POR_Recovery_Count.update_rate 300
localdomain;localhost.localdomain:smart_sda.Runtime_Bad_Block.update_rate 300
localdomain;localhost.localdomain:smart_sda.Runtime_Bad_Block.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.Runtime_Bad_Block.label Runtime_Bad_Block
localdomain;localhost.localdomain:smart_sda.Runtime_Bad_Block.critical 010:
localdomain;localhost.localdomain:smart_sda.CRC_Error_Count.update_rate 300
localdomain;localhost.localdomain:smart_sda.CRC_Error_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.CRC_Error_Count.critical 000:
localdomain;localhost.localdomain:smart_sda.CRC_Error_Count.label CRC_Error_Count
localdomain;localhost.localdomain:smart_sda.Used_Rsvd_Blk_Cnt_Tot.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.Used_Rsvd_Blk_Cnt_Tot.update_rate 300
localdomain;localhost.localdomain:smart_sda.Used_Rsvd_Blk_Cnt_Tot.critical 010:
localdomain;localhost.localdomain:smart_sda.Used_Rsvd_Blk_Cnt_Tot.label Used_Rsvd_Blk_Cnt_Tot
localdomain;localhost.localdomain:smart_sda.Power_Cycle_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.Power_Cycle_Count.update_rate 300
localdomain;localhost.localdomain:smart_sda.Power_Cycle_Count.critical 000:
localdomain;localhost.localdomain:smart_sda.Power_Cycle_Count.label Power_Cycle_Count
localdomain;localhost.localdomain:smart_sda.Program_Fail_Cnt_Total.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.Program_Fail_Cnt_Total.update_rate 300
localdomain;localhost.localdomain:smart_sda.Program_Fail_Cnt_Total.label Program_Fail_Cnt_Total
localdomain;localhost.localdomain:smart_sda.Program_Fail_Cnt_Total.critical 010:
localdomain;localhost.localdomain:smart_sda.Erase_Fail_Count_Total.label Erase_Fail_Count_Total
localdomain;localhost.localdomain:smart_sda.Erase_Fail_Count_Total.critical 010:
localdomain;localhost.localdomain:smart_sda.Erase_Fail_Count_Total.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.Erase_Fail_Count_Total.update_rate 300
localdomain;localhost.localdomain:smart_sda.Uncorrectable_Error_Cnt.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.Uncorrectable_Error_Cnt.update_rate 300
localdomain;localhost.localdomain:smart_sda.Uncorrectable_Error_Cnt.label Uncorrectable_Error_Cnt
localdomain;localhost.localdomain:smart_sda.Uncorrectable_Error_Cnt.critical 000:
localdomain;localhost.localdomain:smart_sda.Power_On_Hours.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.Power_On_Hours.update_rate 300
localdomain;localhost.localdomain:smart_sda.Power_On_Hours.label Power_On_Hours
localdomain;localhost.localdomain:smart_sda.Power_On_Hours.critical 000:
localdomain;localhost.localdomain:smart_sda.ECC_Error_Rate.update_rate 300
localdomain;localhost.localdomain:smart_sda.ECC_Error_Rate.graph_data_size normal
localdomain;localhost.localdomain:smart_sda.ECC_Error_Rate.label ECC_Error_Rate
localdomain;localhost.localdomain:smart_sda.ECC_Error_Rate.critical 000:
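
These fields carry the normalized (0-255 scale) attribute values from smartctl -A, with critical ranges in munin's min:max syntax, so "critical 010:" fires when the normalized value drops below 10. smartctl_exit_status is the command's exit code, a bitmask where any non-zero bit justifies the warning above. A rough collection sketch, with column positions assumed from the classic ATA attribute table:

    import subprocess

    proc = subprocess.run(["smartctl", "-A", "/dev/sda"],
                          capture_output=True, text=True)
    for line in proc.stdout.splitlines():
        cols = line.split()
        if cols and cols[0].isdigit():           # attribute rows start with an ID
            name, value = cols[1], cols[3]       # NAME and normalized VALUE
            print(f"{name}.value {value}")
    print(f"smartctl_exit_status.value {proc.returncode}")
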
localdomain;localhost.localdomain:diskstats_iops.loop8.graph_title IOs for /dev/loop8
localdomain;localhost.localdomain:diskstats_iops.loop8.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.loop8.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.loop8.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.loop8.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000; prior to 2.0 beta 3 it was 1024. This is because the base for this graph is 1000, not 1024.
localdomain;localhost.localdomain:diskstats_iops.loop8.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.loop8.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop8.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.loop8.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop8.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop8.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop8.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
localdomain;localhost.localdomain:diskstats_iops.loop8.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop8.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.loop8.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop8.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.loop8.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop8.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop8.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop8.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop8.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.loop8.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop8.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop8.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop8.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop8.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop8.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop8.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop8.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop8.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop8.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop8.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop8.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.loop8.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop8.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop5.graph_title Disk utilization for /dev/loop5
localdomain;localhost.localdomain:diskstats_utilization.loop5.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.loop5.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.loop5.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.loop5.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.loop5.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.loop5.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop5.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop5.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop5.util.label Utilization
localdomain;localhost.localdomain:diskstats_utilization.loop5.util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop5.util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop5.util.info Utilization of the device in percent. If the time spent on I/O is close to 1000 ms for a given second, the device is nearly 100% saturated.
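
That 1000 ms-per-second relationship is the whole computation: /proc/diskstats has an io_ticks column counting milliseconds the device spent with I/O in flight, and its delta over the sampling interval, as a share of that interval, is the % busy plotted here. Sketch:

    import time

    def io_ms(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    return int(p[12])    # io_ticks: ms spent doing I/O
        raise KeyError(dev)

    t0 = io_ms("loop5")
    time.sleep(5)
    t1 = io_ms("loop5")
    print(f"util.value {(t1 - t0) / (5 * 1000.0) * 100:.1f}")
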
localdomain;localhost.localdomain:postfix_mailqueue.graph_title Postfix Mailqueue
localdomain;localhost.localdomain:postfix_mailqueue.graph_vlabel Mails in queue
localdomain;localhost.localdomain:postfix_mailqueue.graph_category postfix
localdomain;localhost.localdomain:postfix_mailqueue.graph_total Total
localdomain;localhost.localdomain:postfix_mailqueue.graph_order active deferred maildrop incoming corrupt hold
localdomain;localhost.localdomain:postfix_mailqueue.active.label active
localdomain;localhost.localdomain:postfix_mailqueue.active.graph_data_size normal
localdomain;localhost.localdomain:postfix_mailqueue.active.update_rate 300
localdomain;localhost.localdomain:postfix_mailqueue.hold.graph_data_size normal
localdomain;localhost.localdomain:postfix_mailqueue.hold.update_rate 300
localdomain;localhost.localdomain:postfix_mailqueue.hold.label held
localdomain;localhost.localdomain:postfix_mailqueue.deferred.label deferred
localdomain;localhost.localdomain:postfix_mailqueue.deferred.graph_data_size normal
localdomain;localhost.localdomain:postfix_mailqueue.deferred.update_rate 300
localdomain;localhost.localdomain:postfix_mailqueue.maildrop.update_rate 300
localdomain;localhost.localdomain:postfix_mailqueue.maildrop.graph_data_size normal
localdomain;localhost.localdomain:postfix_mailqueue.maildrop.label maildrop
localdomain;localhost.localdomain:postfix_mailqueue.incoming.graph_data_size normal
localdomain;localhost.localdomain:postfix_mailqueue.incoming.update_rate 300
localdomain;localhost.localdomain:postfix_mailqueue.incoming.label incoming
localdomain;localhost.localdomain:postfix_mailqueue.corrupt.label corrupt
localdomain;localhost.localdomain:postfix_mailqueue.corrupt.graph_data_size normal
localdomain;localhost.localdomain:postfix_mailqueue.corrupt.update_rate 300
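
The postfix_mailqueue fields above track the number of mails in each Postfix queue directory; the measurement amounts to counting queue files per directory under the Postfix spool. A rough Python equivalent (the spool path below is the common default and an assumption here; the authoritative value comes from postconf -h queue_directory):

    import os

    QUEUE_DIR = "/var/spool/postfix"  # assumption; see postconf -h queue_directory
    QUEUES = ["active", "deferred", "maildrop", "incoming", "corrupt", "hold"]

    for q in QUEUES:
        count = 0
        # Queue files are hashed into subdirectories, so walk the whole tree.
        for _root, _dirs, files in os.walk(os.path.join(QUEUE_DIR, q)):
            count += len(files)
        print("%s.value %d" % (q, count))
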
localdomain;localhost.localdomain:smart_sdc.graph_title S.M.A.R.T. values for drive sdc
localdomain;localhost.localdomain:smart_sdc.graph_vlabel Attribute S.M.A.R.T. value
localdomain;localhost.localdomain:smart_sdc.graph_args --base 1000 --lower-limit 0
localdomain;localhost.localdomain:smart_sdc.graph_category disk
localdomain;localhost.localdomain:smart_sdc.graph_info This graph shows the value of all S.M.A.R.T. attributes of drive sdc (ST8000NM000A-2KE101). smartctl_exit_status is the return value of smartctl. A non-zero return value indicates an error, a potential error, or a fault on the drive.
localdomain;localhost.localdomain:smart_sdc.graph_order Raw_Read_Error_Rate Spin_Up_Time Start_Stop_Count Reallocated_Sector_Ct Seek_Error_Rate Power_On_Hours Spin_Retry_Count Power_Cycle_Count Unknown_Attribute Reported_Uncorrect Command_Timeout Airflow_Temperature_Cel Power_Off_Retract_Count Load_Cycle_Count Temperature_Celsius Hardware_ECC_Recovered Current_Pending_Sector Offline_Uncorrectable UDMA_CRC_Error_Count Head_Flying_Hours Total_LBAs_Written Total_LBAs_Read smartctl_exit_status
localdomain;localhost.localdomain:smart_sdc.Current_Pending_Sector.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Current_Pending_Sector.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Current_Pending_Sector.label Current_Pending_Sector
localdomain;localhost.localdomain:smart_sdc.Current_Pending_Sector.critical 000:
localdomain;localhost.localdomain:smart_sdc.Reported_Uncorrect.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Reported_Uncorrect.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Reported_Uncorrect.critical 000:
localdomain;localhost.localdomain:smart_sdc.Reported_Uncorrect.label Reported_Uncorrect
localdomain;localhost.localdomain:smart_sdc.Airflow_Temperature_Cel.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Airflow_Temperature_Cel.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Airflow_Temperature_Cel.critical 040:
localdomain;localhost.localdomain:smart_sdc.Airflow_Temperature_Cel.label Airflow_Temperature_Cel
localdomain;localhost.localdomain:smart_sdc.Power_Off_Retract_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Power_Off_Retract_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Power_Off_Retract_Count.label Power_Off_Retract_Count
localdomain;localhost.localdomain:smart_sdc.Power_Off_Retract_Count.critical 000:
localdomain;localhost.localdomain:smart_sdc.Load_Cycle_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Load_Cycle_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Load_Cycle_Count.label Load_Cycle_Count
localdomain;localhost.localdomain:smart_sdc.Load_Cycle_Count.critical 000:
localdomain;localhost.localdomain:smart_sdc.Hardware_ECC_Recovered.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Hardware_ECC_Recovered.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Hardware_ECC_Recovered.critical 000:
localdomain;localhost.localdomain:smart_sdc.Hardware_ECC_Recovered.label Hardware_ECC_Recovered
localdomain;localhost.localdomain:smart_sdc.Total_LBAs_Written.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Total_LBAs_Written.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Total_LBAs_Written.label Total_LBAs_Written
localdomain;localhost.localdomain:smart_sdc.Total_LBAs_Written.critical 000:
localdomain;localhost.localdomain:smart_sdc.Spin_Retry_Count.label Spin_Retry_Count
localdomain;localhost.localdomain:smart_sdc.Spin_Retry_Count.critical 097:
localdomain;localhost.localdomain:smart_sdc.Spin_Retry_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Spin_Retry_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Power_On_Hours.label Power_On_Hours
localdomain;localhost.localdomain:smart_sdc.Power_On_Hours.critical 000:
localdomain;localhost.localdomain:smart_sdc.Power_On_Hours.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Power_On_Hours.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Start_Stop_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Start_Stop_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Start_Stop_Count.critical 020:
localdomain;localhost.localdomain:smart_sdc.Start_Stop_Count.label Start_Stop_Count
localdomain;localhost.localdomain:smart_sdc.Head_Flying_Hours.label Head_Flying_Hours
localdomain;localhost.localdomain:smart_sdc.Head_Flying_Hours.critical 000:
localdomain;localhost.localdomain:smart_sdc.Head_Flying_Hours.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Head_Flying_Hours.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Raw_Read_Error_Rate.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Raw_Read_Error_Rate.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Raw_Read_Error_Rate.critical 044:
localdomain;localhost.localdomain:smart_sdc.Raw_Read_Error_Rate.label Raw_Read_Error_Rate
localdomain;localhost.localdomain:smart_sdc.Power_Cycle_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Power_Cycle_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Power_Cycle_Count.label Power_Cycle_Count
localdomain;localhost.localdomain:smart_sdc.Power_Cycle_Count.critical 020:
localdomain;localhost.localdomain:smart_sdc.Offline_Uncorrectable.critical 000:
localdomain;localhost.localdomain:smart_sdc.Offline_Uncorrectable.label Offline_Uncorrectable
localdomain;localhost.localdomain:smart_sdc.Offline_Uncorrectable.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Offline_Uncorrectable.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Unknown_Attribute.label Unknown_Attribute
localdomain;localhost.localdomain:smart_sdc.Unknown_Attribute.critical 050:
localdomain;localhost.localdomain:smart_sdc.Unknown_Attribute.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Unknown_Attribute.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Temperature_Celsius.critical 000:
localdomain;localhost.localdomain:smart_sdc.Temperature_Celsius.label Temperature_Celsius
localdomain;localhost.localdomain:smart_sdc.Temperature_Celsius.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Temperature_Celsius.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Total_LBAs_Read.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Total_LBAs_Read.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Total_LBAs_Read.label Total_LBAs_Read
localdomain;localhost.localdomain:smart_sdc.Total_LBAs_Read.critical 000:
localdomain;localhost.localdomain:smart_sdc.smartctl_exit_status.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.smartctl_exit_status.update_rate 300
localdomain;localhost.localdomain:smart_sdc.smartctl_exit_status.label smartctl_exit_status
localdomain;localhost.localdomain:smart_sdc.smartctl_exit_status.warning 1
localdomain;localhost.localdomain:smart_sdc.Command_Timeout.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Command_Timeout.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Command_Timeout.critical 000:
localdomain;localhost.localdomain:smart_sdc.Command_Timeout.label Command_Timeout
localdomain;localhost.localdomain:smart_sdc.Reallocated_Sector_Ct.label Reallocated_Sector_Ct
localdomain;localhost.localdomain:smart_sdc.Reallocated_Sector_Ct.critical 010:
localdomain;localhost.localdomain:smart_sdc.Reallocated_Sector_Ct.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Reallocated_Sector_Ct.update_rate 300
localdomain;localhost.localdomain:smart_sdc.UDMA_CRC_Error_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdc.UDMA_CRC_Error_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.UDMA_CRC_Error_Count.label UDMA_CRC_Error_Count
localdomain;localhost.localdomain:smart_sdc.UDMA_CRC_Error_Count.critical 000:
localdomain;localhost.localdomain:smart_sdc.Spin_Up_Time.critical 000:
localdomain;localhost.localdomain:smart_sdc.Spin_Up_Time.label Spin_Up_Time
localdomain;localhost.localdomain:smart_sdc.Spin_Up_Time.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Spin_Up_Time.update_rate 300
localdomain;localhost.localdomain:smart_sdc.Seek_Error_Rate.label Seek_Error_Rate
localdomain;localhost.localdomain:smart_sdc.Seek_Error_Rate.critical 045:
localdomain;localhost.localdomain:smart_sdc.Seek_Error_Rate.graph_data_size normal
localdomain;localhost.localdomain:smart_sdc.Seek_Error_Rate.update_rate 300
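
The smart_sdc graph_info above notes that smartctl_exit_status is smartctl's return value and that non-zero means an error or potential fault. Per smartctl(8), the exit status is a bit mask (bit 3 set, for example, means the SMART status check reported the disk failing), so the value is worth decoding rather than just thresholding. A minimal sketch, assuming smartmontools is installed and /dev/sdc exists:

    import subprocess

    proc = subprocess.run(["smartctl", "-A", "/dev/sdc"],
                          capture_output=True, text=True)
    status = proc.returncode
    print("smartctl_exit_status.value", status)

    # smartctl(8) documents the return value as a bit mask; bit 3 means
    # the SMART status check returned "DISK FAILING".
    if status & (1 << 3):
        print("SMART reports the disk is failing")
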
localdomain;localhost.localdomain:diskstats_utilization.loop3.graph_title Disk utilization for /dev/loop3
localdomain;localhost.localdomain:diskstats_utilization.loop3.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.loop3.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.loop3.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.loop3.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.loop3.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.loop3.util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop3.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000 msec for a given second, the device is nearly 100% saturated.
localdomain;localhost.localdomain:diskstats_utilization.loop3.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop3.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop3.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop3.util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop3.util.label Utilization
localdomain;localhost.localdomain:diskstats_latency.sdd.graph_title Average latency for /dev/sdd
localdomain;localhost.localdomain:diskstats_latency.sdd.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.sdd.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.sdd.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.sdd.graph_info This graph shows average waiting time/latency for different categories of disk operations. The times that include queue time indicate how busy your system is. If the waiting time hits 1 second, your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.sdd.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwrwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue time, etc.)
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwrwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwait.label IO Wait time
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwait.info Average wait time for an I/O from request start to finish (includes queue time, etc.)
localdomain;localhost.localdomain:diskstats_latency.sdd.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdd.svctm.info Average time an I/O takes on the block device, not including any queue time; just the round-trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.sdd.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdd.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdd.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.sdd.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdd.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.sdd.svctm.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdd.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.sdd.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdd.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdd.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdd.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdd.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.sdd.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue time, etc.)
localdomain;localhost.localdomain:diskstats_latency.sdd.avgrdwait.graph_data_size normal
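
The wait-time fields above are counter ratios from /proc/diskstats: milliseconds spent on reads (or writes) divided by the number of reads (or writes) completed over the sampling interval, converted to the graph's unit of seconds. A minimal sketch of that derivation (device name and five-second interval are illustrative):

    import time

    def io_stats(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    # reads completed, ms reading, writes completed, ms writing
                    return int(p[3]), int(p[6]), int(p[7]), int(p[10])
        raise ValueError(dev)

    r0, rt0, w0, wt0 = io_stats("sdd")
    time.sleep(5)
    r1, rt1, w1, wt1 = io_stats("sdd")

    if r1 > r0:  # guard against an idle interval
        print("avgrdwait.value %.6f" % ((rt1 - rt0) / 1000.0 / (r1 - r0)))
    if w1 > w0:
        print("avgwrwait.value %.6f" % ((wt1 - wt0) / 1000.0 / (w1 - w0)))
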
localdomain;localhost.localdomain:vmstat.graph_title VMstat
localdomain;localhost.localdomain:vmstat.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:vmstat.graph_vlabel process states
localdomain;localhost.localdomain:vmstat.graph_category processes
localdomain;localhost.localdomain:vmstat.graph_order wait sleep
localdomain;localhost.localdomain:vmstat.wait.max 500000
localdomain;localhost.localdomain:vmstat.wait.label running
localdomain;localhost.localdomain:vmstat.wait.type GAUGE
localdomain;localhost.localdomain:vmstat.wait.update_rate 300
localdomain;localhost.localdomain:vmstat.wait.graph_data_size normal
localdomain;localhost.localdomain:vmstat.sleep.label I/O sleep
localdomain;localhost.localdomain:vmstat.sleep.max 500000
localdomain;localhost.localdomain:vmstat.sleep.type GAUGE
localdomain;localhost.localdomain:vmstat.sleep.update_rate 300
localdomain;localhost.localdomain:vmstat.sleep.graph_data_size normal
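
Note the field/label pairing above: field "wait" is labelled "running" and field "sleep" is labelled "I/O sleep". These correspond to the r (runnable) and b (blocked on I/O) columns of vmstat(8); the same counters are exposed directly in /proc/stat, which this sketch reads instead of parsing vmstat output:

    # procs_running / procs_blocked in /proc/stat correspond to the
    # r and b columns of vmstat(8).
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("procs_running"):
                print("wait.value", line.split()[1])   # labelled "running"
            elif line.startswith("procs_blocked"):
                print("sleep.value", line.split()[1])  # labelled "I/O sleep"
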
localdomain;localhost.localdomain:diskstats_utilization.loop1.graph_title Disk utilization for /dev/loop1
localdomain;localhost.localdomain:diskstats_utilization.loop1.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.loop1.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.loop1.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.loop1.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.loop1.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.loop1.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop1.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop1.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop1.util.label Utilization
localdomain;localhost.localdomain:diskstats_utilization.loop1.util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop1.util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop1.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000 msec for a given second, the device is nearly 100% saturated.
localdomain;localhost.localdomain:diskstats_utilization.loop2.graph_title Disk utilization for /dev/loop2
localdomain;localhost.localdomain:diskstats_utilization.loop2.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.loop2.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.loop2.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.loop2.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.loop2.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.loop2.util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop2.util.label Utilization
localdomain;localhost.localdomain:diskstats_utilization.loop2.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop2.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop2.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop2.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000 msec for a given second, the device is nearly 100% saturated.
localdomain;localhost.localdomain:diskstats_utilization.loop2.util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdd.graph_title Disk throughput for /dev/sdd
localdomain;localhost.localdomain:diskstats_throughput.sdd.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.sdd.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.sdd.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.sdd.graph_info This graph shows disk throughput in bytes per ${graph_period}. The graph base is 1024, so KB means kibibytes, and so on.
localdomain;localhost.localdomain:diskstats_throughput.sdd.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.sdd.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.sdd.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.sdd.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdd.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sdd.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdd.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdd.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdd.wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdd.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdd.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sdd.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdd.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.sdd.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdd.wrbytes.negative rdbytes
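
Two conventions in the throughput block above are worth spelling out. First, rdbytes is declared "graph no" while wrbytes carries "negative rdbytes": this is how munin draws the mirrored read (-) / write (+) graph from one visible series. Second, the byte counts derive from /proc/diskstats sector counters, which are always in 512-byte units. A short sketch of the byte arithmetic (device and interval are illustrative):

    import time

    def sectors(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    return int(p[5]), int(p[9])  # sectors read, sectors written
        raise ValueError(dev)

    r0, w0 = sectors("sdd")
    time.sleep(5)
    r1, w1 = sectors("sdd")
    # /proc/diskstats counts 512-byte sectors regardless of the drive's
    # physical sector size; multiply by 512 for bytes per second.
    print("rdbytes.value %.0f" % ((r1 - r0) * 512 / 5.0))
    print("wrbytes.value %.0f" % ((w1 - w0) * 512 / 5.0))
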
localdomain;localhost.localdomain:diskstats_utilization.loop4.graph_title Disk utilization for /dev/loop4
localdomain;localhost.localdomain:diskstats_utilization.loop4.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.loop4.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.loop4.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.loop4.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.loop4.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.loop4.util.label Utilization
localdomain;localhost.localdomain:diskstats_utilization.loop4.util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop4.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop4.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop4.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop4.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000 msec for a given second, the device is nearly 100% saturated.
localdomain;localhost.localdomain:diskstats_utilization.loop4.util.graph_data_size normal
localdomain;localhost.localdomain:open_inodes.graph_title Inode table usage
localdomain;localhost.localdomain:open_inodes.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:open_inodes.graph_vlabel number of open inodes
localdomain;localhost.localdomain:open_inodes.graph_category system
localdomain;localhost.localdomain:open_inodes.graph_info This graph monitors the Linux open inode table.
localdomain;localhost.localdomain:open_inodes.graph_order used max
localdomain;localhost.localdomain:open_inodes.used.label open inodes
localdomain;localhost.localdomain:open_inodes.used.info The number of currently open inodes.
localdomain;localhost.localdomain:open_inodes.used.graph_data_size normal
localdomain;localhost.localdomain:open_inodes.used.update_rate 300
localdomain;localhost.localdomain:open_inodes.max.graph_data_size normal
localdomain;localhost.localdomain:open_inodes.max.update_rate 300
localdomain;localhost.localdomain:open_inodes.max.info The size of the system inode table. This is dynamically adjusted by the kernel.
localdomain;localhost.localdomain:open_inodes.max.label inode table size
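
The used/max pair above maps onto the kernel's inode counters. One plausible data source (an assumption, not stated in this file) is /proc/sys/fs/inode-nr, which holds nr_inodes and nr_free_inodes:

    # /proc/sys/fs/inode-nr: "<nr_inodes> <nr_free_inodes>".
    # Assumption: "used" = allocated minus free, "max" = allocated total
    # (the table is grown and shrunk dynamically by the kernel).
    with open("/proc/sys/fs/inode-nr") as f:
        nr_inodes, nr_free = map(int, f.read().split())
    print("used.value", nr_inodes - nr_free)
    print("max.value", nr_inodes)
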
localdomain;localhost.localdomain:diskstats_throughput.loop6.graph_title Disk throughput for /dev/loop6
localdomain;localhost.localdomain:diskstats_throughput.loop6.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.loop6.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.loop6.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.loop6.graph_info This graph shows disk throughput in bytes per ${graph_period}. The graph base is 1024, so KB means kibibytes, and so on.
localdomain;localhost.localdomain:diskstats_throughput.loop6.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.loop6.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop6.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.loop6.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop6.wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop6.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop6.wrbytes.negative rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop6.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop6.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop6.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop6.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop6.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop6.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop6.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.loop6.rdbytes.draw LINE1
localdomain;localhost.localdomain:irqstats.graph_title Individual interrupts
localdomain;localhost.localdomain:irqstats.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:irqstats.graph_vlabel interrupts / ${graph_period}
localdomain;localhost.localdomain:irqstats.graph_category system
localdomain;localhost.localdomain:irqstats.graph_info Shows the number of different IRQs received by the kernel. High disk or network traffic can cause a high number of interrupts (good hardware and drivers make this less pronounced). Sudden high interrupt activity with no corresponding rise in system activity is not normal.
localdomain;localhost.localdomain:irqstats.graph_order i0 i8 i9 i14 i16 i120 i121 i122 i123 i124 i125 i126 i127 iNMI iLOC iSPU iPMI iIWI iRTR iRES iCAL iTLB iTRM iTHR iDFR iMCE iMCP iERR iMIS iPIN iNPI iPIW
localdomain;localhost.localdomain:irqstats.i122.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i122.info Interrupt 122, for device(s): 327680-edge      xhci_hcd
localdomain;localhost.localdomain:irqstats.i122.update_rate 300
localdomain;localhost.localdomain:irqstats.i122.min 0
localdomain;localhost.localdomain:irqstats.i122.type DERIVE
localdomain;localhost.localdomain:irqstats.i122.label 327680-edge      xhci_hcd
localdomain;localhost.localdomain:irqstats.i123.label 520192-edge      eno1
localdomain;localhost.localdomain:irqstats.i123.update_rate 300
localdomain;localhost.localdomain:irqstats.i123.min 0
localdomain;localhost.localdomain:irqstats.i123.type DERIVE
localdomain;localhost.localdomain:irqstats.i123.info Interrupt 123, for device(s): 520192-edge      eno1
localdomain;localhost.localdomain:irqstats.i123.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i125.info Interrupt 125, for device(s): 32768-edge      i915
localdomain;localhost.localdomain:irqstats.i125.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i125.label 32768-edge      i915
localdomain;localhost.localdomain:irqstats.i125.update_rate 300
localdomain;localhost.localdomain:irqstats.i125.min 0
localdomain;localhost.localdomain:irqstats.i125.type DERIVE
localdomain;localhost.localdomain:irqstats.i14.label 14-fasteoi   INT345D:00
localdomain;localhost.localdomain:irqstats.i14.update_rate 300
localdomain;localhost.localdomain:irqstats.i14.min 0
localdomain;localhost.localdomain:irqstats.i14.type DERIVE
localdomain;localhost.localdomain:irqstats.i14.info Interrupt 14, for device(s): 14-fasteoi   INT345D:00
localdomain;localhost.localdomain:irqstats.i14.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iPMI.info Interrupt PMI, for device(s): Performance monitoring interrupts
localdomain;localhost.localdomain:irqstats.iPMI.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iPMI.label Performance monitoring interrupts
localdomain;localhost.localdomain:irqstats.iPMI.type DERIVE
localdomain;localhost.localdomain:irqstats.iPMI.min 0
localdomain;localhost.localdomain:irqstats.iPMI.update_rate 300
localdomain;localhost.localdomain:irqstats.iPIN.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iPIN.info Interrupt PIN, for device(s): Posted-interrupt notification event
localdomain;localhost.localdomain:irqstats.iPIN.update_rate 300
localdomain;localhost.localdomain:irqstats.iPIN.type DERIVE
localdomain;localhost.localdomain:irqstats.iPIN.min 0
localdomain;localhost.localdomain:irqstats.iPIN.label Posted-interrupt notification event
localdomain;localhost.localdomain:irqstats.i0.update_rate 300
localdomain;localhost.localdomain:irqstats.i0.type DERIVE
localdomain;localhost.localdomain:irqstats.i0.min 0
localdomain;localhost.localdomain:irqstats.i0.label 2-edge      timer
localdomain;localhost.localdomain:irqstats.i0.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i0.info Interrupt 0, for device(s): 2-edge      timer
localdomain;localhost.localdomain:irqstats.iNMI.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iNMI.info Interrupt NMI, for device(s): Non-maskable interrupts
localdomain;localhost.localdomain:irqstats.iNMI.type DERIVE
localdomain;localhost.localdomain:irqstats.iNMI.min 0
localdomain;localhost.localdomain:irqstats.iNMI.update_rate 300
localdomain;localhost.localdomain:irqstats.iNMI.label Non-maskable interrupts
localdomain;localhost.localdomain:irqstats.iIWI.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iIWI.info Interrupt IWI, for device(s): IRQ work interrupts
localdomain;localhost.localdomain:irqstats.iIWI.update_rate 300
localdomain;localhost.localdomain:irqstats.iIWI.min 0
localdomain;localhost.localdomain:irqstats.iIWI.type DERIVE
localdomain;localhost.localdomain:irqstats.iIWI.label IRQ work interrupts
localdomain;localhost.localdomain:irqstats.i121.label 1-edge      dmar1
localdomain;localhost.localdomain:irqstats.i121.update_rate 300
localdomain;localhost.localdomain:irqstats.i121.type DERIVE
localdomain;localhost.localdomain:irqstats.i121.min 0
localdomain;localhost.localdomain:irqstats.i121.info Interrupt 121, for device(s): 1-edge      dmar1
localdomain;localhost.localdomain:irqstats.i121.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i9.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i9.info Interrupt 9, for device(s): 9-fasteoi   acpi
localdomain;localhost.localdomain:irqstats.i9.type DERIVE
localdomain;localhost.localdomain:irqstats.i9.min 0
localdomain;localhost.localdomain:irqstats.i9.update_rate 300
localdomain;localhost.localdomain:irqstats.i9.label 9-fasteoi   acpi
localdomain;localhost.localdomain:irqstats.iTHR.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iTHR.info Interrupt THR, for device(s): Threshold APIC interrupts
localdomain;localhost.localdomain:irqstats.iTHR.type DERIVE
localdomain;localhost.localdomain:irqstats.iTHR.min 0
localdomain;localhost.localdomain:irqstats.iTHR.update_rate 300
localdomain;localhost.localdomain:irqstats.iTHR.label Threshold APIC interrupts
localdomain;localhost.localdomain:irqstats.i120.info Interrupt 120, for device(s): 0-edge      dmar0
localdomain;localhost.localdomain:irqstats.i120.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i120.label 0-edge      dmar0
localdomain;localhost.localdomain:irqstats.i120.update_rate 300
localdomain;localhost.localdomain:irqstats.i120.min 0
localdomain;localhost.localdomain:irqstats.i120.type DERIVE
localdomain;localhost.localdomain:irqstats.iMIS.update_rate 300
localdomain;localhost.localdomain:irqstats.iMIS.type DERIVE
localdomain;localhost.localdomain:irqstats.iMIS.min 0
localdomain;localhost.localdomain:irqstats.iMIS.label MIS
localdomain;localhost.localdomain:irqstats.iMIS.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iPIW.info Interrupt PIW, for device(s): Posted-interrupt wakeup event
localdomain;localhost.localdomain:irqstats.iPIW.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iPIW.label Posted-interrupt wakeup event
localdomain;localhost.localdomain:irqstats.iPIW.update_rate 300
localdomain;localhost.localdomain:irqstats.iPIW.min 0
localdomain;localhost.localdomain:irqstats.iPIW.type DERIVE
localdomain;localhost.localdomain:irqstats.i8.label 8-edge      rtc0
localdomain;localhost.localdomain:irqstats.i8.update_rate 300
localdomain;localhost.localdomain:irqstats.i8.type DERIVE
localdomain;localhost.localdomain:irqstats.i8.min 0
localdomain;localhost.localdomain:irqstats.i8.info Interrupt 8, for device(s): 8-edge      rtc0
localdomain;localhost.localdomain:irqstats.i8.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iTLB.min 0
localdomain;localhost.localdomain:irqstats.iTLB.type DERIVE
localdomain;localhost.localdomain:irqstats.iTLB.update_rate 300
localdomain;localhost.localdomain:irqstats.iTLB.label TLB shootdowns
localdomain;localhost.localdomain:irqstats.iTLB.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iTLB.info Interrupt TLB, for device(s): TLB shootdowns
localdomain;localhost.localdomain:irqstats.iERR.type DERIVE
localdomain;localhost.localdomain:irqstats.iERR.min 0
localdomain;localhost.localdomain:irqstats.iERR.update_rate 300
localdomain;localhost.localdomain:irqstats.iERR.label ERR
localdomain;localhost.localdomain:irqstats.iERR.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iLOC.label Local timer interrupts
localdomain;localhost.localdomain:irqstats.iLOC.min 0
localdomain;localhost.localdomain:irqstats.iLOC.type DERIVE
localdomain;localhost.localdomain:irqstats.iLOC.update_rate 300
localdomain;localhost.localdomain:irqstats.iLOC.info Interrupt LOC, for device(s): Local timer interrupts
localdomain;localhost.localdomain:irqstats.iLOC.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iRES.info Interrupt RES, for device(s): Rescheduling interrupts
localdomain;localhost.localdomain:irqstats.iRES.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iRES.label Rescheduling interrupts
localdomain;localhost.localdomain:irqstats.iRES.min 0
localdomain;localhost.localdomain:irqstats.iRES.type DERIVE
localdomain;localhost.localdomain:irqstats.iRES.update_rate 300
localdomain;localhost.localdomain:irqstats.iSPU.label Spurious interrupts
localdomain;localhost.localdomain:irqstats.iSPU.update_rate 300
localdomain;localhost.localdomain:irqstats.iSPU.min 0
localdomain;localhost.localdomain:irqstats.iSPU.type DERIVE
localdomain;localhost.localdomain:irqstats.iSPU.info Interrupt SPU, for device(s): Spurious interrupts
localdomain;localhost.localdomain:irqstats.iSPU.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iTRM.update_rate 300
localdomain;localhost.localdomain:irqstats.iTRM.min 0
localdomain;localhost.localdomain:irqstats.iTRM.type DERIVE
localdomain;localhost.localdomain:irqstats.iTRM.label Thermal event interrupts
localdomain;localhost.localdomain:irqstats.iTRM.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iTRM.info Interrupt TRM, for device(s): Thermal event interrupts
localdomain;localhost.localdomain:irqstats.iMCP.info Interrupt MCP, for device(s): Machine check polls
localdomain;localhost.localdomain:irqstats.iMCP.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iMCP.label Machine check polls
localdomain;localhost.localdomain:irqstats.iMCP.type DERIVE
localdomain;localhost.localdomain:irqstats.iMCP.min 0
localdomain;localhost.localdomain:irqstats.iMCP.update_rate 300
localdomain;localhost.localdomain:irqstats.iCAL.info Interrupt CAL, for device(s): Function call interrupts
localdomain;localhost.localdomain:irqstats.iCAL.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iCAL.label Function call interrupts
localdomain;localhost.localdomain:irqstats.iCAL.type DERIVE
localdomain;localhost.localdomain:irqstats.iCAL.min 0
localdomain;localhost.localdomain:irqstats.iCAL.update_rate 300
localdomain;localhost.localdomain:irqstats.iRTR.label APIC ICR read retries
localdomain;localhost.localdomain:irqstats.iRTR.type DERIVE
localdomain;localhost.localdomain:irqstats.iRTR.min 0
localdomain;localhost.localdomain:irqstats.iRTR.update_rate 300
localdomain;localhost.localdomain:irqstats.iRTR.info Interrupt RTR, for device(s): APIC ICR read retries
localdomain;localhost.localdomain:irqstats.iRTR.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i126.info Interrupt 126, for device(s): 360448-edge      mei_me
localdomain;localhost.localdomain:irqstats.i126.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i126.label 360448-edge      mei_me
localdomain;localhost.localdomain:irqstats.i126.type DERIVE
localdomain;localhost.localdomain:irqstats.i126.min 0
localdomain;localhost.localdomain:irqstats.i126.update_rate 300
localdomain;localhost.localdomain:irqstats.iNPI.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iNPI.info Interrupt NPI, for device(s): Nested posted-interrupt event
localdomain;localhost.localdomain:irqstats.iNPI.update_rate 300
localdomain;localhost.localdomain:irqstats.iNPI.type DERIVE
localdomain;localhost.localdomain:irqstats.iNPI.min 0
localdomain;localhost.localdomain:irqstats.iNPI.label Nested posted-interrupt event
localdomain;localhost.localdomain:irqstats.i124.info Interrupt 124, for device(s): 376832-edge      ahci[0000:00:17.0]
localdomain;localhost.localdomain:irqstats.i124.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i124.label 376832-edge      ahci[0000:00:17.0]
localdomain;localhost.localdomain:irqstats.i124.update_rate 300
localdomain;localhost.localdomain:irqstats.i124.type DERIVE
localdomain;localhost.localdomain:irqstats.i124.min 0
localdomain;localhost.localdomain:irqstats.i127.label 522240-edge      0000:00:1f.7
localdomain;localhost.localdomain:irqstats.i127.update_rate 300
localdomain;localhost.localdomain:irqstats.i127.min 0
localdomain;localhost.localdomain:irqstats.i127.type DERIVE
localdomain;localhost.localdomain:irqstats.i127.info Interrupt 127, for device(s): 522240-edge      0000:00:1f.7
localdomain;localhost.localdomain:irqstats.i127.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i16.info Interrupt 16, for device(s): 16-fasteoi   i801_smbus
localdomain;localhost.localdomain:irqstats.i16.graph_data_size normal
localdomain;localhost.localdomain:irqstats.i16.label 16-fasteoi   i801_smbus
localdomain;localhost.localdomain:irqstats.i16.min 0
localdomain;localhost.localdomain:irqstats.i16.type DERIVE
localdomain;localhost.localdomain:irqstats.i16.update_rate 300
localdomain;localhost.localdomain:irqstats.iDFR.graph_data_size normal
localdomain;localhost.localdomain:irqstats.iDFR.info Interrupt DFR, for device(s): Deferred Error APIC interrupts
localdomain;localhost.localdomain:irqstats.iDFR.update_rate 300
localdomain;localhost.localdomain:irqstats.iDFR.type DERIVE
localdomain;localhost.localdomain:irqstats.iDFR.min 0
localdomain;localhost.localdomain:irqstats.iDFR.label Deferred Error APIC interrupts
localdomain;localhost.localdomain:irqstats.iMCE.label Machine check exceptions
localdomain;localhost.localdomain:irqstats.iMCE.type DERIVE
localdomain;localhost.localdomain:irqstats.iMCE.min 0
localdomain;localhost.localdomain:irqstats.iMCE.update_rate 300
localdomain;localhost.localdomain:irqstats.iMCE.info Interrupt MCE, for device(s): Machine check exceptions
localdomain;localhost.localdomain:irqstats.iMCE.graph_data_size normal
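
Every irqstats field above is typed DERIVE with min 0: munin stores the per-second rate of a monotonically increasing counter and discards negative jumps (e.g. across a reboot). The counters come from /proc/interrupts, one row per IRQ with one column per CPU followed by a description; a sketch of the parsing:

    # Sum the per-CPU columns of /proc/interrupts for each IRQ row.
    with open("/proc/interrupts") as f:
        ncpu = len(f.readline().split())          # header row: CPU0 CPU1 ...
        for line in f:
            parts = line.split()
            irq = parts[0].rstrip(":")
            counts = []
            for tok in parts[1:1 + ncpu]:
                if not tok.isdigit():             # rows like ERR/MIS are short
                    break
                counts.append(int(tok))
            # The trailing description becomes the field's label/info text.
            label = " ".join(parts[1 + len(counts):])
            suffix = ("  # " + label) if label else ""
            print("i%s.value %d%s" % (irq, sum(counts), suffix))
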
localdomain;localhost.localdomain:diskstats_throughput.loop3.graph_title Disk throughput for /dev/loop3
localdomain;localhost.localdomain:diskstats_throughput.loop3.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.loop3.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.loop3.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.loop3.graph_info This graph shows disk throughput in bytes per ${graph_period}. The graph base is 1024, so KB means kibibytes, and so on.
localdomain;localhost.localdomain:diskstats_throughput.loop3.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.loop3.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop3.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.loop3.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop3.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop3.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop3.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop3.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop3.wrbytes.negative rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop3.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop3.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop3.wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop3.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop3.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.loop3.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop1.graph_title Disk throughput for /dev/loop1
localdomain;localhost.localdomain:diskstats_throughput.loop1.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.loop1.graph_vlabel Pr ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.loop1.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.loop1.graph_info This graph shows disk throughput in bytes pr ${graph_period}.  The graph base is 1024 so KB is for Kibi bytes and so on.
localdomain;localhost.localdomain:diskstats_throughput.loop1.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.loop1.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop1.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop1.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.loop1.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop1.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop1.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop1.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop1.wrbytes.negative rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop1.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop1.wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop1.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop1.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop1.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop1.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_iops.loop7.graph_title IOs for /dev/loop7
localdomain;localhost.localdomain:diskstats_iops.loop7.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.loop7.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.loop7.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.loop7.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000, instead of the 1024 used prior to 2.0 beta 3. This is because the base for this graph is 1000, not 1024.
localdomain;localhost.localdomain:diskstats_iops.loop7.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.loop7.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop7.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.loop7.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop7.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop7.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop7.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop7.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop7.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop7.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop7.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop7.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.loop7.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop7.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.loop7.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop7.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.loop7.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop7.avgwrrqsz.info Average Request Size in kilobytes (1000-based)
localdomain;localhost.localdomain:diskstats_iops.loop7.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop7.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop7.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop7.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop7.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.loop7.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop7.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop7.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop7.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop7.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop7.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop7.rdio.label dummy
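
The avgrdrqsz/avgwrrqsz fields above are derived rather than read directly: sectors transferred (512-byte units in /proc/diskstats) divided by requests completed gives bytes per request, and the "1000-based" note in the info line means the KB conversion divides by 1000, not 1024. Worked out in the same style as the earlier sketches (device and interval are illustrative):

    import time

    def io_counts(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    # reads done, sectors read, writes done, sectors written
                    return int(p[3]), int(p[5]), int(p[7]), int(p[9])
        raise ValueError(dev)

    r0, rs0, w0, ws0 = io_counts("loop7")
    time.sleep(5)
    r1, rs1, w1, ws1 = io_counts("loop7")

    reads, writes = r1 - r0, w1 - w0
    print("rdio.value %.2f" % (reads / 5.0))
    print("wrio.value %.2f" % (writes / 5.0))
    if reads:   # average request size in 1000-based kilobytes
        print("avgrdrqsz.value %.1f" % ((rs1 - rs0) * 512.0 / reads / 1000.0))
    if writes:
        print("avgwrrqsz.value %.1f" % ((ws1 - ws0) * 512.0 / writes / 1000.0))
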
localdomain;localhost.localdomain:memory.graph_args --base 1024 -l 0 --upper-limit 33545957376
localdomain;localhost.localdomain:memory.graph_vlabel Bytes
localdomain;localhost.localdomain:memory.graph_title Memory usage
localdomain;localhost.localdomain:memory.graph_category system
localdomain;localhost.localdomain:memory.graph_info This graph shows what the machine uses memory for.
localdomain;localhost.localdomain:memory.graph_order apps page_tables swap_cache slab shmem cached buffers free swap vmalloc_used committed mapped active inactive
localdomain;localhost.localdomain:memory.swap.graph_data_size normal
localdomain;localhost.localdomain:memory.swap.info Swap space used.
localdomain;localhost.localdomain:memory.swap.colour COLOUR7
localdomain;localhost.localdomain:memory.swap.update_rate 300
localdomain;localhost.localdomain:memory.swap.draw STACK
localdomain;localhost.localdomain:memory.swap.label swap
localdomain;localhost.localdomain:memory.mapped.update_rate 300
localdomain;localhost.localdomain:memory.mapped.colour COLOUR11
localdomain;localhost.localdomain:memory.mapped.draw LINE2
localdomain;localhost.localdomain:memory.mapped.label mapped
localdomain;localhost.localdomain:memory.mapped.graph_data_size normal
localdomain;localhost.localdomain:memory.mapped.info All mmap()ed pages.
localdomain;localhost.localdomain:memory.apps.graph_data_size normal
localdomain;localhost.localdomain:memory.apps.info Memory used by user-space applications.
localdomain;localhost.localdomain:memory.apps.update_rate 300
localdomain;localhost.localdomain:memory.apps.colour COLOUR0
localdomain;localhost.localdomain:memory.apps.draw AREA
localdomain;localhost.localdomain:memory.apps.label apps
localdomain;localhost.localdomain:memory.committed.update_rate 300
localdomain;localhost.localdomain:memory.committed.colour COLOUR10
localdomain;localhost.localdomain:memory.committed.label committed
localdomain;localhost.localdomain:memory.committed.draw LINE2
localdomain;localhost.localdomain:memory.committed.graph_data_size normal
localdomain;localhost.localdomain:memory.committed.info The amount of memory allocated to programs. Overcommitting is normal, but excessive overcommitment may indicate memory leaks.
localdomain;localhost.localdomain:memory.buffers.graph_data_size normal
localdomain;localhost.localdomain:memory.buffers.info Block device (e.g. hard disk) cache. Also where "dirty" blocks are stored until written.
localdomain;localhost.localdomain:memory.buffers.colour COLOUR5
localdomain;localhost.localdomain:memory.buffers.update_rate 300
localdomain;localhost.localdomain:memory.buffers.draw STACK
localdomain;localhost.localdomain:memory.buffers.label buffers
localdomain;localhost.localdomain:memory.vmalloc_used.info 'VMalloc' (kernel) memory used.
localdomain;localhost.localdomain:memory.vmalloc_used.graph_data_size normal
localdomain;localhost.localdomain:memory.vmalloc_used.label vmalloc_used
localdomain;localhost.localdomain:memory.vmalloc_used.draw LINE2
localdomain;localhost.localdomain:memory.vmalloc_used.colour COLOUR8
localdomain;localhost.localdomain:memory.vmalloc_used.update_rate 300
localdomain;localhost.localdomain:memory.free.info Unused memory. Memory that is not allocated to anything at all.
localdomain;localhost.localdomain:memory.free.graph_data_size normal
localdomain;localhost.localdomain:memory.free.draw STACK
localdomain;localhost.localdomain:memory.free.label unused
localdomain;localhost.localdomain:memory.free.colour COLOUR6
localdomain;localhost.localdomain:memory.free.update_rate 300
localdomain;localhost.localdomain:memory.swap_cache.colour COLOUR2
localdomain;localhost.localdomain:memory.swap_cache.update_rate 300
localdomain;localhost.localdomain:memory.swap_cache.draw STACK
localdomain;localhost.localdomain:memory.swap_cache.label swap_cache
localdomain;localhost.localdomain:memory.swap_cache.graph_data_size normal
localdomain;localhost.localdomain:memory.swap_cache.info A piece of memory that keeps track of pages that have been fetched from swap but not yet been modified.
localdomain;localhost.localdomain:memory.inactive.draw LINE2
localdomain;localhost.localdomain:memory.inactive.label inactive
localdomain;localhost.localdomain:memory.inactive.colour COLOUR15
localdomain;localhost.localdomain:memory.inactive.update_rate 300
localdomain;localhost.localdomain:memory.inactive.info Memory not currently used.
localdomain;localhost.localdomain:memory.inactive.graph_data_size normal
localdomain;localhost.localdomain:memory.page_tables.info Memory used to map between virtual and physical memory addresses.
localdomain;localhost.localdomain:memory.page_tables.graph_data_size normal
localdomain;localhost.localdomain:memory.page_tables.label page_tables
localdomain;localhost.localdomain:memory.page_tables.draw STACK
localdomain;localhost.localdomain:memory.page_tables.colour COLOUR1
localdomain;localhost.localdomain:memory.page_tables.update_rate 300
localdomain;localhost.localdomain:memory.slab.label slab_cache
localdomain;localhost.localdomain:memory.slab.draw STACK
localdomain;localhost.localdomain:memory.slab.colour COLOUR3
localdomain;localhost.localdomain:memory.slab.update_rate 300
localdomain;localhost.localdomain:memory.slab.info Memory used by the kernel (major users are caches like the inode and dentry caches).
localdomain;localhost.localdomain:memory.slab.graph_data_size normal
localdomain;localhost.localdomain:memory.cached.info Cached file data (file content), i.e. the page cache.
localdomain;localhost.localdomain:memory.cached.graph_data_size normal
localdomain;localhost.localdomain:memory.cached.label cache
localdomain;localhost.localdomain:memory.cached.draw STACK
localdomain;localhost.localdomain:memory.cached.update_rate 300
localdomain;localhost.localdomain:memory.cached.colour COLOUR4
localdomain;localhost.localdomain:memory.shmem.colour COLOUR9
localdomain;localhost.localdomain:memory.shmem.update_rate 300
localdomain;localhost.localdomain:memory.shmem.label shmem
localdomain;localhost.localdomain:memory.shmem.draw STACK
localdomain;localhost.localdomain:memory.shmem.graph_data_size normal
localdomain;localhost.localdomain:memory.shmem.info Shared Memory (SYSV SHM segments, tmpfs).
localdomain;localhost.localdomain:memory.active.graph_data_size normal
localdomain;localhost.localdomain:memory.active.info Memory recently used. Not reclaimed unless absolutely necessary.
localdomain;localhost.localdomain:memory.active.colour COLOUR12
localdomain;localhost.localdomain:memory.active.update_rate 300
localdomain;localhost.localdomain:memory.active.label active
localdomain;localhost.localdomain:memory.active.draw LINE2
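
Most memory fields above are direct /proc/meminfo entries scaled from kB to bytes, while "apps" is a derived remainder: physical RAM minus the accounted-for kernel caches. A sketch of that derivation (the exact subtraction is an approximation of the plugin's arithmetic, not a quote of it):

    def meminfo():
        vals = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                vals[key] = int(rest.split()[0]) * 1024   # kB -> bytes
        return vals

    m = meminfo()
    print("cached.value", m["Cached"])
    print("buffers.value", m["Buffers"])
    print("free.value", m["MemFree"])
    print("swap.value", m["SwapTotal"] - m["SwapFree"])
    # "apps" is not a kernel counter: take total RAM, subtract the known
    # caches, and attribute the rest to user-space applications.
    apps = (m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"]
            - m["Slab"] - m["PageTables"] - m["SwapCached"])
    print("apps.value", apps)
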
localdomain;localhost.localdomain:smart_sdb.graph_title S.M.A.R.T. values for drive sdb
localdomain;localhost.localdomain:smart_sdb.graph_vlabel Attribute S.M.A.R.T. value
localdomain;localhost.localdomain:smart_sdb.graph_args --base 1000 --lower-limit 0
localdomain;localhost.localdomain:smart_sdb.graph_category disk
localdomain;localhost.localdomain:smart_sdb.graph_info This graph shows the value of all S.M.A.R.T. attributes of drive sdb (ST8000NM000A-2KE101). smartctl_exit_status is the return value of smartctl. A non-zero return value indicates an error, a potential error, or a fault on the drive.
localdomain;localhost.localdomain:smart_sdb.graph_order Raw_Read_Error_Rate Spin_Up_Time Start_Stop_Count Reallocated_Sector_Ct Seek_Error_Rate Power_On_Hours Spin_Retry_Count Power_Cycle_Count Unknown_Attribute Reported_Uncorrect Command_Timeout Airflow_Temperature_Cel Power_Off_Retract_Count Load_Cycle_Count Temperature_Celsius Hardware_ECC_Recovered Current_Pending_Sector Offline_Uncorrectable UDMA_CRC_Error_Count Head_Flying_Hours Total_LBAs_Written Total_LBAs_Read smartctl_exit_status
localdomain;localhost.localdomain:smart_sdb.Seek_Error_Rate.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Seek_Error_Rate.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Seek_Error_Rate.critical 045:
localdomain;localhost.localdomain:smart_sdb.Seek_Error_Rate.label Seek_Error_Rate
localdomain;localhost.localdomain:smart_sdb.Spin_Up_Time.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Spin_Up_Time.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Spin_Up_Time.critical 000:
localdomain;localhost.localdomain:smart_sdb.Spin_Up_Time.label Spin_Up_Time
localdomain;localhost.localdomain:smart_sdb.Command_Timeout.label Command_Timeout
localdomain;localhost.localdomain:smart_sdb.Command_Timeout.critical 000:
localdomain;localhost.localdomain:smart_sdb.Command_Timeout.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Command_Timeout.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.smartctl_exit_status.warning 1
localdomain;localhost.localdomain:smart_sdb.smartctl_exit_status.label smartctl_exit_status
localdomain;localhost.localdomain:smart_sdb.smartctl_exit_status.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.smartctl_exit_status.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Reallocated_Sector_Ct.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Reallocated_Sector_Ct.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Reallocated_Sector_Ct.label Reallocated_Sector_Ct
localdomain;localhost.localdomain:smart_sdb.Reallocated_Sector_Ct.critical 010:
localdomain;localhost.localdomain:smart_sdb.UDMA_CRC_Error_Count.label UDMA_CRC_Error_Count
localdomain;localhost.localdomain:smart_sdb.UDMA_CRC_Error_Count.critical 000:
localdomain;localhost.localdomain:smart_sdb.UDMA_CRC_Error_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.UDMA_CRC_Error_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Total_LBAs_Read.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Total_LBAs_Read.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Total_LBAs_Read.label Total_LBAs_Read
localdomain;localhost.localdomain:smart_sdb.Total_LBAs_Read.critical 000:
localdomain;localhost.localdomain:smart_sdb.Unknown_Attribute.critical 050:
localdomain;localhost.localdomain:smart_sdb.Unknown_Attribute.label Unknown_Attribute
localdomain;localhost.localdomain:smart_sdb.Unknown_Attribute.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Unknown_Attribute.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Temperature_Celsius.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Temperature_Celsius.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Temperature_Celsius.critical 000:
localdomain;localhost.localdomain:smart_sdb.Temperature_Celsius.label Temperature_Celsius
localdomain;localhost.localdomain:smart_sdb.Raw_Read_Error_Rate.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Raw_Read_Error_Rate.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Raw_Read_Error_Rate.label Raw_Read_Error_Rate
localdomain;localhost.localdomain:smart_sdb.Raw_Read_Error_Rate.critical 044:
localdomain;localhost.localdomain:smart_sdb.Power_Cycle_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Power_Cycle_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Power_Cycle_Count.label Power_Cycle_Count
localdomain;localhost.localdomain:smart_sdb.Power_Cycle_Count.critical 020:
localdomain;localhost.localdomain:smart_sdb.Offline_Uncorrectable.critical 000:
localdomain;localhost.localdomain:smart_sdb.Offline_Uncorrectable.label Offline_Uncorrectable
localdomain;localhost.localdomain:smart_sdb.Offline_Uncorrectable.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Offline_Uncorrectable.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Head_Flying_Hours.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Head_Flying_Hours.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Head_Flying_Hours.label Head_Flying_Hours
localdomain;localhost.localdomain:smart_sdb.Head_Flying_Hours.critical 000:
localdomain;localhost.localdomain:smart_sdb.Start_Stop_Count.label Start_Stop_Count
localdomain;localhost.localdomain:smart_sdb.Start_Stop_Count.critical 020:
localdomain;localhost.localdomain:smart_sdb.Start_Stop_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Start_Stop_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Power_On_Hours.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Power_On_Hours.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Power_On_Hours.label Power_On_Hours
localdomain;localhost.localdomain:smart_sdb.Power_On_Hours.critical 000:
localdomain;localhost.localdomain:smart_sdb.Spin_Retry_Count.label Spin_Retry_Count
localdomain;localhost.localdomain:smart_sdb.Spin_Retry_Count.critical 097:
localdomain;localhost.localdomain:smart_sdb.Spin_Retry_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Spin_Retry_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Total_LBAs_Written.critical 000:
localdomain;localhost.localdomain:smart_sdb.Total_LBAs_Written.label Total_LBAs_Written
localdomain;localhost.localdomain:smart_sdb.Total_LBAs_Written.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Total_LBAs_Written.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Load_Cycle_Count.critical 000:
localdomain;localhost.localdomain:smart_sdb.Load_Cycle_Count.label Load_Cycle_Count
localdomain;localhost.localdomain:smart_sdb.Load_Cycle_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Load_Cycle_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Hardware_ECC_Recovered.critical 000:
localdomain;localhost.localdomain:smart_sdb.Hardware_ECC_Recovered.label Hardware_ECC_Recovered
localdomain;localhost.localdomain:smart_sdb.Hardware_ECC_Recovered.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Hardware_ECC_Recovered.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Airflow_Temperature_Cel.critical 040:
localdomain;localhost.localdomain:smart_sdb.Airflow_Temperature_Cel.label Airflow_Temperature_Cel
localdomain;localhost.localdomain:smart_sdb.Airflow_Temperature_Cel.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Airflow_Temperature_Cel.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Power_Off_Retract_Count.critical 000:
localdomain;localhost.localdomain:smart_sdb.Power_Off_Retract_Count.label Power_Off_Retract_Count
localdomain;localhost.localdomain:smart_sdb.Power_Off_Retract_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Power_Off_Retract_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Current_Pending_Sector.graph_data_size normal
localdomain;localhost.localdomain:smart_sdb.Current_Pending_Sector.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Current_Pending_Sector.critical 000:
localdomain;localhost.localdomain:smart_sdb.Current_Pending_Sector.label Current_Pending_Sector
localdomain;localhost.localdomain:smart_sdb.Reported_Uncorrect.critical 000:
localdomain;localhost.localdomain:smart_sdb.Reported_Uncorrect.label Reported_Uncorrect
localdomain;localhost.localdomain:smart_sdb.Reported_Uncorrect.update_rate 300
localdomain;localhost.localdomain:smart_sdb.Reported_Uncorrect.graph_data_size normal
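
The warning/critical entries above use Munin's range syntax: a bare number is an upper bound, while "lo:hi" bounds the OK range with either side optional, so "010:" fires once the normalized SMART value drops below 10 and "0:3" fires outside 0..3. A minimal sketch of that check in Python (illustrative helper only, not Munin's own code):

    def violates(value, spec):
        """True if `value` falls outside a Munin range such as '010:', ':3' or '0:3'."""
        if ":" not in spec:          # a bare number is treated as the maximum
            return value > float(spec)
        lo, _, hi = spec.partition(":")
        if lo and value < float(lo):
            return True
        if hi and value > float(hi):
            return True
        return False

    print(violates(100, "010:"))  # False: normalized value comfortably above 10
    print(violates(5, "010:"))    # True: below the reallocated-sector threshold
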
localdomain;localhost.localdomain:diskstats_latency.sda.graph_title Average latency for /dev/sda
localdomain;localhost.localdomain:diskstats_latency.sda.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.sda.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.sda.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.sda.graph_info This graph shows average waiting time/latency for different categories of disk operations.   The times that include the queue times indicate how busy your system is.  If the waiting time hits 1 second then your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.sda.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.sda.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.sda.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.sda.avgwrwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sda.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.sda.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sda.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sda.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sda.avgwrwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sda.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sda.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.sda.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sda.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sda.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sda.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.sda.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.sda.avgrdwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sda.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.sda.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sda.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.sda.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sda.svctm.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sda.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sda.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.sda.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sda.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.sda.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sda.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sda.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sda.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sda.avgwait.label IO Wait time
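
These per-device wait times are derived from the kernel's /proc/diskstats counters: the average wait is the growth in cumulative time-spent-on-I/O divided by the growth in completed operations over the sampling interval. A rough Python sketch of the idea (the real plugin is Perl and also handles counter wraps and device filtering):

    import time

    def diskstats(dev):
        """Return (reads, ms_reading, writes, ms_writing) for a block device."""
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    c = list(map(int, p[3:]))
                    return c[0], c[3], c[4], c[7]
        raise ValueError("no such device: %s" % dev)

    a = diskstats("sda")
    time.sleep(5)
    b = diskstats("sda")
    reads, writes = b[0] - a[0], b[2] - a[2]
    avgrdwait = (b[1] - a[1]) / reads / 1000.0 if reads else 0.0    # seconds
    avgwrwait = (b[3] - a[3]) / writes / 1000.0 if writes else 0.0  # seconds
    print("avgrdwait=%.4fs avgwrwait=%.4fs" % (avgrdwait, avgwrwait))
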
localdomain;localhost.localdomain:diskstats_latency.sdc.graph_title Average latency for /dev/sdc
localdomain;localhost.localdomain:diskstats_latency.sdc.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.sdc.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.sdc.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.sdc.graph_info This graph shows average waiting time/latency for different categories of disk operations.   The times that include the queue times indicate how busy your system is.  If the waiting time hits 1 second then your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.sdc.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwrwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwrwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.sdc.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdc.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.sdc.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdc.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdc.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdc.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.sdc.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.sdc.avgrdwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdc.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.sdc.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdc.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdc.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.sdc.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdc.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.sdc.svctm.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdc.avgwait.label IO Wait time
localdomain;localhost.localdomain:diskstats_throughput.graph_title Throughput per device
localdomain;localhost.localdomain:diskstats_throughput.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.graph_vlabel Bytes/${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.graph_width 400
localdomain;localhost.localdomain:diskstats_throughput.graph_info This graph shows averaged throughput for the given disk in bytes.  Higher throughput is usually linked with higher service time/latency (separate graph).  The graph base is 1024 yielding Kibi- and Mebi-bytes.
localdomain;localhost.localdomain:diskstats_throughput.graph_order loop0_rdbytes loop0_wrbytes loop1_rdbytes loop1_wrbytes loop2_rdbytes loop2_wrbytes loop3_rdbytes loop3_wrbytes loop4_rdbytes loop4_wrbytes loop5_rdbytes loop5_wrbytes loop6_rdbytes loop6_wrbytes loop7_rdbytes loop7_wrbytes loop8_rdbytes loop8_wrbytes sda_rdbytes sda_wrbytes sdb_rdbytes sdb_wrbytes sdc_rdbytes sdc_wrbytes sdd_rdbytes sdd_wrbytes
localdomain;localhost.localdomain:diskstats_throughput.loop3_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop3_rdbytes.label loop3
localdomain;localhost.localdomain:diskstats_throughput.loop3_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop3_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop3_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop3_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop3_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop5_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop5_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop5_rdbytes.label loop5
localdomain;localhost.localdomain:diskstats_throughput.loop5_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop5_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop5_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop5_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop1_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop1_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop1_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop1_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop1_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop1_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop1_rdbytes.label loop1
localdomain;localhost.localdomain:diskstats_throughput.loop7_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop7_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop7_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop7_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop7_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop7_rdbytes.label loop7
localdomain;localhost.localdomain:diskstats_throughput.loop7_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdc_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sdc_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdc_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdc_wrbytes.label sdc
localdomain;localhost.localdomain:diskstats_throughput.sdc_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdc_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdc_wrbytes.negative sdc_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop0_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop0_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop0_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop0_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop0_wrbytes.label loop0
localdomain;localhost.localdomain:diskstats_throughput.loop0_wrbytes.negative loop0_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop0_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop0_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop0_rdbytes.label loop0
localdomain;localhost.localdomain:diskstats_throughput.loop0_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop0_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop0_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop0_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop0_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop7_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop7_wrbytes.negative loop7_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop7_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop7_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop7_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop7_wrbytes.label loop7
localdomain;localhost.localdomain:diskstats_throughput.loop7_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop1_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop1_wrbytes.negative loop1_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop1_wrbytes.label loop1
localdomain;localhost.localdomain:diskstats_throughput.loop1_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop1_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop1_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop1_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop5_wrbytes.negative loop5_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop5_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop5_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop5_wrbytes.label loop5
localdomain;localhost.localdomain:diskstats_throughput.loop5_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop5_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop5_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop3_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop3_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop3_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop3_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop3_wrbytes.label loop3
localdomain;localhost.localdomain:diskstats_throughput.loop3_wrbytes.negative loop3_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop3_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdd_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdd_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdd_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sdd_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdd_rdbytes.label sdd
localdomain;localhost.localdomain:diskstats_throughput.sdd_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.sdd_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sda_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sda_wrbytes.negative sda_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.sda_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sda_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sda_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sda_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sda_wrbytes.label sda
localdomain;localhost.localdomain:diskstats_throughput.sdb_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdb_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sdb_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdb_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdb_rdbytes.label sdb
localdomain;localhost.localdomain:diskstats_throughput.sdb_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.sdb_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop4_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop4_rdbytes.label loop4
localdomain;localhost.localdomain:diskstats_throughput.loop4_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop4_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop4_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop4_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop4_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop8_wrbytes.negative loop8_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop8_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop8_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop8_wrbytes.label loop8
localdomain;localhost.localdomain:diskstats_throughput.loop8_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop8_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop8_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop2_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop2_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop2_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop2_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop2_wrbytes.label loop2
localdomain;localhost.localdomain:diskstats_throughput.loop2_wrbytes.negative loop2_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop2_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop6_wrbytes.label loop6
localdomain;localhost.localdomain:diskstats_throughput.loop6_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop6_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop6_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop6_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop6_wrbytes.negative loop6_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop6_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdd_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdd_wrbytes.negative sdd_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.sdd_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdd_wrbytes.label sdd
localdomain;localhost.localdomain:diskstats_throughput.sdd_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdd_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdd_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sda_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sda_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sda_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sda_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.sda_rdbytes.label sda
localdomain;localhost.localdomain:diskstats_throughput.sda_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sda_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdb_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sdb_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdb_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdb_wrbytes.label sdb
localdomain;localhost.localdomain:diskstats_throughput.sdb_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdb_wrbytes.negative sdb_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.sdb_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop6_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop6_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop6_rdbytes.label loop6
localdomain;localhost.localdomain:diskstats_throughput.loop6_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop6_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop6_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop6_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop2_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop2_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop2_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop2_rdbytes.label loop2
localdomain;localhost.localdomain:diskstats_throughput.loop2_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop2_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop2_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop8_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop8_rdbytes.label loop8
localdomain;localhost.localdomain:diskstats_throughput.loop8_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop8_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop8_rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop8_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop8_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop4_wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop4_wrbytes.negative loop4_rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop4_wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop4_wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop4_wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop4_wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop4_wrbytes.label loop4
localdomain;localhost.localdomain:diskstats_throughput.sdc_rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdc_rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdc_rdbytes.label sdc
localdomain;localhost.localdomain:diskstats_throughput.sdc_rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.sdc_rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdc_rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdc_rdbytes.update_rate 300
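
The rdbytes/wrbytes pairs above use Munin's .negative convention: the write series is drawn above the axis and its partner read series is mirrored below it, which is why every *_rdbytes field also carries "graph no". The byte counts themselves come from the sector counters in /proc/diskstats (512-byte units); roughly, as a sketch:

    import time

    def sectors(dev):
        """(sectors_read, sectors_written) for a device, from /proc/diskstats."""
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    return int(p[5]), int(p[9])  # 3rd and 7th counters
        raise ValueError(dev)

    a = sectors("sda")
    time.sleep(5)
    b = sectors("sda")
    rd_bps = (b[0] - a[0]) * 512 / 5.0   # bytes/s read (plotted negative)
    wr_bps = (b[1] - a[1]) * 512 / 5.0   # bytes/s written (plotted positive)
    print(rd_bps, wr_bps)
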
localdomain;localhost.localdomain:postfix_mailvolume.graph_title Postfix bytes throughput
localdomain;localhost.localdomain:postfix_mailvolume.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:postfix_mailvolume.graph_vlabel bytes / ${graph_period}
localdomain;localhost.localdomain:postfix_mailvolume.graph_scale yes
localdomain;localhost.localdomain:postfix_mailvolume.graph_category postfix
localdomain;localhost.localdomain:postfix_mailvolume.graph_order volume
localdomain;localhost.localdomain:postfix_mailvolume.volume.update_rate 300
localdomain;localhost.localdomain:postfix_mailvolume.volume.min 0
localdomain;localhost.localdomain:postfix_mailvolume.volume.type DERIVE
localdomain;localhost.localdomain:postfix_mailvolume.volume.label delivered volume
localdomain;localhost.localdomain:postfix_mailvolume.volume.graph_data_size normal
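
"type DERIVE" with "min 0" means RRDtool stores the per-second growth of an ever-increasing byte counter and discards negative rates (for example after a counter reset) as unknown rather than plotting a huge spike. In effect:

    def derive_rate(prev, cur, dt, minimum=0):
        """Per-second rate of a DERIVE data source; None models RRD's UNKNOWN."""
        rate = (cur - prev) / float(dt)
        return None if rate < minimum else rate

    print(derive_rate(1_000_000, 1_150_000, 300))  # 500.0 bytes/s delivered
    print(derive_rate(1_150_000, 0, 300))          # None: counter reset, dropped by min 0
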
localdomain;localhost.localdomain:diskstats_utilization.loop6.graph_title Disk utilization for /dev/loop6
localdomain;localhost.localdomain:diskstats_utilization.loop6.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.loop6.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.loop6.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.loop6.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.loop6.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.loop6.util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop6.util.label Utilization
localdomain;localhost.localdomain:diskstats_utilization.loop6.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop6.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop6.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop6.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000msec for a given second, the device is nearly 100% saturated.
localdomain;localhost.localdomain:diskstats_utilization.loop6.util.graph_data_size normal
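
Utilization is the fraction of wall-clock time the device spent with I/O in flight: the kernel's cumulative "time doing I/O" counter (milliseconds) divided by the elapsed interval. A sketch:

    import time

    def io_ms(dev):
        """Cumulative milliseconds spent doing I/O, from /proc/diskstats."""
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    return int(p[12])  # 10th counter: time spent doing I/Os (ms)
        raise ValueError(dev)

    a = io_ms("sda")
    time.sleep(5)
    util = (io_ms("sda") - a) / (5 * 1000.0) * 100.0  # percent busy
    print("%.1f%% busy" % util)
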
localdomain;localhost.localdomain:fw_packets.graph_title Firewall Throughput
localdomain;localhost.localdomain:fw_packets.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:fw_packets.graph_vlabel Packets/${graph_period}
localdomain;localhost.localdomain:fw_packets.graph_category network
localdomain;localhost.localdomain:fw_packets.graph_order received forwarded
localdomain;localhost.localdomain:fw_packets.received.graph_data_size normal
localdomain;localhost.localdomain:fw_packets.received.draw AREA
localdomain;localhost.localdomain:fw_packets.received.label Received
localdomain;localhost.localdomain:fw_packets.received.min 0
localdomain;localhost.localdomain:fw_packets.received.type DERIVE
localdomain;localhost.localdomain:fw_packets.received.update_rate 300
localdomain;localhost.localdomain:fw_packets.forwarded.graph_data_size normal
localdomain;localhost.localdomain:fw_packets.forwarded.update_rate 300
localdomain;localhost.localdomain:fw_packets.forwarded.type DERIVE
localdomain;localhost.localdomain:fw_packets.forwarded.min 0
localdomain;localhost.localdomain:fw_packets.forwarded.label Forwarded
localdomain;localhost.localdomain:fw_packets.forwarded.draw LINE2
localdomain;localhost.localdomain:diskstats_latency.loop4.graph_title Average latency for /dev/loop4
localdomain;localhost.localdomain:diskstats_latency.loop4.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.loop4.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.loop4.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.loop4.graph_info This graph shows average waiting time/latency for different categories of disk operations.   The times that include the queue times indicate how busy your system is.  If the waiting time hits 1 second then your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.loop4.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwrwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwrwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop4.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.loop4.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop4.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop4.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.loop4.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop4.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.loop4.svctm.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop4.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop4.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop4.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop4.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop4.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop4.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop4.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop4.avgrdwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwait.label IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop4.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_iops.loop0.graph_title IOs for /dev/loop0
localdomain;localhost.localdomain:diskstats_iops.loop0.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.loop0.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.loop0.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.loop0.graph_info This graph shows the number of IO operations per second and the average size of these requests.  Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph).  Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of the 1024 used prior to 2.0 beta 3.  This is because the base for this graph is 1000 not 1024.
localdomain;localhost.localdomain:diskstats_iops.loop0.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.loop0.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop0.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.loop0.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.loop0.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop0.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop0.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop0.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop0.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop0.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.loop0.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop0.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop0.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop0.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
localdomain;localhost.localdomain:diskstats_iops.loop0.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.loop0.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop0.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop0.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop0.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop0.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop0.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop0.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop0.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop0.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop0.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.loop0.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop0.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop0.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop0.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop0.avgrdrqsz.type GAUGE
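
The average request size series divides bytes moved by operations completed; per the info text it is reported in 1000-based kilobytes even though a sector is 512 bytes. A sketch of the write side, using an illustrative helper:

    import time

    def wr_counters(dev):
        """(writes_completed, sectors_written) from /proc/diskstats."""
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    return int(p[7]), int(p[9])
        raise ValueError(dev)

    a = wr_counters("sda")
    time.sleep(5)
    b = wr_counters("sda")
    ios = b[0] - a[0]
    avgwrrqsz = (b[1] - a[1]) * 512 / 1000.0 / ios if ios else 0.0  # kB per request
    print(avgwrrqsz)
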
localdomain;localhost.localdomain:load.graph_title Load average
localdomain;localhost.localdomain:load.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:load.graph_vlabel load
localdomain;localhost.localdomain:load.graph_scale no
localdomain;localhost.localdomain:load.graph_category system
localdomain;localhost.localdomain:load.graph_info The load average of the machine describes how many processes are in the run-queue (scheduled to run "immediately").
localdomain;localhost.localdomain:load.graph_order load
localdomain;localhost.localdomain:load.load.label load
localdomain;localhost.localdomain:load.load.info 5 minute load average
localdomain;localhost.localdomain:load.load.graph_data_size normal
localdomain;localhost.localdomain:load.load.update_rate 300
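
The single "load" field is the 5-minute run-queue average, presumably read from the second column of /proc/loadavg. Reading it is nearly a one-liner:

    # /proc/loadavg looks like: "0.00 0.01 0.05 1/123 4567"
    with open("/proc/loadavg") as f:
        load1, load5, load15 = (float(x) for x in f.read().split()[:3])
    print(load5)  # the value graphed here
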
localdomain;localhost.localdomain:diskstats_iops.loop2.graph_title IOs for /dev/loop2
localdomain;localhost.localdomain:diskstats_iops.loop2.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.loop2.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.loop2.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.loop2.graph_info This graph shows the number of IO operations per second and the average size of these requests.  Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph).  Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of the 1024 used prior to 2.0 beta 3.  This is because the base for this graph is 1000 not 1024.
localdomain;localhost.localdomain:diskstats_iops.loop2.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.loop2.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop2.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop2.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop2.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop2.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop2.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.loop2.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop2.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop2.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.loop2.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop2.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop2.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop2.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop2.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.loop2.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop2.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.loop2.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
localdomain;localhost.localdomain:diskstats_iops.loop2.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop2.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop2.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop2.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop2.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.loop2.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop2.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop2.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop2.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop2.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop2.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop2.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdc.graph_title Disk throughput for /dev/sdc
localdomain;localhost.localdomain:diskstats_throughput.sdc.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.sdc.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.sdc.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.sdc.graph_info This graph shows disk throughput in bytes per ${graph_period}.  The graph base is 1024, so KB means kibibytes and so on.
localdomain;localhost.localdomain:diskstats_throughput.sdc.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.sdc.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdc.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdc.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.sdc.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdc.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.sdc.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.sdc.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdc.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.sdc.wrbytes.negative rdbytes
localdomain;localhost.localdomain:diskstats_throughput.sdc.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.sdc.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.sdc.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.sdc.wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.sdc.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.sdc.graph_title Disk utilization for /dev/sdc
localdomain;localhost.localdomain:diskstats_utilization.sdc.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.sdc.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.sdc.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.sdc.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.sdc.graph_order util
localdomain;localhost.localdomain:diskstats_utilization.sdc.util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.sdc.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000msec for a given second, the device is nearly 100% saturated.
localdomain;localhost.localdomain:diskstats_utilization.sdc.util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.sdc.util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.sdc.util.min 0
localdomain;localhost.localdomain:diskstats_utilization.sdc.util.label Utilization
localdomain;localhost.localdomain:diskstats_utilization.sdc.util.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdd.graph_title IOs for /dev/sdd
localdomain;localhost.localdomain:diskstats_iops.sdd.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.sdd.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.sdd.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.sdd.graph_info This graph shows the number of IO operations per second and the average size of these requests.  Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph).  Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of the 1024 used prior to 2.0 beta 3.  This is because the base for this graph is 1000 not 1024.
localdomain;localhost.localdomain:diskstats_iops.sdd.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.sdd.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdd.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.sdd.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdd.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdd.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdd.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.sdd.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.sdd.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdd.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.sdd.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdd.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdd.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.sdd.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.sdd.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdd.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.sdd.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.sdd.wrio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdd.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.sdd.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdd.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.sdd.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdd.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.sdd.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.sdd.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.sdd.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.sdd.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.sdd.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
localdomain;localhost.localdomain:diskstats_iops.sdd.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.sdd.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop8.graph_title Disk throughput for /dev/loop8
localdomain;localhost.localdomain:diskstats_throughput.loop8.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.loop8.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.loop8.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.loop8.graph_info This graph shows disk throughput in bytes per ${graph_period}.  The graph base is 1024, so KB means kibibytes and so on.
localdomain;localhost.localdomain:diskstats_throughput.loop8.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.loop8.rdbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop8.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop8.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.loop8.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop8.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop8.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop8.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop8.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop8.wrbytes.negative rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop8.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.loop8.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop8.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop8.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop8.wrbytes.type GAUGE
localdomain;localhost.localdomain:open_files.graph_title File table usage
localdomain;localhost.localdomain:open_files.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:open_files.graph_vlabel number of open files
localdomain;localhost.localdomain:open_files.graph_category system
localdomain;localhost.localdomain:open_files.graph_info This graph monitors the Linux open files table.
localdomain;localhost.localdomain:open_files.graph_order used
localdomain;localhost.localdomain:open_files.used.update_rate 300
localdomain;localhost.localdomain:open_files.used.label open files
localdomain;localhost.localdomain:open_files.used.graph_data_size normal
localdomain;localhost.localdomain:open_files.used.info The number of currently open files.
localdomain;localhost.localdomain:open_files.used.critical 9038904596117680128
localdomain;localhost.localdomain:open_files.used.warning 8485502273906394112
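
The open-files count comes from /proc/sys/fs/file-nr, which holds three numbers: allocated handles, allocated-but-unused handles, and the system maximum; "used" is the first minus the second. A sketch:

    with open("/proc/sys/fs/file-nr") as f:
        allocated, unused, maximum = (int(x) for x in f.read().split())
    used = allocated - unused
    print("%d of %d file handles in use" % (used, maximum))
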
localdomain;localhost.localdomain:diskstats_latency.sdb.graph_title Average latency for /dev/sdb
localdomain;localhost.localdomain:diskstats_latency.sdb.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.sdb.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.sdb.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.sdb.graph_info This graph shows average waiting time/latency for different categories of disk operations.   The times that include the queue times indicate how busy your system is.  If the waiting time hits 1 second then your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.sdb.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwrwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwrwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdb.avgrdwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdb.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.sdb.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.sdb.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdb.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdb.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdb.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.sdb.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdb.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdb.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.sdb.svctm.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdb.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdb.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.sdb.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdb.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwait.label IO Wait time
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdb.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.graph_title Disk latency per device
localdomain;localhost.localdomain:diskstats_latency.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_latency.graph_vlabel Average IO Wait (seconds)
localdomain;localhost.localdomain:diskstats_latency.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.graph_width 400
localdomain;localhost.localdomain:diskstats_latency.graph_order loop0_avgwait loop1_avgwait loop2_avgwait loop3_avgwait loop4_avgwait loop5_avgwait loop6_avgwait loop7_avgwait sda_avgwait sdb_avgwait sdc_avgwait sdd_avgwait
localdomain;localhost.localdomain:diskstats_latency.loop7_avgwait.label loop7
localdomain;localhost.localdomain:diskstats_latency.loop7_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop7_avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop7_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop7_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop7_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.loop7_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop1_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop1_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop1_avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop1_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop1_avgwait.label loop1
localdomain;localhost.localdomain:diskstats_latency.loop1_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop1_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.loop4_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.loop4_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop4_avgwait.label loop4
localdomain;localhost.localdomain:diskstats_latency.loop4_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop4_avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop4_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop4_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop3_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.loop3_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop3_avgwait.label loop3
localdomain;localhost.localdomain:diskstats_latency.loop3_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop3_avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop3_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop3_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop5_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop5_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop5_avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop5_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop5_avgwait.label loop5
localdomain;localhost.localdomain:diskstats_latency.loop5_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop5_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.sda_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sda_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sda_avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sda_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sda_avgwait.label sda
localdomain;localhost.localdomain:diskstats_latency.sda_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sda_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.loop2_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.loop2_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop2_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop2_avgwait.label loop2
localdomain;localhost.localdomain:diskstats_latency.loop2_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop2_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop2_avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop0_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop0_avgwait.label loop0
localdomain;localhost.localdomain:diskstats_latency.loop0_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop0_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop0_avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop0_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.loop0_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdd_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdd_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.sdd_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdd_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdd_avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdd_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdd_avgwait.label sdd
localdomain;localhost.localdomain:diskstats_latency.sdc_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.sdc_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdc_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdc_avgwait.label sdc
localdomain;localhost.localdomain:diskstats_latency.sdc_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdc_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdc_avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdb_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.sdb_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.sdb_avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.sdb_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.sdb_avgwait.label sdb
localdomain;localhost.localdomain:diskstats_latency.sdb_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.sdb_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.loop6_avgwait.info Average wait time for an I/O request
localdomain;localhost.localdomain:diskstats_latency.loop6_avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop6_avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop6_avgwait.label loop6
localdomain;localhost.localdomain:diskstats_latency.loop6_avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop6_avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop6_avgwait.update_rate 300
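Every entry in this datafile follows the same key grammar: group;host:plugin.field.attribute value. A minimal Python sketch of that decomposition (parse_line is a hypothetical helper, not part of Munin; note that multigraph plugin names such as diskstats_latency.loop3 themselves contain dots, so splitting on the first dot is only a heuristic):

    # Hedged sketch: decompose one datafile line into its parts.
    def parse_line(line):
        key, _, value = line.partition(" ")        # value is everything after the first space
        grouphost, _, rest = key.partition(":")
        group, _, host = grouphost.partition(";")
        plugin, _, attr = rest.partition(".")      # heuristic: multigraph names also contain dots
        return group, host, plugin, attr, value

    print(parse_line("localdomain;localhost.localdomain:"
                     "diskstats_latency.sda_avgwait.label sda"))
    # -> ('localdomain', 'localhost.localdomain', 'diskstats_latency',
    #     'sda_avgwait.label', 'sda')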
localdomain;localhost.localdomain:munin_stats.graph_title Munin processing time
localdomain;localhost.localdomain:munin_stats.graph_info This graph shows the run time of the four different processes making up a munin-master run.  Munin-master is run from cron every 5 minutes and we want each of the programs in munin-master to complete before the next instance starts.  Especially munin-update and munin-graph are time consuming and their run times bear watching.  If munin-update takes too long to run, please see the munin-update graph to determine which host is slowing it down.  If munin-graph is running too slowly, you need to get clever (email the munin-users mailing list) unless you can buy a faster computer with better disks to run munin on.
localdomain;localhost.localdomain:munin_stats.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:munin_stats.graph_scale yes
localdomain;localhost.localdomain:munin_stats.graph_vlabel seconds
localdomain;localhost.localdomain:munin_stats.graph_category munin
localdomain;localhost.localdomain:munin_stats.graph_order update graph html limits
localdomain;localhost.localdomain:munin_stats.html.draw AREASTACK
localdomain;localhost.localdomain:munin_stats.html.label munin html
localdomain;localhost.localdomain:munin_stats.html.graph_data_size normal
localdomain;localhost.localdomain:munin_stats.html.update_rate 300
localdomain;localhost.localdomain:munin_stats.update.draw AREASTACK
localdomain;localhost.localdomain:munin_stats.update.label munin update
localdomain;localhost.localdomain:munin_stats.update.update_rate 300
localdomain;localhost.localdomain:munin_stats.update.warning 240
localdomain;localhost.localdomain:munin_stats.update.critical 285
localdomain;localhost.localdomain:munin_stats.update.graph_data_size normal
localdomain;localhost.localdomain:munin_stats.graph.critical 285
localdomain;localhost.localdomain:munin_stats.graph.warning 240
localdomain;localhost.localdomain:munin_stats.graph.graph_data_size normal
localdomain;localhost.localdomain:munin_stats.graph.label munin graph
localdomain;localhost.localdomain:munin_stats.graph.draw AREASTACK
localdomain;localhost.localdomain:munin_stats.graph.update_rate 300
localdomain;localhost.localdomain:munin_stats.limits.draw AREASTACK
localdomain;localhost.localdomain:munin_stats.limits.label munin limits
localdomain;localhost.localdomain:munin_stats.limits.graph_data_size normal
localdomain;localhost.localdomain:munin_stats.limits.update_rate 300
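The warning (240) and critical (285) thresholds on the update and graph fields above sit just below the 300-second update_rate, i.e. the cron interval the graph_info describes. A hedged sketch of that thresholding (status is a hypothetical helper, not Munin code):

    # The 240/285 s thresholds fire before a run can overlap the next
    # 300 s cron slot.
    def status(runtime_s, warning=240, critical=285):
        if runtime_s >= critical:
            return "critical"
        if runtime_s >= warning:
            return "warning"
        return "ok"

    assert status(120) == "ok"
    assert status(250) == "warning"
    assert status(290) == "critical"   # dangerously close to the 300 s interval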
localdomain;localhost.localdomain:processes.graph_title Processes
localdomain;localhost.localdomain:processes.graph_info This graph shows the number of processes
localdomain;localhost.localdomain:processes.graph_category processes
localdomain;localhost.localdomain:processes.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:processes.graph_vlabel Number of processes
localdomain;localhost.localdomain:processes.graph_order sleeping idle stopped zombie dead paging uninterruptible runnable processes
localdomain;localhost.localdomain:processes.idle.info The number of idle kernel threads (>= 4.2 kernels only).
localdomain;localhost.localdomain:processes.idle.graph_data_size normal
localdomain;localhost.localdomain:processes.idle.draw STACK
localdomain;localhost.localdomain:processes.idle.label idle
localdomain;localhost.localdomain:processes.idle.update_rate 300
localdomain;localhost.localdomain:processes.idle.colour 4169e1
localdomain;localhost.localdomain:processes.processes.info The total number of processes.
localdomain;localhost.localdomain:processes.processes.graph_data_size normal
localdomain;localhost.localdomain:processes.processes.label total
localdomain;localhost.localdomain:processes.processes.draw LINE1
localdomain;localhost.localdomain:processes.processes.colour c0c0c0
localdomain;localhost.localdomain:processes.processes.update_rate 300
localdomain;localhost.localdomain:processes.stopped.graph_data_size normal
localdomain;localhost.localdomain:processes.stopped.info The number of stopped or traced processes.
localdomain;localhost.localdomain:processes.stopped.update_rate 300
localdomain;localhost.localdomain:processes.stopped.colour cc0000
localdomain;localhost.localdomain:processes.stopped.draw STACK
localdomain;localhost.localdomain:processes.stopped.label stopped
localdomain;localhost.localdomain:processes.sleeping.colour 0022ff
localdomain;localhost.localdomain:processes.sleeping.update_rate 300
localdomain;localhost.localdomain:processes.sleeping.draw AREA
localdomain;localhost.localdomain:processes.sleeping.label sleeping
localdomain;localhost.localdomain:processes.sleeping.graph_data_size normal
localdomain;localhost.localdomain:processes.sleeping.info The number of sleeping processes.
localdomain;localhost.localdomain:processes.paging.update_rate 300
localdomain;localhost.localdomain:processes.paging.colour 00aaaa
localdomain;localhost.localdomain:processes.paging.draw STACK
localdomain;localhost.localdomain:processes.paging.label paging
localdomain;localhost.localdomain:processes.paging.graph_data_size normal
localdomain;localhost.localdomain:processes.paging.info The number of paging processes (<2.6 kernels only).
localdomain;localhost.localdomain:processes.zombie.colour 990000
localdomain;localhost.localdomain:processes.zombie.update_rate 300
localdomain;localhost.localdomain:processes.zombie.label zombie
localdomain;localhost.localdomain:processes.zombie.draw STACK
localdomain;localhost.localdomain:processes.zombie.graph_data_size normal
localdomain;localhost.localdomain:processes.zombie.info The number of defunct ('zombie') processes (process terminated and parent not waiting).
localdomain;localhost.localdomain:processes.dead.draw STACK
localdomain;localhost.localdomain:processes.dead.label dead
localdomain;localhost.localdomain:processes.dead.colour ff0000
localdomain;localhost.localdomain:processes.dead.update_rate 300
localdomain;localhost.localdomain:processes.dead.info The number of dead processes.
localdomain;localhost.localdomain:processes.dead.graph_data_size normal
localdomain;localhost.localdomain:processes.runnable.info The number of runnable processes (on the run queue).
localdomain;localhost.localdomain:processes.runnable.graph_data_size normal
localdomain;localhost.localdomain:processes.runnable.label runnable
localdomain;localhost.localdomain:processes.runnable.draw STACK
localdomain;localhost.localdomain:processes.runnable.update_rate 300
localdomain;localhost.localdomain:processes.runnable.colour 22ff22
localdomain;localhost.localdomain:processes.uninterruptible.colour ffa500
localdomain;localhost.localdomain:processes.uninterruptible.update_rate 300
localdomain;localhost.localdomain:processes.uninterruptible.label uninterruptible
localdomain;localhost.localdomain:processes.uninterruptible.draw STACK
localdomain;localhost.localdomain:processes.uninterruptible.graph_data_size normal
localdomain;localhost.localdomain:processes.uninterruptible.info The number of uninterruptible processes (usually IO).
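The state names above map onto the single-letter states Linux reports in /proc/<pid>/stat. A hedged sketch of how such counts could be gathered (process_states is a hypothetical helper, not the plugin's actual code):

    import os
    from collections import Counter

    # Count process states from the letter after the last ')' in
    # /proc/<pid>/stat: R=runnable, S=sleeping, D=uninterruptible,
    # Z=zombie, T=stopped, I=idle kernel thread (>= 4.2 kernels).
    def process_states():
        states = Counter()
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/stat") as f:
                    data = f.read()
                states[data.rsplit(")", 1)[1].split()[0]] += 1
            except OSError:      # process vanished while we were reading
                pass
        return states

    print(process_states())     # e.g. Counter({'S': 180, 'I': 60, 'R': 2, ...})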
localdomain;localhost.localdomain:cpu.graph_title CPU usage
localdomain;localhost.localdomain:cpu.graph_order system user nice idle iowait irq softirq steal guest
localdomain;localhost.localdomain:cpu.graph_args --base 1000 -r --lower-limit 0 --upper-limit 400
localdomain;localhost.localdomain:cpu.graph_vlabel %
localdomain;localhost.localdomain:cpu.graph_scale no
localdomain;localhost.localdomain:cpu.graph_info This graph shows how CPU time is spent.
localdomain;localhost.localdomain:cpu.graph_category system
localdomain;localhost.localdomain:cpu.graph_period second
localdomain;localhost.localdomain:cpu.idle.info Idle CPU time
localdomain;localhost.localdomain:cpu.idle.graph_data_size normal
localdomain;localhost.localdomain:cpu.idle.draw STACK
localdomain;localhost.localdomain:cpu.idle.label idle
localdomain;localhost.localdomain:cpu.idle.min 0
localdomain;localhost.localdomain:cpu.idle.type DERIVE
localdomain;localhost.localdomain:cpu.idle.update_rate 300
localdomain;localhost.localdomain:cpu.softirq.graph_data_size normal
localdomain;localhost.localdomain:cpu.softirq.info CPU time spent handling "batched" interrupts
localdomain;localhost.localdomain:cpu.softirq.update_rate 300
localdomain;localhost.localdomain:cpu.softirq.type DERIVE
localdomain;localhost.localdomain:cpu.softirq.min 0
localdomain;localhost.localdomain:cpu.softirq.label softirq
localdomain;localhost.localdomain:cpu.softirq.draw STACK
localdomain;localhost.localdomain:cpu.system.min 0
localdomain;localhost.localdomain:cpu.system.type DERIVE
localdomain;localhost.localdomain:cpu.system.update_rate 300
localdomain;localhost.localdomain:cpu.system.draw AREA
localdomain;localhost.localdomain:cpu.system.label system
localdomain;localhost.localdomain:cpu.system.graph_data_size normal
localdomain;localhost.localdomain:cpu.system.info CPU time spent by the kernel in system activities
localdomain;localhost.localdomain:cpu.nice.label nice
localdomain;localhost.localdomain:cpu.nice.draw STACK
localdomain;localhost.localdomain:cpu.nice.update_rate 300
localdomain;localhost.localdomain:cpu.nice.min 0
localdomain;localhost.localdomain:cpu.nice.type DERIVE
localdomain;localhost.localdomain:cpu.nice.info CPU time spent by nice(1)d programs
localdomain;localhost.localdomain:cpu.nice.graph_data_size normal
localdomain;localhost.localdomain:cpu.irq.label irq
localdomain;localhost.localdomain:cpu.irq.draw STACK
localdomain;localhost.localdomain:cpu.irq.update_rate 300
localdomain;localhost.localdomain:cpu.irq.min 0
localdomain;localhost.localdomain:cpu.irq.type DERIVE
localdomain;localhost.localdomain:cpu.irq.info CPU time spent handling interrupts
localdomain;localhost.localdomain:cpu.irq.graph_data_size normal
localdomain;localhost.localdomain:cpu.user.type DERIVE
localdomain;localhost.localdomain:cpu.user.min 0
localdomain;localhost.localdomain:cpu.user.update_rate 300
localdomain;localhost.localdomain:cpu.user.draw STACK
localdomain;localhost.localdomain:cpu.user.label user
localdomain;localhost.localdomain:cpu.user.graph_data_size normal
localdomain;localhost.localdomain:cpu.user.info CPU time spent by normal programs and daemons
localdomain;localhost.localdomain:cpu.steal.graph_data_size normal
localdomain;localhost.localdomain:cpu.steal.info The time that a virtual CPU had runnable tasks, but the virtual CPU itself was not running
localdomain;localhost.localdomain:cpu.steal.type DERIVE
localdomain;localhost.localdomain:cpu.steal.min 0
localdomain;localhost.localdomain:cpu.steal.update_rate 300
localdomain;localhost.localdomain:cpu.steal.draw STACK
localdomain;localhost.localdomain:cpu.steal.label steal
localdomain;localhost.localdomain:cpu.iowait.min 0
localdomain;localhost.localdomain:cpu.iowait.type DERIVE
localdomain;localhost.localdomain:cpu.iowait.update_rate 300
localdomain;localhost.localdomain:cpu.iowait.draw STACK
localdomain;localhost.localdomain:cpu.iowait.label iowait
localdomain;localhost.localdomain:cpu.iowait.graph_data_size normal
localdomain;localhost.localdomain:cpu.iowait.info CPU time spent waiting for I/O operations to finish when there is nothing else to do.
localdomain;localhost.localdomain:cpu.guest.info The time spent running a virtual CPU for guest operating systems under the control of the Linux kernel.
localdomain;localhost.localdomain:cpu.guest.graph_data_size normal
localdomain;localhost.localdomain:cpu.guest.draw STACK
localdomain;localhost.localdomain:cpu.guest.label guest
localdomain;localhost.localdomain:cpu.guest.type DERIVE
localdomain;localhost.localdomain:cpu.guest.min 0
localdomain;localhost.localdomain:cpu.guest.update_rate 300
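All cpu fields above are DERIVE counters, so the graph plots the per-second rate of jiffies from /proc/stat; with the usual USER_HZ of 100 that rate is roughly percent of one CPU, and the --upper-limit 400 suggests a four-core machine. A hedged two-sample sketch (snapshot is a hypothetical helper):

    import time

    FIELDS = ["user", "nice", "system", "idle", "iowait", "irq", "softirq",
              "steal", "guest"]

    # Read the aggregate "cpu" line of /proc/stat into named jiffy counters.
    def snapshot():
        with open("/proc/stat") as f:
            parts = f.readline().split()
        return dict(zip(FIELDS, map(int, parts[1:1 + len(FIELDS)])))

    a = snapshot(); time.sleep(1); b = snapshot()
    rates = {k: b[k] - a[k] for k in FIELDS}   # jiffies/s, i.e. ~% of one CPU
    print(rates)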
localdomain;localhost.localdomain:diskstats_latency.loop2.graph_title Average latency for /dev/loop2
localdomain;localhost.localdomain:diskstats_latency.loop2.graph_args --base 1000 --logarithmic
localdomain;localhost.localdomain:diskstats_latency.loop2.graph_vlabel seconds
localdomain;localhost.localdomain:diskstats_latency.loop2.graph_category disk
localdomain;localhost.localdomain:diskstats_latency.loop2.graph_info This graph shows average waiting time/latency for different categories of disk operations.   The times that include the queue times indicate how busy your system is.  If the waiting time hits 1 second then your I/O system is 100% busy.
localdomain;localhost.localdomain:diskstats_latency.loop2.graph_order svctm avgwait avgrdwait avgwrwait
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwait.label IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop2.svctm.label Device IO time
localdomain;localhost.localdomain:diskstats_latency.loop2.svctm.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop2.svctm.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop2.svctm.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop2.svctm.min 0
localdomain;localhost.localdomain:diskstats_latency.loop2.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request.
localdomain;localhost.localdomain:diskstats_latency.loop2.svctm.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop2.avgrdwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop2.avgrdwait.label Read IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop2.avgrdwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop2.avgrdwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop2.avgrdwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop2.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop2.avgrdwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop2.avgrdwait.graph_data_size normal
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwrwait.label Write IO Wait time
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwrwait.draw LINE1
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwrwait.update_rate 300
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwrwait.type GAUGE
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwrwait.min 0
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwrwait.warning 0:3
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al)
localdomain;localhost.localdomain:diskstats_latency.loop2.avgwrwait.graph_data_size normal
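The avgrdwait/avgwrwait values above reduce to simple deltas over /proc/diskstats: milliseconds spent on reads (or writes) divided by completed operations, rescaled to seconds. A hedged sketch assuming the standard field layout (disk_fields is a hypothetical helper; "sda" is just an example device):

    import time

    def disk_fields(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                p = line.split()
                if p[2] == dev:
                    # reads completed, ms reading, writes completed, ms writing
                    return int(p[3]), int(p[6]), int(p[7]), int(p[10])
        raise KeyError(dev)

    r0, rms0, w0, wms0 = disk_fields("sda")
    time.sleep(5)
    r1, rms1, w1, wms1 = disk_fields("sda")
    if r1 > r0:
        print("avg read wait: %.4f s" % ((rms1 - rms0) / (r1 - r0) / 1000.0))
    if w1 > w0:
        print("avg write wait: %.4f s" % ((wms1 - wms0) / (w1 - w0) / 1000.0))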
localdomain;localhost.localdomain:diskstats_utilization.graph_title Utilization per device
localdomain;localhost.localdomain:diskstats_utilization.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localdomain;localhost.localdomain:diskstats_utilization.graph_vlabel % busy
localdomain;localhost.localdomain:diskstats_utilization.graph_category disk
localdomain;localhost.localdomain:diskstats_utilization.graph_width 400
localdomain;localhost.localdomain:diskstats_utilization.graph_scale no
localdomain;localhost.localdomain:diskstats_utilization.graph_order loop0_util loop1_util loop2_util loop3_util loop4_util loop5_util loop6_util loop7_util sda_util sdb_util sdc_util sdd_util
localdomain;localhost.localdomain:diskstats_utilization.loop3_util.label loop3
localdomain;localhost.localdomain:diskstats_utilization.loop3_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop3_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop3_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop3_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop3_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.loop3_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop5_util.label loop5
localdomain;localhost.localdomain:diskstats_utilization.loop5_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop5_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop5_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop5_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop5_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.loop5_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop4_util.label loop4
localdomain;localhost.localdomain:diskstats_utilization.loop4_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop4_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop4_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop4_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop4_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.loop4_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.sdb_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.sdb_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.sdb_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.sdb_util.label sdb
localdomain;localhost.localdomain:diskstats_utilization.sdb_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.sdb_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.sdb_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.sdc_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.sdc_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.sdc_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.sdc_util.label sdc
localdomain;localhost.localdomain:diskstats_utilization.sdc_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.sdc_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.sdc_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.sdd_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.sdd_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.sdd_util.label sdd
localdomain;localhost.localdomain:diskstats_utilization.sdd_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.sdd_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.sdd_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.sdd_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop7_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.loop7_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop7_util.label loop7
localdomain;localhost.localdomain:diskstats_utilization.loop7_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop7_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop7_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop7_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop1_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.loop1_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop1_util.label loop1
localdomain;localhost.localdomain:diskstats_utilization.loop1_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop1_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop1_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop1_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop6_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop6_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.loop6_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop6_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop6_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop6_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop6_util.label loop6
localdomain;localhost.localdomain:diskstats_utilization.loop0_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.loop0_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.loop0_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop0_util.label loop0
localdomain;localhost.localdomain:diskstats_utilization.loop0_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop0_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop0_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.sda_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_utilization.sda_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.sda_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.sda_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.sda_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.sda_util.label sda
localdomain;localhost.localdomain:diskstats_utilization.sda_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop2_util.draw LINE1
localdomain;localhost.localdomain:diskstats_utilization.loop2_util.label loop2
localdomain;localhost.localdomain:diskstats_utilization.loop2_util.min 0
localdomain;localhost.localdomain:diskstats_utilization.loop2_util.type GAUGE
localdomain;localhost.localdomain:diskstats_utilization.loop2_util.update_rate 300
localdomain;localhost.localdomain:diskstats_utilization.loop2_util.info Utilization of the device
localdomain;localhost.localdomain:diskstats_utilization.loop2_util.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop1.graph_title IOs for /dev/loop1
localdomain;localhost.localdomain:diskstats_iops.loop1.graph_args --base 1000
localdomain;localhost.localdomain:diskstats_iops.loop1.graph_vlabel Units read (-) / write (+)
localdomain;localhost.localdomain:diskstats_iops.loop1.graph_category disk
localdomain;localhost.localdomain:diskstats_iops.loop1.graph_info This graph shows the number of IO operations per second and the average size of these requests.  Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph).  Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of the 1024 used prior to 2.0 beta 3.  This is because the base for this graph is 1000, not 1024.
localdomain;localhost.localdomain:diskstats_iops.loop1.graph_order rdio wrio avgrdrqsz avgwrrqsz
localdomain;localhost.localdomain:diskstats_iops.loop1.avgrdrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop1.avgrdrqsz.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop1.avgrdrqsz.graph no
localdomain;localhost.localdomain:diskstats_iops.loop1.avgrdrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop1.avgrdrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop1.avgrdrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop1.avgrdrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop1.rdio.graph no
localdomain;localhost.localdomain:diskstats_iops.loop1.rdio.label dummy
localdomain;localhost.localdomain:diskstats_iops.loop1.rdio.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop1.rdio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop1.rdio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop1.rdio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop1.rdio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop1.avgwrrqsz.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop1.avgwrrqsz.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop1.avgwrrqsz.min 0
localdomain;localhost.localdomain:diskstats_iops.loop1.avgwrrqsz.label Req Size (KB)
localdomain;localhost.localdomain:diskstats_iops.loop1.avgwrrqsz.draw LINE1
localdomain;localhost.localdomain:diskstats_iops.loop1.avgwrrqsz.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop1.avgwrrqsz.negative avgrdrqsz
localdomain;localhost.localdomain:diskstats_iops.loop1.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
localdomain;localhost.localdomain:diskstats_iops.loop1.wrio.negative rdio
localdomain;localhost.localdomain:diskstats_iops.loop1.wrio.graph_data_size normal
localdomain;localhost.localdomain:diskstats_iops.loop1.wrio.update_rate 300
localdomain;localhost.localdomain:diskstats_iops.loop1.wrio.min 0
localdomain;localhost.localdomain:diskstats_iops.loop1.wrio.type GAUGE
localdomain;localhost.localdomain:diskstats_iops.loop1.wrio.label IO/sec
localdomain;localhost.localdomain:diskstats_iops.loop1.wrio.draw LINE1
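The 1000-based K mentioned in the graph_info deserves a worked example: average request size is sectors times 512 bytes, divided by the number of IOs, then by 1000 to get the KB the label shows. The numbers below are illustrative, not measured:

    # Hedged worked example of the "Req Size (KB)" arithmetic.
    d_sectors = 4096          # sectors read between two samples (assumed)
    d_ios = 16                # read IOs completed in the same window (assumed)
    avg_rq_kb = d_sectors * 512 / d_ios / 1000
    print(avg_rq_kb)          # -> 131.072 KB per request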
localdomain;localhost.localdomain:smart_sdd.graph_title S.M.A.R.T values for drive sdd
localdomain;localhost.localdomain:smart_sdd.graph_vlabel Attribute S.M.A.R.T value
localdomain;localhost.localdomain:smart_sdd.graph_args --base 1000 --lower-limit 0
localdomain;localhost.localdomain:smart_sdd.graph_category disk
localdomain;localhost.localdomain:smart_sdd.graph_info This graph shows the value of all S.M.A.R.T attributes of drive sdd (ST8000NM000A-2KE101). smartctl_exit_status is the return value of smartctl. A non-zero return value indicates an error, a potential error, or a fault on the drive.
localdomain;localhost.localdomain:smart_sdd.graph_order Raw_Read_Error_Rate Spin_Up_Time Start_Stop_Count Reallocated_Sector_Ct Seek_Error_Rate Power_On_Hours Spin_Retry_Count Power_Cycle_Count Unknown_Attribute Reported_Uncorrect Command_Timeout Airflow_Temperature_Cel Power_Off_Retract_Count Load_Cycle_Count Temperature_Celsius Hardware_ECC_Recovered Current_Pending_Sector Offline_Uncorrectable UDMA_CRC_Error_Count Head_Flying_Hours Total_LBAs_Written Total_LBAs_Read smartctl_exit_status
localdomain;localhost.localdomain:smart_sdd.Reported_Uncorrect.label Reported_Uncorrect
localdomain;localhost.localdomain:smart_sdd.Reported_Uncorrect.critical 000:
localdomain;localhost.localdomain:smart_sdd.Reported_Uncorrect.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Reported_Uncorrect.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Current_Pending_Sector.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Current_Pending_Sector.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Current_Pending_Sector.label Current_Pending_Sector
localdomain;localhost.localdomain:smart_sdd.Current_Pending_Sector.critical 000:
localdomain;localhost.localdomain:smart_sdd.Power_Off_Retract_Count.critical 000:
localdomain;localhost.localdomain:smart_sdd.Power_Off_Retract_Count.label Power_Off_Retract_Count
localdomain;localhost.localdomain:smart_sdd.Power_Off_Retract_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Power_Off_Retract_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Airflow_Temperature_Cel.critical 040:
localdomain;localhost.localdomain:smart_sdd.Airflow_Temperature_Cel.label Airflow_Temperature_Cel
localdomain;localhost.localdomain:smart_sdd.Airflow_Temperature_Cel.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Airflow_Temperature_Cel.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Hardware_ECC_Recovered.label Hardware_ECC_Recovered
localdomain;localhost.localdomain:smart_sdd.Hardware_ECC_Recovered.critical 000:
localdomain;localhost.localdomain:smart_sdd.Hardware_ECC_Recovered.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Hardware_ECC_Recovered.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Load_Cycle_Count.label Load_Cycle_Count
localdomain;localhost.localdomain:smart_sdd.Load_Cycle_Count.critical 000:
localdomain;localhost.localdomain:smart_sdd.Load_Cycle_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Load_Cycle_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Total_LBAs_Written.label Total_LBAs_Written
localdomain;localhost.localdomain:smart_sdd.Total_LBAs_Written.critical 000:
localdomain;localhost.localdomain:smart_sdd.Total_LBAs_Written.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Total_LBAs_Written.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Spin_Retry_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Spin_Retry_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Spin_Retry_Count.label Spin_Retry_Count
localdomain;localhost.localdomain:smart_sdd.Spin_Retry_Count.critical 097:
localdomain;localhost.localdomain:smart_sdd.Power_On_Hours.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Power_On_Hours.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Power_On_Hours.label Power_On_Hours
localdomain;localhost.localdomain:smart_sdd.Power_On_Hours.critical 000:
localdomain;localhost.localdomain:smart_sdd.Start_Stop_Count.label Start_Stop_Count
localdomain;localhost.localdomain:smart_sdd.Start_Stop_Count.critical 020:
localdomain;localhost.localdomain:smart_sdd.Start_Stop_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Start_Stop_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Head_Flying_Hours.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Head_Flying_Hours.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Head_Flying_Hours.label Head_Flying_Hours
localdomain;localhost.localdomain:smart_sdd.Head_Flying_Hours.critical 000:
localdomain;localhost.localdomain:smart_sdd.Offline_Uncorrectable.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Offline_Uncorrectable.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Offline_Uncorrectable.label Offline_Uncorrectable
localdomain;localhost.localdomain:smart_sdd.Offline_Uncorrectable.critical 000:
localdomain;localhost.localdomain:smart_sdd.Power_Cycle_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Power_Cycle_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Power_Cycle_Count.critical 020:
localdomain;localhost.localdomain:smart_sdd.Power_Cycle_Count.label Power_Cycle_Count
localdomain;localhost.localdomain:smart_sdd.Raw_Read_Error_Rate.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Raw_Read_Error_Rate.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Raw_Read_Error_Rate.label Raw_Read_Error_Rate
localdomain;localhost.localdomain:smart_sdd.Raw_Read_Error_Rate.critical 044:
localdomain;localhost.localdomain:smart_sdd.Temperature_Celsius.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Temperature_Celsius.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Temperature_Celsius.label Temperature_Celsius
localdomain;localhost.localdomain:smart_sdd.Temperature_Celsius.critical 000:
localdomain;localhost.localdomain:smart_sdd.Unknown_Attribute.critical 050:
localdomain;localhost.localdomain:smart_sdd.Unknown_Attribute.label Unknown_Attribute
localdomain;localhost.localdomain:smart_sdd.Unknown_Attribute.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Unknown_Attribute.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Total_LBAs_Read.label Total_LBAs_Read
localdomain;localhost.localdomain:smart_sdd.Total_LBAs_Read.critical 000:
localdomain;localhost.localdomain:smart_sdd.Total_LBAs_Read.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Total_LBAs_Read.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.UDMA_CRC_Error_Count.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.UDMA_CRC_Error_Count.update_rate 300
localdomain;localhost.localdomain:smart_sdd.UDMA_CRC_Error_Count.critical 000:
localdomain;localhost.localdomain:smart_sdd.UDMA_CRC_Error_Count.label UDMA_CRC_Error_Count
localdomain;localhost.localdomain:smart_sdd.Reallocated_Sector_Ct.critical 010:
localdomain;localhost.localdomain:smart_sdd.Reallocated_Sector_Ct.label Reallocated_Sector_Ct
localdomain;localhost.localdomain:smart_sdd.Reallocated_Sector_Ct.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Reallocated_Sector_Ct.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Command_Timeout.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Command_Timeout.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Command_Timeout.label Command_Timeout
localdomain;localhost.localdomain:smart_sdd.Command_Timeout.critical 000:
localdomain;localhost.localdomain:smart_sdd.smartctl_exit_status.label smartctl_exit_status
localdomain;localhost.localdomain:smart_sdd.smartctl_exit_status.warning 1
localdomain;localhost.localdomain:smart_sdd.smartctl_exit_status.update_rate 300
localdomain;localhost.localdomain:smart_sdd.smartctl_exit_status.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Spin_Up_Time.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Spin_Up_Time.graph_data_size normal
localdomain;localhost.localdomain:smart_sdd.Spin_Up_Time.critical 000:
localdomain;localhost.localdomain:smart_sdd.Spin_Up_Time.label Spin_Up_Time
localdomain;localhost.localdomain:smart_sdd.Seek_Error_Rate.label Seek_Error_Rate
localdomain;localhost.localdomain:smart_sdd.Seek_Error_Rate.critical 045:
localdomain;localhost.localdomain:smart_sdd.Seek_Error_Rate.update_rate 300
localdomain;localhost.localdomain:smart_sdd.Seek_Error_Rate.graph_data_size normal
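smartctl's exit status, graphed above as smartctl_exit_status, is a bitmask rather than a plain error code, which is why any non-zero value warrants the warning threshold of 1. A hedged decoding sketch based on the smartctl(8) documentation (decode is a hypothetical helper):

    # Bit meanings per smartctl(8); consult the man page for the
    # authoritative list.
    BITS = {
        0: "command line did not parse",
        1: "device open failed",
        2: "SMART or ATA command failed",
        3: "SMART status check returned 'disk failing'",
        4: "prefail attributes at or below threshold",
        5: "attributes were at or below threshold in the past",
        6: "device error log contains errors",
        7: "self-test log contains errors",
    }

    def decode(status):
        return [msg for bit, msg in BITS.items() if status & (1 << bit)]

    print(decode(0))    # [] -> healthy
    print(decode(192))  # bits 6 and 7 set: both logs contain errors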
localdomain;localhost.localdomain:interrupts.graph_title Interrupts and context switches
localdomain;localhost.localdomain:interrupts.graph_args --base 1000 -l 0
localdomain;localhost.localdomain:interrupts.graph_vlabel interrupts & ctx switches / ${graph_period}
localdomain;localhost.localdomain:interrupts.graph_category system
localdomain;localhost.localdomain:interrupts.graph_info This graph shows the number of interrupts and context switches on the system. These are typically high on a busy system.
localdomain;localhost.localdomain:interrupts.graph_order intr ctx
localdomain;localhost.localdomain:interrupts.ctx.label context switches
localdomain;localhost.localdomain:interrupts.ctx.update_rate 300
localdomain;localhost.localdomain:interrupts.ctx.type DERIVE
localdomain;localhost.localdomain:interrupts.ctx.min 0
localdomain;localhost.localdomain:interrupts.ctx.info A context switch occurs when a multitasking operating system suspends the currently running process and starts executing another.
localdomain;localhost.localdomain:interrupts.ctx.graph_data_size normal
localdomain;localhost.localdomain:interrupts.intr.update_rate 300
localdomain;localhost.localdomain:interrupts.intr.min 0
localdomain;localhost.localdomain:interrupts.intr.type DERIVE
localdomain;localhost.localdomain:interrupts.intr.label interrupts
localdomain;localhost.localdomain:interrupts.intr.graph_data_size normal
localdomain;localhost.localdomain:interrupts.intr.info Interrupts are events that alter the sequence of instructions executed by a processor. They can come from either hardware (exceptions, NMI, IRQ) or software.
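On Linux both DERIVE counters above come from /proc/stat: the first number on the intr line is the total interrupt count, and the ctxt line carries the context-switch count. A hedged sketch (intr_ctxt is a hypothetical helper):

    # Read the two totals the interrupts graph derives its rates from.
    def intr_ctxt():
        intr = ctxt = None
        with open("/proc/stat") as f:
            for line in f:
                if line.startswith("intr "):
                    intr = int(line.split()[1])   # first value is the total
                elif line.startswith("ctxt "):
                    ctxt = int(line.split()[1])
        return intr, ctxt

    print(intr_ctxt())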
localdomain;localhost.localdomain:swap.graph_title Swap in/out
localdomain;localhost.localdomain:swap.graph_args -l 0 --base 1000
localdomain;localhost.localdomain:swap.graph_vlabel pages per ${graph_period} in (-) / out (+)
localdomain;localhost.localdomain:swap.graph_category system
localdomain;localhost.localdomain:swap.graph_order swap_in swap_out
localdomain;localhost.localdomain:swap.swap_in.graph no
localdomain;localhost.localdomain:swap.swap_in.label swap
localdomain;localhost.localdomain:swap.swap_in.max 100000
localdomain;localhost.localdomain:swap.swap_in.min 0
localdomain;localhost.localdomain:swap.swap_in.type DERIVE
localdomain;localhost.localdomain:swap.swap_in.update_rate 300
localdomain;localhost.localdomain:swap.swap_in.graph_data_size normal
localdomain;localhost.localdomain:swap.swap_out.update_rate 300
localdomain;localhost.localdomain:swap.swap_out.min 0
localdomain;localhost.localdomain:swap.swap_out.type DERIVE
localdomain;localhost.localdomain:swap.swap_out.label swap
localdomain;localhost.localdomain:swap.swap_out.max 100000
localdomain;localhost.localdomain:swap.swap_out.graph_data_size normal
localdomain;localhost.localdomain:swap.swap_out.negative swap_in
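The min 0 / max 100000 bounds on these DERIVE fields act as sanity clamps: a negative step (counter reset) or an implausibly large rate is recorded as unknown rather than plotted as a spike. A hedged sketch of that behaviour (derive_rate is a hypothetical helper, not RRDtool's actual code):

    # DERIVE-with-bounds semantics as configured above, approximately.
    def derive_rate(prev, cur, dt, lo=0, hi=100000):
        rate = (cur - prev) / dt
        if rate < lo:
            return None        # counter went backwards: treat as unknown
        if rate > hi:
            return None        # beyond max: treat as unknown
        return rate

    print(derive_rate(1000, 2500, 300))   # -> 5.0 pages/s
    print(derive_rate(2500, 1000, 300))   # -> None (counter reset)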
localdomain;localhost.localdomain:diskstats_throughput.loop0.graph_title Disk throughput for /dev/loop0
localdomain;localhost.localdomain:diskstats_throughput.loop0.graph_args --base 1024
localdomain;localhost.localdomain:diskstats_throughput.loop0.graph_vlabel Per ${graph_period} read (-) / write (+)
localdomain;localhost.localdomain:diskstats_throughput.loop0.graph_category disk
localdomain;localhost.localdomain:diskstats_throughput.loop0.graph_info This graph shows disk throughput in bytes per ${graph_period}.  The graph base is 1024, so KB means kibibytes and so on.
localdomain;localhost.localdomain:diskstats_throughput.loop0.graph_order rdbytes wrbytes
localdomain;localhost.localdomain:diskstats_throughput.loop0.wrbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop0.wrbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop0.wrbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop0.wrbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop0.wrbytes.label Bytes
localdomain;localhost.localdomain:diskstats_throughput.loop0.wrbytes.graph_data_size normal
localdomain;localhost.localdomain:diskstats_throughput.loop0.wrbytes.negative rdbytes
localdomain;localhost.localdomain:diskstats_throughput.loop0.rdbytes.graph no
localdomain;localhost.localdomain:diskstats_throughput.loop0.rdbytes.label invisible
localdomain;localhost.localdomain:diskstats_throughput.loop0.rdbytes.draw LINE1
localdomain;localhost.localdomain:diskstats_throughput.loop0.rdbytes.update_rate 300
localdomain;localhost.localdomain:diskstats_throughput.loop0.rdbytes.min 0
localdomain;localhost.localdomain:diskstats_throughput.loop0.rdbytes.type GAUGE
localdomain;localhost.localdomain:diskstats_throughput.loop0.rdbytes.graph_data_size normal
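The rdbytes/wrbytes pairing above is Munin's usual mirror trick: the read series carries graph no, and wrbytes.negative rdbytes draws it negated below the axis, so one graph shows writes (+) and reads (-). A tiny illustrative sketch with made-up byte counts:

    # Mirror reads below zero the way ".negative" pairs the two series.
    samples = [(120_000, 80_000), (50_000, 200_000)]   # (rdbytes, wrbytes)
    for rd, wr in samples:
        print(f"plot write {wr:+d}, read {-rd:+d}")    # reads are negated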
OPNsense;OPNsense:ntp_46_167_246_249.graph_title NTP statistics for peer 46.167.246.249
OPNsense;OPNsense:ntp_46_167_246_249.graph_args --base 1000 --vertical-label seconds --lower-limit 0
OPNsense;OPNsense:ntp_46_167_246_249.graph_category time
OPNsense;OPNsense:ntp_46_167_246_249.graph_order delay offset jitter
OPNsense;OPNsense:ntp_46_167_246_249.delay.update_rate 300
OPNsense;OPNsense:ntp_46_167_246_249.delay.graph_data_size normal
OPNsense;OPNsense:ntp_46_167_246_249.delay.cdef delay,1000,/
OPNsense;OPNsense:ntp_46_167_246_249.delay.label Delay
OPNsense;OPNsense:ntp_46_167_246_249.jitter.update_rate 300
OPNsense;OPNsense:ntp_46_167_246_249.jitter.graph_data_size normal
OPNsense;OPNsense:ntp_46_167_246_249.jitter.label Jitter
OPNsense;OPNsense:ntp_46_167_246_249.jitter.cdef jitter,1000,/
OPNsense;OPNsense:ntp_46_167_246_249.offset.graph_data_size normal
OPNsense;OPNsense:ntp_46_167_246_249.offset.update_rate 300
OPNsense;OPNsense:ntp_46_167_246_249.offset.label Offset
OPNsense;OPNsense:ntp_46_167_246_249.offset.cdef offset,1000,/
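The cdef attributes above are RRDtool RPN expressions; delay,1000,/ pushes the field value and 1000 and divides, rescaling the peer's millisecond readings to the seconds on the vertical axis. A hedged mini-evaluator (rpn is a hypothetical helper covering only the operators used in this file):

    # Evaluate a comma-separated RPN expression with named variables.
    def rpn(expr, **vars):
        stack = []
        for tok in expr.split(","):
            if tok in vars:
                stack.append(float(vars[tok]))
            elif tok == "/":
                b, a = stack.pop(), stack.pop()
                stack.append(a / b)
            elif tok == "*":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            else:
                stack.append(float(tok))
        return stack.pop()

    print(rpn("delay,1000,/", delay=23.5))   # 23.5 ms -> 0.0235 s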
OPNsense;OPNsense:df_inode.graph_title Inode usage in percent
OPNsense;OPNsense:df_inode.graph_args --upper-limit 100 -l 0
OPNsense;OPNsense:df_inode.graph_vlabel %
OPNsense;OPNsense:df_inode.graph_category disk
OPNsense;OPNsense:df_inode.graph_scale no
OPNsense;OPNsense:df_inode.graph_info This graph shows the inode usage for the partitions of types that use inodes.
OPNsense;OPNsense:df_inode.graph_order zroot zroot_ROOT_default zroot_tmp zroot_usr_home zroot_usr_ports zroot_usr_src zroot_var_audit zroot_var_crash zroot_var_log zroot_var_mail zroot_var_tmp
OPNsense;OPNsense:df_inode.zroot_var_audit.label /var/audit
OPNsense;OPNsense:df_inode.zroot_var_audit.update_rate 300
OPNsense;OPNsense:df_inode.zroot_var_audit.info /var/audit -> zroot/var/audit
OPNsense;OPNsense:df_inode.zroot_var_audit.warning 92
OPNsense;OPNsense:df_inode.zroot_var_audit.critical 98
OPNsense;OPNsense:df_inode.zroot_var_audit.graph_data_size normal
OPNsense;OPNsense:df_inode.zroot_usr_home.graph_data_size normal
OPNsense;OPNsense:df_inode.zroot_usr_home.info /usr/home -> zroot/usr/home
OPNsense;OPNsense:df_inode.zroot_usr_home.critical 98
OPNsense;OPNsense:df_inode.zroot_usr_home.warning 92
OPNsense;OPNsense:df_inode.zroot_usr_home.update_rate 300
OPNsense;OPNsense:df_inode.zroot_usr_home.label /usr/home
OPNsense;OPNsense:df_inode.zroot_usr_ports.graph_data_size normal
OPNsense;OPNsense:df_inode.zroot_usr_ports.critical 98
OPNsense;OPNsense:df_inode.zroot_usr_ports.warning 92
OPNsense;OPNsense:df_inode.zroot_usr_ports.info /usr/ports -> zroot/usr/ports
OPNsense;OPNsense:df_inode.zroot_usr_ports.update_rate 300
OPNsense;OPNsense:df_inode.zroot_usr_ports.label /usr/ports
OPNsense;OPNsense:df_inode.zroot_ROOT_default.graph_data_size normal
OPNsense;OPNsense:df_inode.zroot_ROOT_default.info / -> zroot/ROOT/default
OPNsense;OPNsense:df_inode.zroot_ROOT_default.critical 98
OPNsense;OPNsense:df_inode.zroot_ROOT_default.warning 92
OPNsense;OPNsense:df_inode.zroot_ROOT_default.update_rate 300
OPNsense;OPNsense:df_inode.zroot_ROOT_default.label /
OPNsense;OPNsense:df_inode.zroot_tmp.graph_data_size normal
OPNsense;OPNsense:df_inode.zroot_tmp.info /tmp -> zroot/tmp
OPNsense;OPNsense:df_inode.zroot_tmp.warning 92
OPNsense;OPNsense:df_inode.zroot_tmp.critical 98
OPNsense;OPNsense:df_inode.zroot_tmp.update_rate 300
OPNsense;OPNsense:df_inode.zroot_tmp.label /tmp
OPNsense;OPNsense:df_inode.zroot.update_rate 300
OPNsense;OPNsense:df_inode.zroot.label /zroot
OPNsense;OPNsense:df_inode.zroot.graph_data_size normal
OPNsense;OPNsense:df_inode.zroot.info /zroot -> zroot
OPNsense;OPNsense:df_inode.zroot.warning 92
OPNsense;OPNsense:df_inode.zroot.critical 98
OPNsense;OPNsense:df_inode.zroot_var_crash.graph_data_size normal
OPNsense;OPNsense:df_inode.zroot_var_crash.warning 92
OPNsense;OPNsense:df_inode.zroot_var_crash.critical 98
OPNsense;OPNsense:df_inode.zroot_var_crash.info /var/crash -> zroot/var/crash
OPNsense;OPNsense:df_inode.zroot_var_crash.update_rate 300
OPNsense;OPNsense:df_inode.zroot_var_crash.label /var/crash
OPNsense;OPNsense:df_inode.zroot_usr_src.graph_data_size normal
OPNsense;OPNsense:df_inode.zroot_usr_src.info /usr/src -> zroot/usr/src
OPNsense;OPNsense:df_inode.zroot_usr_src.critical 98
OPNsense;OPNsense:df_inode.zroot_usr_src.warning 92
OPNsense;OPNsense:df_inode.zroot_usr_src.update_rate 300
OPNsense;OPNsense:df_inode.zroot_usr_src.label /usr/src
OPNsense;OPNsense:df_inode.zroot_var_log.label /var/log
OPNsense;OPNsense:df_inode.zroot_var_log.update_rate 300
OPNsense;OPNsense:df_inode.zroot_var_log.info /var/log -> zroot/var/log
OPNsense;OPNsense:df_inode.zroot_var_log.critical 98
OPNsense;OPNsense:df_inode.zroot_var_log.warning 92
OPNsense;OPNsense:df_inode.zroot_var_log.graph_data_size normal
OPNsense;OPNsense:df_inode.zroot_var_mail.update_rate 300
OPNsense;OPNsense:df_inode.zroot_var_mail.label /var/mail
OPNsense;OPNsense:df_inode.zroot_var_mail.graph_data_size normal
OPNsense;OPNsense:df_inode.zroot_var_mail.info /var/mail -> zroot/var/mail
OPNsense;OPNsense:df_inode.zroot_var_mail.critical 98
OPNsense;OPNsense:df_inode.zroot_var_mail.warning 92
OPNsense;OPNsense:df_inode.zroot_var_tmp.update_rate 300
OPNsense;OPNsense:df_inode.zroot_var_tmp.label /var/tmp
OPNsense;OPNsense:df_inode.zroot_var_tmp.graph_data_size normal
OPNsense;OPNsense:df_inode.zroot_var_tmp.warning 92
OPNsense;OPNsense:df_inode.zroot_var_tmp.critical 98
OPNsense;OPNsense:df_inode.zroot_var_tmp.info /var/tmp -> zroot/var/tmp
OPNsense;OPNsense:ntp_82_113_53_41.graph_title NTP statistics for peer 82.113.53.41
OPNsense;OPNsense:ntp_82_113_53_41.graph_args --base 1000 --vertical-label seconds --lower-limit 0
OPNsense;OPNsense:ntp_82_113_53_41.graph_category time
OPNsense;OPNsense:ntp_82_113_53_41.graph_order delay offset jitter
OPNsense;OPNsense:ntp_82_113_53_41.offset.label Offset
OPNsense;OPNsense:ntp_82_113_53_41.offset.cdef offset,1000,/
OPNsense;OPNsense:ntp_82_113_53_41.offset.graph_data_size normal
OPNsense;OPNsense:ntp_82_113_53_41.offset.update_rate 300
OPNsense;OPNsense:ntp_82_113_53_41.delay.graph_data_size normal
OPNsense;OPNsense:ntp_82_113_53_41.delay.update_rate 300
OPNsense;OPNsense:ntp_82_113_53_41.delay.cdef delay,1000,/
OPNsense;OPNsense:ntp_82_113_53_41.delay.label Delay
OPNsense;OPNsense:ntp_82_113_53_41.jitter.graph_data_size normal
OPNsense;OPNsense:ntp_82_113_53_41.jitter.update_rate 300
OPNsense;OPNsense:ntp_82_113_53_41.jitter.label Jitter
OPNsense;OPNsense:ntp_82_113_53_41.jitter.cdef jitter,1000,/
OPNsense;OPNsense:swap.graph_title Swap in/out
OPNsense;OPNsense:swap.graph_args -l 0 --base 1000
OPNsense;OPNsense:swap.graph_vlabel pages per ${graph_period} in (-) / out (+)
OPNsense;OPNsense:swap.graph_category system
OPNsense;OPNsense:swap.graph_info This graph shows the swap activity of the system.
OPNsense;OPNsense:swap.graph_order swap_in swap_out
OPNsense;OPNsense:swap.swap_in.type DERIVE
OPNsense;OPNsense:swap.swap_in.min 0
OPNsense;OPNsense:swap.swap_in.update_rate 300
OPNsense;OPNsense:swap.swap_in.label swap
OPNsense;OPNsense:swap.swap_in.graph no
OPNsense;OPNsense:swap.swap_in.max 100000
OPNsense;OPNsense:swap.swap_in.graph_data_size normal
OPNsense;OPNsense:swap.swap_out.update_rate 300
OPNsense;OPNsense:swap.swap_out.type DERIVE
OPNsense;OPNsense:swap.swap_out.min 0
OPNsense;OPNsense:swap.swap_out.max 100000
OPNsense;OPNsense:swap.swap_out.label swap
OPNsense;OPNsense:swap.swap_out.negative swap_in
OPNsense;OPNsense:swap.swap_out.graph_data_size normal
OPNsense;OPNsense:if_errcoll_em0.graph_order ierrors oerrors collisions
OPNsense;OPNsense:if_errcoll_em0.graph_title em0 Errors & Collisions
OPNsense;OPNsense:if_errcoll_em0.graph_args --base 1000
OPNsense;OPNsense:if_errcoll_em0.graph_vlabel events / ${graph_period}
OPNsense;OPNsense:if_errcoll_em0.graph_category network
OPNsense;OPNsense:if_errcoll_em0.graph_info This graph shows the amount of errors and collisions on the em0 network interface.
OPNsense;OPNsense:if_errcoll_em0.ierrors.update_rate 300
OPNsense;OPNsense:if_errcoll_em0.ierrors.graph_data_size normal
OPNsense;OPNsense:if_errcoll_em0.ierrors.type COUNTER
OPNsense;OPNsense:if_errcoll_em0.ierrors.label Input Errors
OPNsense;OPNsense:if_errcoll_em0.oerrors.label Output Errors
OPNsense;OPNsense:if_errcoll_em0.oerrors.update_rate 300
OPNsense;OPNsense:if_errcoll_em0.oerrors.type COUNTER
OPNsense;OPNsense:if_errcoll_em0.oerrors.graph_data_size normal
OPNsense;OPNsense:if_errcoll_em0.collisions.label Collisions
OPNsense;OPNsense:if_errcoll_em0.collisions.type COUNTER
OPNsense;OPNsense:if_errcoll_em0.collisions.graph_data_size normal
OPNsense;OPNsense:if_errcoll_em0.collisions.update_rate 300
OPNsense;OPNsense:ntp_81_27_192_20.graph_title NTP statistics for peer 81.27.192.20
OPNsense;OPNsense:ntp_81_27_192_20.graph_args --base 1000 --vertical-label seconds --lower-limit 0
OPNsense;OPNsense:ntp_81_27_192_20.graph_category time
OPNsense;OPNsense:ntp_81_27_192_20.graph_order delay offset jitter
OPNsense;OPNsense:ntp_81_27_192_20.offset.cdef offset,1000,/
OPNsense;OPNsense:ntp_81_27_192_20.offset.label Offset
OPNsense;OPNsense:ntp_81_27_192_20.offset.update_rate 300
OPNsense;OPNsense:ntp_81_27_192_20.offset.graph_data_size normal
OPNsense;OPNsense:ntp_81_27_192_20.delay.update_rate 300
OPNsense;OPNsense:ntp_81_27_192_20.delay.graph_data_size normal
OPNsense;OPNsense:ntp_81_27_192_20.delay.cdef delay,1000,/
OPNsense;OPNsense:ntp_81_27_192_20.delay.label Delay
OPNsense;OPNsense:ntp_81_27_192_20.jitter.cdef jitter,1000,/
OPNsense;OPNsense:ntp_81_27_192_20.jitter.label Jitter
OPNsense;OPNsense:ntp_81_27_192_20.jitter.graph_data_size normal
OPNsense;OPNsense:ntp_81_27_192_20.jitter.update_rate 300
OPNsense;OPNsense:cpu.graph_title CPU usage
OPNsense;OPNsense:cpu.graph_order system interrupt user nice idle
OPNsense;OPNsense:cpu.graph_args --base 1000 -r --lower-limit 0 --upper-limit 400
OPNsense;OPNsense:cpu.graph_vlabel %
OPNsense;OPNsense:cpu.graph_scale no
OPNsense;OPNsense:cpu.graph_info This graph shows how CPU time is spent.
OPNsense;OPNsense:cpu.graph_category system
OPNsense;OPNsense:cpu.graph_period second
OPNsense;OPNsense:cpu.interrupt.type DERIVE
OPNsense;OPNsense:cpu.interrupt.min 0
OPNsense;OPNsense:cpu.interrupt.update_rate 300
OPNsense;OPNsense:cpu.interrupt.draw STACK
OPNsense;OPNsense:cpu.interrupt.label interrupt
OPNsense;OPNsense:cpu.interrupt.max 5000
OPNsense;OPNsense:cpu.interrupt.cdef interrupt,127,/,100,*
OPNsense;OPNsense:cpu.interrupt.graph_data_size normal
OPNsense;OPNsense:cpu.interrupt.info CPU time spent by the kernel processing interrupts
OPNsense;OPNsense:cpu.system.graph_data_size normal
OPNsense;OPNsense:cpu.system.info CPU time spent by the kernel in system activities
OPNsense;OPNsense:cpu.system.update_rate 300
OPNsense;OPNsense:cpu.system.type DERIVE
OPNsense;OPNsense:cpu.system.min 0
OPNsense;OPNsense:cpu.system.cdef system,127,/,100,*
OPNsense;OPNsense:cpu.system.max 5000
OPNsense;OPNsense:cpu.system.label system
OPNsense;OPNsense:cpu.system.draw AREA
OPNsense;OPNsense:cpu.user.info CPU time spent by normal programs and daemons
OPNsense;OPNsense:cpu.user.graph_data_size normal
OPNsense;OPNsense:cpu.user.max 5000
OPNsense;OPNsense:cpu.user.cdef user,127,/,100,*
OPNsense;OPNsense:cpu.user.draw STACK
OPNsense;OPNsense:cpu.user.label user
OPNsense;OPNsense:cpu.user.type DERIVE
OPNsense;OPNsense:cpu.user.min 0
OPNsense;OPNsense:cpu.user.update_rate 300
OPNsense;OPNsense:cpu.nice.info CPU time spent by nice(1)d programs
OPNsense;OPNsense:cpu.nice.graph_data_size normal
OPNsense;OPNsense:cpu.nice.cdef nice,127,/,100,*
OPNsense;OPNsense:cpu.nice.max 5000
OPNsense;OPNsense:cpu.nice.draw STACK
OPNsense;OPNsense:cpu.nice.label nice
OPNsense;OPNsense:cpu.nice.type DERIVE
OPNsense;OPNsense:cpu.nice.min 0
OPNsense;OPNsense:cpu.nice.update_rate 300
OPNsense;OPNsense:cpu.idle.type DERIVE
OPNsense;OPNsense:cpu.idle.min 0
OPNsense;OPNsense:cpu.idle.update_rate 300
OPNsense;OPNsense:cpu.idle.draw STACK
OPNsense;OPNsense:cpu.idle.label idle
OPNsense;OPNsense:cpu.idle.cdef idle,127,/,100,*
OPNsense;OPNsense:cpu.idle.max 5000
OPNsense;OPNsense:cpu.idle.graph_data_size normal
OPNsense;OPNsense:cpu.idle.info Idle CPU time
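The same RPN reading applies to the FreeBSD cpu cdefs above: idle,127,/,100,* rescales raw cp_time ticks to a 0-100 percentage (the 127 divisor presumably relates to the kernel's stathz; that is an assumption, not something this file confirms). Reusing the rpn sketch from the NTP section:

    # Assumed scaling: 127 ticks of idle per interval maps to 100 %.
    print(rpn("idle,127,/,100,*", idle=127.0))   # -> 100.0 (fully idle CPU)
    print(rpn("idle,127,/,100,*", idle=63.5))    # -> 50.0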
OPNsense;OPNsense:netstat.graph_title Netstat
OPNsense;OPNsense:netstat.graph_args --base 1000 --logarithmic
OPNsense;OPNsense:netstat.graph_vlabel TCP connections
OPNsense;OPNsense:netstat.graph_category network
OPNsense;OPNsense:netstat.graph_period second
OPNsense;OPNsense:netstat.graph_info This graph shows the TCP activity of all the network interfaces combined.
OPNsense;OPNsense:netstat.graph_order active passive failed resets established
OPNsense;OPNsense:netstat.resets.update_rate 300
OPNsense;OPNsense:netstat.resets.min 0
OPNsense;OPNsense:netstat.resets.type DERIVE
OPNsense;OPNsense:netstat.resets.label resets
OPNsense;OPNsense:netstat.resets.max 50000
OPNsense;OPNsense:netstat.resets.graph_data_size normal
OPNsense;OPNsense:netstat.resets.info The number of TCP connection resets.
OPNsense;OPNsense:netstat.failed.update_rate 300
OPNsense;OPNsense:netstat.failed.min 0
OPNsense;OPNsense:netstat.failed.type DERIVE
OPNsense;OPNsense:netstat.failed.label failed
OPNsense;OPNsense:netstat.failed.max 50000
OPNsense;OPNsense:netstat.failed.graph_data_size normal
OPNsense;OPNsense:netstat.failed.info The number of failed TCP connection attempts per second.
OPNsense;OPNsense:netstat.passive.graph_data_size normal
OPNsense;OPNsense:netstat.passive.info The number of passive TCP openings per second.
OPNsense;OPNsense:netstat.passive.update_rate 300
OPNsense;OPNsense:netstat.passive.min 0
OPNsense;OPNsense:netstat.passive.type DERIVE
OPNsense;OPNsense:netstat.passive.max 50000
OPNsense;OPNsense:netstat.passive.label passive
OPNsense;OPNsense:netstat.active.info The number of active TCP openings per second.
OPNsense;OPNsense:netstat.active.graph_data_size normal
OPNsense;OPNsense:netstat.active.max 50000
OPNsense;OPNsense:netstat.active.label active
OPNsense;OPNsense:netstat.active.update_rate 300
OPNsense;OPNsense:netstat.active.min 0
OPNsense;OPNsense:netstat.active.type DERIVE
OPNsense;OPNsense:netstat.established.label established
OPNsense;OPNsense:netstat.established.max 50000
OPNsense;OPNsense:netstat.established.update_rate 300
OPNsense;OPNsense:netstat.established.min 0
OPNsense;OPNsense:netstat.established.type DERIVE
OPNsense;OPNsense:netstat.established.info The number of currently open connections.
OPNsense;OPNsense:netstat.established.graph_data_size normal
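
The netstat fields are typed DERIVE with min 0: Munin stores the per-second rate between successive counter samples taken update_rate (300) seconds apart, and the floor discards negative rates after a counter reset. A sketch of that derivation, with made-up sample values:

    def derive_rate(prev, curr, interval=300, minimum=0):
        """Per-second rate for a DERIVE field; rates below 'min' become unknown."""
        rate = (curr - prev) / interval
        return rate if rate >= minimum else None

    print(derive_rate(120_000, 150_000))  # 100.0 active opens per second
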
OPNsense;OPNsense:uptime.graph_title Uptime
OPNsense;OPNsense:uptime.graph_args --base 1000 -l 0
OPNsense;OPNsense:uptime.graph_vlabel uptime in days
OPNsense;OPNsense:uptime.graph_scale no
OPNsense;OPNsense:uptime.graph_category system
OPNsense;OPNsense:uptime.graph_order uptime
OPNsense;OPNsense:uptime.uptime.draw AREA
OPNsense;OPNsense:uptime.uptime.label uptime
OPNsense;OPNsense:uptime.uptime.graph_data_size normal
OPNsense;OPNsense:uptime.uptime.update_rate 300
OPNsense;OPNsense:df.graph_title Disk usage in percent
OPNsense;OPNsense:df.graph_args --upper-limit 100 -l 0
OPNsense;OPNsense:df.graph_vlabel %
OPNsense;OPNsense:df.graph_category disk
OPNsense;OPNsense:df.graph_scale no
OPNsense;OPNsense:df.graph_info This graph shows disk usage on the machine.
OPNsense;OPNsense:df.graph_order _dev_gpt_efiboot0 zroot zroot_ROOT_default zroot_tmp zroot_usr_home zroot_usr_ports zroot_usr_src zroot_var_audit zroot_var_crash zroot_var_log zroot_var_mail zroot_var_tmp
OPNsense;OPNsense:df.zroot_ROOT_default.graph_data_size normal
OPNsense;OPNsense:df.zroot_ROOT_default.warning 92
OPNsense;OPNsense:df.zroot_ROOT_default.critical 98
OPNsense;OPNsense:df.zroot_ROOT_default.update_rate 300
OPNsense;OPNsense:df.zroot_ROOT_default.label /
OPNsense;OPNsense:df.zroot_tmp.update_rate 300
OPNsense;OPNsense:df.zroot_tmp.label /tmp
OPNsense;OPNsense:df.zroot_tmp.graph_data_size normal
OPNsense;OPNsense:df.zroot_tmp.critical 98
OPNsense;OPNsense:df.zroot_tmp.warning 92
OPNsense;OPNsense:df.zroot.critical 98
OPNsense;OPNsense:df.zroot.warning 92
OPNsense;OPNsense:df.zroot.graph_data_size normal
OPNsense;OPNsense:df.zroot.label /zroot
OPNsense;OPNsense:df.zroot.update_rate 300
OPNsense;OPNsense:df._dev_gpt_efiboot0.graph_data_size normal
OPNsense;OPNsense:df._dev_gpt_efiboot0.warning 92
OPNsense;OPNsense:df._dev_gpt_efiboot0.critical 98
OPNsense;OPNsense:df._dev_gpt_efiboot0.update_rate 300
OPNsense;OPNsense:df._dev_gpt_efiboot0.label /boot/efi
OPNsense;OPNsense:df.zroot_var_audit.graph_data_size normal
OPNsense;OPNsense:df.zroot_var_audit.warning 92
OPNsense;OPNsense:df.zroot_var_audit.critical 98
OPNsense;OPNsense:df.zroot_var_audit.update_rate 300
OPNsense;OPNsense:df.zroot_var_audit.label /var/audit
OPNsense;OPNsense:df.zroot_usr_home.update_rate 300
OPNsense;OPNsense:df.zroot_usr_home.label /usr/home
OPNsense;OPNsense:df.zroot_usr_home.graph_data_size normal
OPNsense;OPNsense:df.zroot_usr_home.warning 92
OPNsense;OPNsense:df.zroot_usr_home.critical 98
OPNsense;OPNsense:df.zroot_usr_ports.update_rate 300
OPNsense;OPNsense:df.zroot_usr_ports.label /usr/ports
OPNsense;OPNsense:df.zroot_usr_ports.graph_data_size normal
OPNsense;OPNsense:df.zroot_usr_ports.critical 98
OPNsense;OPNsense:df.zroot_usr_ports.warning 92
OPNsense;OPNsense:df.zroot_var_log.label /var/log
OPNsense;OPNsense:df.zroot_var_log.update_rate 300
OPNsense;OPNsense:df.zroot_var_log.critical 98
OPNsense;OPNsense:df.zroot_var_log.warning 92
OPNsense;OPNsense:df.zroot_var_log.graph_data_size normal
OPNsense;OPNsense:df.zroot_var_mail.update_rate 300
OPNsense;OPNsense:df.zroot_var_mail.label /var/mail
OPNsense;OPNsense:df.zroot_var_mail.graph_data_size normal
OPNsense;OPNsense:df.zroot_var_mail.warning 92
OPNsense;OPNsense:df.zroot_var_mail.critical 98
OPNsense;OPNsense:df.zroot_var_tmp.label /var/tmp
OPNsense;OPNsense:df.zroot_var_tmp.update_rate 300
OPNsense;OPNsense:df.zroot_var_tmp.warning 92
OPNsense;OPNsense:df.zroot_var_tmp.critical 98
OPNsense;OPNsense:df.zroot_var_tmp.graph_data_size normal
OPNsense;OPNsense:df.zroot_var_crash.label /var/crash
OPNsense;OPNsense:df.zroot_var_crash.update_rate 300
OPNsense;OPNsense:df.zroot_var_crash.critical 98
OPNsense;OPNsense:df.zroot_var_crash.warning 92
OPNsense;OPNsense:df.zroot_var_crash.graph_data_size normal
OPNsense;OPNsense:df.zroot_usr_src.warning 92
OPNsense;OPNsense:df.zroot_usr_src.critical 98
OPNsense;OPNsense:df.zroot_usr_src.graph_data_size normal
OPNsense;OPNsense:df.zroot_usr_src.label /usr/src
OPNsense;OPNsense:df.zroot_usr_src.update_rate 300
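
All df fields share the same thresholds: warn above 92% used, go critical above 98%. A sketch of the comparison the monitoring side performs, assuming a simple three-state outcome:

    def df_state(percent_used, warning=92, critical=98):
        """Classify a disk-usage sample against the thresholds used above."""
        if percent_used > critical:
            return "critical"
        if percent_used > warning:
            return "warning"
        return "ok"

    print(df_state(95.3))  # 'warning'
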
OPNsense;OPNsense:if_packets_em0.graph_order rpackets opackets
OPNsense;OPNsense:if_packets_em0.graph_title em0 pps
OPNsense;OPNsense:if_packets_em0.graph_args --base 1000
OPNsense;OPNsense:if_packets_em0.graph_vlabel packets per ${graph_period} in (-) / out (+)
OPNsense;OPNsense:if_packets_em0.graph_category network
OPNsense;OPNsense:if_packets_em0.graph_info This graph shows the packets counter of the em0 network interface. Please note that the traffic is shown in packets per second.
OPNsense;OPNsense:if_packets_em0.rpackets.label received
OPNsense;OPNsense:if_packets_em0.rpackets.graph no
OPNsense;OPNsense:if_packets_em0.rpackets.type COUNTER
OPNsense;OPNsense:if_packets_em0.rpackets.min 0
OPNsense;OPNsense:if_packets_em0.rpackets.update_rate 300
OPNsense;OPNsense:if_packets_em0.rpackets.graph_data_size normal
OPNsense;OPNsense:if_packets_em0.opackets.info Packets sent (+) and received (-) on the em0 network interface.
OPNsense;OPNsense:if_packets_em0.opackets.graph_data_size normal
OPNsense;OPNsense:if_packets_em0.opackets.negative rpackets
OPNsense;OPNsense:if_packets_em0.opackets.label pps
OPNsense;OPNsense:if_packets_em0.opackets.update_rate 300
OPNsense;OPNsense:if_packets_em0.opackets.min 0
OPNsense;OPNsense:if_packets_em0.opackets.type COUNTER
OPNsense;OPNsense:users.graph_title Logged in users
OPNsense;OPNsense:users.graph_args --base 1000 -l 0
OPNsense;OPNsense:users.graph_vlabel Users
OPNsense;OPNsense:users.graph_scale no
OPNsense;OPNsense:users.graph_category system
OPNsense;OPNsense:users.graph_printf %3.0lf
OPNsense;OPNsense:users.graph_order tty pty pts X other
OPNsense;OPNsense:users.pts.graph_data_size normal
OPNsense;OPNsense:users.pts.update_rate 300
OPNsense;OPNsense:users.pts.colour 00FFFF
OPNsense;OPNsense:users.pts.label pts
OPNsense;OPNsense:users.pts.draw AREASTACK
OPNsense;OPNsense:users.X.graph_data_size normal
OPNsense;OPNsense:users.X.info Users logged in on an X display
OPNsense;OPNsense:users.X.update_rate 300
OPNsense;OPNsense:users.X.colour 000000
OPNsense;OPNsense:users.X.label X displays
OPNsense;OPNsense:users.X.draw AREASTACK
OPNsense;OPNsense:users.tty.graph_data_size normal
OPNsense;OPNsense:users.tty.draw AREASTACK
OPNsense;OPNsense:users.tty.label tty
OPNsense;OPNsense:users.tty.colour 00FF00
OPNsense;OPNsense:users.tty.update_rate 300
OPNsense;OPNsense:users.pty.colour 0000FF
OPNsense;OPNsense:users.pty.update_rate 300
OPNsense;OPNsense:users.pty.draw AREASTACK
OPNsense;OPNsense:users.pty.label pty
OPNsense;OPNsense:users.pty.graph_data_size normal
OPNsense;OPNsense:users.other.graph_data_size normal
OPNsense;OPNsense:users.other.info Users logged in by indeterminate method
OPNsense;OPNsense:users.other.colour FF0000
OPNsense;OPNsense:users.other.update_rate 300
OPNsense;OPNsense:users.other.label Other users
OPNsense;OPNsense:ntp_77_48_28_248.graph_title NTP statistics for peer 77.48.28.248
OPNsense;OPNsense:ntp_77_48_28_248.graph_args --base 1000 --vertical-label seconds --lower-limit 0
OPNsense;OPNsense:ntp_77_48_28_248.graph_category time
OPNsense;OPNsense:ntp_77_48_28_248.graph_order delay offset jitter
OPNsense;OPNsense:ntp_77_48_28_248.offset.graph_data_size normal
OPNsense;OPNsense:ntp_77_48_28_248.offset.update_rate 300
OPNsense;OPNsense:ntp_77_48_28_248.offset.cdef offset,1000,/
OPNsense;OPNsense:ntp_77_48_28_248.offset.label Offset
OPNsense;OPNsense:ntp_77_48_28_248.jitter.graph_data_size normal
OPNsense;OPNsense:ntp_77_48_28_248.jitter.update_rate 300
OPNsense;OPNsense:ntp_77_48_28_248.jitter.label Jitter
OPNsense;OPNsense:ntp_77_48_28_248.jitter.cdef jitter,1000,/
OPNsense;OPNsense:ntp_77_48_28_248.delay.update_rate 300
OPNsense;OPNsense:ntp_77_48_28_248.delay.graph_data_size normal
OPNsense;OPNsense:ntp_77_48_28_248.delay.label Delay
OPNsense;OPNsense:ntp_77_48_28_248.delay.cdef delay,1000,/
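
The NTP peer plugins report delay, offset and jitter in milliseconds; the cdef offset,1000,/ converts each sample to the seconds shown on the vertical axis. The conversion is plain arithmetic:

    def ms_to_s(ms):
        # cdef 'offset,1000,/' -- the plugin reports milliseconds
        return ms / 1000

    print(ms_to_s(1.284))  # 0.001284 s
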
OPNsense;OPNsense:ntp_kernel_pll_freq.graph_title NTP kernel PLL frequency (ppm + 0)
OPNsense;OPNsense:ntp_kernel_pll_freq.graph_args --alt-autoscale
OPNsense;OPNsense:ntp_kernel_pll_freq.graph_vlabel PLL frequency (ppm + 0)
OPNsense;OPNsense:ntp_kernel_pll_freq.graph_category time
OPNsense;OPNsense:ntp_kernel_pll_freq.graph_info The frequency for the kernel phase-locked loop used by NTP.
OPNsense;OPNsense:ntp_kernel_pll_freq.graph_order ntp_pll_freq
OPNsense;OPNsense:ntp_kernel_pll_freq.ntp_pll_freq.graph_data_size normal
OPNsense;OPNsense:ntp_kernel_pll_freq.ntp_pll_freq.update_rate 300
OPNsense;OPNsense:ntp_kernel_pll_freq.ntp_pll_freq.label pll-freq
OPNsense;OPNsense:ntp_kernel_pll_freq.ntp_pll_freq.info Phase-locked loop frequency in parts per million
OPNsense;OPNsense:systat.graph_title System Statistics
OPNsense;OPNsense:systat.graph_vlabel per second
OPNsense;OPNsense:systat.graph_scale no
OPNsense;OPNsense:systat.graph_category system
OPNsense;OPNsense:systat.graph_args --lower-limit 0
OPNsense;OPNsense:systat.graph_info FreeBSD systat plugin
OPNsense;OPNsense:systat.graph_order softint hardint syscall cs forks
OPNsense;OPNsense:systat.hardint.graph_data_size normal
OPNsense;OPNsense:systat.hardint.label Hardware interrupts
OPNsense;OPNsense:systat.hardint.update_rate 300
OPNsense;OPNsense:systat.hardint.type DERIVE
OPNsense;OPNsense:systat.hardint.min 0
OPNsense;OPNsense:systat.syscall.type DERIVE
OPNsense;OPNsense:systat.syscall.min 0
OPNsense;OPNsense:systat.syscall.update_rate 300
OPNsense;OPNsense:systat.syscall.label System calls
OPNsense;OPNsense:systat.syscall.graph_data_size normal
OPNsense;OPNsense:systat.cs.graph_data_size normal
OPNsense;OPNsense:systat.cs.min 0
OPNsense;OPNsense:systat.cs.type DERIVE
OPNsense;OPNsense:systat.cs.update_rate 300
OPNsense;OPNsense:systat.cs.label Context switches
OPNsense;OPNsense:systat.forks.graph_data_size normal
OPNsense;OPNsense:systat.forks.update_rate 300
OPNsense;OPNsense:systat.forks.min 0
OPNsense;OPNsense:systat.forks.type DERIVE
OPNsense;OPNsense:systat.forks.label Fork rate
OPNsense;OPNsense:systat.softint.label Software interrupts
OPNsense;OPNsense:systat.softint.type DERIVE
OPNsense;OPNsense:systat.softint.min 0
OPNsense;OPNsense:systat.softint.update_rate 300
OPNsense;OPNsense:systat.softint.graph_data_size normal
OPNsense;OPNsense:ntp_kernel_pll_off.graph_title NTP kernel PLL offset (secs)
OPNsense;OPNsense:ntp_kernel_pll_off.graph_vlabel PLL offset (secs)
OPNsense;OPNsense:ntp_kernel_pll_off.graph_category time
OPNsense;OPNsense:ntp_kernel_pll_off.graph_info The kernel offset for the phase-locked loop used by NTP.
OPNsense;OPNsense:ntp_kernel_pll_off.graph_order ntp_pll_off
OPNsense;OPNsense:ntp_kernel_pll_off.ntp_pll_off.graph_data_size normal
OPNsense;OPNsense:ntp_kernel_pll_off.ntp_pll_off.update_rate 300
OPNsense;OPNsense:ntp_kernel_pll_off.ntp_pll_off.label pll-offset
OPNsense;OPNsense:ntp_kernel_pll_off.ntp_pll_off.info Phase-locked loop offset in seconds
OPNsense;OPNsense:iostat.graph_title IOstat by bytes
OPNsense;OPNsense:iostat.graph_args --base 1024 -l 0
OPNsense;OPNsense:iostat.graph_vlabel kB per ${graph_period} read+written
OPNsense;OPNsense:iostat.graph_category disk
OPNsense;OPNsense:iostat.graph_info This graph shows the I/O to and from block devices.
OPNsense;OPNsense:iostat.graph_order ada0_read ada0_write ada1_read ada1_write pass0_read pass0_write pass1_read pass1_write pass2_read pass2_write
OPNsense;OPNsense:iostat.pass2_write.update_rate 300
OPNsense;OPNsense:iostat.pass2_write.min 0
OPNsense;OPNsense:iostat.pass2_write.type DERIVE
OPNsense;OPNsense:iostat.pass2_write.label pass2
OPNsense;OPNsense:iostat.pass2_write.negative pass2_read
OPNsense;OPNsense:iostat.pass2_write.graph_data_size normal
OPNsense;OPNsense:iostat.pass2_write.info I/O on device pass2
OPNsense;OPNsense:iostat.pass1_write.negative pass1_read
OPNsense;OPNsense:iostat.pass1_write.graph_data_size normal
OPNsense;OPNsense:iostat.pass1_write.info I/O on device pass1
OPNsense;OPNsense:iostat.pass1_write.update_rate 300
OPNsense;OPNsense:iostat.pass1_write.min 0
OPNsense;OPNsense:iostat.pass1_write.type DERIVE
OPNsense;OPNsense:iostat.pass1_write.label pass1
OPNsense;OPNsense:iostat.pass2_read.graph_data_size normal
OPNsense;OPNsense:iostat.pass2_read.update_rate 300
OPNsense;OPNsense:iostat.pass2_read.min 0
OPNsense;OPNsense:iostat.pass2_read.type DERIVE
OPNsense;OPNsense:iostat.pass2_read.label pass2
OPNsense;OPNsense:iostat.pass2_read.graph no
OPNsense;OPNsense:iostat.pass1_read.graph_data_size normal
OPNsense;OPNsense:iostat.pass1_read.label pass1
OPNsense;OPNsense:iostat.pass1_read.graph no
OPNsense;OPNsense:iostat.pass1_read.type DERIVE
OPNsense;OPNsense:iostat.pass1_read.min 0
OPNsense;OPNsense:iostat.pass1_read.update_rate 300
OPNsense;OPNsense:iostat.ada1_write.graph_data_size normal
OPNsense;OPNsense:iostat.ada1_write.negative ada1_read
OPNsense;OPNsense:iostat.ada1_write.info I/O on device ada1
OPNsense;OPNsense:iostat.ada1_write.update_rate 300
OPNsense;OPNsense:iostat.ada1_write.type DERIVE
OPNsense;OPNsense:iostat.ada1_write.min 0
OPNsense;OPNsense:iostat.ada1_write.label ada1
OPNsense;OPNsense:iostat.ada0_read.graph_data_size normal
OPNsense;OPNsense:iostat.ada0_read.label ada0
OPNsense;OPNsense:iostat.ada0_read.graph no
OPNsense;OPNsense:iostat.ada0_read.type DERIVE
OPNsense;OPNsense:iostat.ada0_read.min 0
OPNsense;OPNsense:iostat.ada0_read.update_rate 300
OPNsense;OPNsense:iostat.pass0_write.negative pass0_read
OPNsense;OPNsense:iostat.pass0_write.graph_data_size normal
OPNsense;OPNsense:iostat.pass0_write.info I/O on device pass0
OPNsense;OPNsense:iostat.pass0_write.type DERIVE
OPNsense;OPNsense:iostat.pass0_write.min 0
OPNsense;OPNsense:iostat.pass0_write.update_rate 300
OPNsense;OPNsense:iostat.pass0_write.label pass0
OPNsense;OPNsense:iostat.ada0_write.label ada0
OPNsense;OPNsense:iostat.ada0_write.update_rate 300
OPNsense;OPNsense:iostat.ada0_write.type DERIVE
OPNsense;OPNsense:iostat.ada0_write.min 0
OPNsense;OPNsense:iostat.ada0_write.info I/O on device ada0
OPNsense;OPNsense:iostat.ada0_write.negative ada0_read
OPNsense;OPNsense:iostat.ada0_write.graph_data_size normal
OPNsense;OPNsense:iostat.pass0_read.label pass0
OPNsense;OPNsense:iostat.pass0_read.graph no
OPNsense;OPNsense:iostat.pass0_read.min 0
OPNsense;OPNsense:iostat.pass0_read.type DERIVE
OPNsense;OPNsense:iostat.pass0_read.update_rate 300
OPNsense;OPNsense:iostat.pass0_read.graph_data_size normal
OPNsense;OPNsense:iostat.ada1_read.graph_data_size normal
OPNsense;OPNsense:iostat.ada1_read.update_rate 300
OPNsense;OPNsense:iostat.ada1_read.min 0
OPNsense;OPNsense:iostat.ada1_read.type DERIVE
OPNsense;OPNsense:iostat.ada1_read.graph no
OPNsense;OPNsense:iostat.ada1_read.label ada1
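
Each _write field in the iostat stanza names its _read counterpart via 'negative', and the _read fields carry 'graph no', so writes are drawn above the zero line and the same device's reads are mirrored below it rather than plotted twice. A sketch of how a front end might honour that pairing (sample values are made up; the plotting itself is illustrative, not Munin's own):

    # kB/s after DERIVE; 'negative ada0_read' mirrors reads below zero
    samples = {"ada0_read": 512.0, "ada0_write": 2048.0}
    plot_point = (+samples["ada0_write"], -samples["ada0_read"])
    print(plot_point)  # (2048.0, -512.0)
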
OPNsense;OPNsense:if_errcoll_igb0.graph_order ierrors oerrors collisions
OPNsense;OPNsense:if_errcoll_igb0.graph_title igb0 Errors & Collisions
OPNsense;OPNsense:if_errcoll_igb0.graph_args --base 1000
OPNsense;OPNsense:if_errcoll_igb0.graph_vlabel events / ${graph_period}
OPNsense;OPNsense:if_errcoll_igb0.graph_category network
OPNsense;OPNsense:if_errcoll_igb0.graph_info This graph shows the amount of errors and collisions on the igb0 network interface.
OPNsense;OPNsense:if_errcoll_igb0.ierrors.label Input Errors
OPNsense;OPNsense:if_errcoll_igb0.ierrors.update_rate 300
OPNsense;OPNsense:if_errcoll_igb0.ierrors.type COUNTER
OPNsense;OPNsense:if_errcoll_igb0.ierrors.graph_data_size normal
OPNsense;OPNsense:if_errcoll_igb0.collisions.label Collisions
OPNsense;OPNsense:if_errcoll_igb0.collisions.graph_data_size normal
OPNsense;OPNsense:if_errcoll_igb0.collisions.type COUNTER
OPNsense;OPNsense:if_errcoll_igb0.collisions.update_rate 300
OPNsense;OPNsense:if_errcoll_igb0.oerrors.label Output Errors
OPNsense;OPNsense:if_errcoll_igb0.oerrors.update_rate 300
OPNsense;OPNsense:if_errcoll_igb0.oerrors.type COUNTER
OPNsense;OPNsense:if_errcoll_igb0.oerrors.graph_data_size normal
OPNsense;OPNsense:open_files.graph_title File table usage
OPNsense;OPNsense:open_files.graph_args --base 1000 -l 0
OPNsense;OPNsense:open_files.graph_vlabel number of open files
OPNsense;OPNsense:open_files.graph_category system
OPNsense;OPNsense:open_files.graph_info This graph monitors the number of open files.
OPNsense;OPNsense:open_files.graph_order used max
OPNsense;OPNsense:open_files.max.graph_data_size normal
OPNsense;OPNsense:open_files.max.update_rate 300
OPNsense;OPNsense:open_files.max.label max open files
OPNsense;OPNsense:open_files.max.info The maximum supported number of open files.
OPNsense;OPNsense:open_files.used.label open files
OPNsense;OPNsense:open_files.used.update_rate 300
OPNsense;OPNsense:open_files.used.critical 509522
OPNsense;OPNsense:open_files.used.warning 478327
OPNsense;OPNsense:open_files.used.info The number of currently open files.
OPNsense;OPNsense:open_files.used.graph_data_size normal
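
The open_files thresholds are consistent with 92% and 98% of a file-table limit of about 519,920 (plausibly kern.maxfiles on this host; the limit itself is only reported at run time by the 'max' field, so this is an inference from the numbers above):

    import math

    max_files = 519_920                    # inferred limit (assumption)
    print(math.ceil(max_files * 0.92))     # 478327 -> warning
    print(math.ceil(max_files * 0.98))     # 509522 -> critical
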
OPNsense;OPNsense:ntp_offset.graph_title NTP timing statistics for system peer
OPNsense;OPNsense:ntp_offset.graph_args --base 1000 --vertical-label seconds --lower-limit 0
OPNsense;OPNsense:ntp_offset.graph_category time
OPNsense;OPNsense:ntp_offset.graph_info Currently our peer is *78.108.96.197 (the leading * is ntpq's system-peer marker). Please refer to the ntpd and ntpq documentation for further explanation of these numbers.
OPNsense;OPNsense:ntp_offset.graph_order delay offset jitter
OPNsense;OPNsense:ntp_offset.jitter.graph_data_size normal
OPNsense;OPNsense:ntp_offset.jitter.update_rate 300
OPNsense;OPNsense:ntp_offset.jitter.label Jitter
OPNsense;OPNsense:ntp_offset.jitter.cdef jitter,1000,/
OPNsense;OPNsense:ntp_offset.delay.update_rate 300
OPNsense;OPNsense:ntp_offset.delay.graph_data_size normal
OPNsense;OPNsense:ntp_offset.delay.cdef delay,1000,/
OPNsense;OPNsense:ntp_offset.delay.label Delay
OPNsense;OPNsense:ntp_offset.offset.graph_data_size normal
OPNsense;OPNsense:ntp_offset.offset.update_rate 300
OPNsense;OPNsense:ntp_offset.offset.label Offset
OPNsense;OPNsense:ntp_offset.offset.cdef offset,1000,/
OPNsense;OPNsense:memory.graph_args --base 1024 -l 0 --vertical-label Bytes --upper-limit 16583933952
OPNsense;OPNsense:memory.graph_title Memory usage
OPNsense;OPNsense:memory.graph_category system
OPNsense;OPNsense:memory.graph_info This graph shows what the machine uses its memory for.
OPNsense;OPNsense:memory.graph_order active inactive wired buffers cached laundry free swap
OPNsense;OPNsense:memory.free.update_rate 300
OPNsense;OPNsense:memory.free.label free
OPNsense;OPNsense:memory.free.draw STACK
OPNsense;OPNsense:memory.free.graph_data_size normal
OPNsense;OPNsense:memory.free.info pages without data content
OPNsense;OPNsense:memory.active.update_rate 300
OPNsense;OPNsense:memory.active.label active
OPNsense;OPNsense:memory.active.draw AREA
OPNsense;OPNsense:memory.active.graph_data_size normal
OPNsense;OPNsense:memory.active.info pages recently statistically used
OPNsense;OPNsense:memory.wired.info pages that are fixed into memory, usually for kernel purposes, but also sometimes for special use in processes
OPNsense;OPNsense:memory.wired.graph_data_size normal
OPNsense;OPNsense:memory.wired.draw STACK
OPNsense;OPNsense:memory.wired.label wired
OPNsense;OPNsense:memory.wired.update_rate 300
OPNsense;OPNsense:memory.buffers.label buffers
OPNsense;OPNsense:memory.buffers.draw STACK
OPNsense;OPNsense:memory.buffers.update_rate 300
OPNsense;OPNsense:memory.buffers.info pages used for filesystem buffers
OPNsense;OPNsense:memory.buffers.graph_data_size normal
OPNsense;OPNsense:memory.cached.draw STACK
OPNsense;OPNsense:memory.cached.label cache
OPNsense;OPNsense:memory.cached.update_rate 300
OPNsense;OPNsense:memory.cached.info pages that have percolated from inactive to a status where they maintain their data, but can often be immediately reused
OPNsense;OPNsense:memory.cached.graph_data_size normal
OPNsense;OPNsense:memory.laundry.graph_data_size normal
OPNsense;OPNsense:memory.laundry.info dirty inactive pages that must be laundered (written back) before they can be reused
OPNsense;OPNsense:memory.laundry.update_rate 300
OPNsense;OPNsense:memory.laundry.label laundry
OPNsense;OPNsense:memory.laundry.draw STACK
OPNsense;OPNsense:memory.inactive.info pages recently statistically unused
OPNsense;OPNsense:memory.inactive.graph_data_size normal
OPNsense;OPNsense:memory.inactive.draw STACK
OPNsense;OPNsense:memory.inactive.label inactive
OPNsense;OPNsense:memory.inactive.update_rate 300
OPNsense;OPNsense:memory.swap.graph_data_size normal
OPNsense;OPNsense:memory.swap.info Swap space used
OPNsense;OPNsense:memory.swap.update_rate 300
OPNsense;OPNsense:memory.swap.draw STACK
OPNsense;OPNsense:memory.swap.label swap
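
graph_args pins the memory graph's upper limit at 16583933952 bytes, i.e. roughly 15.4 GiB of physical memory (presumably hw.physmem on this host). The conversion is just arithmetic:

    upper_limit = 16_583_933_952
    print(upper_limit / 2**30)  # ~15.44 GiB
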
OPNsense;OPNsense:ntp_states.graph_title NTP states
OPNsense;OPNsense:ntp_states.graph_args --base 1000 --vertical-label state --lower-limit 0
OPNsense;OPNsense:ntp_states.graph_category time
OPNsense;OPNsense:ntp_states.graph_info These are graphs of the states of this system's NTP peers. The states translate as follows: 0=reject, 1=falsetick, 2=excess, 3=backup, 4=outlyer, 5=candidate, 6=system peer, 7=PPS peer. See http://www.eecis.udel.edu/~mills/ntp/html/decode.html for more information on the meaning of these conditions. If show_syspeer_stratum is specified, the sys.peer stratum is also graphed.
OPNsense;OPNsense:ntp_states.graph_order peer_z_cincura_net peer_vip197_czela_net peer_0_0_0_0 peer_vitapavlik_cz peer_ntp_sloane_cz peer_time_cloudflare_com peer_ntp_suas_cz
OPNsense;OPNsense:ntp_states.peer_ntp_suas_cz.update_rate 300
OPNsense;OPNsense:ntp_states.peer_ntp_suas_cz.graph_data_size normal
OPNsense;OPNsense:ntp_states.peer_ntp_suas_cz.label ntp.suas.cz
OPNsense;OPNsense:ntp_states.peer_time_cloudflare_com.graph_data_size normal
OPNsense;OPNsense:ntp_states.peer_time_cloudflare_com.update_rate 300
OPNsense;OPNsense:ntp_states.peer_time_cloudflare_com.label time.cloudflare.com
OPNsense;OPNsense:ntp_states.peer_ntp_sloane_cz.label ntp.sloane.cz
OPNsense;OPNsense:ntp_states.peer_ntp_sloane_cz.graph_data_size normal
OPNsense;OPNsense:ntp_states.peer_ntp_sloane_cz.update_rate 300
OPNsense;OPNsense:ntp_states.peer_z_cincura_net.label z.cincura.net
OPNsense;OPNsense:ntp_states.peer_z_cincura_net.graph_data_size normal
OPNsense;OPNsense:ntp_states.peer_z_cincura_net.update_rate 300
OPNsense;OPNsense:ntp_states.peer_vip197_czela_net.update_rate 300
OPNsense;OPNsense:ntp_states.peer_vip197_czela_net.graph_data_size normal
OPNsense;OPNsense:ntp_states.peer_vip197_czela_net.label vip197.czela.net
OPNsense;OPNsense:ntp_states.peer_vitapavlik_cz.graph_data_size normal
OPNsense;OPNsense:ntp_states.peer_vitapavlik_cz.update_rate 300
OPNsense;OPNsense:ntp_states.peer_vitapavlik_cz.label vitapavlik.cz
OPNsense;OPNsense:ntp_states.peer_0_0_0_0.update_rate 300
OPNsense;OPNsense:ntp_states.peer_0_0_0_0.graph_data_size normal
OPNsense;OPNsense:ntp_states.peer_0_0_0_0.label 0.0.0.0
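
The state codes listed in graph_info map directly onto ntpq's tally codes; a small lookup table under that reading:

    NTP_STATES = {
        0: "reject", 1: "falsetick", 2: "excess", 3: "backup",
        4: "outlyer", 5: "candidate", 6: "system peer", 7: "PPS peer",
    }
    print(NTP_STATES[6])  # 'system peer'
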
OPNsense;OPNsense:load.graph_title Load average
OPNsense;OPNsense:load.graph_args --base 1000 -l 0
OPNsense;OPNsense:load.graph_vlabel load
OPNsense;OPNsense:load.graph_noscale true
OPNsense;OPNsense:load.graph_category system
OPNsense;OPNsense:load.graph_info The load average of the machine describes how many processes are in the run-queue (scheduled to run "immediately").
OPNsense;OPNsense:load.graph_order load
OPNsense;OPNsense:load.load.update_rate 300
OPNsense;OPNsense:load.load.label load
OPNsense;OPNsense:load.load.graph_data_size normal
OPNsense;OPNsense:load.load.warning 10
OPNsense;OPNsense:load.load.critical 120
OPNsense;OPNsense:load.load.info Average load over the last five minutes.
OPNsense;OPNsense:ntp_81_25_28_124.graph_title NTP statistics for peer 81.25.28.124
OPNsense;OPNsense:ntp_81_25_28_124.graph_args --base 1000 --vertical-label seconds --lower-limit 0
OPNsense;OPNsense:ntp_81_25_28_124.graph_category time
OPNsense;OPNsense:ntp_81_25_28_124.graph_order delay offset jitter
OPNsense;OPNsense:ntp_81_25_28_124.jitter.graph_data_size normal
OPNsense;OPNsense:ntp_81_25_28_124.jitter.update_rate 300
OPNsense;OPNsense:ntp_81_25_28_124.jitter.cdef jitter,1000,/
OPNsense;OPNsense:ntp_81_25_28_124.jitter.label Jitter
OPNsense;OPNsense:ntp_81_25_28_124.delay.update_rate 300
OPNsense;OPNsense:ntp_81_25_28_124.delay.graph_data_size normal
OPNsense;OPNsense:ntp_81_25_28_124.delay.label Delay
OPNsense;OPNsense:ntp_81_25_28_124.delay.cdef delay,1000,/
OPNsense;OPNsense:ntp_81_25_28_124.offset.cdef offset,1000,/
OPNsense;OPNsense:ntp_81_25_28_124.offset.label Offset
OPNsense;OPNsense:ntp_81_25_28_124.offset.graph_data_size normal
OPNsense;OPNsense:ntp_81_25_28_124.offset.update_rate 300
OPNsense;OPNsense:if_igb0.graph_order rbytes obytes
OPNsense;OPNsense:if_igb0.graph_title igb0 traffic
OPNsense;OPNsense:if_igb0.graph_args --base 1000
OPNsense;OPNsense:if_igb0.graph_vlabel bits per ${graph_period} in (-) / out (+)
OPNsense;OPNsense:if_igb0.graph_category network
OPNsense;OPNsense:if_igb0.graph_info This graph shows the traffic of the igb0 network interface. Please note that the traffic is shown in bits per second, not bytes. IMPORTANT: On older BSD systems the data source for this plugin uses 32-bit counters, which makes it unreliable and unsuitable for most 100 Mbit/s (or faster) interfaces, where bursts are expected to exceed 50 Mbit/s. On such systems this plugin is therefore unsuitable for production use.
OPNsense;OPNsense:if_igb0.rbytes.cdef rbytes,8,*
OPNsense;OPNsense:if_igb0.rbytes.label received
OPNsense;OPNsense:if_igb0.rbytes.graph no
OPNsense;OPNsense:if_igb0.rbytes.type DERIVE
OPNsense;OPNsense:if_igb0.rbytes.min 0
OPNsense;OPNsense:if_igb0.rbytes.update_rate 300
OPNsense;OPNsense:if_igb0.rbytes.graph_data_size normal
OPNsense;OPNsense:if_igb0.obytes.negative rbytes
OPNsense;OPNsense:if_igb0.obytes.graph_data_size normal
OPNsense;OPNsense:if_igb0.obytes.info Traffic sent (+) and received (-) on the igb0 network interface.
OPNsense;OPNsense:if_igb0.obytes.update_rate 300
OPNsense;OPNsense:if_igb0.obytes.type DERIVE
OPNsense;OPNsense:if_igb0.obytes.min 0
OPNsense;OPNsense:if_igb0.obytes.cdef obytes,8,*
OPNsense;OPNsense:if_igb0.obytes.label bps
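
The 32-bit-counter warning in graph_info is easy to quantify: a 32-bit byte counter wraps after 2**32 bytes, which a sustained 100 Mbit/s stream reaches in about 344 seconds, barely longer than the 300-second update_rate, so a single busy interval can swallow a whole wrap. The cdef obytes,8,* is the companion bytes-to-bits conversion. A back-of-the-envelope check:

    wrap_bytes = 2**32
    rate_bytes_per_s = 100e6 / 8            # 100 Mbit/s sustained
    print(wrap_bytes / rate_bytes_per_s)    # ~343.6 s until the counter wraps
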
OPNsense;OPNsense:ntp_kernel_err.graph_title NTP kernel PLL estimated error (secs)
OPNsense;OPNsense:ntp_kernel_err.graph_vlabel est. err (secs)
OPNsense;OPNsense:ntp_kernel_err.graph_category time
OPNsense;OPNsense:ntp_kernel_err.graph_info The kernel's estimated error for the phase-locked loop used by NTP.
OPNsense;OPNsense:ntp_kernel_err.graph_order ntp_err
OPNsense;OPNsense:ntp_kernel_err.ntp_err.update_rate 300
OPNsense;OPNsense:ntp_kernel_err.ntp_err.graph_data_size normal
OPNsense;OPNsense:ntp_kernel_err.ntp_err.label est-error
OPNsense;OPNsense:ntp_kernel_err.ntp_err.info Estimated error for the kernel PLL
OPNsense;OPNsense:if_em0.graph_order rbytes obytes
OPNsense;OPNsense:if_em0.graph_title em0 traffic
OPNsense;OPNsense:if_em0.graph_args --base 1000
OPNsense;OPNsense:if_em0.graph_vlabel bits per ${graph_period} in (-) / out (+)
OPNsense;OPNsense:if_em0.graph_category network
OPNsense;OPNsense:if_em0.graph_info This graph shows the traffic of the em0 network interface. Please note that the traffic is shown in bits per second, not bytes. IMPORTANT: On older BSD systems the data source for this plugin uses 32-bit counters, which makes it unreliable and unsuitable for most 100 Mbit/s (or faster) interfaces, where bursts are expected to exceed 50 Mbit/s. On such systems this plugin is therefore unsuitable for production use.
OPNsense;OPNsense:if_em0.rbytes.graph_data_size normal
OPNsense;OPNsense:if_em0.rbytes.update_rate 300
OPNsense;OPNsense:if_em0.rbytes.type DERIVE
OPNsense;OPNsense:if_em0.rbytes.min 0
OPNsense;OPNsense:if_em0.rbytes.label received
OPNsense;OPNsense:if_em0.rbytes.graph no
OPNsense;OPNsense:if_em0.rbytes.cdef rbytes,8,*
OPNsense;OPNsense:if_em0.obytes.min 0
OPNsense;OPNsense:if_em0.obytes.type DERIVE
OPNsense;OPNsense:if_em0.obytes.update_rate 300
OPNsense;OPNsense:if_em0.obytes.label bps
OPNsense;OPNsense:if_em0.obytes.cdef obytes,8,*
OPNsense;OPNsense:if_em0.obytes.graph_data_size normal
OPNsense;OPNsense:if_em0.obytes.negative rbytes
OPNsense;OPNsense:if_em0.obytes.info Traffic sent (+) and received (-) on the em0 network interface.
OPNsense;OPNsense:ntp_78_108_96_197.graph_title NTP statistics for peer 78.108.96.197
OPNsense;OPNsense:ntp_78_108_96_197.graph_args --base 1000 --vertical-label seconds --lower-limit 0
OPNsense;OPNsense:ntp_78_108_96_197.graph_category time
OPNsense;OPNsense:ntp_78_108_96_197.graph_order delay offset jitter
OPNsense;OPNsense:ntp_78_108_96_197.delay.cdef delay,1000,/
OPNsense;OPNsense:ntp_78_108_96_197.delay.label Delay
OPNsense;OPNsense:ntp_78_108_96_197.delay.update_rate 300
OPNsense;OPNsense:ntp_78_108_96_197.delay.graph_data_size normal
OPNsense;OPNsense:ntp_78_108_96_197.jitter.label Jitter
OPNsense;OPNsense:ntp_78_108_96_197.jitter.cdef jitter,1000,/
OPNsense;OPNsense:ntp_78_108_96_197.jitter.graph_data_size normal
OPNsense;OPNsense:ntp_78_108_96_197.jitter.update_rate 300
OPNsense;OPNsense:ntp_78_108_96_197.offset.update_rate 300
OPNsense;OPNsense:ntp_78_108_96_197.offset.graph_data_size normal
OPNsense;OPNsense:ntp_78_108_96_197.offset.label Offset
OPNsense;OPNsense:ntp_78_108_96_197.offset.cdef offset,1000,/
OPNsense;OPNsense:if_packets_igb0.graph_order rpackets opackets
OPNsense;OPNsense:if_packets_igb0.graph_title igb0 pps
OPNsense;OPNsense:if_packets_igb0.graph_args --base 1000
OPNsense;OPNsense:if_packets_igb0.graph_vlabel packets per ${graph_period} in (-) / out (+)
OPNsense;OPNsense:if_packets_igb0.graph_category network
OPNsense;OPNsense:if_packets_igb0.graph_info This graph shows the packets counter of the igb0 network interface. Please note that the traffic is shown in packets per second.
OPNsense;OPNsense:if_packets_igb0.rpackets.update_rate 300
OPNsense;OPNsense:if_packets_igb0.rpackets.type COUNTER
OPNsense;OPNsense:if_packets_igb0.rpackets.min 0
OPNsense;OPNsense:if_packets_igb0.rpackets.label received
OPNsense;OPNsense:if_packets_igb0.rpackets.graph no
OPNsense;OPNsense:if_packets_igb0.rpackets.graph_data_size normal
OPNsense;OPNsense:if_packets_igb0.opackets.label pps
OPNsense;OPNsense:if_packets_igb0.opackets.min 0
OPNsense;OPNsense:if_packets_igb0.opackets.type COUNTER
OPNsense;OPNsense:if_packets_igb0.opackets.update_rate 300
OPNsense;OPNsense:if_packets_igb0.opackets.info Packets sent (+) and received (-) on the igb0 network interface.
OPNsense;OPNsense:if_packets_igb0.opackets.graph_data_size normal
OPNsense;OPNsense:if_packets_igb0.opackets.negative rpackets
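
Unlike the DERIVE fields elsewhere in this file, the packet counters are typed COUNTER, which assumes a monotonically increasing counter and compensates for wrap-around instead of discarding the sample. A sketch of that wrap handling, assuming a 32-bit counter:

    def counter_rate(prev, curr, interval=300, width=32):
        """COUNTER semantics: treat a decrease as a wrap of a 2**width counter."""
        delta = curr - prev
        if delta < 0:                  # counter wrapped between samples
            delta += 2 ** width
        return delta / interval

    print(counter_rate(4_294_967_000, 704))  # ~3.33 pps across a wrap
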
