Directory    : /proc/985914/root/data/old/var/lib/munin/
Current File : /proc/985914/root/data/old/var/lib/munin/datafile
version 2.0.69
localhost;localhost:open_inodes.graph_title Inode table usage
localhost;localhost:open_inodes.graph_args --base 1000 -l 0
localhost;localhost:open_inodes.graph_vlabel number of open inodes
localhost;localhost:open_inodes.graph_category system
localhost;localhost:open_inodes.graph_info This graph monitors the Linux open inode table.
localhost;localhost:open_inodes.graph_order used max
localhost;localhost:open_inodes.used.info The number of currently open inodes.
localhost;localhost:open_inodes.used.update_rate 300
localhost;localhost:open_inodes.used.graph_data_size normal
localhost;localhost:open_inodes.used.label open inodes
localhost;localhost:open_inodes.max.info The size of the system inode table. This is dynamically adjusted by the kernel.
localhost;localhost:open_inodes.max.update_rate 300
localhost;localhost:open_inodes.max.graph_data_size normal
localhost;localhost:open_inodes.max.label inode table size
localhost;localhost:irqstats.graph_title Individual interrupts
localhost;localhost:irqstats.graph_args --base 1000 --logarithmic
localhost;localhost:irqstats.graph_vlabel interrupts / ${graph_period}
localhost;localhost:irqstats.graph_category system
localhost;localhost:irqstats.graph_info Shows the number of different IRQs received by the kernel. High disk or network traffic can cause a high number of interrupts (with good hardware and drivers this will be less so). Sudden high interrupt activity with no associated higher system activity is not normal.
localhost;localhost:irqstats.graph_order i0 i7 i8 i9 i14 i16 i120 i121 i122 i123 i124 i125 i329 iNMI iLOC iSPU iPMI iIWI iRTR iRES iCAL iTLB iTRM iTHR iDFR iMCE iMCP iERR iMIS iPIN iNPI iPIW i0 i7 i8 i9 i14 i16 i120 i121 i122 i123 i124 i125 i329 iNMI iLOC iSPU iPMI iIWI iRTR iRES iCAL iTLB iTRM iTHR iDFR iMCE iMCP iERR iMIS iPIN iNPI iPIW
localhost;localhost:irqstats.iRES.info Interrupt RES, for device(s): Rescheduling interrupts
localhost;localhost:irqstats.iRES.update_rate 300
localhost;localhost:irqstats.iRES.min 0
localhost;localhost:irqstats.iRES.graph_data_size normal
localhost;localhost:irqstats.iRES.label Rescheduling interrupts
localhost;localhost:irqstats.iRES.type DERIVE
localhost;localhost:irqstats.i122.info Interrupt 122, for device(s): xhci_hcd
localhost;localhost:irqstats.i122.update_rate 300
localhost;localhost:irqstats.i122.min 0
localhost;localhost:irqstats.i122.graph_data_size normal
localhost;localhost:irqstats.i122.label xhci_hcd
localhost;localhost:irqstats.i122.type DERIVE
localhost;localhost:irqstats.iIWI.info Interrupt IWI, for device(s): IRQ work interrupts
localhost;localhost:irqstats.iIWI.update_rate 300
localhost;localhost:irqstats.iIWI.min 0
localhost;localhost:irqstats.iIWI.graph_data_size normal
localhost;localhost:irqstats.iIWI.label IRQ work interrupts
localhost;localhost:irqstats.iIWI.type DERIVE
localhost;localhost:irqstats.i16.info Interrupt 16, for device(s): i801_smbus
localhost;localhost:irqstats.i16.update_rate 300
localhost;localhost:irqstats.i16.min 0
localhost;localhost:irqstats.i16.graph_data_size normal
localhost;localhost:irqstats.i16.label i801_smbus
localhost;localhost:irqstats.i16.type DERIVE
localhost;localhost:irqstats.iCAL.info Interrupt CAL, for device(s): Function call interrupts
localhost;localhost:irqstats.iCAL.update_rate 300
localhost;localhost:irqstats.iCAL.min 0
localhost;localhost:irqstats.iCAL.graph_data_size normal
localhost;localhost:irqstats.iCAL.label Function call interrupts
localhost;localhost:irqstats.iCAL.type DERIVE
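
Every entry in this file follows the same key syntax: group;host:plugin.rest attribute-value, one entry per line. A minimal parsing sketch for that layout (the nested-dictionary output shape is my own choice, not anything Munin prescribes):

    from collections import defaultdict

    def parse_datafile(text):
        """Parse Munin datafile entries of the form
        'group;host:plugin.rest value' into {(group, host): {plugin: {rest: value}}}."""
        tree = defaultdict(lambda: defaultdict(dict))
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("version "):
                continue  # skip blanks and the leading version stamp
            key, _, value = line.partition(" ")
            grouphost, _, rest = key.partition(":")
            group, _, host = grouphost.partition(";")
            plugin, _, attr = rest.partition(".")
            tree[(group, host)][plugin][attr] = value
        return tree

    sample = "localhost;localhost:open_inodes.graph_title Inode table usage"
    print(parse_datafile(sample)[("localhost", "localhost")]["open_inodes"]["graph_title"])
    # -> Inode table usage
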
localhost;localhost:irqstats.i123.info Interrupt 123, for device(s): eno1
localhost;localhost:irqstats.i123.update_rate 300
localhost;localhost:irqstats.i123.min 0
localhost;localhost:irqstats.i123.graph_data_size normal
localhost;localhost:irqstats.i123.label eno1
localhost;localhost:irqstats.i123.type DERIVE
localhost;localhost:irqstats.i329.info Interrupt 329, for device(s): mei_me
localhost;localhost:irqstats.i329.update_rate 300
localhost;localhost:irqstats.i329.min 0
localhost;localhost:irqstats.i329.graph_data_size normal
localhost;localhost:irqstats.i329.label mei_me
localhost;localhost:irqstats.i329.type DERIVE
localhost;localhost:irqstats.iMIS.update_rate 300
localhost;localhost:irqstats.iMIS.min 0
localhost;localhost:irqstats.iMIS.graph_data_size normal
localhost;localhost:irqstats.iMIS.label MIS
localhost;localhost:irqstats.iMIS.type DERIVE
localhost;localhost:irqstats.i0.info Interrupt 0, for device(s): timer
localhost;localhost:irqstats.i0.update_rate 300
localhost;localhost:irqstats.i0.min 0
localhost;localhost:irqstats.i0.graph_data_size normal
localhost;localhost:irqstats.i0.label timer
localhost;localhost:irqstats.i0.type DERIVE
localhost;localhost:irqstats.iPIW.info Interrupt PIW, for device(s): Posted-interrupt wakeup event
localhost;localhost:irqstats.iPIW.update_rate 300
localhost;localhost:irqstats.iPIW.min 0
localhost;localhost:irqstats.iPIW.graph_data_size normal
localhost;localhost:irqstats.iPIW.label Posted-interrupt wakeup event
localhost;localhost:irqstats.iPIW.type DERIVE
localhost;localhost:irqstats.i125.info Interrupt 125, for device(s): i915
localhost;localhost:irqstats.i125.update_rate 300
localhost;localhost:irqstats.i125.min 0
localhost;localhost:irqstats.i125.graph_data_size normal
localhost;localhost:irqstats.i125.label i915
localhost;localhost:irqstats.i125.type DERIVE
localhost;localhost:irqstats.iDFR.info Interrupt DFR, for device(s): Deferred Error APIC interrupts
localhost;localhost:irqstats.iDFR.update_rate 300
localhost;localhost:irqstats.iDFR.min 0
localhost;localhost:irqstats.iDFR.graph_data_size normal
localhost;localhost:irqstats.iDFR.label Deferred Error APIC interrupts
localhost;localhost:irqstats.iDFR.type DERIVE
localhost;localhost:irqstats.iSPU.info Interrupt SPU, for device(s): Spurious interrupts
localhost;localhost:irqstats.iSPU.update_rate 300
localhost;localhost:irqstats.iSPU.min 0
localhost;localhost:irqstats.iSPU.graph_data_size normal
localhost;localhost:irqstats.iSPU.label Spurious interrupts
localhost;localhost:irqstats.iSPU.type DERIVE
localhost;localhost:irqstats.i7.update_rate 300
localhost;localhost:irqstats.i7.min 0
localhost;localhost:irqstats.i7.graph_data_size normal
localhost;localhost:irqstats.i7.label 7
localhost;localhost:irqstats.i7.type DERIVE
localhost;localhost:irqstats.iNMI.info Interrupt NMI, for device(s): Non-maskable interrupts
localhost;localhost:irqstats.iNMI.update_rate 300
localhost;localhost:irqstats.iNMI.min 0
localhost;localhost:irqstats.iNMI.graph_data_size normal
localhost;localhost:irqstats.iNMI.label Non-maskable interrupts
localhost;localhost:irqstats.iNMI.type DERIVE
localhost;localhost:irqstats.iLOC.info Interrupt LOC, for device(s): Local timer interrupts
localhost;localhost:irqstats.iLOC.update_rate 300
localhost;localhost:irqstats.iLOC.min 0
localhost;localhost:irqstats.iLOC.graph_data_size normal
localhost;localhost:irqstats.iLOC.label Local timer interrupts
localhost;localhost:irqstats.iLOC.type DERIVE
localhost;localhost:irqstats.i121.info Interrupt 121, for device(s): dmar1
localhost;localhost:irqstats.i121.update_rate 300
localhost;localhost:irqstats.i121.min 0
localhost;localhost:irqstats.i121.graph_data_size normal
localhost;localhost:irqstats.i121.label dmar1
localhost;localhost:irqstats.i121.type DERIVE
localhost;localhost:irqstats.iERR.update_rate 300
localhost;localhost:irqstats.iERR.min 0
localhost;localhost:irqstats.iERR.graph_data_size normal
localhost;localhost:irqstats.iERR.label ERR
localhost;localhost:irqstats.iERR.type DERIVE
localhost;localhost:irqstats.iPIN.info Interrupt PIN, for device(s): Posted-interrupt notification event
localhost;localhost:irqstats.iPIN.update_rate 300
localhost;localhost:irqstats.iPIN.min 0
localhost;localhost:irqstats.iPIN.graph_data_size normal
localhost;localhost:irqstats.iPIN.label Posted-interrupt notification event
localhost;localhost:irqstats.iPIN.type DERIVE
localhost;localhost:irqstats.i124.info Interrupt 124, for device(s): 0000:00:17.0
localhost;localhost:irqstats.i124.update_rate 300
localhost;localhost:irqstats.i124.min 0
localhost;localhost:irqstats.i124.graph_data_size normal
localhost;localhost:irqstats.i124.label 0000:00:17.0
localhost;localhost:irqstats.i124.type DERIVE
localhost;localhost:irqstats.iTRM.info Interrupt TRM, for device(s): Thermal event interrupts
localhost;localhost:irqstats.iTRM.update_rate 300
localhost;localhost:irqstats.iTRM.min 0
localhost;localhost:irqstats.iTRM.graph_data_size normal
localhost;localhost:irqstats.iTRM.label Thermal event interrupts
localhost;localhost:irqstats.iTRM.type DERIVE
localhost;localhost:irqstats.iTHR.info Interrupt THR, for device(s): Threshold APIC interrupts
localhost;localhost:irqstats.iTHR.update_rate 300
localhost;localhost:irqstats.iTHR.min 0
localhost;localhost:irqstats.iTHR.graph_data_size normal
localhost;localhost:irqstats.iTHR.label Threshold APIC interrupts
localhost;localhost:irqstats.iTHR.type DERIVE
localhost;localhost:irqstats.iNPI.info Interrupt NPI, for device(s): Nested posted-interrupt event
localhost;localhost:irqstats.iNPI.update_rate 300
localhost;localhost:irqstats.iNPI.min 0
localhost;localhost:irqstats.iNPI.graph_data_size normal
localhost;localhost:irqstats.iNPI.label Nested posted-interrupt event
localhost;localhost:irqstats.iNPI.type DERIVE
localhost;localhost:irqstats.iTLB.info Interrupt TLB, for device(s): TLB shootdowns
localhost;localhost:irqstats.iTLB.update_rate 300
localhost;localhost:irqstats.iTLB.min 0
localhost;localhost:irqstats.iTLB.graph_data_size normal
localhost;localhost:irqstats.iTLB.label TLB shootdowns
localhost;localhost:irqstats.iTLB.type DERIVE
localhost;localhost:irqstats.iPMI.info Interrupt PMI, for device(s): Performance monitoring interrupts
localhost;localhost:irqstats.iPMI.update_rate 300
localhost;localhost:irqstats.iPMI.min 0
localhost;localhost:irqstats.iPMI.graph_data_size normal
localhost;localhost:irqstats.iPMI.label Performance monitoring interrupts
localhost;localhost:irqstats.iPMI.type DERIVE
localhost;localhost:irqstats.iMCP.info Interrupt MCP, for device(s): Machine check polls
localhost;localhost:irqstats.iMCP.update_rate 300
localhost;localhost:irqstats.iMCP.min 0
localhost;localhost:irqstats.iMCP.graph_data_size normal
localhost;localhost:irqstats.iMCP.label Machine check polls
localhost;localhost:irqstats.iMCP.type DERIVE
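
Nearly all of these interrupt fields are declared type DERIVE with min 0: the stored sample is not the raw counter but its per-second rate between two reads, and the min 0 floor turns the negative rate produced by a counter reset into an unknown sample instead of a spike. A sketch of that computation (function name and values are illustrative, not Munin code):

    def derive_rate(prev, curr, dt, minimum=0):
        """Per-second rate from two counter samples, as an RRD DERIVE
        data source computes it; values below 'minimum' become unknown
        (None) rather than plotting a counter reset as a huge negative."""
        rate = (curr - prev) / dt
        return None if rate < minimum else rate

    # e.g. an interrupt counter that ticked from 1200000 to 1230000 in 300 s:
    print(derive_rate(1200000, 1230000, 300))  # 100.0 interrupts/s
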
localhost;localhost:irqstats.i120.info Interrupt 120, for device(s): dmar0
localhost;localhost:irqstats.i120.update_rate 300
localhost;localhost:irqstats.i120.min 0
localhost;localhost:irqstats.i120.graph_data_size normal
localhost;localhost:irqstats.i120.label dmar0
localhost;localhost:irqstats.i120.type DERIVE
localhost;localhost:irqstats.iMCE.info Interrupt MCE, for device(s): Machine check exceptions
localhost;localhost:irqstats.iMCE.update_rate 300
localhost;localhost:irqstats.iMCE.min 0
localhost;localhost:irqstats.iMCE.graph_data_size normal
localhost;localhost:irqstats.iMCE.label Machine check exceptions
localhost;localhost:irqstats.iMCE.type DERIVE
localhost;localhost:irqstats.iRTR.info Interrupt RTR, for device(s): APIC ICR read retries
localhost;localhost:irqstats.iRTR.update_rate 300
localhost;localhost:irqstats.iRTR.min 0
localhost;localhost:irqstats.iRTR.graph_data_size normal
localhost;localhost:irqstats.iRTR.label APIC ICR read retries
localhost;localhost:irqstats.iRTR.type DERIVE
localhost;localhost:irqstats.i8.info Interrupt 8, for device(s): rtc0
localhost;localhost:irqstats.i8.update_rate 300
localhost;localhost:irqstats.i8.min 0
localhost;localhost:irqstats.i8.graph_data_size normal
localhost;localhost:irqstats.i8.label rtc0
localhost;localhost:irqstats.i8.type DERIVE
localhost;localhost:irqstats.i14.info Interrupt 14, for device(s): INT345D:00
localhost;localhost:irqstats.i14.update_rate 300
localhost;localhost:irqstats.i14.min 0
localhost;localhost:irqstats.i14.graph_data_size normal
localhost;localhost:irqstats.i14.label INT345D:00
localhost;localhost:irqstats.i14.type DERIVE
localhost;localhost:irqstats.i9.info Interrupt 9, for device(s): acpi
localhost;localhost:irqstats.i9.update_rate 300
localhost;localhost:irqstats.i9.min 0
localhost;localhost:irqstats.i9.graph_data_size normal
localhost;localhost:irqstats.i9.label acpi
localhost;localhost:irqstats.i9.type DERIVE
localhost;localhost:varnish4_hit_rate.graph_category varnish
localhost;localhost:varnish4_hit_rate.graph_title Hit rates
localhost;localhost:varnish4_hit_rate.graph_order client_req cache_hit cache_miss cache_hitpass cache_hit cache_hitpass cache_miss client_req
localhost;localhost:varnish4_hit_rate.graph_scale no
localhost;localhost:varnish4_hit_rate.graph_vlabel %
localhost;localhost:varnish4_hit_rate.graph_args -l 0 -u 100 --rigid
localhost;localhost:varnish4_hit_rate.client_req.update_rate 300
localhost;localhost:varnish4_hit_rate.client_req.min 0
localhost;localhost:varnish4_hit_rate.client_req.graph_data_size normal
localhost;localhost:varnish4_hit_rate.client_req.label Good client requests received
localhost;localhost:varnish4_hit_rate.client_req.type DERIVE
localhost;localhost:varnish4_hit_rate.client_req.graph off
localhost;localhost:varnish4_hit_rate.cache_miss.cdef cache_miss,client_req,/,100,*
localhost;localhost:varnish4_hit_rate.cache_miss.update_rate 300
localhost;localhost:varnish4_hit_rate.cache_miss.draw STACK
localhost;localhost:varnish4_hit_rate.cache_miss.min 0
localhost;localhost:varnish4_hit_rate.cache_miss.graph_data_size normal
localhost;localhost:varnish4_hit_rate.cache_miss.label Cache misses
localhost;localhost:varnish4_hit_rate.cache_miss.type DERIVE
localhost;localhost:varnish4_hit_rate.cache_hit.cdef cache_hit,client_req,/,100,*
localhost;localhost:varnish4_hit_rate.cache_hit.update_rate 300
localhost;localhost:varnish4_hit_rate.cache_hit.draw AREA
localhost;localhost:varnish4_hit_rate.cache_hit.min 0
localhost;localhost:varnish4_hit_rate.cache_hit.graph_data_size normal
localhost;localhost:varnish4_hit_rate.cache_hit.label Cache hits
localhost;localhost:varnish4_hit_rate.cache_hit.type DERIVE
localhost;localhost:varnish4_hit_rate.cache_hitpass.cdef cache_hitpass,client_req,/,100,*
localhost;localhost:varnish4_hit_rate.cache_hitpass.update_rate 300
localhost;localhost:varnish4_hit_rate.cache_hitpass.draw STACK
localhost;localhost:varnish4_hit_rate.cache_hitpass.min 0
localhost;localhost:varnish4_hit_rate.cache_hitpass.graph_data_size normal
localhost;localhost:varnish4_hit_rate.cache_hitpass.label Cache hits for pass
localhost;localhost:varnish4_hit_rate.cache_hitpass.type DERIVE
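
The cdef attributes above are RRDtool CDEF expressions in reverse Polish notation: cache_hit,client_req,/,100,* rescales the raw hit counter to a percentage of client requests at graph time. A toy evaluator for the arithmetic subset used in this file (RRDtool's full CDEF language has many more operators):

    def eval_cdef(expr, fields):
        """Evaluate the arithmetic subset of an RRDtool CDEF
        (reverse Polish notation) against resolved field values."""
        ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
               "*": lambda a, b: a * b, "/": lambda a, b: a / b}
        stack = []
        for token in expr.split(","):
            if token in ops:
                b, a = stack.pop(), stack.pop()
                stack.append(ops[token](a, b))
            elif token in fields:
                stack.append(fields[token])
            else:
                stack.append(float(token))
        return stack.pop()

    # cache_hit as a percentage of client requests, per the cdef above:
    print(eval_cdef("cache_hit,client_req,/,100,*",
                    {"cache_hit": 42.0, "client_req": 50.0}))  # 84.0
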
localhost;localhost:diskstats_utilization.sdd.graph_title Disk utilization for /dev/sdd
localhost;localhost:diskstats_utilization.sdd.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid
localhost;localhost:diskstats_utilization.sdd.graph_vlabel % busy
localhost;localhost:diskstats_utilization.sdd.graph_category disk
localhost;localhost:diskstats_utilization.sdd.graph_scale no
localhost;localhost:diskstats_utilization.sdd.graph_order util
localhost;localhost:diskstats_utilization.sdd.util.info Utilization of the device in percent. If the time spent on I/O is close to 1000 ms for a given second, the device is nearly 100% saturated.
localhost;localhost:diskstats_utilization.sdd.util.update_rate 300
localhost;localhost:diskstats_utilization.sdd.util.draw LINE1
localhost;localhost:diskstats_utilization.sdd.util.min 0
localhost;localhost:diskstats_utilization.sdd.util.graph_data_size normal
localhost;localhost:diskstats_utilization.sdd.util.label Utilization
localhost;localhost:diskstats_utilization.sdd.util.type GAUGE
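
The utilization figure described above comes from the kernel's per-device "milliseconds spent doing I/O" counter (the io_ticks column of /proc/diskstats). A rough sketch of the same computation over a one-second window, assuming the usual column layout (device name in the third column, io_ticks as the tenth stat field after it):

    import time

    def io_ticks(device):
        """Milliseconds the device has spent doing I/O, read from
        /proc/diskstats (io_ticks, tenth stats column after the name)."""
        with open("/proc/diskstats") as f:
            for line in f:
                parts = line.split()
                if parts[2] == device:
                    return int(parts[12])
        raise KeyError(device)

    before = io_ticks("sdd")
    time.sleep(1)
    busy_pct = min(100.0, (io_ticks("sdd") - before) / 10.0)  # ms busy per 1000 ms
    print(f"sdd ~{busy_pct:.0f}% busy")
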
localhost;localhost:diskstats_iops.sdc.graph_title IOs for /dev/sdc
localhost;localhost:diskstats_iops.sdc.graph_args --base 1000
localhost;localhost:diskstats_iops.sdc.graph_vlabel Units read (-) / write (+)
localhost;localhost:diskstats_iops.sdc.graph_category disk
localhost;localhost:diskstats_iops.sdc.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of 1024, which it was prior to 2.0 beta 3. This is because the base for this graph is 1000, not 1024.
localhost;localhost:diskstats_iops.sdc.graph_order rdio wrio avgrdrqsz avgwrrqsz
localhost;localhost:diskstats_iops.sdc.rdio.update_rate 300
localhost;localhost:diskstats_iops.sdc.rdio.draw LINE1
localhost;localhost:diskstats_iops.sdc.rdio.min 0
localhost;localhost:diskstats_iops.sdc.rdio.graph_data_size normal
localhost;localhost:diskstats_iops.sdc.rdio.label dummy
localhost;localhost:diskstats_iops.sdc.rdio.type GAUGE
localhost;localhost:diskstats_iops.sdc.rdio.graph no
localhost;localhost:diskstats_iops.sdc.wrio.update_rate 300
localhost;localhost:diskstats_iops.sdc.wrio.draw LINE1
localhost;localhost:diskstats_iops.sdc.wrio.min 0
localhost;localhost:diskstats_iops.sdc.wrio.graph_data_size normal
localhost;localhost:diskstats_iops.sdc.wrio.label IO/sec
localhost;localhost:diskstats_iops.sdc.wrio.type GAUGE
localhost;localhost:diskstats_iops.sdc.wrio.negative rdio
localhost;localhost:diskstats_iops.sdc.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
localhost;localhost:diskstats_iops.sdc.avgwrrqsz.update_rate 300
localhost;localhost:diskstats_iops.sdc.avgwrrqsz.draw LINE1
localhost;localhost:diskstats_iops.sdc.avgwrrqsz.min 0
localhost;localhost:diskstats_iops.sdc.avgwrrqsz.graph_data_size normal
localhost;localhost:diskstats_iops.sdc.avgwrrqsz.negative avgrdrqsz
localhost;localhost:diskstats_iops.sdc.avgwrrqsz.type GAUGE
localhost;localhost:diskstats_iops.sdc.avgwrrqsz.label Req Size (KB)
localhost;localhost:diskstats_iops.sdc.avgrdrqsz.update_rate 300
localhost;localhost:diskstats_iops.sdc.avgrdrqsz.draw LINE1
localhost;localhost:diskstats_iops.sdc.avgrdrqsz.min 0
localhost;localhost:diskstats_iops.sdc.avgrdrqsz.graph_data_size normal
localhost;localhost:diskstats_iops.sdc.avgrdrqsz.label dummy
localhost;localhost:diskstats_iops.sdc.avgrdrqsz.type GAUGE
localhost;localhost:diskstats_iops.sdc.avgrdrqsz.graph no
localhost;localhost:varnish4_expunge.graph_category varnish
localhost;localhost:varnish4_expunge.graph_title Object expunging
localhost;localhost:varnish4_expunge.graph_order n_expired n_lru_nuked n_expired n_lru_nuked
localhost;localhost:varnish4_expunge.n_lru_nuked.update_rate 300
localhost;localhost:varnish4_expunge.n_lru_nuked.min 0
localhost;localhost:varnish4_expunge.n_lru_nuked.graph_data_size normal
localhost;localhost:varnish4_expunge.n_lru_nuked.label Number of LRU nuked objects
localhost;localhost:varnish4_expunge.n_lru_nuked.type DERIVE
localhost;localhost:varnish4_expunge.n_expired.update_rate 300
localhost;localhost:varnish4_expunge.n_expired.min 0
localhost;localhost:varnish4_expunge.n_expired.graph_data_size normal
localhost;localhost:varnish4_expunge.n_expired.label Number of expired objects
localhost;localhost:varnish4_expunge.n_expired.type DERIVE
localhost;localhost:diskstats_throughput.sdc.graph_title Disk throughput for /dev/sdc
localhost;localhost:diskstats_throughput.sdc.graph_args --base 1024
localhost;localhost:diskstats_throughput.sdc.graph_vlabel Per ${graph_period} read (-) / write (+)
localhost;localhost:diskstats_throughput.sdc.graph_category disk
localhost;localhost:diskstats_throughput.sdc.graph_info This graph shows disk throughput in bytes per ${graph_period}. The graph base is 1024, so KB means kibibytes, and so on.
localhost;localhost:diskstats_throughput.sdc.graph_order rdbytes wrbytes
localhost;localhost:diskstats_throughput.sdc.rdbytes.update_rate 300
localhost;localhost:diskstats_throughput.sdc.rdbytes.draw LINE1
localhost;localhost:diskstats_throughput.sdc.rdbytes.min 0
localhost;localhost:diskstats_throughput.sdc.rdbytes.graph_data_size normal
localhost;localhost:diskstats_throughput.sdc.rdbytes.label invisible
localhost;localhost:diskstats_throughput.sdc.rdbytes.type GAUGE
localhost;localhost:diskstats_throughput.sdc.rdbytes.graph no
localhost;localhost:diskstats_throughput.sdc.wrbytes.update_rate 300
localhost;localhost:diskstats_throughput.sdc.wrbytes.draw LINE1
localhost;localhost:diskstats_throughput.sdc.wrbytes.min 0
localhost;localhost:diskstats_throughput.sdc.wrbytes.graph_data_size normal
localhost;localhost:diskstats_throughput.sdc.wrbytes.label Bytes
localhost;localhost:diskstats_throughput.sdc.wrbytes.type GAUGE
localhost;localhost:diskstats_throughput.sdc.wrbytes.negative rdbytes
localhost;localhost:sensors_temp.graph_title Temperatures
localhost;localhost:sensors_temp.graph_vlabel degrees Celsius
localhost;localhost:sensors_temp.graph_args --base 1000 -l 0
localhost;localhost:sensors_temp.graph_category sensors
localhost;localhost:sensors_temp.graph_order temp1 temp2 temp3 temp4 temp5 temp6 temp7
localhost;localhost:sensors_temp.temp5.critical 100.0
localhost;localhost:sensors_temp.temp5.update_rate 300
localhost;localhost:sensors_temp.temp5.warning 80.0
localhost;localhost:sensors_temp.temp5.graph_data_size normal
localhost;localhost:sensors_temp.temp5.label Core 1
localhost;localhost:sensors_temp.temp7.critical 100.0
localhost;localhost:sensors_temp.temp7.update_rate 300
localhost;localhost:sensors_temp.temp7.warning 80.0
localhost;localhost:sensors_temp.temp7.graph_data_size normal
localhost;localhost:sensors_temp.temp7.label Core 3
localhost;localhost:sensors_temp.temp1.critical 119.0
localhost;localhost:sensors_temp.temp1.update_rate 300
localhost;localhost:sensors_temp.temp1.graph_data_size normal
localhost;localhost:sensors_temp.temp1.label temp1
localhost;localhost:sensors_temp.temp4.critical 100.0
localhost;localhost:sensors_temp.temp4.update_rate 300
localhost;localhost:sensors_temp.temp4.warning 80.0
localhost;localhost:sensors_temp.temp4.graph_data_size normal
localhost;localhost:sensors_temp.temp4.label Core 0
localhost;localhost:sensors_temp.temp6.critical 100.0
localhost;localhost:sensors_temp.temp6.update_rate 300
localhost;localhost:sensors_temp.temp6.warning 80.0
localhost;localhost:sensors_temp.temp6.graph_data_size normal
localhost;localhost:sensors_temp.temp6.label Core 2
localhost;localhost:sensors_temp.temp3.critical 100.0
localhost;localhost:sensors_temp.temp3.update_rate 300
localhost;localhost:sensors_temp.temp3.warning 80.0
localhost;localhost:sensors_temp.temp3.graph_data_size normal
localhost;localhost:sensors_temp.temp3.label Package id 0
localhost;localhost:sensors_temp.temp2.critical 119.0
localhost;localhost:sensors_temp.temp2.update_rate 300
localhost;localhost:sensors_temp.temp2.graph_data_size normal
localhost;localhost:sensors_temp.temp2.label temp2
localhost;localhost:df.graph_title Disk usage in percent
localhost;localhost:df.graph_args --upper-limit 100 -l 0
localhost;localhost:df.graph_vlabel %
localhost;localhost:df.graph_scale no
localhost;localhost:df.graph_category disk
localhost;localhost:df.graph_order _dev_sda2 _dev_shm _run _dev_sda1 _dev_md126p2 _dev_md126p1 _dev_md126p3 _dev_sdb1 _dev_sdb2 _dev_sdb3 _dev_sdb5
localhost;localhost:df._dev_md126p2.critical 98
localhost;localhost:df._dev_md126p2.update_rate 300
localhost;localhost:df._dev_md126p2.warning 92
localhost;localhost:df._dev_md126p2.graph_data_size normal
localhost;localhost:df._dev_md126p2.label /var/spool/mail
localhost;localhost:df._dev_sdb3.critical 98
localhost;localhost:df._dev_sdb3.update_rate 300
localhost;localhost:df._dev_sdb3.warning 92
localhost;localhost:df._dev_sdb3.graph_data_size normal
localhost;localhost:df._dev_sdb3.label /data
localhost;localhost:df._run.critical 98
localhost;localhost:df._run.update_rate 300
localhost;localhost:df._run.warning 92
localhost;localhost:df._run.graph_data_size normal
localhost;localhost:df._run.label /run
localhost;localhost:df._dev_md126p3.critical 98
localhost;localhost:df._dev_md126p3.update_rate 300
localhost;localhost:df._dev_md126p3.warning 92
localhost;localhost:df._dev_md126p3.graph_data_size normal
localhost;localhost:df._dev_md126p3.label /www
localhost;localhost:df._dev_sdb5.critical 98
localhost;localhost:df._dev_sdb5.update_rate 300
localhost;localhost:df._dev_sdb5.warning 92
localhost;localhost:df._dev_sdb5.graph_data_size normal
localhost;localhost:df._dev_sdb5.label /backups
localhost;localhost:df._dev_shm.critical 98
localhost;localhost:df._dev_shm.update_rate 300
localhost;localhost:df._dev_shm.warning 92
localhost;localhost:df._dev_shm.graph_data_size normal
localhost;localhost:df._dev_shm.label /dev/shm
localhost;localhost:df._dev_sdb1.critical 98
localhost;localhost:df._dev_sdb1.update_rate 300
localhost;localhost:df._dev_sdb1.warning 92
localhost;localhost:df._dev_sdb1.graph_data_size normal
localhost;localhost:df._dev_sdb1.label /ludek1
localhost;localhost:df._dev_sda1.critical 98
localhost;localhost:df._dev_sda1.update_rate 300
localhost;localhost:df._dev_sda1.warning 92
localhost;localhost:df._dev_sda1.graph_data_size normal
localhost;localhost:df._dev_sda1.label /boot
localhost;localhost:df._dev_sdb2.critical 98
localhost;localhost:df._dev_sdb2.update_rate 300
localhost;localhost:df._dev_sdb2.warning 92
localhost;localhost:df._dev_sdb2.graph_data_size normal
localhost;localhost:df._dev_sdb2.label /var/log
localhost;localhost:df._dev_sda2.critical 98
localhost;localhost:df._dev_sda2.update_rate 300
localhost;localhost:df._dev_sda2.warning 92
localhost;localhost:df._dev_sda2.graph_data_size normal
localhost;localhost:df._dev_sda2.label /
localhost;localhost:df._dev_md126p1.critical 98
localhost;localhost:df._dev_md126p1.update_rate 300
localhost;localhost:df._dev_md126p1.warning 92
localhost;localhost:df._dev_md126p1.graph_data_size normal
localhost;localhost:df._dev_md126p1.label /var/lib
localhost;localhost:swap.graph_title Swap in/out
localhost;localhost:swap.graph_args -l 0 --base 1000
localhost;localhost:swap.graph_vlabel pages per ${graph_period} in (-) / out (+)
localhost;localhost:swap.graph_category system
localhost;localhost:swap.graph_order swap_in swap_out
localhost;localhost:swap.swap_out.update_rate 300
localhost;localhost:swap.swap_out.min 0
localhost;localhost:swap.swap_out.max 100000
localhost;localhost:swap.swap_out.graph_data_size normal
localhost;localhost:swap.swap_out.label swap
localhost;localhost:swap.swap_out.type DERIVE
localhost;localhost:swap.swap_out.negative swap_in
localhost;localhost:swap.swap_in.update_rate 300
localhost;localhost:swap.swap_in.min 0
localhost;localhost:swap.swap_in.max 100000
localhost;localhost:swap.swap_in.graph_data_size normal
localhost;localhost:swap.swap_in.label swap
localhost;localhost:swap.swap_in.type DERIVE
localhost;localhost:swap.swap_in.graph no
localhost;localhost:ping_ovh_fr.graph_title IPv4 ping times to ovh.fr
localhost;localhost:ping_ovh_fr.graph_args --base 1000 -l 0
localhost;localhost:ping_ovh_fr.graph_vlabel roundtrip time (seconds)
localhost;localhost:ping_ovh_fr.graph_category network
localhost;localhost:ping_ovh_fr.graph_info This graph shows ping RTT statistics.
localhost;localhost:ping_ovh_fr.graph_order ping packetloss
localhost;localhost:ping_ovh_fr.ping.info Ping RTT statistics for ovh.fr.
localhost;localhost:ping_ovh_fr.ping.update_rate 300
localhost;localhost:ping_ovh_fr.ping.graph_data_size normal
localhost;localhost:ping_ovh_fr.ping.label ovh.fr
localhost;localhost:ping_ovh_fr.packetloss.update_rate 300
localhost;localhost:ping_ovh_fr.packetloss.graph_data_size normal
localhost;localhost:ping_ovh_fr.packetloss.label packet loss
localhost;localhost:ping_ovh_fr.packetloss.graph no
localhost;localhost:load.graph_title Load average
localhost;localhost:load.graph_args --base 1000 -l 0
localhost;localhost:load.graph_vlabel load
localhost;localhost:load.graph_scale no
localhost;localhost:load.graph_category system
localhost;localhost:load.graph_info The load average of the machine describes how many processes are in the run-queue (scheduled to run "immediately").
localhost;localhost:load.graph_order load
localhost;localhost:load.load.info 5 minute load average
localhost;localhost:load.load.update_rate 300
localhost;localhost:load.load.warning 100
localhost;localhost:load.load.graph_data_size normal
localhost;localhost:load.load.label load
localhost;localhost:varnish4_bad.graph_category varnish
localhost;localhost:varnish4_bad.graph_title Misbehavior
localhost;localhost:varnish4_bad.graph_order SMA_Transient_c_fail SMA_s0_c_fail backend_busy backend_unhealthy esi_errors esi_warnings fetch_failed losthdr sess_drop sess_fail sess_pipe_overflow thread_queue_len threads_destroyed threads_failed threads_limited
localhost;localhost:varnish4_bad.sess_drop.update_rate 300
localhost;localhost:varnish4_bad.sess_drop.graph_data_size normal
localhost;localhost:varnish4_bad.sess_drop.label Sessions dropped
localhost;localhost:varnish4_bad.sess_drop.type DERIVE
localhost;localhost:varnish4_bad.sess_pipe_overflow.update_rate 300
localhost;localhost:varnish4_bad.sess_pipe_overflow.graph_data_size normal
localhost;localhost:varnish4_bad.sess_pipe_overflow.label Session pipe overflow
localhost;localhost:varnish4_bad.sess_pipe_overflow.type DERIVE
localhost;localhost:varnish4_bad.thread_queue_len.update_rate 300
localhost;localhost:varnish4_bad.thread_queue_len.graph_data_size normal
localhost;localhost:varnish4_bad.thread_queue_len.label Length of session queue
localhost;localhost:varnish4_bad.thread_queue_len.type GAUGE
localhost;localhost:varnish4_bad.threads_destroyed.update_rate 300
localhost;localhost:varnish4_bad.threads_destroyed.graph_data_size normal
localhost;localhost:varnish4_bad.threads_destroyed.label Threads destroyed
localhost;localhost:varnish4_bad.threads_destroyed.type DERIVE
localhost;localhost:varnish4_bad.threads_failed.update_rate 300
localhost;localhost:varnish4_bad.threads_failed.graph_data_size normal
localhost;localhost:varnish4_bad.threads_failed.label Thread creation failed
localhost;localhost:varnish4_bad.threads_failed.type DERIVE
localhost;localhost:varnish4_bad.SMA_s0_c_fail.update_rate 300
localhost;localhost:varnish4_bad.SMA_s0_c_fail.graph_data_size normal
localhost;localhost:varnish4_bad.SMA_s0_c_fail.label Allocator failures SMA s0
localhost;localhost:varnish4_bad.SMA_s0_c_fail.type DERIVE
localhost;localhost:varnish4_bad.esi_warnings.update_rate 300
localhost;localhost:varnish4_bad.esi_warnings.graph_data_size normal
localhost;localhost:varnish4_bad.esi_warnings.label ESI parse warnings (unlock)
localhost;localhost:varnish4_bad.esi_warnings.type DERIVE
localhost;localhost:varnish4_bad.sess_fail.update_rate 300
localhost;localhost:varnish4_bad.sess_fail.graph_data_size normal
localhost;localhost:varnish4_bad.sess_fail.label Session accept failures
localhost;localhost:varnish4_bad.sess_fail.type DERIVE
localhost;localhost:varnish4_bad.backend_busy.update_rate 300
localhost;localhost:varnish4_bad.backend_busy.graph_data_size normal
localhost;localhost:varnish4_bad.backend_busy.label Backend conn. too many
localhost;localhost:varnish4_bad.backend_busy.type DERIVE
localhost;localhost:varnish4_bad.SMA_Transient_c_fail.update_rate 300
localhost;localhost:varnish4_bad.SMA_Transient_c_fail.graph_data_size normal
localhost;localhost:varnish4_bad.SMA_Transient_c_fail.label Allocator failures SMA Transient
localhost;localhost:varnish4_bad.SMA_Transient_c_fail.type DERIVE
localhost;localhost:varnish4_bad.esi_errors.update_rate 300
localhost;localhost:varnish4_bad.esi_errors.graph_data_size normal
localhost;localhost:varnish4_bad.esi_errors.label ESI parse errors (unlock)
localhost;localhost:varnish4_bad.esi_errors.type DERIVE
localhost;localhost:varnish4_bad.losthdr.update_rate 300
localhost;localhost:varnish4_bad.losthdr.graph_data_size normal
localhost;localhost:varnish4_bad.losthdr.label HTTP header overflows
localhost;localhost:varnish4_bad.losthdr.type DERIVE
localhost;localhost:varnish4_bad.backend_unhealthy.update_rate 300
localhost;localhost:varnish4_bad.backend_unhealthy.graph_data_size normal
localhost;localhost:varnish4_bad.backend_unhealthy.label Backend conn. not attempted
localhost;localhost:varnish4_bad.backend_unhealthy.type DERIVE
localhost;localhost:varnish4_bad.threads_limited.update_rate 300
localhost;localhost:varnish4_bad.threads_limited.graph_data_size normal
localhost;localhost:varnish4_bad.threads_limited.label Threads hit max
localhost;localhost:varnish4_bad.threads_limited.type DERIVE
localhost;localhost:varnish4_bad.fetch_failed.update_rate 300
localhost;localhost:varnish4_bad.fetch_failed.graph_data_size normal
localhost;localhost:varnish4_bad.fetch_failed.label Fetch failed (all causes)
localhost;localhost:varnish4_bad.fetch_failed.type DERIVE
localhost;localhost:nginx_request.graph_title Nginx requests
localhost;localhost:nginx_request.graph_args --base 1000
localhost;localhost:nginx_request.graph_category webserver
localhost;localhost:nginx_request.graph_vlabel Requests per ${graph_period}
localhost;localhost:nginx_request.graph_order request
localhost;localhost:nginx_request.request.update_rate 300
localhost;localhost:nginx_request.request.draw LINE
localhost;localhost:nginx_request.request.min 0
localhost;localhost:nginx_request.request.graph_data_size normal
localhost;localhost:nginx_request.request.type DERIVE
localhost;localhost:nginx_request.request.label requests
localhost;localhost:open_files.graph_title File table usage
localhost;localhost:open_files.graph_args --base 1000 -l 0
localhost;localhost:open_files.graph_vlabel number of open files
localhost;localhost:open_files.graph_category system
localhost;localhost:open_files.graph_info This graph monitors the Linux open files table.
localhost;localhost:open_files.graph_order used
localhost;localhost:open_files.used.info The number of currently open files.
localhost;localhost:open_files.used.critical 3152972
localhost;localhost:open_files.used.update_rate 300
localhost;localhost:open_files.used.warning 2959933
localhost;localhost:open_files.used.graph_data_size normal
localhost;localhost:open_files.used.label open files
localhost;localhost:varnish4_objects.graph_category varnish
localhost;localhost:varnish4_objects.graph_title Number of objects
localhost;localhost:varnish4_objects.graph_order n_object n_objectcore n_vampireobject n_objecthead n_object n_objectcore n_objecthead n_vampireobject
localhost;localhost:varnish4_objects.n_object.update_rate 300
localhost;localhost:varnish4_objects.n_object.graph_data_size normal
localhost;localhost:varnish4_objects.n_object.label Number of objects
localhost;localhost:varnish4_objects.n_object.type GAUGE
localhost;localhost:varnish4_objects.n_vampireobject.update_rate 300
localhost;localhost:varnish4_objects.n_vampireobject.graph_data_size normal
localhost;localhost:varnish4_objects.n_vampireobject.label Number of unresurrected objects
localhost;localhost:varnish4_objects.n_vampireobject.type GAUGE
localhost;localhost:varnish4_objects.n_objectcore.update_rate 300
localhost;localhost:varnish4_objects.n_objectcore.graph_data_size normal
localhost;localhost:varnish4_objects.n_objectcore.label Number of object cores
localhost;localhost:varnish4_objects.n_objectcore.type GAUGE
localhost;localhost:varnish4_objects.n_objecthead.info Each object head can have one or more objects attached, typically based on the Vary: header
localhost;localhost:varnish4_objects.n_objecthead.update_rate 300
localhost;localhost:varnish4_objects.n_objecthead.graph_data_size normal
localhost;localhost:varnish4_objects.n_objecthead.label Number of object heads
localhost;localhost:varnish4_objects.n_objecthead.type GAUGE
localhost;localhost:varnish4_uptime.graph_category varnish
localhost;localhost:varnish4_uptime.graph_title Varnish uptime
localhost;localhost:varnish4_uptime.graph_scale no
localhost;localhost:varnish4_uptime.graph_vlabel days
localhost;localhost:varnish4_uptime.graph_order uptime
localhost;localhost:varnish4_uptime.uptime.cdef uptime,86400,/
localhost;localhost:varnish4_uptime.uptime.update_rate 300
localhost;localhost:varnish4_uptime.uptime.graph_data_size normal
localhost;localhost:varnish4_uptime.uptime.label Management process uptime
localhost;localhost:varnish4_uptime.uptime.type GAUGE
localhost;localhost:diskstats_latency.graph_title Disk latency per device
localhost;localhost:diskstats_latency.graph_args --base 1000
localhost;localhost:diskstats_latency.graph_vlabel Average IO Wait (seconds)
localhost;localhost:diskstats_latency.graph_category disk
localhost;localhost:diskstats_latency.graph_width 400
localhost;localhost:diskstats_latency.graph_order sda_avgwait sdb_avgwait sdc_avgwait sdd_avgwait
localhost;localhost:diskstats_latency.sda_avgwait.info Average wait time for an I/O request
localhost;localhost:diskstats_latency.sda_avgwait.update_rate 300
localhost;localhost:diskstats_latency.sda_avgwait.draw LINE1
localhost;localhost:diskstats_latency.sda_avgwait.min 0
localhost;localhost:diskstats_latency.sda_avgwait.graph_data_size normal
localhost;localhost:diskstats_latency.sda_avgwait.label sda
localhost;localhost:diskstats_latency.sda_avgwait.type GAUGE
localhost;localhost:diskstats_latency.sdb_avgwait.info Average wait time for an I/O request
localhost;localhost:diskstats_latency.sdb_avgwait.update_rate 300
localhost;localhost:diskstats_latency.sdb_avgwait.draw LINE1
localhost;localhost:diskstats_latency.sdb_avgwait.min 0
localhost;localhost:diskstats_latency.sdb_avgwait.graph_data_size normal
localhost;localhost:diskstats_latency.sdb_avgwait.label sdb
localhost;localhost:diskstats_latency.sdb_avgwait.type GAUGE
localhost;localhost:diskstats_latency.sdc_avgwait.info Average wait time for an I/O request
localhost;localhost:diskstats_latency.sdc_avgwait.update_rate 300
localhost;localhost:diskstats_latency.sdc_avgwait.draw LINE1
localhost;localhost:diskstats_latency.sdc_avgwait.min 0
localhost;localhost:diskstats_latency.sdc_avgwait.graph_data_size normal
localhost;localhost:diskstats_latency.sdc_avgwait.label sdc
localhost;localhost:diskstats_latency.sdc_avgwait.type GAUGE
localhost;localhost:diskstats_latency.sdd_avgwait.info Average wait time for an I/O request
localhost;localhost:diskstats_latency.sdd_avgwait.update_rate 300
localhost;localhost:diskstats_latency.sdd_avgwait.draw LINE1
localhost;localhost:diskstats_latency.sdd_avgwait.min 0
localhost;localhost:diskstats_latency.sdd_avgwait.graph_data_size normal
localhost;localhost:diskstats_latency.sdd_avgwait.label sdd
localhost;localhost:diskstats_latency.sdd_avgwait.type GAUGE
localhost;localhost:diskstats_iops.sdd.graph_title IOs for /dev/sdd
localhost;localhost:diskstats_iops.sdd.graph_args --base 1000
localhost;localhost:diskstats_iops.sdd.graph_vlabel Units read (-) / write (+)
localhost;localhost:diskstats_iops.sdd.graph_category disk
localhost;localhost:diskstats_iops.sdd.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of 1024, which it was prior to 2.0 beta 3. This is because the base for this graph is 1000, not 1024.
localhost;localhost:diskstats_iops.sdd.graph_order rdio wrio avgrdrqsz avgwrrqsz
localhost;localhost:diskstats_iops.sdd.rdio.update_rate 300
localhost;localhost:diskstats_iops.sdd.rdio.draw LINE1
localhost;localhost:diskstats_iops.sdd.rdio.min 0
localhost;localhost:diskstats_iops.sdd.rdio.graph_data_size normal
localhost;localhost:diskstats_iops.sdd.rdio.label dummy
localhost;localhost:diskstats_iops.sdd.rdio.type GAUGE
localhost;localhost:diskstats_iops.sdd.rdio.graph no
localhost;localhost:diskstats_iops.sdd.wrio.update_rate 300
localhost;localhost:diskstats_iops.sdd.wrio.draw LINE1
localhost;localhost:diskstats_iops.sdd.wrio.min 0
localhost;localhost:diskstats_iops.sdd.wrio.graph_data_size normal
localhost;localhost:diskstats_iops.sdd.wrio.label IO/sec
localhost;localhost:diskstats_iops.sdd.wrio.type GAUGE
localhost;localhost:diskstats_iops.sdd.wrio.negative rdio
localhost;localhost:diskstats_iops.sdd.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
localhost;localhost:diskstats_iops.sdd.avgwrrqsz.update_rate 300
localhost;localhost:diskstats_iops.sdd.avgwrrqsz.draw LINE1
localhost;localhost:diskstats_iops.sdd.avgwrrqsz.min 0
localhost;localhost:diskstats_iops.sdd.avgwrrqsz.graph_data_size normal
localhost;localhost:diskstats_iops.sdd.avgwrrqsz.negative avgrdrqsz
localhost;localhost:diskstats_iops.sdd.avgwrrqsz.type GAUGE
localhost;localhost:diskstats_iops.sdd.avgwrrqsz.label Req Size (KB)
localhost;localhost:diskstats_iops.sdd.avgrdrqsz.update_rate 300
localhost;localhost:diskstats_iops.sdd.avgrdrqsz.draw LINE1
localhost;localhost:diskstats_iops.sdd.avgrdrqsz.min 0
localhost;localhost:diskstats_iops.sdd.avgrdrqsz.graph_data_size normal
localhost;localhost:diskstats_iops.sdd.avgrdrqsz.label dummy
localhost;localhost:diskstats_iops.sdd.avgrdrqsz.type GAUGE
localhost;localhost:diskstats_iops.sdd.avgrdrqsz.graph no
localhost;localhost:vmstat.graph_title VMstat
localhost;localhost:vmstat.graph_args --base 1000 -l 0
localhost;localhost:vmstat.graph_vlabel process states
localhost;localhost:vmstat.graph_category processes
localhost;localhost:vmstat.graph_order wait sleep
localhost;localhost:vmstat.wait.update_rate 300
localhost;localhost:vmstat.wait.max 500000
localhost;localhost:vmstat.wait.graph_data_size normal
localhost;localhost:vmstat.wait.label running
localhost;localhost:vmstat.wait.type GAUGE
localhost;localhost:vmstat.sleep.update_rate 300
localhost;localhost:vmstat.sleep.max 500000
localhost;localhost:vmstat.sleep.graph_data_size normal
localhost;localhost:vmstat.sleep.label I/O sleep
localhost;localhost:vmstat.sleep.type GAUGE
localhost;localhost:tomcat_access.graph_title Tomcat accesses
localhost;localhost:tomcat_access.graph_args --base 1000
localhost;localhost:tomcat_access.graph_vlabel accesses / ${graph_period}
localhost;localhost:tomcat_access.graph_category tomcat
localhost;localhost:tomcat_access.graph_order accesses
localhost;localhost:tomcat_access.accesses.update_rate 300
localhost;localhost:tomcat_access.accesses.min 0
localhost;localhost:tomcat_access.accesses.max 1000000
localhost;localhost:tomcat_access.accesses.graph_data_size normal
localhost;localhost:tomcat_access.accesses.label Accesses
localhost;localhost:tomcat_access.accesses.type DERIVE
localhost;localhost:diskstats_latency.sdc.graph_title Average latency for /dev/sdc
localhost;localhost:diskstats_latency.sdc.graph_args --base 1000 --logarithmic
localhost;localhost:diskstats_latency.sdc.graph_vlabel seconds
localhost;localhost:diskstats_latency.sdc.graph_category disk
localhost;localhost:diskstats_latency.sdc.graph_info This graph shows average waiting time/latency for different categories of disk operations. The times that include the queue times indicate how busy your system is. If the waiting time hits 1 second then your I/O system is 100% busy.
localhost;localhost:diskstats_latency.sdc.graph_order svctm avgwait avgrdwait avgwrwait
localhost;localhost:diskstats_latency.sdc.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al.)
localhost;localhost:diskstats_latency.sdc.avgrdwait.update_rate 300
localhost;localhost:diskstats_latency.sdc.avgrdwait.draw LINE1
localhost;localhost:diskstats_latency.sdc.avgrdwait.min 0
localhost;localhost:diskstats_latency.sdc.avgrdwait.graph_data_size normal
localhost;localhost:diskstats_latency.sdc.avgrdwait.warning 0:3
localhost;localhost:diskstats_latency.sdc.avgrdwait.type GAUGE
localhost;localhost:diskstats_latency.sdc.avgrdwait.label Read IO Wait time
localhost;localhost:diskstats_latency.sdc.svctm.info Average time an I/O takes on the block device, not including any queue times, just the round trip time for the disk request.
localhost;localhost:diskstats_latency.sdc.svctm.update_rate 300
localhost;localhost:diskstats_latency.sdc.svctm.draw LINE1
localhost;localhost:diskstats_latency.sdc.svctm.min 0
localhost;localhost:diskstats_latency.sdc.svctm.graph_data_size normal
localhost;localhost:diskstats_latency.sdc.svctm.label Device IO time
localhost;localhost:diskstats_latency.sdc.svctm.type GAUGE
localhost;localhost:diskstats_latency.sdc.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al.)
localhost;localhost:diskstats_latency.sdc.avgwait.update_rate 300
localhost;localhost:diskstats_latency.sdc.avgwait.draw LINE1
localhost;localhost:diskstats_latency.sdc.avgwait.min 0
localhost;localhost:diskstats_latency.sdc.avgwait.graph_data_size normal
localhost;localhost:diskstats_latency.sdc.avgwait.label IO Wait time
localhost;localhost:diskstats_latency.sdc.avgwait.type GAUGE
localhost;localhost:diskstats_latency.sdc.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al.)
localhost;localhost:diskstats_latency.sdc.avgwrwait.update_rate 300
localhost;localhost:diskstats_latency.sdc.avgwrwait.draw LINE1
localhost;localhost:diskstats_latency.sdc.avgwrwait.min 0
localhost;localhost:diskstats_latency.sdc.avgwrwait.graph_data_size normal
localhost;localhost:diskstats_latency.sdc.avgwrwait.warning 0:3
localhost;localhost:diskstats_latency.sdc.avgwrwait.type GAUGE
localhost;localhost:diskstats_latency.sdc.avgwrwait.label Write IO Wait time
localhost;localhost:processes.graph_title Processes
localhost;localhost:processes.graph_info This graph shows the number of processes.
localhost;localhost:processes.graph_category processes
localhost;localhost:processes.graph_args --base 1000 -l 0
localhost;localhost:processes.graph_vlabel Number of processes
localhost;localhost:processes.graph_order sleeping idle stopped zombie dead paging uninterruptible runnable processes dead paging idle sleeping uninterruptible stopped runnable zombie processes
localhost;localhost:processes.uninterruptible.info The number of uninterruptible processes (usually IO).
localhost;localhost:processes.uninterruptible.update_rate 300
localhost;localhost:processes.uninterruptible.draw STACK
localhost;localhost:processes.uninterruptible.colour ffa500
localhost;localhost:processes.uninterruptible.graph_data_size normal
localhost;localhost:processes.uninterruptible.label uninterruptible
localhost;localhost:processes.paging.info The number of paging processes (<2.6 kernels only).
localhost;localhost:processes.paging.update_rate 300
localhost;localhost:processes.paging.draw STACK
localhost;localhost:processes.paging.colour 00aaaa
localhost;localhost:processes.paging.graph_data_size normal
localhost;localhost:processes.paging.label paging
localhost;localhost:processes.processes.info The total number of processes.
localhost;localhost:processes.processes.update_rate 300
localhost;localhost:processes.processes.draw LINE1
localhost;localhost:processes.processes.colour c0c0c0
localhost;localhost:processes.processes.graph_data_size normal
localhost;localhost:processes.processes.label total
localhost;localhost:processes.idle.info The number of idle kernel threads (>= 4.2 kernels only).
localhost;localhost:processes.idle.update_rate 300
localhost;localhost:processes.idle.draw STACK
localhost;localhost:processes.idle.colour 4169e1
localhost;localhost:processes.idle.graph_data_size normal
localhost;localhost:processes.idle.label idle
localhost;localhost:processes.stopped.info The number of stopped or traced processes.
localhost;localhost:processes.stopped.update_rate 300
localhost;localhost:processes.stopped.draw STACK
localhost;localhost:processes.stopped.colour cc0000
localhost;localhost:processes.stopped.graph_data_size normal
localhost;localhost:processes.stopped.label stopped
localhost;localhost:processes.zombie.info The number of defunct ('zombie') processes (process terminated and parent not waiting).
localhost;localhost:processes.zombie.update_rate 300
localhost;localhost:processes.zombie.draw STACK
localhost;localhost:processes.zombie.colour 990000
localhost;localhost:processes.zombie.graph_data_size normal
localhost;localhost:processes.zombie.label zombie
localhost;localhost:processes.dead.info The number of dead processes.
localhost;localhost:processes.dead.update_rate 300
localhost;localhost:processes.dead.draw STACK
localhost;localhost:processes.dead.colour ff0000
localhost;localhost:processes.dead.graph_data_size normal
localhost;localhost:processes.dead.label dead
localhost;localhost:processes.sleeping.info The number of sleeping processes.
localhost;localhost:processes.sleeping.update_rate 300
localhost;localhost:processes.sleeping.draw AREA
localhost;localhost:processes.sleeping.colour 0022ff
localhost;localhost:processes.sleeping.graph_data_size normal
localhost;localhost:processes.sleeping.label sleeping
localhost;localhost:processes.runnable.info The number of runnable processes (on the run queue).
localhost;localhost:processes.runnable.update_rate 300
localhost;localhost:processes.runnable.draw STACK
localhost;localhost:processes.runnable.colour 22ff22
localhost;localhost:processes.runnable.graph_data_size normal
localhost;localhost:processes.runnable.label runnable
localhost;localhost:users.graph_title Logged in users
localhost;localhost:users.graph_args --base 1000 -l 0
localhost;localhost:users.graph_vlabel Users
localhost;localhost:users.graph_scale no
localhost;localhost:users.graph_category system
localhost;localhost:users.graph_printf %3.0lf
localhost;localhost:users.graph_order tty pty pts X other
localhost;localhost:users.pty.update_rate 300
localhost;localhost:users.pty.draw AREASTACK
localhost;localhost:users.pty.colour 0000FF
localhost;localhost:users.pty.graph_data_size normal
localhost;localhost:users.pty.label pty
localhost;localhost:users.pts.update_rate 300
localhost;localhost:users.pts.draw AREASTACK
localhost;localhost:users.pts.colour 00FFFF
localhost;localhost:users.pts.graph_data_size normal
localhost;localhost:users.pts.label pts
localhost;localhost:users.X.info Users logged in on an X display
localhost;localhost:users.X.update_rate 300
localhost;localhost:users.X.draw AREASTACK
localhost;localhost:users.X.colour 000000
localhost;localhost:users.X.graph_data_size normal
localhost;localhost:users.X.label X displays
localhost;localhost:users.other.info Users logged in by indeterminate method
localhost;localhost:users.other.update_rate 300
localhost;localhost:users.other.colour FF0000
localhost;localhost:users.other.graph_data_size normal
localhost;localhost:users.other.label Other users
localhost;localhost:users.tty.update_rate 300
localhost;localhost:users.tty.draw AREASTACK
localhost;localhost:users.tty.colour 00FF00
localhost;localhost:users.tty.graph_data_size normal
localhost;localhost:users.tty.label tty
localhost;localhost:if_eno1.graph_order down up down up
localhost;localhost:if_eno1.graph_title eno1 traffic
localhost;localhost:if_eno1.graph_args --base 1000
localhost;localhost:if_eno1.graph_vlabel bits in (-) / out (+) per ${graph_period}
localhost;localhost:if_eno1.graph_category network
localhost;localhost:if_eno1.graph_info This graph shows the traffic of the eno1 network interface. Please note that the traffic is shown in bits per second, not bytes. IMPORTANT: On 32-bit systems the data source for this plugin uses 32-bit counters, which makes the plugin unreliable and unsuitable for most 100-Mb/s (or faster) interfaces, where traffic is expected to exceed 50 Mb/s over a 5 minute period. This means that this plugin is unsuitable for most 32-bit production environments. To avoid this problem, use the ip_ plugin instead. There should be no problems on 64-bit systems running 64-bit kernels.
localhost;localhost:if_eno1.down.cdef down,8,*
localhost;localhost:if_eno1.down.update_rate 300
localhost;localhost:if_eno1.down.min 0
localhost;localhost:if_eno1.down.max 1000000000
localhost;localhost:if_eno1.down.graph_data_size normal
localhost;localhost:if_eno1.down.type DERIVE
localhost;localhost:if_eno1.down.label received
localhost;localhost:if_eno1.down.graph no
localhost;localhost:if_eno1.up.info Traffic of the eno1 interface. Maximum speed is 1000 Mb/s.
localhost;localhost:if_eno1.up.cdef up,8,*
localhost;localhost:if_eno1.up.update_rate 300
localhost;localhost:if_eno1.up.min 0
localhost;localhost:if_eno1.up.max 1000000000
localhost;localhost:if_eno1.up.graph_data_size normal
localhost;localhost:if_eno1.up.negative down
localhost;localhost:if_eno1.up.type DERIVE
localhost;localhost:if_eno1.up.label bps
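
The 32-bit warning in that graph_info is straightforward arithmetic: a 32-bit byte counter wraps after 2^32 bytes, and with Munin's 300-second sampling interval that headroom runs out well below line rate (the quoted 50 Mb/s threshold is roughly the half-wrap point over one interval). The figures, spelled out (plain arithmetic, not measurements from this host):

    WRAP = 2**32        # bytes before a 32-bit octet counter rolls over
    INTERVAL = 300      # Munin's 5-minute sampling period, in seconds

    # Highest average rate one sampling interval can represent unambiguously:
    print(f"{WRAP / INTERVAL * 8 / 1e6:.0f} Mbit/s")          # ~115 Mbit/s

    # At gigabit line rate the counter wraps long before the next sample:
    print(f"wraps every {WRAP / (1e9 / 8):.0f} s at 1 Gb/s")  # ~34 s
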
localhost;localhost:diskstats_iops.sda.graph_title IOs for /dev/sda
localhost;localhost:diskstats_iops.sda.graph_args --base 1000
localhost;localhost:diskstats_iops.sda.graph_vlabel Units read (-) / write (+)
localhost;localhost:diskstats_iops.sda.graph_category disk
localhost;localhost:diskstats_iops.sda.graph_info This graph shows the number of IO operations per second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of 1024, which it was prior to 2.0 beta 3. This is because the base for this graph is 1000, not 1024.
localhost;localhost:diskstats_iops.sda.graph_order rdio wrio avgrdrqsz avgwrrqsz
localhost;localhost:diskstats_iops.sda.rdio.update_rate 300
localhost;localhost:diskstats_iops.sda.rdio.draw LINE1
localhost;localhost:diskstats_iops.sda.rdio.min 0
localhost;localhost:diskstats_iops.sda.rdio.graph_data_size normal
localhost;localhost:diskstats_iops.sda.rdio.label dummy
localhost;localhost:diskstats_iops.sda.rdio.type GAUGE
localhost;localhost:diskstats_iops.sda.rdio.graph no
localhost;localhost:diskstats_iops.sda.wrio.update_rate 300
localhost;localhost:diskstats_iops.sda.wrio.draw LINE1
localhost;localhost:diskstats_iops.sda.wrio.min 0
localhost;localhost:diskstats_iops.sda.wrio.graph_data_size normal
localhost;localhost:diskstats_iops.sda.wrio.label IO/sec
localhost;localhost:diskstats_iops.sda.wrio.type GAUGE
localhost;localhost:diskstats_iops.sda.wrio.negative rdio
localhost;localhost:diskstats_iops.sda.avgwrrqsz.info Average Request Size in kilobytes (1000 based)
localhost;localhost:diskstats_iops.sda.avgwrrqsz.update_rate 300
localhost;localhost:diskstats_iops.sda.avgwrrqsz.draw LINE1
localhost;localhost:diskstats_iops.sda.avgwrrqsz.min 0
localhost;localhost:diskstats_iops.sda.avgwrrqsz.graph_data_size normal
localhost;localhost:diskstats_iops.sda.avgwrrqsz.negative avgrdrqsz
localhost;localhost:diskstats_iops.sda.avgwrrqsz.type GAUGE
localhost;localhost:diskstats_iops.sda.avgwrrqsz.label Req Size (KB)
localhost;localhost:diskstats_iops.sda.avgrdrqsz.update_rate 300
localhost;localhost:diskstats_iops.sda.avgrdrqsz.draw LINE1
localhost;localhost:diskstats_iops.sda.avgrdrqsz.min 0
localhost;localhost:diskstats_iops.sda.avgrdrqsz.graph_data_size normal
localhost;localhost:diskstats_iops.sda.avgrdrqsz.label dummy
localhost;localhost:diskstats_iops.sda.avgrdrqsz.type GAUGE
localhost;localhost:diskstats_iops.sda.avgrdrqsz.graph no
localhost;localhost:netstat.graph_title Netstat, combined
localhost;localhost:netstat.graph_args --units=si -l 1 --base 1000 --logarithmic
localhost;localhost:netstat.graph_vlabel TCP connections
localhost;localhost:netstat.graph_category network
localhost;localhost:netstat.graph_period second
localhost;localhost:netstat.graph_info This graph shows the TCP activity of all the network interfaces combined.
localhost;localhost:netstat.graph_order active passive failed resets established
localhost;localhost:netstat.established.info The number of currently open connections.
localhost;localhost:netstat.established.update_rate 300
localhost;localhost:netstat.established.graph_data_size normal
localhost;localhost:netstat.established.label established
localhost;localhost:netstat.established.type GAUGE
localhost;localhost:netstat.active.info The number of active TCP openings per second.
localhost;localhost:netstat.active.update_rate 300
localhost;localhost:netstat.active.min 0
localhost;localhost:netstat.active.max 50000
localhost;localhost:netstat.active.graph_data_size normal
localhost;localhost:netstat.active.label active
localhost;localhost:netstat.active.type DERIVE
localhost;localhost:netstat.passive.info The number of passive TCP openings per second.
localhost;localhost:netstat.passive.update_rate 300
localhost;localhost:netstat.passive.min 0
localhost;localhost:netstat.passive.max 50000
localhost;localhost:netstat.passive.graph_data_size normal
localhost;localhost:netstat.passive.label passive
localhost;localhost:netstat.passive.type DERIVE
localhost;localhost:netstat.failed.info The number of failed TCP connection attempts per second.
localhost;localhost:netstat.failed.update_rate 300
localhost;localhost:netstat.failed.min 0
localhost;localhost:netstat.failed.max 50000
localhost;localhost:netstat.failed.graph_data_size normal
localhost;localhost:netstat.failed.label failed
localhost;localhost:netstat.failed.type DERIVE
localhost;localhost:netstat.resets.info The number of TCP connection resets.
localhost;localhost:netstat.resets.update_rate 300
localhost;localhost:netstat.resets.min 0
localhost;localhost:netstat.resets.max 50000
localhost;localhost:netstat.resets.graph_data_size normal
localhost;localhost:netstat.resets.label resets
localhost;localhost:netstat.resets.type DERIVE
localhost;localhost:nginx_status.graph_title Nginx status
localhost;localhost:nginx_status.graph_args --base 1000
localhost;localhost:nginx_status.graph_category webserver
localhost;localhost:nginx_status.graph_vlabel Connections
localhost;localhost:nginx_status.graph_order total reading writing waiting
localhost;localhost:nginx_status.waiting.info Waiting
localhost;localhost:nginx_status.waiting.update_rate 300
localhost;localhost:nginx_status.waiting.draw LINE
localhost;localhost:nginx_status.waiting.graph_data_size normal
localhost;localhost:nginx_status.waiting.label Waiting
localhost;localhost:nginx_status.reading.info Reading
localhost;localhost:nginx_status.reading.update_rate 300
localhost;localhost:nginx_status.reading.draw LINE
localhost;localhost:nginx_status.reading.graph_data_size normal
localhost;localhost:nginx_status.reading.label Reading
localhost;localhost:nginx_status.total.info Active connections
localhost;localhost:nginx_status.total.update_rate 300
localhost;localhost:nginx_status.total.draw LINE
localhost;localhost:nginx_status.total.graph_data_size normal
localhost;localhost:nginx_status.total.label Active connections
localhost;localhost:nginx_status.writing.info Writing
localhost;localhost:nginx_status.writing.update_rate 300
localhost;localhost:nginx_status.writing.draw LINE
localhost;localhost:nginx_status.writing.graph_data_size normal
localhost;localhost:nginx_status.writing.label Writing
localhost;localhost:diskstats_throughput.md126.graph_title Disk throughput for /dev/md126
localhost;localhost:diskstats_throughput.md126.graph_args --base 1024
localhost;localhost:diskstats_throughput.md126.graph_vlabel Per ${graph_period} read (-) / write (+)
localhost;localhost:diskstats_throughput.md126.graph_category disk
localhost;localhost:diskstats_throughput.md126.graph_info This graph shows disk throughput in bytes per ${graph_period}. The graph base is 1024, so KB stands for kibibytes, and so on. localhost;localhost:diskstats_throughput.md126.graph_order rdbytes wrbytes localhost;localhost:diskstats_throughput.md126.rdbytes.update_rate 300 localhost;localhost:diskstats_throughput.md126.rdbytes.draw LINE1 localhost;localhost:diskstats_throughput.md126.rdbytes.min 0 localhost;localhost:diskstats_throughput.md126.rdbytes.graph_data_size normal localhost;localhost:diskstats_throughput.md126.rdbytes.label invisible localhost;localhost:diskstats_throughput.md126.rdbytes.type GAUGE localhost;localhost:diskstats_throughput.md126.rdbytes.graph no localhost;localhost:diskstats_throughput.md126.wrbytes.update_rate 300 localhost;localhost:diskstats_throughput.md126.wrbytes.draw LINE1 localhost;localhost:diskstats_throughput.md126.wrbytes.min 0 localhost;localhost:diskstats_throughput.md126.wrbytes.graph_data_size normal localhost;localhost:diskstats_throughput.md126.wrbytes.label Bytes localhost;localhost:diskstats_throughput.md126.wrbytes.type GAUGE localhost;localhost:diskstats_throughput.md126.wrbytes.negative rdbytes localhost;localhost:netstat_established.graph_title Netstat, established only localhost;localhost:netstat_established.graph_args --lower-limit 0 localhost;localhost:netstat_established.graph_vlabel TCP connections localhost;localhost:netstat_established.graph_category network localhost;localhost:netstat_established.graph_period second localhost;localhost:netstat_established.graph_info This graph shows the TCP activity of all the network interfaces combined. localhost;localhost:netstat_established.graph_order established localhost;localhost:netstat_established.established.info The number of currently open connections. localhost;localhost:netstat_established.established.update_rate 300 localhost;localhost:netstat_established.established.graph_data_size normal localhost;localhost:netstat_established.established.label established localhost;localhost:netstat_established.established.type GAUGE localhost;localhost:smart_sdc.graph_title S.M.A.R.T values for drive sdc localhost;localhost:smart_sdc.graph_vlabel Attribute S.M.A.R.T value localhost;localhost:smart_sdc.graph_args --base 1000 --lower-limit 0 localhost;localhost:smart_sdc.graph_category disk localhost;localhost:smart_sdc.graph_info This graph shows the value of all S.M.A.R.T attributes of drive sdc (ST1000DM003-1SB102). smartctl_exit_status is the return value of smartctl. A non-zero return value indicates an error, a potential error, or a fault on the drive. 
localhost;localhost:smart_sdc.graph_order Head_Flying_Hours Spin_Retry_Count Command_Timeout Reported_Uncorrect smartctl_exit_status Offline_Uncorrectable Seek_Error_Rate End_to_End_Error High_Fly_Writes Total_LBAs_Read Current_Pending_Sector Power_Cycle_Count UDMA_CRC_Error_Count Runtime_Bad_Block Temperature_Celsius Load_Cycle_Count Raw_Read_Error_Rate Hardware_ECC_Recovered Total_LBAs_Written Start_Stop_Count Reallocated_Sector_Ct Power_On_Hours Spin_Up_Time Airflow_Temperature_Cel localhost;localhost:smart_sdc.Power_On_Hours.critical 000: localhost;localhost:smart_sdc.Power_On_Hours.update_rate 300 localhost;localhost:smart_sdc.Power_On_Hours.graph_data_size normal localhost;localhost:smart_sdc.Power_On_Hours.label Power_On_Hours localhost;localhost:smart_sdc.UDMA_CRC_Error_Count.critical 000: localhost;localhost:smart_sdc.UDMA_CRC_Error_Count.update_rate 300 localhost;localhost:smart_sdc.UDMA_CRC_Error_Count.graph_data_size normal localhost;localhost:smart_sdc.UDMA_CRC_Error_Count.label UDMA_CRC_Error_Count localhost;localhost:smart_sdc.Power_Cycle_Count.critical 020: localhost;localhost:smart_sdc.Power_Cycle_Count.update_rate 300 localhost;localhost:smart_sdc.Power_Cycle_Count.graph_data_size normal localhost;localhost:smart_sdc.Power_Cycle_Count.label Power_Cycle_Count localhost;localhost:smart_sdc.Spin_Retry_Count.critical 097: localhost;localhost:smart_sdc.Spin_Retry_Count.update_rate 300 localhost;localhost:smart_sdc.Spin_Retry_Count.graph_data_size normal localhost;localhost:smart_sdc.Spin_Retry_Count.label Spin_Retry_Count localhost;localhost:smart_sdc.Airflow_Temperature_Cel.critical 040: localhost;localhost:smart_sdc.Airflow_Temperature_Cel.update_rate 300 localhost;localhost:smart_sdc.Airflow_Temperature_Cel.graph_data_size normal localhost;localhost:smart_sdc.Airflow_Temperature_Cel.label Airflow_Temperature_Cel localhost;localhost:smart_sdc.Command_Timeout.critical 000: localhost;localhost:smart_sdc.Command_Timeout.update_rate 300 localhost;localhost:smart_sdc.Command_Timeout.graph_data_size normal localhost;localhost:smart_sdc.Command_Timeout.label Command_Timeout localhost;localhost:smart_sdc.End_to_End_Error.critical 099: localhost;localhost:smart_sdc.End_to_End_Error.update_rate 300 localhost;localhost:smart_sdc.End_to_End_Error.graph_data_size normal localhost;localhost:smart_sdc.End_to_End_Error.label End_to_End_Error localhost;localhost:smart_sdc.Hardware_ECC_Recovered.critical 000: localhost;localhost:smart_sdc.Hardware_ECC_Recovered.update_rate 300 localhost;localhost:smart_sdc.Hardware_ECC_Recovered.graph_data_size normal localhost;localhost:smart_sdc.Hardware_ECC_Recovered.label Hardware_ECC_Recovered localhost;localhost:smart_sdc.High_Fly_Writes.critical 000: localhost;localhost:smart_sdc.High_Fly_Writes.update_rate 300 localhost;localhost:smart_sdc.High_Fly_Writes.graph_data_size normal localhost;localhost:smart_sdc.High_Fly_Writes.label High_Fly_Writes localhost;localhost:smart_sdc.Reallocated_Sector_Ct.critical 010: localhost;localhost:smart_sdc.Reallocated_Sector_Ct.update_rate 300 localhost;localhost:smart_sdc.Reallocated_Sector_Ct.graph_data_size normal localhost;localhost:smart_sdc.Reallocated_Sector_Ct.label Reallocated_Sector_Ct localhost;localhost:smart_sdc.Seek_Error_Rate.critical 045: localhost;localhost:smart_sdc.Seek_Error_Rate.update_rate 300 localhost;localhost:smart_sdc.Seek_Error_Rate.graph_data_size normal localhost;localhost:smart_sdc.Seek_Error_Rate.label Seek_Error_Rate localhost;localhost:smart_sdc.smartctl_exit_status.update_rate 300 
localhost;localhost:smart_sdc.smartctl_exit_status.warning 1 localhost;localhost:smart_sdc.smartctl_exit_status.graph_data_size normal localhost;localhost:smart_sdc.smartctl_exit_status.label smartctl_exit_status localhost;localhost:smart_sdc.Head_Flying_Hours.critical 000: localhost;localhost:smart_sdc.Head_Flying_Hours.update_rate 300 localhost;localhost:smart_sdc.Head_Flying_Hours.graph_data_size normal localhost;localhost:smart_sdc.Head_Flying_Hours.label Head_Flying_Hours localhost;localhost:smart_sdc.Temperature_Celsius.critical 000: localhost;localhost:smart_sdc.Temperature_Celsius.update_rate 300 localhost;localhost:smart_sdc.Temperature_Celsius.graph_data_size normal localhost;localhost:smart_sdc.Temperature_Celsius.label Temperature_Celsius localhost;localhost:smart_sdc.Load_Cycle_Count.critical 000: localhost;localhost:smart_sdc.Load_Cycle_Count.update_rate 300 localhost;localhost:smart_sdc.Load_Cycle_Count.graph_data_size normal localhost;localhost:smart_sdc.Load_Cycle_Count.label Load_Cycle_Count localhost;localhost:smart_sdc.Offline_Uncorrectable.critical 000: localhost;localhost:smart_sdc.Offline_Uncorrectable.update_rate 300 localhost;localhost:smart_sdc.Offline_Uncorrectable.graph_data_size normal localhost;localhost:smart_sdc.Offline_Uncorrectable.label Offline_Uncorrectable localhost;localhost:smart_sdc.Total_LBAs_Read.critical 000: localhost;localhost:smart_sdc.Total_LBAs_Read.update_rate 300 localhost;localhost:smart_sdc.Total_LBAs_Read.graph_data_size normal localhost;localhost:smart_sdc.Total_LBAs_Read.label Total_LBAs_Read localhost;localhost:smart_sdc.Total_LBAs_Written.critical 000: localhost;localhost:smart_sdc.Total_LBAs_Written.update_rate 300 localhost;localhost:smart_sdc.Total_LBAs_Written.graph_data_size normal localhost;localhost:smart_sdc.Total_LBAs_Written.label Total_LBAs_Written localhost;localhost:smart_sdc.Runtime_Bad_Block.critical 000: localhost;localhost:smart_sdc.Runtime_Bad_Block.update_rate 300 localhost;localhost:smart_sdc.Runtime_Bad_Block.graph_data_size normal localhost;localhost:smart_sdc.Runtime_Bad_Block.label Runtime_Bad_Block localhost;localhost:smart_sdc.Current_Pending_Sector.critical 000: localhost;localhost:smart_sdc.Current_Pending_Sector.update_rate 300 localhost;localhost:smart_sdc.Current_Pending_Sector.graph_data_size normal localhost;localhost:smart_sdc.Current_Pending_Sector.label Current_Pending_Sector localhost;localhost:smart_sdc.Raw_Read_Error_Rate.critical 006: localhost;localhost:smart_sdc.Raw_Read_Error_Rate.update_rate 300 localhost;localhost:smart_sdc.Raw_Read_Error_Rate.graph_data_size normal localhost;localhost:smart_sdc.Raw_Read_Error_Rate.label Raw_Read_Error_Rate localhost;localhost:smart_sdc.Reported_Uncorrect.critical 000: localhost;localhost:smart_sdc.Reported_Uncorrect.update_rate 300 localhost;localhost:smart_sdc.Reported_Uncorrect.graph_data_size normal localhost;localhost:smart_sdc.Reported_Uncorrect.label Reported_Uncorrect localhost;localhost:smart_sdc.Spin_Up_Time.critical 000: localhost;localhost:smart_sdc.Spin_Up_Time.update_rate 300 localhost;localhost:smart_sdc.Spin_Up_Time.graph_data_size normal localhost;localhost:smart_sdc.Spin_Up_Time.label Spin_Up_Time localhost;localhost:smart_sdc.Start_Stop_Count.critical 020: localhost;localhost:smart_sdc.Start_Stop_Count.update_rate 300 localhost;localhost:smart_sdc.Start_Stop_Count.graph_data_size normal localhost;localhost:smart_sdc.Start_Stop_Count.label Start_Stop_Count localhost;localhost:tomcat_volume.graph_title Tomcat volume 
localhost;localhost:tomcat_volume.graph_args --base 1000 localhost;localhost:tomcat_volume.graph_vlabel bytes per ${graph_period} localhost;localhost:tomcat_volume.graph_category tomcat localhost;localhost:tomcat_volume.graph_order volume localhost;localhost:tomcat_volume.volume.update_rate 300 localhost;localhost:tomcat_volume.volume.min 0 localhost;localhost:tomcat_volume.volume.max 1000000000 localhost;localhost:tomcat_volume.volume.graph_data_size normal localhost;localhost:tomcat_volume.volume.label bytes localhost;localhost:tomcat_volume.volume.type DERIVE localhost;localhost:tomcat_jvm.graph_title Tomcat JVM memory localhost;localhost:tomcat_jvm.graph_args --base 1024 -l 0 localhost;localhost:tomcat_jvm.graph_vlabel Bytes localhost;localhost:tomcat_jvm.graph_category tomcat localhost;localhost:tomcat_jvm.graph_order free used max free used max localhost;localhost:tomcat_jvm.free.update_rate 300 localhost;localhost:tomcat_jvm.free.draw AREA localhost;localhost:tomcat_jvm.free.graph_data_size normal localhost;localhost:tomcat_jvm.free.label free bytes localhost;localhost:tomcat_jvm.used.update_rate 300 localhost;localhost:tomcat_jvm.used.draw STACK localhost;localhost:tomcat_jvm.used.graph_data_size normal localhost;localhost:tomcat_jvm.used.label used bytes localhost;localhost:tomcat_jvm.max.update_rate 300 localhost;localhost:tomcat_jvm.max.draw LINE2 localhost;localhost:tomcat_jvm.max.graph_data_size normal localhost;localhost:tomcat_jvm.max.label maximum bytes localhost;localhost:diskstats_latency.sdb.graph_title Average latency for /dev/sdb localhost;localhost:diskstats_latency.sdb.graph_args --base 1000 --logarithmic localhost;localhost:diskstats_latency.sdb.graph_vlabel seconds localhost;localhost:diskstats_latency.sdb.graph_category disk localhost;localhost:diskstats_latency.sdb.graph_info This graph shows average waiting time/latency for different categories of disk operations. The times that include the queue times indicate how busy your system is. If the waiting time hits 1 second then your I/O system is 100% busy. localhost;localhost:diskstats_latency.sdb.graph_order svctm avgwait avgrdwait avgwrwait localhost;localhost:diskstats_latency.sdb.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al) localhost;localhost:diskstats_latency.sdb.avgrdwait.update_rate 300 localhost;localhost:diskstats_latency.sdb.avgrdwait.draw LINE1 localhost;localhost:diskstats_latency.sdb.avgrdwait.min 0 localhost;localhost:diskstats_latency.sdb.avgrdwait.graph_data_size normal localhost;localhost:diskstats_latency.sdb.avgrdwait.warning 0:3 localhost;localhost:diskstats_latency.sdb.avgrdwait.type GAUGE localhost;localhost:diskstats_latency.sdb.avgrdwait.label Read IO Wait time localhost;localhost:diskstats_latency.sdb.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request. 
localhost;localhost:diskstats_latency.sdb.svctm.update_rate 300 localhost;localhost:diskstats_latency.sdb.svctm.draw LINE1 localhost;localhost:diskstats_latency.sdb.svctm.min 0 localhost;localhost:diskstats_latency.sdb.svctm.graph_data_size normal localhost;localhost:diskstats_latency.sdb.svctm.label Device IO time localhost;localhost:diskstats_latency.sdb.svctm.type GAUGE localhost;localhost:diskstats_latency.sdb.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al) localhost;localhost:diskstats_latency.sdb.avgwait.update_rate 300 localhost;localhost:diskstats_latency.sdb.avgwait.draw LINE1 localhost;localhost:diskstats_latency.sdb.avgwait.min 0 localhost;localhost:diskstats_latency.sdb.avgwait.graph_data_size normal localhost;localhost:diskstats_latency.sdb.avgwait.label IO Wait time localhost;localhost:diskstats_latency.sdb.avgwait.type GAUGE localhost;localhost:diskstats_latency.sdb.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al) localhost;localhost:diskstats_latency.sdb.avgwrwait.update_rate 300 localhost;localhost:diskstats_latency.sdb.avgwrwait.draw LINE1 localhost;localhost:diskstats_latency.sdb.avgwrwait.min 0 localhost;localhost:diskstats_latency.sdb.avgwrwait.graph_data_size normal localhost;localhost:diskstats_latency.sdb.avgwrwait.warning 0:3 localhost;localhost:diskstats_latency.sdb.avgwrwait.type GAUGE localhost;localhost:diskstats_latency.sdb.avgwrwait.label Write IO Wait time localhost;localhost:ping_google_com.graph_title IPv4 ping times to google.com localhost;localhost:ping_google_com.graph_args --base 1000 -l 0 localhost;localhost:ping_google_com.graph_vlabel roundtrip time (seconds) localhost;localhost:ping_google_com.graph_category network localhost;localhost:ping_google_com.graph_info This graph shows ping RTT statistics. localhost;localhost:ping_google_com.graph_order ping packetloss localhost;localhost:ping_google_com.ping.info Ping RTT statistics for google.com. 
localhost;localhost:ping_google_com.ping.update_rate 300 localhost;localhost:ping_google_com.ping.graph_data_size normal localhost;localhost:ping_google_com.ping.label google.com localhost;localhost:ping_google_com.packetloss.update_rate 300 localhost;localhost:ping_google_com.packetloss.graph_data_size normal localhost;localhost:ping_google_com.packetloss.label packet loss localhost;localhost:ping_google_com.packetloss.graph no localhost;localhost:varnish4_request_rate.graph_category varnish localhost;localhost:varnish4_request_rate.graph_title Request rates localhost;localhost:varnish4_request_rate.graph_order cache_hit cache_hitpass cache_miss backend_conn backend_unhealthy client_req client_conn backend_conn backend_unhealthy cache_hit cache_hitpass cache_miss client_req s_pass s_pipe sess_conn localhost;localhost:varnish4_request_rate.client_req.update_rate 300 localhost;localhost:varnish4_request_rate.client_req.min 0 localhost;localhost:varnish4_request_rate.client_req.colour 111111 localhost;localhost:varnish4_request_rate.client_req.graph_data_size normal localhost;localhost:varnish4_request_rate.client_req.label Good client requests received localhost;localhost:varnish4_request_rate.client_req.type DERIVE localhost;localhost:varnish4_request_rate.s_pipe.update_rate 300 localhost;localhost:varnish4_request_rate.s_pipe.min 0 localhost;localhost:varnish4_request_rate.s_pipe.colour 1d2bdf localhost;localhost:varnish4_request_rate.s_pipe.graph_data_size normal localhost;localhost:varnish4_request_rate.s_pipe.label Total pipe sessions seen localhost;localhost:varnish4_request_rate.s_pipe.type DERIVE localhost;localhost:varnish4_request_rate.sess_conn.update_rate 300 localhost;localhost:varnish4_request_rate.sess_conn.min 0 localhost;localhost:varnish4_request_rate.sess_conn.colour 444444 localhost;localhost:varnish4_request_rate.sess_conn.graph_data_size normal localhost;localhost:varnish4_request_rate.sess_conn.label Sessions accepted localhost;localhost:varnish4_request_rate.sess_conn.type DERIVE localhost;localhost:varnish4_request_rate.sess_conn.graph ON localhost;localhost:varnish4_request_rate.cache_miss.update_rate 300 localhost;localhost:varnish4_request_rate.cache_miss.draw STACK localhost;localhost:varnish4_request_rate.cache_miss.min 0 localhost;localhost:varnish4_request_rate.cache_miss.colour FF0000 localhost;localhost:varnish4_request_rate.cache_miss.graph_data_size normal localhost;localhost:varnish4_request_rate.cache_miss.label Cache misses localhost;localhost:varnish4_request_rate.cache_miss.type DERIVE localhost;localhost:varnish4_request_rate.backend_conn.update_rate 300 localhost;localhost:varnish4_request_rate.backend_conn.min 0 localhost;localhost:varnish4_request_rate.backend_conn.colour 995599 localhost;localhost:varnish4_request_rate.backend_conn.graph_data_size normal localhost;localhost:varnish4_request_rate.backend_conn.label Backend conn. 
success localhost;localhost:varnish4_request_rate.backend_conn.type DERIVE localhost;localhost:varnish4_request_rate.s_pass.update_rate 300 localhost;localhost:varnish4_request_rate.s_pass.min 0 localhost;localhost:varnish4_request_rate.s_pass.colour 785d0d localhost;localhost:varnish4_request_rate.s_pass.graph_data_size normal localhost;localhost:varnish4_request_rate.s_pass.label Total pass-ed requests seen localhost;localhost:varnish4_request_rate.s_pass.type DERIVE localhost;localhost:varnish4_request_rate.backend_unhealthy.update_rate 300 localhost;localhost:varnish4_request_rate.backend_unhealthy.min 0 localhost;localhost:varnish4_request_rate.backend_unhealthy.colour FF55FF localhost;localhost:varnish4_request_rate.backend_unhealthy.graph_data_size normal localhost;localhost:varnish4_request_rate.backend_unhealthy.label Backend conn. not attempted localhost;localhost:varnish4_request_rate.backend_unhealthy.type DERIVE localhost;localhost:varnish4_request_rate.cache_hitpass.info Hitpass are cached passes: An entry in the cache instructing Varnish to pass. Typically achieved after a pass in vcl_fetch. localhost;localhost:varnish4_request_rate.cache_hitpass.update_rate 300 localhost;localhost:varnish4_request_rate.cache_hitpass.draw STACK localhost;localhost:varnish4_request_rate.cache_hitpass.min 0 localhost;localhost:varnish4_request_rate.cache_hitpass.colour FFFF00 localhost;localhost:varnish4_request_rate.cache_hitpass.graph_data_size normal localhost;localhost:varnish4_request_rate.cache_hitpass.type DERIVE localhost;localhost:varnish4_request_rate.cache_hitpass.label Cache hits for pass localhost;localhost:varnish4_request_rate.cache_hit.update_rate 300 localhost;localhost:varnish4_request_rate.cache_hit.draw AREA localhost;localhost:varnish4_request_rate.cache_hit.min 0 localhost;localhost:varnish4_request_rate.cache_hit.colour 00FF00 localhost;localhost:varnish4_request_rate.cache_hit.graph_data_size normal localhost;localhost:varnish4_request_rate.cache_hit.label Cache hits localhost;localhost:varnish4_request_rate.cache_hit.type DERIVE localhost;localhost:smart_sdd.graph_title S.M.A.R.T values for drive sdd localhost;localhost:smart_sdd.graph_vlabel Attribute S.M.A.R.T value localhost;localhost:smart_sdd.graph_args --base 1000 --lower-limit 0 localhost;localhost:smart_sdd.graph_category disk localhost;localhost:smart_sdd.graph_info This graph shows the value of all S.M.A.R.T attributes of drive sdd (ST1000DM003-1SB102). smartctl_exit_status is the return value of smartctl. A non-zero return value indicates an error, a potential error, or a fault on the drive. 
localhost;localhost:smart_sdd.graph_order Head_Flying_Hours Spin_Retry_Count Command_Timeout Reported_Uncorrect smartctl_exit_status Offline_Uncorrectable Seek_Error_Rate End_to_End_Error High_Fly_Writes Total_LBAs_Read Current_Pending_Sector Power_Cycle_Count UDMA_CRC_Error_Count Runtime_Bad_Block Temperature_Celsius Load_Cycle_Count Raw_Read_Error_Rate Hardware_ECC_Recovered Total_LBAs_Written Start_Stop_Count Reallocated_Sector_Ct Power_On_Hours Spin_Up_Time Airflow_Temperature_Cel localhost;localhost:smart_sdd.Power_On_Hours.critical 000: localhost;localhost:smart_sdd.Power_On_Hours.update_rate 300 localhost;localhost:smart_sdd.Power_On_Hours.graph_data_size normal localhost;localhost:smart_sdd.Power_On_Hours.label Power_On_Hours localhost;localhost:smart_sdd.UDMA_CRC_Error_Count.critical 000: localhost;localhost:smart_sdd.UDMA_CRC_Error_Count.update_rate 300 localhost;localhost:smart_sdd.UDMA_CRC_Error_Count.graph_data_size normal localhost;localhost:smart_sdd.UDMA_CRC_Error_Count.label UDMA_CRC_Error_Count localhost;localhost:smart_sdd.Power_Cycle_Count.critical 020: localhost;localhost:smart_sdd.Power_Cycle_Count.update_rate 300 localhost;localhost:smart_sdd.Power_Cycle_Count.graph_data_size normal localhost;localhost:smart_sdd.Power_Cycle_Count.label Power_Cycle_Count localhost;localhost:smart_sdd.Spin_Retry_Count.critical 097: localhost;localhost:smart_sdd.Spin_Retry_Count.update_rate 300 localhost;localhost:smart_sdd.Spin_Retry_Count.graph_data_size normal localhost;localhost:smart_sdd.Spin_Retry_Count.label Spin_Retry_Count localhost;localhost:smart_sdd.Airflow_Temperature_Cel.critical 040: localhost;localhost:smart_sdd.Airflow_Temperature_Cel.update_rate 300 localhost;localhost:smart_sdd.Airflow_Temperature_Cel.graph_data_size normal localhost;localhost:smart_sdd.Airflow_Temperature_Cel.label Airflow_Temperature_Cel localhost;localhost:smart_sdd.Command_Timeout.critical 000: localhost;localhost:smart_sdd.Command_Timeout.update_rate 300 localhost;localhost:smart_sdd.Command_Timeout.graph_data_size normal localhost;localhost:smart_sdd.Command_Timeout.label Command_Timeout localhost;localhost:smart_sdd.End_to_End_Error.critical 099: localhost;localhost:smart_sdd.End_to_End_Error.update_rate 300 localhost;localhost:smart_sdd.End_to_End_Error.graph_data_size normal localhost;localhost:smart_sdd.End_to_End_Error.label End_to_End_Error localhost;localhost:smart_sdd.Hardware_ECC_Recovered.critical 000: localhost;localhost:smart_sdd.Hardware_ECC_Recovered.update_rate 300 localhost;localhost:smart_sdd.Hardware_ECC_Recovered.graph_data_size normal localhost;localhost:smart_sdd.Hardware_ECC_Recovered.label Hardware_ECC_Recovered localhost;localhost:smart_sdd.High_Fly_Writes.critical 000: localhost;localhost:smart_sdd.High_Fly_Writes.update_rate 300 localhost;localhost:smart_sdd.High_Fly_Writes.graph_data_size normal localhost;localhost:smart_sdd.High_Fly_Writes.label High_Fly_Writes localhost;localhost:smart_sdd.Reallocated_Sector_Ct.critical 010: localhost;localhost:smart_sdd.Reallocated_Sector_Ct.update_rate 300 localhost;localhost:smart_sdd.Reallocated_Sector_Ct.graph_data_size normal localhost;localhost:smart_sdd.Reallocated_Sector_Ct.label Reallocated_Sector_Ct localhost;localhost:smart_sdd.Seek_Error_Rate.critical 045: localhost;localhost:smart_sdd.Seek_Error_Rate.update_rate 300 localhost;localhost:smart_sdd.Seek_Error_Rate.graph_data_size normal localhost;localhost:smart_sdd.Seek_Error_Rate.label Seek_Error_Rate localhost;localhost:smart_sdd.smartctl_exit_status.update_rate 300 
localhost;localhost:smart_sdd.smartctl_exit_status.warning 1 localhost;localhost:smart_sdd.smartctl_exit_status.graph_data_size normal localhost;localhost:smart_sdd.smartctl_exit_status.label smartctl_exit_status localhost;localhost:smart_sdd.Head_Flying_Hours.critical 000: localhost;localhost:smart_sdd.Head_Flying_Hours.update_rate 300 localhost;localhost:smart_sdd.Head_Flying_Hours.graph_data_size normal localhost;localhost:smart_sdd.Head_Flying_Hours.label Head_Flying_Hours localhost;localhost:smart_sdd.Temperature_Celsius.critical 000: localhost;localhost:smart_sdd.Temperature_Celsius.update_rate 300 localhost;localhost:smart_sdd.Temperature_Celsius.graph_data_size normal localhost;localhost:smart_sdd.Temperature_Celsius.label Temperature_Celsius localhost;localhost:smart_sdd.Load_Cycle_Count.critical 000: localhost;localhost:smart_sdd.Load_Cycle_Count.update_rate 300 localhost;localhost:smart_sdd.Load_Cycle_Count.graph_data_size normal localhost;localhost:smart_sdd.Load_Cycle_Count.label Load_Cycle_Count localhost;localhost:smart_sdd.Offline_Uncorrectable.critical 000: localhost;localhost:smart_sdd.Offline_Uncorrectable.update_rate 300 localhost;localhost:smart_sdd.Offline_Uncorrectable.graph_data_size normal localhost;localhost:smart_sdd.Offline_Uncorrectable.label Offline_Uncorrectable localhost;localhost:smart_sdd.Total_LBAs_Read.critical 000: localhost;localhost:smart_sdd.Total_LBAs_Read.update_rate 300 localhost;localhost:smart_sdd.Total_LBAs_Read.graph_data_size normal localhost;localhost:smart_sdd.Total_LBAs_Read.label Total_LBAs_Read localhost;localhost:smart_sdd.Total_LBAs_Written.critical 000: localhost;localhost:smart_sdd.Total_LBAs_Written.update_rate 300 localhost;localhost:smart_sdd.Total_LBAs_Written.graph_data_size normal localhost;localhost:smart_sdd.Total_LBAs_Written.label Total_LBAs_Written localhost;localhost:smart_sdd.Runtime_Bad_Block.critical 000: localhost;localhost:smart_sdd.Runtime_Bad_Block.update_rate 300 localhost;localhost:smart_sdd.Runtime_Bad_Block.graph_data_size normal localhost;localhost:smart_sdd.Runtime_Bad_Block.label Runtime_Bad_Block localhost;localhost:smart_sdd.Current_Pending_Sector.critical 000: localhost;localhost:smart_sdd.Current_Pending_Sector.update_rate 300 localhost;localhost:smart_sdd.Current_Pending_Sector.graph_data_size normal localhost;localhost:smart_sdd.Current_Pending_Sector.label Current_Pending_Sector localhost;localhost:smart_sdd.Raw_Read_Error_Rate.critical 006: localhost;localhost:smart_sdd.Raw_Read_Error_Rate.update_rate 300 localhost;localhost:smart_sdd.Raw_Read_Error_Rate.graph_data_size normal localhost;localhost:smart_sdd.Raw_Read_Error_Rate.label Raw_Read_Error_Rate localhost;localhost:smart_sdd.Reported_Uncorrect.critical 000: localhost;localhost:smart_sdd.Reported_Uncorrect.update_rate 300 localhost;localhost:smart_sdd.Reported_Uncorrect.graph_data_size normal localhost;localhost:smart_sdd.Reported_Uncorrect.label Reported_Uncorrect localhost;localhost:smart_sdd.Spin_Up_Time.critical 000: localhost;localhost:smart_sdd.Spin_Up_Time.update_rate 300 localhost;localhost:smart_sdd.Spin_Up_Time.graph_data_size normal localhost;localhost:smart_sdd.Spin_Up_Time.label Spin_Up_Time localhost;localhost:smart_sdd.Start_Stop_Count.critical 020: localhost;localhost:smart_sdd.Start_Stop_Count.update_rate 300 localhost;localhost:smart_sdd.Start_Stop_Count.graph_data_size normal localhost;localhost:smart_sdd.Start_Stop_Count.label Start_Stop_Count localhost;localhost:varnish4_transfer_rates.graph_category varnish 
localhost;localhost:varnish4_transfer_rates.graph_title Transfer rates localhost;localhost:varnish4_transfer_rates.graph_order s_resp_bodybytes s_resp_hdrbytes s_resp_bodybytes s_resp_hdrbytes localhost;localhost:varnish4_transfer_rates.graph_vlabel bit/s localhost;localhost:varnish4_transfer_rates.graph_args -l 0 localhost;localhost:varnish4_transfer_rates.s_resp_bodybytes.cdef s_resp_bodybytes,8,* localhost;localhost:varnish4_transfer_rates.s_resp_bodybytes.update_rate 300 localhost;localhost:varnish4_transfer_rates.s_resp_bodybytes.draw AREA localhost;localhost:varnish4_transfer_rates.s_resp_bodybytes.min 0 localhost;localhost:varnish4_transfer_rates.s_resp_bodybytes.graph_data_size normal localhost;localhost:varnish4_transfer_rates.s_resp_bodybytes.label Body traffic localhost;localhost:varnish4_transfer_rates.s_resp_bodybytes.type DERIVE localhost;localhost:varnish4_transfer_rates.s_resp_hdrbytes.info HTTP Header traffic. TCP/IP overhead is not included. localhost;localhost:varnish4_transfer_rates.s_resp_hdrbytes.cdef s_resp_hdrbytes,8,* localhost;localhost:varnish4_transfer_rates.s_resp_hdrbytes.update_rate 300 localhost;localhost:varnish4_transfer_rates.s_resp_hdrbytes.draw STACK localhost;localhost:varnish4_transfer_rates.s_resp_hdrbytes.min 0 localhost;localhost:varnish4_transfer_rates.s_resp_hdrbytes.graph_data_size normal localhost;localhost:varnish4_transfer_rates.s_resp_hdrbytes.type DERIVE localhost;localhost:varnish4_transfer_rates.s_resp_hdrbytes.label Header traffic localhost;localhost:spamassassin2.host_name mrazitko.varak.net localhost;localhost:spamassassin2.graph_title SpamAssassin throughput localhost;localhost:spamassassin2.graph_args --base 1000 -l 0 localhost;localhost:spamassassin2.graph_vlabel mails/${graph_period} localhost;localhost:spamassassin2.graph_order ham spam ham spam localhost;localhost:spamassassin2.graph_category mail localhost;localhost:spamassassin2.spam.update_rate 300 localhost;localhost:spamassassin2.spam.draw STACK localhost;localhost:spamassassin2.spam.min 0 localhost;localhost:spamassassin2.spam.graph_data_size normal localhost;localhost:spamassassin2.spam.label spam localhost;localhost:spamassassin2.spam.type DERIVE localhost;localhost:spamassassin2.ham.update_rate 300 localhost;localhost:spamassassin2.ham.draw AREA localhost;localhost:spamassassin2.ham.min 0 localhost;localhost:spamassassin2.ham.graph_data_size normal localhost;localhost:spamassassin2.ham.label ham localhost;localhost:spamassassin2.ham.type DERIVE localhost;localhost:diskstats_latency.sdd.graph_title Average latency for /dev/sdd localhost;localhost:diskstats_latency.sdd.graph_args --base 1000 --logarithmic localhost;localhost:diskstats_latency.sdd.graph_vlabel seconds localhost;localhost:diskstats_latency.sdd.graph_category disk localhost;localhost:diskstats_latency.sdd.graph_info This graph shows average waiting time/latency for different categories of disk operations. The times that include the queue times indicate how busy your system is. If the waiting time hits 1 second then your I/O system is 100% busy. 
localhost;localhost:diskstats_latency.sdd.graph_order svctm avgwait avgrdwait avgwrwait localhost;localhost:diskstats_latency.sdd.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al) localhost;localhost:diskstats_latency.sdd.avgrdwait.update_rate 300 localhost;localhost:diskstats_latency.sdd.avgrdwait.draw LINE1 localhost;localhost:diskstats_latency.sdd.avgrdwait.min 0 localhost;localhost:diskstats_latency.sdd.avgrdwait.graph_data_size normal localhost;localhost:diskstats_latency.sdd.avgrdwait.warning 0:3 localhost;localhost:diskstats_latency.sdd.avgrdwait.type GAUGE localhost;localhost:diskstats_latency.sdd.avgrdwait.label Read IO Wait time localhost;localhost:diskstats_latency.sdd.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request. localhost;localhost:diskstats_latency.sdd.svctm.update_rate 300 localhost;localhost:diskstats_latency.sdd.svctm.draw LINE1 localhost;localhost:diskstats_latency.sdd.svctm.min 0 localhost;localhost:diskstats_latency.sdd.svctm.graph_data_size normal localhost;localhost:diskstats_latency.sdd.svctm.label Device IO time localhost;localhost:diskstats_latency.sdd.svctm.type GAUGE localhost;localhost:diskstats_latency.sdd.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al) localhost;localhost:diskstats_latency.sdd.avgwait.update_rate 300 localhost;localhost:diskstats_latency.sdd.avgwait.draw LINE1 localhost;localhost:diskstats_latency.sdd.avgwait.min 0 localhost;localhost:diskstats_latency.sdd.avgwait.graph_data_size normal localhost;localhost:diskstats_latency.sdd.avgwait.label IO Wait time localhost;localhost:diskstats_latency.sdd.avgwait.type GAUGE localhost;localhost:diskstats_latency.sdd.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al) localhost;localhost:diskstats_latency.sdd.avgwrwait.update_rate 300 localhost;localhost:diskstats_latency.sdd.avgwrwait.draw LINE1 localhost;localhost:diskstats_latency.sdd.avgwrwait.min 0 localhost;localhost:diskstats_latency.sdd.avgwrwait.graph_data_size normal localhost;localhost:diskstats_latency.sdd.avgwrwait.warning 0:3 localhost;localhost:diskstats_latency.sdd.avgwrwait.type GAUGE localhost;localhost:diskstats_latency.sdd.avgwrwait.label Write IO Wait time localhost;localhost:uptime.graph_title Uptime localhost;localhost:uptime.graph_args --base 1000 -l 0 localhost;localhost:uptime.graph_scale no localhost;localhost:uptime.graph_vlabel uptime in days localhost;localhost:uptime.graph_category system localhost;localhost:uptime.graph_order uptime localhost;localhost:uptime.uptime.update_rate 300 localhost;localhost:uptime.uptime.draw AREA localhost;localhost:uptime.uptime.graph_data_size normal localhost;localhost:uptime.uptime.label uptime localhost;localhost:df_inode.graph_title Inode usage in percent localhost;localhost:df_inode.graph_args --upper-limit 100 -l 0 localhost;localhost:df_inode.graph_vlabel % localhost;localhost:df_inode.graph_scale no localhost;localhost:df_inode.graph_category disk localhost;localhost:df_inode.graph_order _dev_sda2 _dev_shm _run _dev_sda1 _dev_md126p2 _dev_md126p1 _dev_md126p3 _dev_sdb1 _dev_sdb2 _dev_sdb3 _dev_sdb5 localhost;localhost:df_inode._dev_md126p2.critical 98 localhost;localhost:df_inode._dev_md126p2.update_rate 300 localhost;localhost:df_inode._dev_md126p2.warning 92 localhost;localhost:df_inode._dev_md126p2.graph_data_size normal 
localhost;localhost:df_inode._dev_md126p2.label /var/spool/mail localhost;localhost:df_inode._dev_sdb3.critical 98 localhost;localhost:df_inode._dev_sdb3.update_rate 300 localhost;localhost:df_inode._dev_sdb3.warning 92 localhost;localhost:df_inode._dev_sdb3.graph_data_size normal localhost;localhost:df_inode._dev_sdb3.label /data localhost;localhost:df_inode._run.critical 98 localhost;localhost:df_inode._run.update_rate 300 localhost;localhost:df_inode._run.warning 92 localhost;localhost:df_inode._run.graph_data_size normal localhost;localhost:df_inode._run.label /run localhost;localhost:df_inode._dev_md126p3.critical 98 localhost;localhost:df_inode._dev_md126p3.update_rate 300 localhost;localhost:df_inode._dev_md126p3.warning 92 localhost;localhost:df_inode._dev_md126p3.graph_data_size normal localhost;localhost:df_inode._dev_md126p3.label /www localhost;localhost:df_inode._dev_sdb5.critical 98 localhost;localhost:df_inode._dev_sdb5.update_rate 300 localhost;localhost:df_inode._dev_sdb5.warning 92 localhost;localhost:df_inode._dev_sdb5.graph_data_size normal localhost;localhost:df_inode._dev_sdb5.label /backups localhost;localhost:df_inode._dev_shm.critical 98 localhost;localhost:df_inode._dev_shm.update_rate 300 localhost;localhost:df_inode._dev_shm.warning 92 localhost;localhost:df_inode._dev_shm.graph_data_size normal localhost;localhost:df_inode._dev_shm.label /dev/shm localhost;localhost:df_inode._dev_sdb1.critical 98 localhost;localhost:df_inode._dev_sdb1.update_rate 300 localhost;localhost:df_inode._dev_sdb1.warning 92 localhost;localhost:df_inode._dev_sdb1.graph_data_size normal localhost;localhost:df_inode._dev_sdb1.label /ludek1 localhost;localhost:df_inode._dev_sda1.critical 98 localhost;localhost:df_inode._dev_sda1.update_rate 300 localhost;localhost:df_inode._dev_sda1.warning 92 localhost;localhost:df_inode._dev_sda1.graph_data_size normal localhost;localhost:df_inode._dev_sda1.label /boot localhost;localhost:df_inode._dev_sdb2.critical 98 localhost;localhost:df_inode._dev_sdb2.update_rate 300 localhost;localhost:df_inode._dev_sdb2.warning 92 localhost;localhost:df_inode._dev_sdb2.graph_data_size normal localhost;localhost:df_inode._dev_sdb2.label /var/log localhost;localhost:df_inode._dev_sda2.critical 98 localhost;localhost:df_inode._dev_sda2.update_rate 300 localhost;localhost:df_inode._dev_sda2.warning 92 localhost;localhost:df_inode._dev_sda2.graph_data_size normal localhost;localhost:df_inode._dev_sda2.label / localhost;localhost:df_inode._dev_md126p1.critical 98 localhost;localhost:df_inode._dev_md126p1.update_rate 300 localhost;localhost:df_inode._dev_md126p1.warning 92 localhost;localhost:df_inode._dev_md126p1.graph_data_size normal localhost;localhost:df_inode._dev_md126p1.label /var/lib localhost;localhost:cpu.graph_title CPU usage localhost;localhost:cpu.graph_order system user nice idle iowait irq softirq system user nice idle iowait irq softirq steal guest localhost;localhost:cpu.graph_args --base 1000 -r --lower-limit 0 --upper-limit 400 localhost;localhost:cpu.graph_vlabel % localhost;localhost:cpu.graph_scale no localhost;localhost:cpu.graph_info This graph shows how CPU time is spent. 
localhost;localhost:cpu.graph_category system localhost;localhost:cpu.graph_period second localhost;localhost:cpu.system.info CPU time spent by the kernel in system activities localhost;localhost:cpu.system.update_rate 300 localhost;localhost:cpu.system.draw AREA localhost;localhost:cpu.system.min 0 localhost;localhost:cpu.system.graph_data_size normal localhost;localhost:cpu.system.label system localhost;localhost:cpu.system.type DERIVE localhost;localhost:cpu.softirq.info CPU time spent handling "batched" interrupts localhost;localhost:cpu.softirq.update_rate 300 localhost;localhost:cpu.softirq.draw STACK localhost;localhost:cpu.softirq.min 0 localhost;localhost:cpu.softirq.graph_data_size normal localhost;localhost:cpu.softirq.label softirq localhost;localhost:cpu.softirq.type DERIVE localhost;localhost:cpu.idle.info Idle CPU time localhost;localhost:cpu.idle.update_rate 300 localhost;localhost:cpu.idle.draw STACK localhost;localhost:cpu.idle.min 0 localhost;localhost:cpu.idle.graph_data_size normal localhost;localhost:cpu.idle.label idle localhost;localhost:cpu.idle.type DERIVE localhost;localhost:cpu.nice.info CPU time spent by nice(1)d programs localhost;localhost:cpu.nice.update_rate 300 localhost;localhost:cpu.nice.draw STACK localhost;localhost:cpu.nice.min 0 localhost;localhost:cpu.nice.graph_data_size normal localhost;localhost:cpu.nice.label nice localhost;localhost:cpu.nice.type DERIVE localhost;localhost:cpu.steal.info The time that a virtual CPU had runnable tasks, but the virtual CPU itself was not running localhost;localhost:cpu.steal.update_rate 300 localhost;localhost:cpu.steal.draw STACK localhost;localhost:cpu.steal.min 0 localhost;localhost:cpu.steal.graph_data_size normal localhost;localhost:cpu.steal.label steal localhost;localhost:cpu.steal.type DERIVE localhost;localhost:cpu.irq.info CPU time spent handling interrupts localhost;localhost:cpu.irq.update_rate 300 localhost;localhost:cpu.irq.draw STACK localhost;localhost:cpu.irq.min 0 localhost;localhost:cpu.irq.graph_data_size normal localhost;localhost:cpu.irq.label irq localhost;localhost:cpu.irq.type DERIVE localhost;localhost:cpu.user.info CPU time spent by normal programs and daemons localhost;localhost:cpu.user.update_rate 300 localhost;localhost:cpu.user.draw STACK localhost;localhost:cpu.user.min 0 localhost;localhost:cpu.user.graph_data_size normal localhost;localhost:cpu.user.label user localhost;localhost:cpu.user.type DERIVE localhost;localhost:cpu.guest.info The time spent running a virtual CPU for guest operating systems under the control of the Linux kernel. localhost;localhost:cpu.guest.update_rate 300 localhost;localhost:cpu.guest.draw STACK localhost;localhost:cpu.guest.min 0 localhost;localhost:cpu.guest.graph_data_size normal localhost;localhost:cpu.guest.label guest localhost;localhost:cpu.guest.type DERIVE localhost;localhost:cpu.iowait.info CPU time spent waiting for I/O operations to finish when there is nothing else to do. 
localhost;localhost:cpu.iowait.update_rate 300 localhost;localhost:cpu.iowait.draw STACK localhost;localhost:cpu.iowait.min 0 localhost;localhost:cpu.iowait.graph_data_size normal localhost;localhost:cpu.iowait.label iowait localhost;localhost:cpu.iowait.type DERIVE localhost;localhost:if_err_eno1.graph_order rcvd trans rcvd trans rxdrop txdrop collisions localhost;localhost:if_err_eno1.graph_title eno1 errors localhost;localhost:if_err_eno1.graph_args --base 1000 localhost;localhost:if_err_eno1.graph_vlabel packets in (-) / out (+) per ${graph_period} localhost;localhost:if_err_eno1.graph_category network localhost;localhost:if_err_eno1.graph_info This graph shows the amount of errors, packet drops, and collisions on the eno1 network interface. localhost;localhost:if_err_eno1.trans.update_rate 300 localhost;localhost:if_err_eno1.trans.warning 1 localhost;localhost:if_err_eno1.trans.graph_data_size normal localhost;localhost:if_err_eno1.trans.label errors localhost;localhost:if_err_eno1.trans.type COUNTER localhost;localhost:if_err_eno1.trans.negative rcvd localhost;localhost:if_err_eno1.rcvd.update_rate 300 localhost;localhost:if_err_eno1.rcvd.warning 1 localhost;localhost:if_err_eno1.rcvd.graph_data_size normal localhost;localhost:if_err_eno1.rcvd.label errors localhost;localhost:if_err_eno1.rcvd.type COUNTER localhost;localhost:if_err_eno1.rcvd.graph no localhost;localhost:if_err_eno1.txdrop.update_rate 300 localhost;localhost:if_err_eno1.txdrop.graph_data_size normal localhost;localhost:if_err_eno1.txdrop.label drops localhost;localhost:if_err_eno1.txdrop.type COUNTER localhost;localhost:if_err_eno1.txdrop.negative rxdrop localhost;localhost:if_err_eno1.collisions.update_rate 300 localhost;localhost:if_err_eno1.collisions.graph_data_size normal localhost;localhost:if_err_eno1.collisions.label collisions localhost;localhost:if_err_eno1.collisions.type COUNTER localhost;localhost:if_err_eno1.rxdrop.update_rate 300 localhost;localhost:if_err_eno1.rxdrop.graph_data_size normal localhost;localhost:if_err_eno1.rxdrop.label drops localhost;localhost:if_err_eno1.rxdrop.type COUNTER localhost;localhost:if_err_eno1.rxdrop.graph no localhost;localhost:forks.graph_title Fork rate localhost;localhost:forks.graph_args --base 1000 -l 0 localhost;localhost:forks.graph_vlabel forks / ${graph_period} localhost;localhost:forks.graph_category processes localhost;localhost:forks.graph_info This graph shows the number of forks (new processes started) per second. localhost;localhost:forks.graph_order forks localhost;localhost:forks.forks.info The number of forks per second. localhost;localhost:forks.forks.update_rate 300 localhost;localhost:forks.forks.min 0 localhost;localhost:forks.forks.max 100000 localhost;localhost:forks.forks.graph_data_size normal localhost;localhost:forks.forks.label forks localhost;localhost:forks.forks.type DERIVE localhost;localhost:memory.graph_args --base 1024 -l 0 --upper-limit 33460670464 localhost;localhost:memory.graph_vlabel Bytes localhost;localhost:memory.graph_title Memory usage localhost;localhost:memory.graph_category system localhost;localhost:memory.graph_info This graph shows what the machine uses memory for. 
localhost;localhost:memory.graph_order apps page_tables per_cpu swap_cache slab shmem cached buffers free swap apps buffers swap cached free shmem slab swap_cache page_tables per_cpu vmalloc_used committed mapped active inactive localhost;localhost:memory.swap_cache.info A piece of memory that keeps track of pages that have been fetched from swap but not yet been modified. localhost;localhost:memory.swap_cache.update_rate 300 localhost;localhost:memory.swap_cache.draw STACK localhost;localhost:memory.swap_cache.colour COLOUR2 localhost;localhost:memory.swap_cache.graph_data_size normal localhost;localhost:memory.swap_cache.label swap_cache localhost;localhost:memory.buffers.info Block device (e.g. harddisk) cache. Also where "dirty" blocks are stored until written. localhost;localhost:memory.buffers.update_rate 300 localhost;localhost:memory.buffers.draw STACK localhost;localhost:memory.buffers.colour COLOUR5 localhost;localhost:memory.buffers.graph_data_size normal localhost;localhost:memory.buffers.label buffers localhost;localhost:memory.shmem.info Shared Memory (SYSV SHM segments, tmpfs). localhost;localhost:memory.shmem.update_rate 300 localhost;localhost:memory.shmem.draw STACK localhost;localhost:memory.shmem.colour COLOUR9 localhost;localhost:memory.shmem.graph_data_size normal localhost;localhost:memory.shmem.label shmem localhost;localhost:memory.inactive.info Memory not currently used. localhost;localhost:memory.inactive.update_rate 300 localhost;localhost:memory.inactive.draw LINE2 localhost;localhost:memory.inactive.colour COLOUR15 localhost;localhost:memory.inactive.graph_data_size normal localhost;localhost:memory.inactive.label inactive localhost;localhost:memory.apps.info Memory used by user-space applications. localhost;localhost:memory.apps.update_rate 300 localhost;localhost:memory.apps.draw AREA localhost;localhost:memory.apps.colour COLOUR0 localhost;localhost:memory.apps.graph_data_size normal localhost;localhost:memory.apps.label apps localhost;localhost:memory.slab.info Memory used by the kernel (major users are caches like inode, dentry, etc). localhost;localhost:memory.slab.update_rate 300 localhost;localhost:memory.slab.draw STACK localhost;localhost:memory.slab.colour COLOUR3 localhost;localhost:memory.slab.graph_data_size normal localhost;localhost:memory.slab.label slab_cache localhost;localhost:memory.active.info Memory recently used. Not reclaimed unless absolutely necessary. localhost;localhost:memory.active.update_rate 300 localhost;localhost:memory.active.draw LINE2 localhost;localhost:memory.active.colour COLOUR12 localhost;localhost:memory.active.graph_data_size normal localhost;localhost:memory.active.label active localhost;localhost:memory.vmalloc_used.info 'VMalloc' (kernel) memory used localhost;localhost:memory.vmalloc_used.update_rate 300 localhost;localhost:memory.vmalloc_used.draw LINE2 localhost;localhost:memory.vmalloc_used.colour COLOUR8 localhost;localhost:memory.vmalloc_used.graph_data_size normal localhost;localhost:memory.vmalloc_used.label vmalloc_used localhost;localhost:memory.page_tables.info Memory used to map between virtual and physical memory addresses. 
localhost;localhost:memory.page_tables.update_rate 300 localhost;localhost:memory.page_tables.draw STACK localhost;localhost:memory.page_tables.colour COLOUR1 localhost;localhost:memory.page_tables.graph_data_size normal localhost;localhost:memory.page_tables.label page_tables localhost;localhost:memory.per_cpu.info Per CPU allocations localhost;localhost:memory.per_cpu.update_rate 300 localhost;localhost:memory.per_cpu.draw STACK localhost;localhost:memory.per_cpu.colour COLOUR20 localhost;localhost:memory.per_cpu.graph_data_size normal localhost;localhost:memory.per_cpu.label per_cpu localhost;localhost:memory.free.info Wasted memory. Memory that is not used for anything at all. localhost;localhost:memory.free.update_rate 300 localhost;localhost:memory.free.draw STACK localhost;localhost:memory.free.colour COLOUR6 localhost;localhost:memory.free.graph_data_size normal localhost;localhost:memory.free.label unused localhost;localhost:memory.committed.info The amount of memory allocated to programs. Overcommitting is normal, but may indicate memory leaks. localhost;localhost:memory.committed.update_rate 300 localhost;localhost:memory.committed.draw LINE2 localhost;localhost:memory.committed.colour COLOUR10 localhost;localhost:memory.committed.graph_data_size normal localhost;localhost:memory.committed.label committed localhost;localhost:memory.swap.info Swap space used. localhost;localhost:memory.swap.update_rate 300 localhost;localhost:memory.swap.draw STACK localhost;localhost:memory.swap.colour COLOUR7 localhost;localhost:memory.swap.graph_data_size normal localhost;localhost:memory.swap.label swap localhost;localhost:memory.cached.info Parked file data (file content) cache. localhost;localhost:memory.cached.update_rate 300 localhost;localhost:memory.cached.draw STACK localhost;localhost:memory.cached.colour COLOUR4 localhost;localhost:memory.cached.graph_data_size normal localhost;localhost:memory.cached.label cache localhost;localhost:memory.mapped.info All mmap()ed pages. localhost;localhost:memory.mapped.update_rate 300 localhost;localhost:memory.mapped.draw LINE2 localhost;localhost:memory.mapped.colour COLOUR11 localhost;localhost:memory.mapped.graph_data_size normal localhost;localhost:memory.mapped.label mapped localhost;localhost:varnish4_backend_traffic.graph_category varnish localhost;localhost:varnish4_backend_traffic.graph_title Backend traffic localhost;localhost:varnish4_backend_traffic.graph_order backend_busy backend_conn backend_fail backend_recycle backend_req backend_retry backend_reuse backend_toolate backend_unhealthy localhost;localhost:varnish4_backend_traffic.backend_busy.update_rate 300 localhost;localhost:varnish4_backend_traffic.backend_busy.min 0 localhost;localhost:varnish4_backend_traffic.backend_busy.graph_data_size normal localhost;localhost:varnish4_backend_traffic.backend_busy.label Backend conn. too many localhost;localhost:varnish4_backend_traffic.backend_busy.type DERIVE localhost;localhost:varnish4_backend_traffic.backend_conn.update_rate 300 localhost;localhost:varnish4_backend_traffic.backend_conn.min 0 localhost;localhost:varnish4_backend_traffic.backend_conn.graph_data_size normal localhost;localhost:varnish4_backend_traffic.backend_conn.label Backend conn. 
success localhost;localhost:varnish4_backend_traffic.backend_conn.type DERIVE localhost;localhost:varnish4_backend_traffic.backend_retry.update_rate 300 localhost;localhost:varnish4_backend_traffic.backend_retry.min 0 localhost;localhost:varnish4_backend_traffic.backend_retry.graph_data_size normal localhost;localhost:varnish4_backend_traffic.backend_retry.label Backend conn. retry localhost;localhost:varnish4_backend_traffic.backend_retry.type DERIVE localhost;localhost:varnish4_backend_traffic.backend_unhealthy.update_rate 300 localhost;localhost:varnish4_backend_traffic.backend_unhealthy.min 0 localhost;localhost:varnish4_backend_traffic.backend_unhealthy.warning :1 localhost;localhost:varnish4_backend_traffic.backend_unhealthy.graph_data_size normal localhost;localhost:varnish4_backend_traffic.backend_unhealthy.label Backend conn. not attempted localhost;localhost:varnish4_backend_traffic.backend_unhealthy.type DERIVE localhost;localhost:varnish4_backend_traffic.backend_recycle.update_rate 300 localhost;localhost:varnish4_backend_traffic.backend_recycle.min 0 localhost;localhost:varnish4_backend_traffic.backend_recycle.graph_data_size normal localhost;localhost:varnish4_backend_traffic.backend_recycle.label Backend conn. recycles localhost;localhost:varnish4_backend_traffic.backend_recycle.type DERIVE localhost;localhost:varnish4_backend_traffic.backend_toolate.update_rate 300 localhost;localhost:varnish4_backend_traffic.backend_toolate.min 0 localhost;localhost:varnish4_backend_traffic.backend_toolate.graph_data_size normal localhost;localhost:varnish4_backend_traffic.backend_toolate.label Backend conn. was closed localhost;localhost:varnish4_backend_traffic.backend_toolate.type DERIVE localhost;localhost:varnish4_backend_traffic.backend_fail.update_rate 300 localhost;localhost:varnish4_backend_traffic.backend_fail.min 0 localhost;localhost:varnish4_backend_traffic.backend_fail.graph_data_size normal localhost;localhost:varnish4_backend_traffic.backend_fail.label Backend conn. failures localhost;localhost:varnish4_backend_traffic.backend_fail.type DERIVE localhost;localhost:varnish4_backend_traffic.backend_reuse.update_rate 300 localhost;localhost:varnish4_backend_traffic.backend_reuse.min 0 localhost;localhost:varnish4_backend_traffic.backend_reuse.graph_data_size normal localhost;localhost:varnish4_backend_traffic.backend_reuse.label Backend conn. reuses localhost;localhost:varnish4_backend_traffic.backend_reuse.type DERIVE localhost;localhost:varnish4_backend_traffic.backend_req.update_rate 300 localhost;localhost:varnish4_backend_traffic.backend_req.min 0 localhost;localhost:varnish4_backend_traffic.backend_req.graph_data_size normal localhost;localhost:varnish4_backend_traffic.backend_req.label Backend requests made localhost;localhost:varnish4_backend_traffic.backend_req.type DERIVE localhost;localhost:smart_sdb.graph_title S.M.A.R.T values for drive sdb localhost;localhost:smart_sdb.graph_vlabel Attribute S.M.A.R.T value localhost;localhost:smart_sdb.graph_args --base 1000 --lower-limit 0 localhost;localhost:smart_sdb.graph_category disk localhost;localhost:smart_sdb.graph_info This graph shows the value of all S.M.A.R.T attributes of drive sdb (WDC WD30EURX-64HYZY0). smartctl_exit_status is the return value of smartctl. A non-zero return value indicates an error, a potential error, or a fault on the drive. 
localhost;localhost:smart_sdb.graph_order Power_Off_Retract_Count Power_Cycle_Count Current_Pending_Sector Spin_Retry_Count Offline_Uncorrectable Seek_Error_Rate Reallocated_Event_Count UDMA_CRC_Error_Count Reallocated_Sector_Ct Power_On_Hours Calibration_Retry_Count Spin_Up_Time Start_Stop_Count Temperature_Celsius Multi_Zone_Error_Rate Load_Cycle_Count Raw_Read_Error_Rate smartctl_exit_status localhost;localhost:smart_sdb.Power_On_Hours.critical 000: localhost;localhost:smart_sdb.Power_On_Hours.update_rate 300 localhost;localhost:smart_sdb.Power_On_Hours.graph_data_size normal localhost;localhost:smart_sdb.Power_On_Hours.label Power_On_Hours localhost;localhost:smart_sdb.Calibration_Retry_Count.critical 000: localhost;localhost:smart_sdb.Calibration_Retry_Count.update_rate 300 localhost;localhost:smart_sdb.Calibration_Retry_Count.graph_data_size normal localhost;localhost:smart_sdb.Calibration_Retry_Count.label Calibration_Retry_Count localhost;localhost:smart_sdb.UDMA_CRC_Error_Count.critical 000: localhost;localhost:smart_sdb.UDMA_CRC_Error_Count.update_rate 300 localhost;localhost:smart_sdb.UDMA_CRC_Error_Count.graph_data_size normal localhost;localhost:smart_sdb.UDMA_CRC_Error_Count.label UDMA_CRC_Error_Count localhost;localhost:smart_sdb.Power_Cycle_Count.critical 000: localhost;localhost:smart_sdb.Power_Cycle_Count.update_rate 300 localhost;localhost:smart_sdb.Power_Cycle_Count.graph_data_size normal localhost;localhost:smart_sdb.Power_Cycle_Count.label Power_Cycle_Count localhost;localhost:smart_sdb.Spin_Retry_Count.critical 000: localhost;localhost:smart_sdb.Spin_Retry_Count.update_rate 300 localhost;localhost:smart_sdb.Spin_Retry_Count.graph_data_size normal localhost;localhost:smart_sdb.Spin_Retry_Count.label Spin_Retry_Count localhost;localhost:smart_sdb.Multi_Zone_Error_Rate.critical 000: localhost;localhost:smart_sdb.Multi_Zone_Error_Rate.update_rate 300 localhost;localhost:smart_sdb.Multi_Zone_Error_Rate.graph_data_size normal localhost;localhost:smart_sdb.Multi_Zone_Error_Rate.label Multi_Zone_Error_Rate localhost;localhost:smart_sdb.Reallocated_Sector_Ct.critical 140: localhost;localhost:smart_sdb.Reallocated_Sector_Ct.update_rate 300 localhost;localhost:smart_sdb.Reallocated_Sector_Ct.graph_data_size normal localhost;localhost:smart_sdb.Reallocated_Sector_Ct.label Reallocated_Sector_Ct localhost;localhost:smart_sdb.Seek_Error_Rate.critical 000: localhost;localhost:smart_sdb.Seek_Error_Rate.update_rate 300 localhost;localhost:smart_sdb.Seek_Error_Rate.graph_data_size normal localhost;localhost:smart_sdb.Seek_Error_Rate.label Seek_Error_Rate localhost;localhost:smart_sdb.smartctl_exit_status.update_rate 300 localhost;localhost:smart_sdb.smartctl_exit_status.warning 1 localhost;localhost:smart_sdb.smartctl_exit_status.graph_data_size normal localhost;localhost:smart_sdb.smartctl_exit_status.label smartctl_exit_status localhost;localhost:smart_sdb.Temperature_Celsius.critical 000: localhost;localhost:smart_sdb.Temperature_Celsius.update_rate 300 localhost;localhost:smart_sdb.Temperature_Celsius.graph_data_size normal localhost;localhost:smart_sdb.Temperature_Celsius.label Temperature_Celsius localhost;localhost:smart_sdb.Power_Off_Retract_Count.critical 000: localhost;localhost:smart_sdb.Power_Off_Retract_Count.update_rate 300 localhost;localhost:smart_sdb.Power_Off_Retract_Count.graph_data_size normal localhost;localhost:smart_sdb.Power_Off_Retract_Count.label Power_Off_Retract_Count localhost;localhost:smart_sdb.Load_Cycle_Count.critical 000: 
localhost;localhost:smart_sdb.Load_Cycle_Count.update_rate 300 localhost;localhost:smart_sdb.Load_Cycle_Count.graph_data_size normal localhost;localhost:smart_sdb.Load_Cycle_Count.label Load_Cycle_Count localhost;localhost:smart_sdb.Offline_Uncorrectable.critical 000: localhost;localhost:smart_sdb.Offline_Uncorrectable.update_rate 300 localhost;localhost:smart_sdb.Offline_Uncorrectable.graph_data_size normal localhost;localhost:smart_sdb.Offline_Uncorrectable.label Offline_Uncorrectable localhost;localhost:smart_sdb.Reallocated_Event_Count.critical 000: localhost;localhost:smart_sdb.Reallocated_Event_Count.update_rate 300 localhost;localhost:smart_sdb.Reallocated_Event_Count.graph_data_size normal localhost;localhost:smart_sdb.Reallocated_Event_Count.label Reallocated_Event_Count localhost;localhost:smart_sdb.Current_Pending_Sector.critical 000: localhost;localhost:smart_sdb.Current_Pending_Sector.update_rate 300 localhost;localhost:smart_sdb.Current_Pending_Sector.graph_data_size normal localhost;localhost:smart_sdb.Current_Pending_Sector.label Current_Pending_Sector localhost;localhost:smart_sdb.Raw_Read_Error_Rate.critical 051: localhost;localhost:smart_sdb.Raw_Read_Error_Rate.update_rate 300 localhost;localhost:smart_sdb.Raw_Read_Error_Rate.graph_data_size normal localhost;localhost:smart_sdb.Raw_Read_Error_Rate.label Raw_Read_Error_Rate localhost;localhost:smart_sdb.Spin_Up_Time.critical 021: localhost;localhost:smart_sdb.Spin_Up_Time.update_rate 300 localhost;localhost:smart_sdb.Spin_Up_Time.graph_data_size normal localhost;localhost:smart_sdb.Spin_Up_Time.label Spin_Up_Time localhost;localhost:smart_sdb.Start_Stop_Count.critical 000: localhost;localhost:smart_sdb.Start_Stop_Count.update_rate 300 localhost;localhost:smart_sdb.Start_Stop_Count.graph_data_size normal localhost;localhost:smart_sdb.Start_Stop_Count.label Start_Stop_Count localhost;localhost:tomcat_threads.graph_title Tomcat threads localhost;localhost:tomcat_threads.graph_args --base 1000 -l 0 localhost;localhost:tomcat_threads.graph_vlabel threads localhost;localhost:tomcat_threads.graph_category tomcat localhost;localhost:tomcat_threads.graph_total total localhost;localhost:tomcat_threads.graph_order busy idle busy idle localhost;localhost:tomcat_threads.busy.update_rate 300 localhost;localhost:tomcat_threads.busy.draw AREA localhost;localhost:tomcat_threads.busy.graph_data_size normal localhost;localhost:tomcat_threads.busy.label busy threads localhost;localhost:tomcat_threads.idle.update_rate 300 localhost;localhost:tomcat_threads.idle.draw STACK localhost;localhost:tomcat_threads.idle.graph_data_size normal localhost;localhost:tomcat_threads.idle.label idle threads localhost;localhost:fw_packets.graph_title Firewall Throughput localhost;localhost:fw_packets.graph_args --base 1000 -l 0 localhost;localhost:fw_packets.graph_vlabel Packets/${graph_period} localhost;localhost:fw_packets.graph_category network localhost;localhost:fw_packets.graph_order received forwarded localhost;localhost:fw_packets.forwarded.update_rate 300 localhost;localhost:fw_packets.forwarded.draw LINE2 localhost;localhost:fw_packets.forwarded.min 0 localhost;localhost:fw_packets.forwarded.graph_data_size normal localhost;localhost:fw_packets.forwarded.label Forwarded localhost;localhost:fw_packets.forwarded.type DERIVE localhost;localhost:fw_packets.received.update_rate 300 localhost;localhost:fw_packets.received.draw AREA localhost;localhost:fw_packets.received.min 0 localhost;localhost:fw_packets.received.graph_data_size normal 
localhost;localhost:fw_packets.received.label Received localhost;localhost:fw_packets.received.type DERIVE localhost;localhost:diskstats_utilization.sda.graph_title Disk utilization for /dev/sda localhost;localhost:diskstats_utilization.sda.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid localhost;localhost:diskstats_utilization.sda.graph_vlabel % busy localhost;localhost:diskstats_utilization.sda.graph_category disk localhost;localhost:diskstats_utilization.sda.graph_scale no localhost;localhost:diskstats_utilization.sda.graph_order util localhost;localhost:diskstats_utilization.sda.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000msec for a given second, the device is nearly 100% saturated. localhost;localhost:diskstats_utilization.sda.util.update_rate 300 localhost;localhost:diskstats_utilization.sda.util.draw LINE1 localhost;localhost:diskstats_utilization.sda.util.min 0 localhost;localhost:diskstats_utilization.sda.util.graph_data_size normal localhost;localhost:diskstats_utilization.sda.util.label Utilization localhost;localhost:diskstats_utilization.sda.util.type GAUGE localhost;localhost:diskstats_iops.graph_title Disk IOs per device localhost;localhost:diskstats_iops.graph_args --base 1000 localhost;localhost:diskstats_iops.graph_vlabel IOs/${graph_period} read (-) / write (+) localhost;localhost:diskstats_iops.graph_category disk localhost;localhost:diskstats_iops.graph_width 400 localhost;localhost:diskstats_iops.graph_order md126_rdio md126_wrio sda_rdio sda_wrio sdb_rdio sdb_wrio sdc_rdio sdc_wrio sdd_rdio sdd_wrio localhost;localhost:diskstats_iops.sda_wrio.update_rate 300 localhost;localhost:diskstats_iops.sda_wrio.draw LINE1 localhost;localhost:diskstats_iops.sda_wrio.min 0 localhost;localhost:diskstats_iops.sda_wrio.graph_data_size normal localhost;localhost:diskstats_iops.sda_wrio.label sda localhost;localhost:diskstats_iops.sda_wrio.type GAUGE localhost;localhost:diskstats_iops.sda_wrio.negative sda_rdio localhost;localhost:diskstats_iops.sdc_rdio.update_rate 300 localhost;localhost:diskstats_iops.sdc_rdio.draw LINE1 localhost;localhost:diskstats_iops.sdc_rdio.min 0 localhost;localhost:diskstats_iops.sdc_rdio.graph_data_size normal localhost;localhost:diskstats_iops.sdc_rdio.label sdc localhost;localhost:diskstats_iops.sdc_rdio.type GAUGE localhost;localhost:diskstats_iops.sdc_rdio.graph no localhost;localhost:diskstats_iops.md126_wrio.update_rate 300 localhost;localhost:diskstats_iops.md126_wrio.draw LINE1 localhost;localhost:diskstats_iops.md126_wrio.min 0 localhost;localhost:diskstats_iops.md126_wrio.graph_data_size normal localhost;localhost:diskstats_iops.md126_wrio.label md126 localhost;localhost:diskstats_iops.md126_wrio.type GAUGE localhost;localhost:diskstats_iops.md126_wrio.negative md126_rdio localhost;localhost:diskstats_iops.sda_rdio.update_rate 300 localhost;localhost:diskstats_iops.sda_rdio.draw LINE1 localhost;localhost:diskstats_iops.sda_rdio.min 0 localhost;localhost:diskstats_iops.sda_rdio.graph_data_size normal localhost;localhost:diskstats_iops.sda_rdio.label sda localhost;localhost:diskstats_iops.sda_rdio.type GAUGE localhost;localhost:diskstats_iops.sda_rdio.graph no localhost;localhost:diskstats_iops.sdc_wrio.update_rate 300 localhost;localhost:diskstats_iops.sdc_wrio.draw LINE1 localhost;localhost:diskstats_iops.sdc_wrio.min 0 localhost;localhost:diskstats_iops.sdc_wrio.graph_data_size normal localhost;localhost:diskstats_iops.sdc_wrio.label sdc 
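The firewall packet fields just above are typed DERIVE with min 0: the plugin reports a raw, ever-growing packet counter, and the grapher turns it into a per-second rate, discarding negative deltas from counter wraps or reboots. A small sketch of that computation (assuming a 300-second sampling interval, as configured above):

# Sketch: how a DERIVE field with "min 0" turns raw counters into a rate.
def derive_rate(prev: int, cur: int, interval: int = 300):
    delta = cur - prev
    if delta < 0:        # counter wrapped or reset; "min 0" drops the sample
        return None
    return delta / interval

print(derive_rate(1_000_000, 1_150_000))  # 500.0 packets/second
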
localhost;localhost:diskstats_iops.sdc_wrio.type GAUGE localhost;localhost:diskstats_iops.sdc_wrio.negative sdc_rdio localhost;localhost:diskstats_iops.sdb_wrio.update_rate 300 localhost;localhost:diskstats_iops.sdb_wrio.draw LINE1 localhost;localhost:diskstats_iops.sdb_wrio.min 0 localhost;localhost:diskstats_iops.sdb_wrio.graph_data_size normal localhost;localhost:diskstats_iops.sdb_wrio.label sdb localhost;localhost:diskstats_iops.sdb_wrio.type GAUGE localhost;localhost:diskstats_iops.sdb_wrio.negative sdb_rdio localhost;localhost:diskstats_iops.sdd_rdio.update_rate 300 localhost;localhost:diskstats_iops.sdd_rdio.draw LINE1 localhost;localhost:diskstats_iops.sdd_rdio.min 0 localhost;localhost:diskstats_iops.sdd_rdio.graph_data_size normal localhost;localhost:diskstats_iops.sdd_rdio.label sdd localhost;localhost:diskstats_iops.sdd_rdio.type GAUGE localhost;localhost:diskstats_iops.sdd_rdio.graph no localhost;localhost:diskstats_iops.md126_rdio.update_rate 300 localhost;localhost:diskstats_iops.md126_rdio.draw LINE1 localhost;localhost:diskstats_iops.md126_rdio.min 0 localhost;localhost:diskstats_iops.md126_rdio.graph_data_size normal localhost;localhost:diskstats_iops.md126_rdio.label md126 localhost;localhost:diskstats_iops.md126_rdio.type GAUGE localhost;localhost:diskstats_iops.md126_rdio.graph no localhost;localhost:diskstats_iops.sdd_wrio.update_rate 300 localhost;localhost:diskstats_iops.sdd_wrio.draw LINE1 localhost;localhost:diskstats_iops.sdd_wrio.min 0 localhost;localhost:diskstats_iops.sdd_wrio.graph_data_size normal localhost;localhost:diskstats_iops.sdd_wrio.label sdd localhost;localhost:diskstats_iops.sdd_wrio.type GAUGE localhost;localhost:diskstats_iops.sdd_wrio.negative sdd_rdio localhost;localhost:diskstats_iops.sdb_rdio.update_rate 300 localhost;localhost:diskstats_iops.sdb_rdio.draw LINE1 localhost;localhost:diskstats_iops.sdb_rdio.min 0 localhost;localhost:diskstats_iops.sdb_rdio.graph_data_size normal localhost;localhost:diskstats_iops.sdb_rdio.label sdb localhost;localhost:diskstats_iops.sdb_rdio.type GAUGE localhost;localhost:diskstats_iops.sdb_rdio.graph no localhost;localhost:diskstats_throughput.sda.graph_title Disk throughput for /dev/sda localhost;localhost:diskstats_throughput.sda.graph_args --base 1024 localhost;localhost:diskstats_throughput.sda.graph_vlabel Pr ${graph_period} read (-) / write (+) localhost;localhost:diskstats_throughput.sda.graph_category disk localhost;localhost:diskstats_throughput.sda.graph_info This graph shows disk throughput in bytes pr ${graph_period}. The graph base is 1024 so KB is for Kibi bytes and so on. 
localhost;localhost:diskstats_throughput.sda.graph_order rdbytes wrbytes localhost;localhost:diskstats_throughput.sda.rdbytes.update_rate 300 localhost;localhost:diskstats_throughput.sda.rdbytes.draw LINE1 localhost;localhost:diskstats_throughput.sda.rdbytes.min 0 localhost;localhost:diskstats_throughput.sda.rdbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sda.rdbytes.label invisible localhost;localhost:diskstats_throughput.sda.rdbytes.type GAUGE localhost;localhost:diskstats_throughput.sda.rdbytes.graph no localhost;localhost:diskstats_throughput.sda.wrbytes.update_rate 300 localhost;localhost:diskstats_throughput.sda.wrbytes.draw LINE1 localhost;localhost:diskstats_throughput.sda.wrbytes.min 0 localhost;localhost:diskstats_throughput.sda.wrbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sda.wrbytes.label Bytes localhost;localhost:diskstats_throughput.sda.wrbytes.type GAUGE localhost;localhost:diskstats_throughput.sda.wrbytes.negative rdbytes localhost;localhost:varnish4_memory_usage.graph_category varnish localhost;localhost:varnish4_memory_usage.graph_title Memory usage localhost;localhost:varnish4_memory_usage.graph_vlabel bytes localhost;localhost:varnish4_memory_usage.graph_args --base 1024 localhost;localhost:varnish4_memory_usage.graph_order SMA_Transient_g_bytes SMA_s0_g_bytes SMA_Transient_g_space SMA_s0_g_space SMA_Transient_c_bytes SMA_s0_c_bytes sms_balloc sms_nbytes localhost;localhost:varnish4_memory_usage.SMA_Transient_c_bytes.update_rate 300 localhost;localhost:varnish4_memory_usage.SMA_Transient_c_bytes.graph_data_size normal localhost;localhost:varnish4_memory_usage.SMA_Transient_c_bytes.label Bytes allocated SMA Transient localhost;localhost:varnish4_memory_usage.SMA_Transient_c_bytes.type DERIVE localhost;localhost:varnish4_memory_usage.sms_nbytes.update_rate 300 localhost;localhost:varnish4_memory_usage.sms_nbytes.graph_data_size normal localhost;localhost:varnish4_memory_usage.sms_nbytes.label SMS outstanding bytes localhost;localhost:varnish4_memory_usage.sms_nbytes.type GAUGE localhost;localhost:varnish4_memory_usage.SMA_s0_g_space.update_rate 300 localhost;localhost:varnish4_memory_usage.SMA_s0_g_space.graph_data_size normal localhost;localhost:varnish4_memory_usage.SMA_s0_g_space.label Bytes available SMA s0 localhost;localhost:varnish4_memory_usage.SMA_s0_g_space.type GAUGE localhost;localhost:varnish4_memory_usage.SMA_s0_c_bytes.update_rate 300 localhost;localhost:varnish4_memory_usage.SMA_s0_c_bytes.graph_data_size normal localhost;localhost:varnish4_memory_usage.SMA_s0_c_bytes.label Bytes allocated SMA s0 localhost;localhost:varnish4_memory_usage.SMA_s0_c_bytes.type DERIVE localhost;localhost:varnish4_memory_usage.sms_balloc.update_rate 300 localhost;localhost:varnish4_memory_usage.sms_balloc.graph_data_size normal localhost;localhost:varnish4_memory_usage.sms_balloc.label SMS bytes allocated localhost;localhost:varnish4_memory_usage.sms_balloc.type GAUGE localhost;localhost:varnish4_memory_usage.SMA_Transient_g_space.update_rate 300 localhost;localhost:varnish4_memory_usage.SMA_Transient_g_space.graph_data_size normal localhost;localhost:varnish4_memory_usage.SMA_Transient_g_space.label Bytes available SMA Transient localhost;localhost:varnish4_memory_usage.SMA_Transient_g_space.type GAUGE localhost;localhost:varnish4_memory_usage.SMA_s0_g_bytes.update_rate 300 localhost;localhost:varnish4_memory_usage.SMA_s0_g_bytes.graph_data_size normal localhost;localhost:varnish4_memory_usage.SMA_s0_g_bytes.label Bytes 
outstanding SMA s0 localhost;localhost:varnish4_memory_usage.SMA_s0_g_bytes.type GAUGE localhost;localhost:varnish4_memory_usage.SMA_Transient_g_bytes.update_rate 300 localhost;localhost:varnish4_memory_usage.SMA_Transient_g_bytes.graph_data_size normal localhost;localhost:varnish4_memory_usage.SMA_Transient_g_bytes.label Bytes outstanding SMA Transient localhost;localhost:varnish4_memory_usage.SMA_Transient_g_bytes.type GAUGE localhost;localhost:entropy.graph_title Available entropy localhost;localhost:entropy.graph_args --base 1000 -l 0 localhost;localhost:entropy.graph_vlabel entropy (bytes) localhost;localhost:entropy.graph_scale no localhost;localhost:entropy.graph_category system localhost;localhost:entropy.graph_info This graph shows the amount of entropy available in the system. localhost;localhost:entropy.graph_order entropy localhost;localhost:entropy.entropy.info The number of random bytes available. This is typically used by cryptographic applications. localhost;localhost:entropy.entropy.update_rate 300 localhost;localhost:entropy.entropy.graph_data_size normal localhost;localhost:entropy.entropy.label entropy localhost;localhost:postfix_mailqueue.graph_title Postfix Mailqueue localhost;localhost:postfix_mailqueue.graph_vlabel Mails in queue localhost;localhost:postfix_mailqueue.graph_category mail localhost;localhost:postfix_mailqueue.graph_total Total localhost;localhost:postfix_mailqueue.graph_order active deferred maildrop incoming corrupt hold localhost;localhost:postfix_mailqueue.maildrop.update_rate 300 localhost;localhost:postfix_mailqueue.maildrop.graph_data_size normal localhost;localhost:postfix_mailqueue.maildrop.label maildrop localhost;localhost:postfix_mailqueue.hold.update_rate 300 localhost;localhost:postfix_mailqueue.hold.graph_data_size normal localhost;localhost:postfix_mailqueue.hold.label held localhost;localhost:postfix_mailqueue.active.update_rate 300 localhost;localhost:postfix_mailqueue.active.graph_data_size normal localhost;localhost:postfix_mailqueue.active.label active localhost;localhost:postfix_mailqueue.deferred.update_rate 300 localhost;localhost:postfix_mailqueue.deferred.graph_data_size normal localhost;localhost:postfix_mailqueue.deferred.label deferred localhost;localhost:postfix_mailqueue.incoming.update_rate 300 localhost;localhost:postfix_mailqueue.incoming.graph_data_size normal localhost;localhost:postfix_mailqueue.incoming.label incoming localhost;localhost:postfix_mailqueue.corrupt.update_rate 300 localhost;localhost:postfix_mailqueue.corrupt.graph_data_size normal localhost;localhost:postfix_mailqueue.corrupt.label corrupt localhost;localhost:diskstats_throughput.graph_title Throughput per device localhost;localhost:diskstats_throughput.graph_args --base 1024 localhost;localhost:diskstats_throughput.graph_vlabel Bytes/${graph_period} read (-) / write (+) localhost;localhost:diskstats_throughput.graph_category disk localhost;localhost:diskstats_throughput.graph_width 400 localhost;localhost:diskstats_throughput.graph_info This graph shows averaged throughput for the given disk in bytes. Higher throughput is usually linked with higher service time/latency (separate graph). The graph base is 1024 yielding Kibi- and Mebi-bytes.
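The two graph bases appearing in this file follow the rule stated above: byte graphs use --base 1024 (KiB, MiB), while IOPS graphs use --base 1000 (k = 1000, per the munin-node 2.0 change noted further down). A worked example of the resulting labels:

# Worked example of the two graph bases used in this datafile.
bytes_per_sec = 5_242_880
print(bytes_per_sec / 1024**2, "MiB/s on a --base 1024 graph")  # 5.0
iops = 12_500
print(iops / 1000, "k IO/s on a --base 1000 graph")             # 12.5
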
localhost;localhost:diskstats_throughput.graph_order md126_rdbytes md126_wrbytes sda_rdbytes sda_wrbytes sdb_rdbytes sdb_wrbytes sdc_rdbytes sdc_wrbytes sdd_rdbytes sdd_wrbytes localhost;localhost:diskstats_throughput.sdc_wrbytes.update_rate 300 localhost;localhost:diskstats_throughput.sdc_wrbytes.draw LINE1 localhost;localhost:diskstats_throughput.sdc_wrbytes.min 0 localhost;localhost:diskstats_throughput.sdc_wrbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sdc_wrbytes.label sdc localhost;localhost:diskstats_throughput.sdc_wrbytes.type GAUGE localhost;localhost:diskstats_throughput.sdc_wrbytes.negative sdc_rdbytes localhost;localhost:diskstats_throughput.sdb_wrbytes.update_rate 300 localhost;localhost:diskstats_throughput.sdb_wrbytes.draw LINE1 localhost;localhost:diskstats_throughput.sdb_wrbytes.min 0 localhost;localhost:diskstats_throughput.sdb_wrbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sdb_wrbytes.label sdb localhost;localhost:diskstats_throughput.sdb_wrbytes.type GAUGE localhost;localhost:diskstats_throughput.sdb_wrbytes.negative sdb_rdbytes localhost;localhost:diskstats_throughput.sda_wrbytes.update_rate 300 localhost;localhost:diskstats_throughput.sda_wrbytes.draw LINE1 localhost;localhost:diskstats_throughput.sda_wrbytes.min 0 localhost;localhost:diskstats_throughput.sda_wrbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sda_wrbytes.label sda localhost;localhost:diskstats_throughput.sda_wrbytes.type GAUGE localhost;localhost:diskstats_throughput.sda_wrbytes.negative sda_rdbytes localhost;localhost:diskstats_throughput.sdb_rdbytes.update_rate 300 localhost;localhost:diskstats_throughput.sdb_rdbytes.draw LINE1 localhost;localhost:diskstats_throughput.sdb_rdbytes.min 0 localhost;localhost:diskstats_throughput.sdb_rdbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sdb_rdbytes.label sdb localhost;localhost:diskstats_throughput.sdb_rdbytes.type GAUGE localhost;localhost:diskstats_throughput.sdb_rdbytes.graph no localhost;localhost:diskstats_throughput.md126_rdbytes.update_rate 300 localhost;localhost:diskstats_throughput.md126_rdbytes.draw LINE1 localhost;localhost:diskstats_throughput.md126_rdbytes.min 0 localhost;localhost:diskstats_throughput.md126_rdbytes.graph_data_size normal localhost;localhost:diskstats_throughput.md126_rdbytes.label md126 localhost;localhost:diskstats_throughput.md126_rdbytes.type GAUGE localhost;localhost:diskstats_throughput.md126_rdbytes.graph no localhost;localhost:diskstats_throughput.sdc_rdbytes.update_rate 300 localhost;localhost:diskstats_throughput.sdc_rdbytes.draw LINE1 localhost;localhost:diskstats_throughput.sdc_rdbytes.min 0 localhost;localhost:diskstats_throughput.sdc_rdbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sdc_rdbytes.label sdc localhost;localhost:diskstats_throughput.sdc_rdbytes.type GAUGE localhost;localhost:diskstats_throughput.sdc_rdbytes.graph no localhost;localhost:diskstats_throughput.md126_wrbytes.update_rate 300 localhost;localhost:diskstats_throughput.md126_wrbytes.draw LINE1 localhost;localhost:diskstats_throughput.md126_wrbytes.min 0 localhost;localhost:diskstats_throughput.md126_wrbytes.graph_data_size normal localhost;localhost:diskstats_throughput.md126_wrbytes.label md126 localhost;localhost:diskstats_throughput.md126_wrbytes.type GAUGE localhost;localhost:diskstats_throughput.md126_wrbytes.negative md126_rdbytes localhost;localhost:diskstats_throughput.sda_rdbytes.update_rate 300 
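Each write field above carries a ".negative" pointer to its read counterpart, and each read field carries "graph no": the read series is not drawn on its own, but mirrored below zero under the write series. A toy sketch of the signed pair that ends up plotted (sample values are invented for illustration):

# Sketch: the ".negative" pairing plots writes upward and the paired
# read series mirrored below zero.
samples = {"sda_rdbytes": 3.2e6, "sda_wrbytes": 1.1e6}
plot_points = [("sda write", samples["sda_wrbytes"]),
               ("sda read", -samples["sda_rdbytes"])]  # mirrored
for name, y in plot_points:
    print(f"{name:10s} {y:+12.1f}")
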
localhost;localhost:diskstats_throughput.sda_rdbytes.draw LINE1 localhost;localhost:diskstats_throughput.sda_rdbytes.min 0 localhost;localhost:diskstats_throughput.sda_rdbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sda_rdbytes.label sda localhost;localhost:diskstats_throughput.sda_rdbytes.type GAUGE localhost;localhost:diskstats_throughput.sda_rdbytes.graph no localhost;localhost:diskstats_throughput.sdd_wrbytes.update_rate 300 localhost;localhost:diskstats_throughput.sdd_wrbytes.draw LINE1 localhost;localhost:diskstats_throughput.sdd_wrbytes.min 0 localhost;localhost:diskstats_throughput.sdd_wrbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sdd_wrbytes.label sdd localhost;localhost:diskstats_throughput.sdd_wrbytes.type GAUGE localhost;localhost:diskstats_throughput.sdd_wrbytes.negative sdd_rdbytes localhost;localhost:diskstats_throughput.sdd_rdbytes.update_rate 300 localhost;localhost:diskstats_throughput.sdd_rdbytes.draw LINE1 localhost;localhost:diskstats_throughput.sdd_rdbytes.min 0 localhost;localhost:diskstats_throughput.sdd_rdbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sdd_rdbytes.label sdd localhost;localhost:diskstats_throughput.sdd_rdbytes.type GAUGE localhost;localhost:diskstats_throughput.sdd_rdbytes.graph no localhost;localhost:interrupts.graph_title Interrupts and context switches localhost;localhost:interrupts.graph_args --base 1000 -l 0 localhost;localhost:interrupts.graph_vlabel interrupts & ctx switches / ${graph_period} localhost;localhost:interrupts.graph_category system localhost;localhost:interrupts.graph_info This graph shows the number of interrupts and context switches on the system. These are typically high on a busy system. localhost;localhost:interrupts.graph_order intr ctx localhost;localhost:interrupts.intr.info Interrupts are events that alter the sequence of instructions executed by a processor. They can come from either hardware (exceptions, NMI, IRQ) or software. localhost;localhost:interrupts.intr.update_rate 300 localhost;localhost:interrupts.intr.min 0 localhost;localhost:interrupts.intr.graph_data_size normal localhost;localhost:interrupts.intr.label interrupts localhost;localhost:interrupts.intr.type DERIVE localhost;localhost:interrupts.ctx.info A context switch occurs when a multitasking operating system suspends the currently running process, and starts executing another. localhost;localhost:interrupts.ctx.update_rate 300 localhost;localhost:interrupts.ctx.min 0 localhost;localhost:interrupts.ctx.graph_data_size normal localhost;localhost:interrupts.ctx.label context switches localhost;localhost:interrupts.ctx.type DERIVE localhost;localhost:diskstats_throughput.sdd.graph_title Disk throughput for /dev/sdd localhost;localhost:diskstats_throughput.sdd.graph_args --base 1024 localhost;localhost:diskstats_throughput.sdd.graph_vlabel Pr ${graph_period} read (-) / write (+) localhost;localhost:diskstats_throughput.sdd.graph_category disk localhost;localhost:diskstats_throughput.sdd.graph_info This graph shows disk throughput in bytes pr ${graph_period}. The graph base is 1024 so KB is for Kibi bytes and so on.
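On Linux, the intr and ctx counters graphed above come from the cumulative "intr" and "ctxt" lines of /proc/stat (the first number on the intr line is the total across all IRQs). A minimal sketch of reading them:

# Minimal sketch: read the cumulative interrupt and context-switch
# counters from /proc/stat (Linux-specific).
def read_proc_stat():
    counters = {}
    with open("/proc/stat") as f:
        for line in f:
            name, _, rest = line.partition(" ")
            if name in ("intr", "ctxt"):
                counters[name] = int(rest.split()[0])
    return counters

print(read_proc_stat())  # e.g. {'intr': 123456789, 'ctxt': 987654321}
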
localhost;localhost:diskstats_throughput.sdd.graph_order rdbytes wrbytes localhost;localhost:diskstats_throughput.sdd.rdbytes.update_rate 300 localhost;localhost:diskstats_throughput.sdd.rdbytes.draw LINE1 localhost;localhost:diskstats_throughput.sdd.rdbytes.min 0 localhost;localhost:diskstats_throughput.sdd.rdbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sdd.rdbytes.label invisible localhost;localhost:diskstats_throughput.sdd.rdbytes.type GAUGE localhost;localhost:diskstats_throughput.sdd.rdbytes.graph no localhost;localhost:diskstats_throughput.sdd.wrbytes.update_rate 300 localhost;localhost:diskstats_throughput.sdd.wrbytes.draw LINE1 localhost;localhost:diskstats_throughput.sdd.wrbytes.min 0 localhost;localhost:diskstats_throughput.sdd.wrbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sdd.wrbytes.label Bytes localhost;localhost:diskstats_throughput.sdd.wrbytes.type GAUGE localhost;localhost:diskstats_throughput.sdd.wrbytes.negative rdbytes localhost;localhost:diskstats_utilization.sdb.graph_title Disk utilization for /dev/sdb localhost;localhost:diskstats_utilization.sdb.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid localhost;localhost:diskstats_utilization.sdb.graph_vlabel % busy localhost;localhost:diskstats_utilization.sdb.graph_category disk localhost;localhost:diskstats_utilization.sdb.graph_scale no localhost;localhost:diskstats_utilization.sdb.graph_order util localhost;localhost:diskstats_utilization.sdb.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000msec for a given second, the device is nearly 100% saturated. localhost;localhost:diskstats_utilization.sdb.util.update_rate 300 localhost;localhost:diskstats_utilization.sdb.util.draw LINE1 localhost;localhost:diskstats_utilization.sdb.util.min 0 localhost;localhost:diskstats_utilization.sdb.util.graph_data_size normal localhost;localhost:diskstats_utilization.sdb.util.label Utilization localhost;localhost:diskstats_utilization.sdb.util.type GAUGE localhost;localhost:diskstats_iops.sdb.graph_title IOs for /dev/sdb localhost;localhost:diskstats_iops.sdb.graph_args --base 1000 localhost;localhost:diskstats_iops.sdb.graph_vlabel Units read (-) / write (+) localhost;localhost:diskstats_iops.sdb.graph_category disk localhost;localhost:diskstats_iops.sdb.graph_info This graph shows the number of IO operations pr second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of 1024, which it was prior to 2.0 beta 3. This is because the base for this graph is 1000, not 1024.
localhost;localhost:diskstats_iops.sdb.graph_order rdio wrio avgrdrqsz avgwrrqsz localhost;localhost:diskstats_iops.sdb.rdio.update_rate 300 localhost;localhost:diskstats_iops.sdb.rdio.draw LINE1 localhost;localhost:diskstats_iops.sdb.rdio.min 0 localhost;localhost:diskstats_iops.sdb.rdio.graph_data_size normal localhost;localhost:diskstats_iops.sdb.rdio.label dummy localhost;localhost:diskstats_iops.sdb.rdio.type GAUGE localhost;localhost:diskstats_iops.sdb.rdio.graph no localhost;localhost:diskstats_iops.sdb.wrio.update_rate 300 localhost;localhost:diskstats_iops.sdb.wrio.draw LINE1 localhost;localhost:diskstats_iops.sdb.wrio.min 0 localhost;localhost:diskstats_iops.sdb.wrio.graph_data_size normal localhost;localhost:diskstats_iops.sdb.wrio.label IO/sec localhost;localhost:diskstats_iops.sdb.wrio.type GAUGE localhost;localhost:diskstats_iops.sdb.wrio.negative rdio localhost;localhost:diskstats_iops.sdb.avgwrrqsz.info Average Request Size in kilobytes (1000 based) localhost;localhost:diskstats_iops.sdb.avgwrrqsz.update_rate 300 localhost;localhost:diskstats_iops.sdb.avgwrrqsz.draw LINE1 localhost;localhost:diskstats_iops.sdb.avgwrrqsz.min 0 localhost;localhost:diskstats_iops.sdb.avgwrrqsz.graph_data_size normal localhost;localhost:diskstats_iops.sdb.avgwrrqsz.negative avgrdrqsz localhost;localhost:diskstats_iops.sdb.avgwrrqsz.type GAUGE localhost;localhost:diskstats_iops.sdb.avgwrrqsz.label Req Size (KB) localhost;localhost:diskstats_iops.sdb.avgrdrqsz.update_rate 300 localhost;localhost:diskstats_iops.sdb.avgrdrqsz.draw LINE1 localhost;localhost:diskstats_iops.sdb.avgrdrqsz.min 0 localhost;localhost:diskstats_iops.sdb.avgrdrqsz.graph_data_size normal localhost;localhost:diskstats_iops.sdb.avgrdrqsz.label dummy localhost;localhost:diskstats_iops.sdb.avgrdrqsz.type GAUGE localhost;localhost:diskstats_iops.sdb.avgrdrqsz.graph no localhost;localhost:dovecot.graph_title Dovecot Logins localhost;localhost:dovecot.graph_args --base 1000 -l 0 localhost;localhost:dovecot.graph_vlabel Login Counters localhost;localhost:dovecot.graph_order login_total login_tls login_ssl login_imap login_pop3 connected localhost;localhost:dovecot.login_tls.update_rate 300 localhost;localhost:dovecot.login_tls.graph_data_size normal localhost;localhost:dovecot.login_tls.label TLS Logins localhost;localhost:dovecot.connected.update_rate 300 localhost;localhost:dovecot.connected.graph_data_size normal localhost;localhost:dovecot.connected.label Connected Users localhost;localhost:dovecot.login_ssl.update_rate 300 localhost;localhost:dovecot.login_ssl.graph_data_size normal localhost;localhost:dovecot.login_ssl.label SSL Logins localhost;localhost:dovecot.login_pop3.update_rate 300 localhost;localhost:dovecot.login_pop3.graph_data_size normal localhost;localhost:dovecot.login_pop3.label POP3 Logins localhost;localhost:dovecot.login_total.update_rate 300 localhost;localhost:dovecot.login_total.graph_data_size normal localhost;localhost:dovecot.login_total.label Total Logins localhost;localhost:dovecot.login_imap.update_rate 300 localhost;localhost:dovecot.login_imap.graph_data_size normal localhost;localhost:dovecot.login_imap.label IMAP Logins localhost;localhost:diskstats_utilization.graph_title Utilization per device localhost;localhost:diskstats_utilization.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid localhost;localhost:diskstats_utilization.graph_vlabel % busy localhost;localhost:diskstats_utilization.graph_category disk localhost;localhost:diskstats_utilization.graph_width 400
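The "Average Request Size in kilobytes (1000 based)" fields above are derived arithmetic: bytes moved divided by I/Os completed, then divided by 1000. A worked example with invented numbers (the 512-byte sector unit is how /proc/diskstats reports sector counts):

# Worked example for the average-request-size fields above.
sectors_written, sector_size, writes = 81_920, 512, 1_024
avg_req_kb = sectors_written * sector_size / writes / 1000
print(avg_req_kb, "KB per write request")  # 40.96
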
localhost;localhost:diskstats_utilization.graph_scale no localhost;localhost:diskstats_utilization.graph_order sda_util sdb_util sdc_util sdd_util localhost;localhost:diskstats_utilization.sdd_util.info Utilization of the device localhost;localhost:diskstats_utilization.sdd_util.update_rate 300 localhost;localhost:diskstats_utilization.sdd_util.draw LINE1 localhost;localhost:diskstats_utilization.sdd_util.min 0 localhost;localhost:diskstats_utilization.sdd_util.graph_data_size normal localhost;localhost:diskstats_utilization.sdd_util.label sdd localhost;localhost:diskstats_utilization.sdd_util.type GAUGE localhost;localhost:diskstats_utilization.sdb_util.info Utilization of the device localhost;localhost:diskstats_utilization.sdb_util.update_rate 300 localhost;localhost:diskstats_utilization.sdb_util.draw LINE1 localhost;localhost:diskstats_utilization.sdb_util.min 0 localhost;localhost:diskstats_utilization.sdb_util.graph_data_size normal localhost;localhost:diskstats_utilization.sdb_util.label sdb localhost;localhost:diskstats_utilization.sdb_util.type GAUGE localhost;localhost:diskstats_utilization.sda_util.info Utilization of the device localhost;localhost:diskstats_utilization.sda_util.update_rate 300 localhost;localhost:diskstats_utilization.sda_util.draw LINE1 localhost;localhost:diskstats_utilization.sda_util.min 0 localhost;localhost:diskstats_utilization.sda_util.graph_data_size normal localhost;localhost:diskstats_utilization.sda_util.label sda localhost;localhost:diskstats_utilization.sda_util.type GAUGE localhost;localhost:diskstats_utilization.sdc_util.info Utilization of the device localhost;localhost:diskstats_utilization.sdc_util.update_rate 300 localhost;localhost:diskstats_utilization.sdc_util.draw LINE1 localhost;localhost:diskstats_utilization.sdc_util.min 0 localhost;localhost:diskstats_utilization.sdc_util.graph_data_size normal localhost;localhost:diskstats_utilization.sdc_util.label sdc localhost;localhost:diskstats_utilization.sdc_util.type GAUGE localhost;localhost:diskstats_utilization.sdc.graph_title Disk utilization for /dev/sdc localhost;localhost:diskstats_utilization.sdc.graph_args --base 1000 --lower-limit 0 --upper-limit 100 --rigid localhost;localhost:diskstats_utilization.sdc.graph_vlabel % busy localhost;localhost:diskstats_utilization.sdc.graph_category disk localhost;localhost:diskstats_utilization.sdc.graph_scale no localhost;localhost:diskstats_utilization.sdc.graph_order util localhost;localhost:diskstats_utilization.sdc.util.info Utilization of the device in percent. If the time spent for I/O is close to 1000msec for a given second, the device is nearly 100% saturated. localhost;localhost:diskstats_utilization.sdc.util.update_rate 300 localhost;localhost:diskstats_utilization.sdc.util.draw LINE1 localhost;localhost:diskstats_utilization.sdc.util.min 0 localhost;localhost:diskstats_utilization.sdc.util.graph_data_size normal localhost;localhost:diskstats_utilization.sdc.util.label Utilization localhost;localhost:diskstats_utilization.sdc.util.type GAUGE localhost;localhost:hddtemp_smartctl.graph_title HDD temperature localhost;localhost:hddtemp_smartctl.graph_vlabel Degrees Celsius localhost;localhost:hddtemp_smartctl.graph_category sensors localhost;localhost:hddtemp_smartctl.graph_info This graph shows the temperature in degrees Celsius of the hard drives in the machine. 
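The utilization figure described above follows from the io_ticks column of /proc/diskstats, which counts milliseconds the device had I/O in flight: over a sampling interval, %util is the busy-time delta divided by the interval. A sketch of that calculation, assuming the 300-second update_rate configured here:

# Sketch: %util from the io_ticks (busy milliseconds) delta.
def percent_util(io_ticks_prev: int, io_ticks_cur: int, interval_s: int = 300):
    busy_ms = io_ticks_cur - io_ticks_prev
    return min(100.0, busy_ms / (interval_s * 1000) * 100)

print(percent_util(1_000_000, 1_150_000))  # 50.0 -> busy half the time
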
localhost;localhost:hddtemp_smartctl.graph_order sda sdb sdc sdd localhost;localhost:hddtemp_smartctl.sdc.critical 60 localhost;localhost:hddtemp_smartctl.sdc.update_rate 300 localhost;localhost:hddtemp_smartctl.sdc.max 100 localhost;localhost:hddtemp_smartctl.sdc.warning 57 localhost;localhost:hddtemp_smartctl.sdc.graph_data_size normal localhost;localhost:hddtemp_smartctl.sdc.label sdc localhost;localhost:hddtemp_smartctl.sda.critical 60 localhost;localhost:hddtemp_smartctl.sda.update_rate 300 localhost;localhost:hddtemp_smartctl.sda.max 100 localhost;localhost:hddtemp_smartctl.sda.warning 57 localhost;localhost:hddtemp_smartctl.sda.graph_data_size normal localhost;localhost:hddtemp_smartctl.sda.label sda localhost;localhost:hddtemp_smartctl.sdb.critical 60 localhost;localhost:hddtemp_smartctl.sdb.update_rate 300 localhost;localhost:hddtemp_smartctl.sdb.max 100 localhost;localhost:hddtemp_smartctl.sdb.warning 57 localhost;localhost:hddtemp_smartctl.sdb.graph_data_size normal localhost;localhost:hddtemp_smartctl.sdb.label sdb localhost;localhost:hddtemp_smartctl.sdd.critical 60 localhost;localhost:hddtemp_smartctl.sdd.update_rate 300 localhost;localhost:hddtemp_smartctl.sdd.max 100 localhost;localhost:hddtemp_smartctl.sdd.warning 57 localhost;localhost:hddtemp_smartctl.sdd.graph_data_size normal localhost;localhost:hddtemp_smartctl.sdd.label sdd localhost;localhost:proc_pri.graph_title Processes priority localhost;localhost:proc_pri.graph_order low high locked high low locked localhost;localhost:proc_pri.graph_category processes localhost;localhost:proc_pri.graph_info This graph shows the number of processes at each priority. localhost;localhost:proc_pri.graph_args --base 1000 -l 0 localhost;localhost:proc_pri.graph_vlabel Number of processes localhost;localhost:proc_pri.high.info The number of high-priority processes (tasks) localhost;localhost:proc_pri.high.update_rate 300 localhost;localhost:proc_pri.high.draw STACK localhost;localhost:proc_pri.high.graph_data_size normal localhost;localhost:proc_pri.high.label high priority localhost;localhost:proc_pri.locked.info The number of processes that have pages locked into memory (for real-time and custom IO) localhost;localhost:proc_pri.locked.update_rate 300 localhost;localhost:proc_pri.locked.draw STACK localhost;localhost:proc_pri.locked.graph_data_size normal localhost;localhost:proc_pri.locked.label locked in memory localhost;localhost:proc_pri.low.info The number of low-priority processes (tasks) localhost;localhost:proc_pri.low.update_rate 300 localhost;localhost:proc_pri.low.draw AREA localhost;localhost:proc_pri.low.graph_data_size normal localhost;localhost:proc_pri.low.label low priority localhost;localhost:spamassassin.graph_title SpamAssassin stats localhost;localhost:spamassassin.graph_args --base 1000 -l 0 localhost;localhost:spamassassin.graph_vlabel SpamAssassin mail/sec localhost;localhost:spamassassin.graph_order spam ham mail ham spam localhost;localhost:spamassassin.graph_category mail localhost;localhost:spamassassin.spam.update_rate 300 localhost;localhost:spamassassin.spam.draw AREA localhost;localhost:spamassassin.spam.min 0 localhost;localhost:spamassassin.spam.graph_data_size normal localhost;localhost:spamassassin.spam.label spam localhost;localhost:spamassassin.spam.type DERIVE localhost;localhost:spamassassin.mail.update_rate 300 localhost;localhost:spamassassin.mail.draw LINE2 localhost;localhost:spamassassin.mail.min 0 localhost;localhost:spamassassin.mail.graph_data_size normal
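The per-drive temperatures graphed above (warning at 57, critical at 60 degrees) come from SMART data. A hedged sketch of obtaining one such reading by parsing the Temperature_Celsius row of "smartctl -A" output; this requires smartmontools and root, and the attribute table layout can vary by drive:

# Hedged sketch: read a drive temperature from smartctl's attribute table.
import subprocess

def drive_temp(dev: str = "/dev/sda"):
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            return int(line.split()[9])  # RAW_VALUE column
    return None
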
localhost;localhost:spamassassin.mail.label mail localhost;localhost:spamassassin.mail.type DERIVE localhost;localhost:spamassassin.ham.update_rate 300 localhost;localhost:spamassassin.ham.draw LINE2 localhost;localhost:spamassassin.ham.min 0 localhost;localhost:spamassassin.ham.graph_data_size normal localhost;localhost:spamassassin.ham.label ham localhost;localhost:spamassassin.ham.type DERIVE localhost;localhost:postfix_mailvolume.graph_title Postfix bytes throughput localhost;localhost:postfix_mailvolume.graph_args --base 1000 -l 0 localhost;localhost:postfix_mailvolume.graph_vlabel bytes / ${graph_period} localhost;localhost:postfix_mailvolume.graph_scale yes localhost;localhost:postfix_mailvolume.graph_category mail localhost;localhost:postfix_mailvolume.graph_order volume localhost;localhost:postfix_mailvolume.volume.update_rate 300 localhost;localhost:postfix_mailvolume.volume.min 0 localhost;localhost:postfix_mailvolume.volume.graph_data_size normal localhost;localhost:postfix_mailvolume.volume.label delivered volume localhost;localhost:postfix_mailvolume.volume.type DERIVE localhost;localhost:threads.graph_title Number of threads localhost;localhost:threads.graph_vlabel number of threads localhost;localhost:threads.graph_category processes localhost;localhost:threads.graph_info This graph shows the number of threads. localhost;localhost:threads.graph_order threads localhost;localhost:threads.threads.info The current number of threads. localhost;localhost:threads.threads.update_rate 300 localhost;localhost:threads.threads.graph_data_size normal localhost;localhost:threads.threads.label threads localhost;localhost:diskstats_iops.md126.graph_title IOs for /dev/md126 localhost;localhost:diskstats_iops.md126.graph_args --base 1000 localhost;localhost:diskstats_iops.md126.graph_vlabel Units read (-) / write (+) localhost;localhost:diskstats_iops.md126.graph_category disk localhost;localhost:diskstats_iops.md126.graph_info This graph shows the number of IO operations pr second and the average size of these requests. Lots of small requests should result in lower throughput (separate graph) and higher service time (separate graph). Please note that starting with munin-node 2.0 the divisor for K is 1000 instead of 1024, which it was prior to 2.0 beta 3. This is because the base for this graph is 1000, not 1024.
localhost;localhost:diskstats_iops.md126.graph_order rdio wrio avgrdrqsz avgwrrqsz localhost;localhost:diskstats_iops.md126.rdio.update_rate 300 localhost;localhost:diskstats_iops.md126.rdio.draw LINE1 localhost;localhost:diskstats_iops.md126.rdio.min 0 localhost;localhost:diskstats_iops.md126.rdio.graph_data_size normal localhost;localhost:diskstats_iops.md126.rdio.label dummy localhost;localhost:diskstats_iops.md126.rdio.type GAUGE localhost;localhost:diskstats_iops.md126.rdio.graph no localhost;localhost:diskstats_iops.md126.wrio.update_rate 300 localhost;localhost:diskstats_iops.md126.wrio.draw LINE1 localhost;localhost:diskstats_iops.md126.wrio.min 0 localhost;localhost:diskstats_iops.md126.wrio.graph_data_size normal localhost;localhost:diskstats_iops.md126.wrio.label IO/sec localhost;localhost:diskstats_iops.md126.wrio.type GAUGE localhost;localhost:diskstats_iops.md126.wrio.negative rdio localhost;localhost:diskstats_iops.md126.avgwrrqsz.info Average Request Size in kilobytes (1000 based) localhost;localhost:diskstats_iops.md126.avgwrrqsz.update_rate 300 localhost;localhost:diskstats_iops.md126.avgwrrqsz.draw LINE1 localhost;localhost:diskstats_iops.md126.avgwrrqsz.min 0 localhost;localhost:diskstats_iops.md126.avgwrrqsz.graph_data_size normal localhost;localhost:diskstats_iops.md126.avgwrrqsz.negative avgrdrqsz localhost;localhost:diskstats_iops.md126.avgwrrqsz.type GAUGE localhost;localhost:diskstats_iops.md126.avgwrrqsz.label Req Size (KB) localhost;localhost:diskstats_iops.md126.avgrdrqsz.update_rate 300 localhost;localhost:diskstats_iops.md126.avgrdrqsz.draw LINE1 localhost;localhost:diskstats_iops.md126.avgrdrqsz.min 0 localhost;localhost:diskstats_iops.md126.avgrdrqsz.graph_data_size normal localhost;localhost:diskstats_iops.md126.avgrdrqsz.label dummy localhost;localhost:diskstats_iops.md126.avgrdrqsz.type GAUGE localhost;localhost:diskstats_iops.md126.avgrdrqsz.graph no localhost;localhost:varnish4_threads.graph_category varnish localhost;localhost:varnish4_threads.graph_title Thread status localhost;localhost:varnish4_threads.graph_order threads threads_created threads_destroyed threads_failed threads_limited localhost;localhost:varnish4_threads.threads_created.update_rate 300 localhost;localhost:varnish4_threads.threads_created.min 0 localhost;localhost:varnish4_threads.threads_created.graph_data_size normal localhost;localhost:varnish4_threads.threads_created.label Threads created localhost;localhost:varnish4_threads.threads_created.type DERIVE localhost;localhost:varnish4_threads.threads_destroyed.update_rate 300 localhost;localhost:varnish4_threads.threads_destroyed.min 0 localhost;localhost:varnish4_threads.threads_destroyed.warning :1 localhost;localhost:varnish4_threads.threads_destroyed.graph_data_size normal localhost;localhost:varnish4_threads.threads_destroyed.label Threads destroyed localhost;localhost:varnish4_threads.threads_destroyed.type DERIVE localhost;localhost:varnish4_threads.threads_failed.update_rate 300 localhost;localhost:varnish4_threads.threads_failed.min 0 localhost;localhost:varnish4_threads.threads_failed.warning :1 localhost;localhost:varnish4_threads.threads_failed.graph_data_size normal localhost;localhost:varnish4_threads.threads_failed.label Thread creation failed localhost;localhost:varnish4_threads.threads_failed.type DERIVE localhost;localhost:varnish4_threads.threads_limited.update_rate 300 localhost;localhost:varnish4_threads.threads_limited.min 0 localhost;localhost:varnish4_threads.threads_limited.graph_data_size normal 
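The varnish4_threads counters above are read from varnishd's shared-memory statistics; "varnishstat -1" dumps them one per line as "name value ...". A hedged sketch of fetching one counter that way (MAIN.threads is the Varnish 4 counter behind "Total number of threads"):

# Hedged sketch: read a single Varnish counter via "varnishstat -1".
import subprocess

def varnish_counter(name: str = "MAIN.threads"):
    out = subprocess.run(["varnishstat", "-1"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        parts = line.split()
        if parts and parts[0] == name:
            return int(parts[1])
    return None
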
localhost;localhost:varnish4_threads.threads_limited.label Threads hit max localhost;localhost:varnish4_threads.threads_limited.type DERIVE localhost;localhost:varnish4_threads.threads.update_rate 300 localhost;localhost:varnish4_threads.threads.min 0 localhost;localhost:varnish4_threads.threads.warning 1: localhost;localhost:varnish4_threads.threads.graph_data_size normal localhost;localhost:varnish4_threads.threads.label Total number of threads localhost;localhost:varnish4_threads.threads.type GAUGE localhost;localhost:diskstats_throughput.sdb.graph_title Disk throughput for /dev/sdb localhost;localhost:diskstats_throughput.sdb.graph_args --base 1024 localhost;localhost:diskstats_throughput.sdb.graph_vlabel Pr ${graph_period} read (-) / write (+) localhost;localhost:diskstats_throughput.sdb.graph_category disk localhost;localhost:diskstats_throughput.sdb.graph_info This graph shows disk throughput in bytes pr ${graph_period}. The graph base is 1024 so KB is for Kibi bytes and so on. localhost;localhost:diskstats_throughput.sdb.graph_order rdbytes wrbytes localhost;localhost:diskstats_throughput.sdb.rdbytes.update_rate 300 localhost;localhost:diskstats_throughput.sdb.rdbytes.draw LINE1 localhost;localhost:diskstats_throughput.sdb.rdbytes.min 0 localhost;localhost:diskstats_throughput.sdb.rdbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sdb.rdbytes.label invisible localhost;localhost:diskstats_throughput.sdb.rdbytes.type GAUGE localhost;localhost:diskstats_throughput.sdb.rdbytes.graph no localhost;localhost:diskstats_throughput.sdb.wrbytes.update_rate 300 localhost;localhost:diskstats_throughput.sdb.wrbytes.draw LINE1 localhost;localhost:diskstats_throughput.sdb.wrbytes.min 0 localhost;localhost:diskstats_throughput.sdb.wrbytes.graph_data_size normal localhost;localhost:diskstats_throughput.sdb.wrbytes.label Bytes localhost;localhost:diskstats_throughput.sdb.wrbytes.type GAUGE localhost;localhost:diskstats_throughput.sdb.wrbytes.negative rdbytes localhost;localhost:diskstats_latency.sda.graph_title Average latency for /dev/sda localhost;localhost:diskstats_latency.sda.graph_args --base 1000 --logarithmic localhost;localhost:diskstats_latency.sda.graph_vlabel seconds localhost;localhost:diskstats_latency.sda.graph_category disk localhost;localhost:diskstats_latency.sda.graph_info This graph shows average waiting time/latency for different categories of disk operations. The times that include the queue times indicate how busy your system is. If the waiting time hits 1 second then your I/O system is 100% busy. localhost;localhost:diskstats_latency.sda.graph_order svctm avgwait avgrdwait avgwrwait localhost;localhost:diskstats_latency.sda.avgrdwait.info Average wait time for a read I/O from request start to finish (includes queue times et al) localhost;localhost:diskstats_latency.sda.avgrdwait.update_rate 300 localhost;localhost:diskstats_latency.sda.avgrdwait.draw LINE1 localhost;localhost:diskstats_latency.sda.avgrdwait.min 0 localhost;localhost:diskstats_latency.sda.avgrdwait.graph_data_size normal localhost;localhost:diskstats_latency.sda.avgrdwait.warning 0:3 localhost;localhost:diskstats_latency.sda.avgrdwait.type GAUGE localhost;localhost:diskstats_latency.sda.avgrdwait.label Read IO Wait time localhost;localhost:diskstats_latency.sda.svctm.info Average time an I/O takes on the block device not including any queue times, just the round trip time for the disk request. 
localhost;localhost:diskstats_latency.sda.svctm.update_rate 300 localhost;localhost:diskstats_latency.sda.svctm.draw LINE1 localhost;localhost:diskstats_latency.sda.svctm.min 0 localhost;localhost:diskstats_latency.sda.svctm.graph_data_size normal localhost;localhost:diskstats_latency.sda.svctm.label Device IO time localhost;localhost:diskstats_latency.sda.svctm.type GAUGE localhost;localhost:diskstats_latency.sda.avgwait.info Average wait time for an I/O from request start to finish (includes queue times et al) localhost;localhost:diskstats_latency.sda.avgwait.update_rate 300 localhost;localhost:diskstats_latency.sda.avgwait.draw LINE1 localhost;localhost:diskstats_latency.sda.avgwait.min 0 localhost;localhost:diskstats_latency.sda.avgwait.graph_data_size normal localhost;localhost:diskstats_latency.sda.avgwait.label IO Wait time localhost;localhost:diskstats_latency.sda.avgwait.type GAUGE localhost;localhost:diskstats_latency.sda.avgwrwait.info Average wait time for a write I/O from request start to finish (includes queue times et al) localhost;localhost:diskstats_latency.sda.avgwrwait.update_rate 300 localhost;localhost:diskstats_latency.sda.avgwrwait.draw LINE1 localhost;localhost:diskstats_latency.sda.avgwrwait.min 0 localhost;localhost:diskstats_latency.sda.avgwrwait.graph_data_size normal localhost;localhost:diskstats_latency.sda.avgwrwait.warning 0:3 localhost;localhost:diskstats_latency.sda.avgwrwait.type GAUGE localhost;localhost:diskstats_latency.sda.avgwrwait.label Write IO Wait time
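The wait-time figures above are averages over the sampling interval: /proc/diskstats keeps cumulative milliseconds spent on reads and writes, so dividing the time delta by the I/O delta gives the mean wait per request. A worked example with invented deltas, reported in seconds to match the graph's vlabel:

# Worked example: average I/O wait from /proc/diskstats deltas.
def avg_wait_s(d_read_ms: int, d_write_ms: int, d_reads: int, d_writes: int):
    ios = d_reads + d_writes
    return (d_read_ms + d_write_ms) / ios / 1000 if ios else 0.0

print(avg_wait_s(4_500, 12_000, 900, 1_100))  # 0.00825 s average wait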