The graph scale is calculated by rrdtool. It closely follows the SI metric scale:
yotta Y
zetta Z
exa E
peta P
tera T
giga G
mega M
kilo k
hecto h (I've not seen rrdtool display this)
deca da (I've not seen rrdtool display this)
deci d (this gets no scale as it is a "pure unit")
centi c (I've not seen rrdtool display this)
milli m
micro µ
nano n
pico p
femto f
atto a
zepto z
yocto y
The 'm' stands for milli. In something like an interface error graph, with an occasional error encountered, you might get a very low number and scale on the graph. 1 errored packet in a 5 minute polling window becomes 1/300 = 0.0033 packets per second. rrdtool would choose to draw this and automatically adjust the scale so that it reads 3.3 millipackets per second (or, equivalently, 3,300 micropackets per second).
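To make that arithmetic concrete, here is a minimal Python sketch of the scaling step. It is not rrdtool's actual code, just the same calculation with a hand-rolled prefix table; all names and the polling interval are illustrative.

```python
# Sketch of the arithmetic behind rrdtool's auto-scaling (not rrdtool's
# actual implementation): turn a per-poll counter delta into a per-second
# rate and pick an SI prefix for the graph legend.

SI_PREFIXES = [
    (1e9, "G"), (1e6, "M"), (1e3, "k"), (1.0, ""),
    (1e-3, "m"), (1e-6, "µ"), (1e-9, "n"),
]

def scale(rate_per_second: float) -> str:
    """Format a per-second rate with the closest SI prefix."""
    for factor, prefix in SI_PREFIXES:
        if rate_per_second >= factor:
            return f"{rate_per_second / factor:.2f} {prefix}"
    # Smaller than nano: fall back to the last prefix in the table.
    factor, prefix = SI_PREFIXES[-1]
    return f"{rate_per_second / factor:.2f} {prefix}"

errors_in_poll = 1        # one errored packet seen in the polling window
poll_interval_s = 300     # 5-minute polling interval
rate = errors_in_poll / poll_interval_s   # 0.0033... errors per second
print(scale(rate))        # -> "3.33 m" (millipackets per second)
```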
Hope this helps.
Michael
On 25 Jan 2017, at 12:56 am, Satish Patel satish.txt@gmail.com wrote:
Nick,
Does that mean "m" stands for minute? Or million?
On Jan 24, 2017, at 12:56 AM, Nick Schmalenberger nick@schmalenberger.us wrote:
On Mon, Jan 23, 2017 at 09:07:28PM -0800, William Bauer wrote:

I've been meaning to ask the same thing, for two reasons. One is just due to typical abbreviations, such as "m" for milli and "M" for mega; the other is that for us the error graphs appear way off, even though other graphs are not. We might have 100,000 absolute errors on a port, but the error graph will read about "15m". I find the difference between the absolute and graphed numbers confusing. Perhaps it's an averaged amount over the sample period? If so, can we get actual numbers?
Part of the problem might be that the only ports we have with errors are Brocade ports; our Cisco ports don't typically accumulate errors. Perhaps the Brocade error counters are atypical, even though the "good" traffic counters are correct.
The rates are always in terms of seconds (averaged over the polling period). So when something like an error occurs less often than once per second, the rate is a decimal fraction per second. The reciprocal is then the average number of seconds per error, which is easier to think about in this situation; multiplying by 300 to put it in terms of the sample period also helps.
-Nick
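As a quick worked example of that conversion, using the "15m" figure from William's graph as an assumed input and a 5-minute polling period:

```python
# Illustrative numbers only: a graphed rate of "15m" means 0.015 errors
# per second averaged over the polling period. The reciprocal gives seconds
# per error; multiplying by the polling period gives errors per sample.

graphed_rate = 0.015     # "15m" on the graph = 15 milli-errors per second
poll_interval_s = 300    # 5-minute polling period

seconds_per_error = 1 / graphed_rate               # ~66.7 s between errors
errors_per_poll = graphed_rate * poll_interval_s   # 4.5 errors per sample

print(f"one error roughly every {seconds_per_error:.0f} s")
print(f"{errors_per_poll:.1f} errors per {poll_interval_s} s polling period")
```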