On 2018-11-08 07:38:38, Henrik Cednert (Filmlance) via observium <observium@observium.org> wrote:
The graph issue was probably just me being a bit eager… =)
I noticed something odd. Is there different math for different graphs when looking at storage?
When looking at combined Device Storage, /ddnnas0 is listed with the proper size, 1.82PB. But when I graph just the usage of ddnnas0 it’s listed as 1.68PB. Just looks like there’s some sort of multiplier that’s a bit bonkers, since the percentage is correct.
Thoughts?
Cheers and thanks again. =)
--
Henrik Cednert / + 46 704 71 89 54 / CTO / Filmlance
Disclaimer, the hideous bs disclaimer at the bottom is forced, sorry. ¯\_(ツ)_/¯
On 7 Nov 2018, at 13:26, Henrik Cednert (Filmlance) via observium <observium@observium.org> wrote:
Oh my! Oh yeah! Wow! Thanks! =)
Indeed, I see it now. There’s some draw error on the graph and a possible size error, but the size is close enough for me. =) Will investigate the draw error.
<Screenshot 2018-11-07 at 13.24.39.png>
Many thanks!!!
--
Henrik Cednert / + 46 704 71 89 54 / CTO / Filmlance
Disclaimer, the hideous bs disclaimer at the bottom is forced, sorry. ¯\_(ツ)_/¯
On 7 Nov 2018, at 11:57, Adam Armstrong via observium <observium@observium.org> wrote:
Hi Henrik,
I've made some minor changes to the storage discovery in r9557. These might make this drive appear.
It might still have broken sizes though :)
adam.
On 2018-11-07 09:29:03, Henrik Cednert (Filmlance) via observium <observium@observium.org> wrote:
Morning Adam
I’m not sure that all the data reported in dskTable is correct. I assume there’s some sort of math involved, since I can’t really get all the numbers to add up. dskPercent seems about right, but I’m not sure about Total, Avail and Used.
I did update net-snmp to the latest available (1:5.5-60.el6), but no change in behaviour.
df | grep ddnnas0
/dev/ddnnas0 1772965855232 979888488448 793077366784 56% /ddnnas0
snmpwalk -v2c -c ddn4ucommunity localhost dskTable | grep .1
UCD-SNMP-MIB::dskIndex.1 = INTEGER: 1
UCD-SNMP-MIB::dskPath.1 = STRING: /ddnnas0
UCD-SNMP-MIB::dskDevice.1 = STRING: /dev/ddnnas0
UCD-SNMP-MIB::dskMinimum.1 = INTEGER: -1
UCD-SNMP-MIB::dskMinPercent.1 = INTEGER: 3
UCD-SNMP-MIB::dskTotal.1 = INTEGER: 2147483647
UCD-SNMP-MIB::dskAvail.1 = INTEGER: 2147483647
UCD-SNMP-MIB::dskUsed.1 = INTEGER: 2147483647
UCD-SNMP-MIB::dskPercent.1 = INTEGER: 55
UCD-SNMP-MIB::dskPercentNode.1 = INTEGER: 3
UCD-SNMP-MIB::dskTotalLow.1 = Gauge32: 3439329280
UCD-SNMP-MIB::dskTotalHigh.1 = Gauge32: 412
UCD-SNMP-MIB::dskAvailLow.1 = Gauge32: 2803367936
UCD-SNMP-MIB::dskAvailHigh.1 = Gauge32: 184
UCD-SNMP-MIB::dskUsedLow.1 = Gauge32: 635961344
UCD-SNMP-MIB::dskUsedHigh.1 = Gauge32: 228
UCD-SNMP-MIB::dskErrorFlag.1 = INTEGER: noError(0)
UCD-SNMP-MIB::dskErrorMsg.1 = STRING:
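Side note on the dskTable numbers above: dskTotal/dskAvail/dskUsed are all pegged at 2147483647, which is just the 32-bit INTEGER ceiling, so anything read from those columns will look wrong. The dsk*Low/dsk*High pairs carry the real values in kB; recombining them as High * 2^32 + Low reproduces the df output. A quick sanity check in bash, using the values from the walk above:

echo $(( 412 * 4294967296 + 3439329280 ))   # 1772965855232 kB total, matches df exactly (~1.82 PB)
echo $(( 228 * 4294967296 + 635961344 ))    # 979888504832 kB used, within ~16 MB of df (different snapshot)
echo $(( 184 * 4294967296 + 2803367936 ))   # 793077350400 kB available, likewise within ~16 MB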
Cheers
--
Henrik Cednert / + 46 704 71 89 54 / CTO / Filmlance
Disclaimer, the hideous bs disclaimer at the bottom is forced, sorry. ¯\_(ツ)_/¯
On 7 Nov 2018, at 08:33, Adam Armstrong via observium <observium@observium.org> wrote:
Hi Henrik,
Can you tell if the data reported in dskTable is correct?
We use dskTable to fill in invalid data in hrStorageTable sometimes, but we don't outright collect data from dskTable.
I seem to recall this is an issue with ZFS too, and switching to a later version of net-snmp fixes it.
There are sadly no 5.8 (or even 5.7) packages for EL6 though (not that I could find!), so you would have to create one. Nothing really depends upon net-snmp, so no real risk in breaking anything!
Once upon a time I had 5.6 packages for EL6 floating around from an old requirement, but I seem to have lost them.
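For anyone following along, a rough sketch of what building a newer net-snmp from source on EL6 might look like; the tarball name, prerequisite list and configure flags here are assumptions rather than a tested recipe (rolling a proper RPM with rpmbuild would be tidier):

yum install -y gcc make perl-devel               # likely build prerequisites, list may be incomplete
tar xzf net-snmp-5.8.tar.gz && cd net-snmp-5.8   # upstream source tarball
./configure --with-defaults --prefix=/usr --sysconfdir=/etc
make && make install                             # replaces the distro-packaged binaries in place
service snmpd restart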
adam.
On 2018-11-06 22:00, Henrik Cednert (Filmlance) via observium wrote:
Heya
I really wish I was at your level of expertise here Adam, sadly I’m not. =( I googled a bit and walked hrStorageTable, and this is the hrStorageDescr from that:
HOST-RESOURCES-MIB::hrStorageDescr.1 = STRING: Physical memory
HOST-RESOURCES-MIB::hrStorageDescr.3 = STRING: Virtual memory
HOST-RESOURCES-MIB::hrStorageDescr.6 = STRING: Memory buffers
HOST-RESOURCES-MIB::hrStorageDescr.7 = STRING: Cached memory
HOST-RESOURCES-MIB::hrStorageDescr.10 = STRING: Swap space
HOST-RESOURCES-MIB::hrStorageDescr.31 = STRING: /
HOST-RESOURCES-MIB::hrStorageDescr.35 = STRING: /dev/shm
HOST-RESOURCES-MIB::hrStorageDescr.36 = STRING: /boot
HOST-RESOURCES-MIB::hrStorageDescr.37 = STRING: /boot-rcvy
HOST-RESOURCES-MIB::hrStorageDescr.38 = STRING: /crash
HOST-RESOURCES-MIB::hrStorageDescr.39 = STRING: /rcvy
HOST-RESOURCES-MIB::hrStorageDescr.40 = STRING: /var
HOST-RESOURCES-MIB::hrStorageDescr.41 = STRING: /var-rcvy
So no luck there. In dskTable I do see it though:
UCD-SNMP-MIB::dskPath.1 = STRING: /ddnnas0
UCD-SNMP-MIB::dskPath.2 = STRING: /
So with this, any hints on how to make it work? Or is the first step
to update net-snmp?
Installed Packages
net-snmp.x86_64    1:5.5-54.el6_7.1    @ddn-dvd
Cheers and thanks a million
--
Henrik Cednert / + 46 704 71 89 54 / CTO / FILMLANCE
Disclaimer, the hideous bs disclaimer at the bottom is forced, sorry. ¯\_(ツ)_/¯
On 6 Nov 2018, at 22:53, Adam Armstrong via observium
<observium@observium.org> wrote:
We actually use the HOST-RESOURCES-MIB hrStorage table, not the
UCD-MIB dskTable.
Have you checked what SNMP is sending? You can walk the
hrStorageTable and dskTable to see what is returned.
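For example, something along these lines, with your own community string and hostname filled in:

snmpwalk -v2c -c <community> <host> HOST-RESOURCES-MIB::hrStorageTable
snmpwalk -v2c -c <community> <host> UCD-SNMP-MIB::dskTable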
I seem to remember some issue with SNMP not picking up "strange"
filesystems. Making sure you're running the very latest net-snmp
might help.
Adam.
On 2018-11-06 21:17, Henrik Cednert (Filmlance) via observium wrote:
Hello
So I got the ‘go ahead’ from DDN to monitor it directly on the server. Strange thing is, I’ve added the mount point to /etc/snmpd.conf but it still doesn’t appear in Observium. I’ve played with the syscontact just to make sure that snmpd.conf is actually read properly upon discovery and poll, and it is. But it does not seem to take this string from the conf:
disk /ddnnas0 80%
And that is the actual mount point, as seen from ‘mount’ here:
/dev/ddnnas0 on /ddnnas0 type gpfs
(rw,relatime,mtime,nfssync,dev=ddnnas0)
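For what it’s worth, a quick way to confirm whether snmpd parsed the disk line at all, assuming the agent was restarted after the edit (the community string is a placeholder):

service snmpd restart
snmpwalk -v2c -c <community> localhost UCD-SNMP-MIB::dskPath   # /ddnnas0 should be listed here if the directive took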
Am I missing something obvious? If so, please be gentle and throw me, with great force and maybe anger, in the right direction. =)
Cheers and thanks
--
Henrik Cednert / + 46 704 71 89 54 / CTO / FILMLANCE
Disclaimer, the hideous bs disclaimer at the bottom is forced, sorry. ¯\_(ツ)_/¯
On 23 Oct 2018, at 21:32, Chris Neam via observium
<observium@observium.org> wrote:
We monitor our GPFS system directly from the GridScaler and have
been able to monitor up to 2.6PB so far. I'd guess it is something
with the way Windows is handling it.
On Tue, Oct 23, 2018 at 12:54 PM Henrik Cednert (Filmlance) via
observium <observium@observium.org> wrote:
Hello
We just expanded a volume on a DDN GPFS system to over 1 PB. I monitor this volume via SNMP on a Windows server. I noticed that I run into a 1024TB cap on the size there, though. Trying to understand where this cap is coming from, whether it’s the SNMP service on Windows or something in Observium.
Anyone who’s dealt with this?
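One hedged guess at where a 1024TB figure could come from: hrStorageSize is a 32-bit integer counted in hrStorageAllocationUnits, so the largest size an agent can express is roughly 2^31 units. If the Windows agent reports a 512 KiB allocation unit for this volume (an assumption, not confirmed), the ceiling works out to about 1 PiB, i.e. 1024 TiB:

echo $(( 2**31 * 512 * 1024 ))           # 1125899906842624 bytes = 1 PiB
echo $(( 2**31 * 512 * 1024 / 2**40 ))   # 1024 TiB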
Cheers and thanks
--
Henrik Cednert / + 46 704 71 89 54 / CTO / FILMLANCE
Disclaimer, the hideous bs disclaimer at the bottom is forced, sorry. ¯\_(ツ)_/¯
--
CHRIS NEAM
US Mobile: +1 (703) 628 7897
E-mail: chris.neam@vricon.com
VRICON
Visiting address
US: 8280 Greensboro Drive, Suite 850, McLean, VA, 22102, USA
Sweden: Bröderna Ugglas gata, 581 88 Linköping, Sweden
www.vricon.com
<Screen Shot 2018-10-23 at 15.29.23.png>
Disclaimer
The information contained in this communication from the sender is confidential. It is intended solely for use by the recipient and others authorized to receive it. If you are not the recipient, you are hereby notified that any disclosure, copying, distribution or taking action in relation of the contents of this information is strictly prohibited and may be unlawful.
observium mailing list
observium@observium.org
http://postman.memetic.org/cgi-bin/mailman/listinfo/observium