Hello!

Markus, nice result! Did you consider a compressed RAM disk for the RRDs? A month ago I did another Observium installation with the RRDs in RAM, and it's superfast in both polling and the web UI while keeping the heavy IO off the underlying SSDs, which helps them last longer :) That installation has about 28GB of raw RRDs, compressed roughly 6x in RAM. The only drawback is that the RAM disk takes about 8 minutes to populate at system startup, since it has to read all the RRDs back from the physical drive.
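
If anyone wants to try the same thing, here is a rough sketch of one way to do it with zram (the mount point is Observium's default RRD directory; the backup path, size and compression algorithm are just placeholders, adjust them to your own setup):

modprobe zram num_devices=1
echo lz4 > /sys/block/zram0/comp_algorithm      # placeholder algorithm; zstd also works on newer kernels
echo 32G > /sys/block/zram0/disksize            # uncompressed capacity, not the RAM actually used
mkfs.ext4 -q /dev/zram0
mount /dev/zram0 /opt/observium/rrd
rsync -a /data/rrd-backup/ /opt/observium/rrd/  # the slow initial load at boot

# and flush changes back to the SSD copy periodically, e.g. hourly from cron:
# 0 * * * *   root   rsync -a /opt/observium/rrd/ /data/rrd-backup/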



‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On July 19, 2018 4:28 PM, Tom Laermans <tom.laermans@powersource.cx> wrote:

I must agree with Adam here, you're still polling all devices in 5 minutes, one after the other...

With your setup you're just sometimes leaving gaps in the middle of the high load instead of running it continuously and then dropping off, but the end result is pretty much the same :-)

But hey, whatever works! The number of devices is damn impressive :-)

On 7/17/2018 11:24 PM, Adam Armstrong wrote:
I do not think you are right, Herr Swe.

Adam.

On 17 Jul 2018, at 12:25, Markus Klock <markus@best-practice.se> wrote:
The same reason you recommend keeping the polling time as close to 300s as possible.
Doing it this way, you can run the poller with far fewer threads.
Instead of starting every 5 minutes with 128 threads, you can run 32 threads every minute, which means far fewer CPU context switches and much smaller IO spikes for the system.
The system stays under a light load the whole time instead of one huge spike followed by 150s of idle :)
/Markus
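
If you want to see that difference in practice, a quick and dirty way to compare the two schedules (assuming the sysstat tools are installed; sample for one full 5-minute cycle under each schedule):

pidstat -w 5 60     # per-task context switches, 5-second samples
iostat -x 5 60      # per-device IO utilisation over the same window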

2018-07-17 20:33 GMT+02:00 Adam Armstrong <adama@memetic.org>:

But why though? :D

The poller-wrapper's entire purpose is to do what you're doing here :D

adam.


On 2018-07-12 00:36, Markus Klock wrote:
Yeah, I usually even out the server load by splitting the devices into 5 groups and starting their polling 1 minute apart with cron, like this:
0-59/5 *     * * *   observium    /opt/observium/observium-wrapper polling -i 5 -n 0 >> /dev/null 2>&1
1-59/5 *     * * *   observium    /opt/observium/observium-wrapper polling -i 5 -n 1 >> /dev/null 2>&1
2-59/5 *     * * *   observium    /opt/observium/observium-wrapper polling -i 5 -n 2 >> /dev/null 2>&1
3-59/5 *     * * *   observium    /opt/observium/observium-wrapper polling -i 5 -n 3 >> /dev/null 2>&1
4-59/5 *     * * *   observium    /opt/observium/observium-wrapper polling -i 5 -n 4 >> /dev/null 2>&1

This makes it work with far fewer threads, and the load on the disk and database is much lower since only 1/5 of the data needs to be processed at the same time.

/Markus

2018-07-12 7:09 GMT+02:00 Adam Armstrong <adama@memetic.org>:

You should probably decrease the threads to even out the load. The goal is to be as close to 300 seconds as you can manage, otherwise you'll get spiky IO and MySQL load.

Adam.
