I see, interesting solution :) /Markus
2018-07-21 13:12 GMT+02:00 xomka686 xomka686@protonmail.com:
I'm doing compressed RAM in a weirder, but still working, way :) I create a regular tmpfs, a file inside it, and then a compressed ZFS pool on top of that file. The reason for this approach is that I wasn't able to get ZRAM to work. The compression algorithm I use is LZ4, and for 430GB of raw RRDs it should require no more than 80GB of RAM (given a 5x compressratio). 80GB of DDR4 RDIMM RAM in turn costs about $1000, which sounds pretty well worth it for a 5000-device setup ;)
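Roughly, the setup looks something like this (just a sketch; the sizes, mount point and pool name are placeholders, not the exact commands from my box):

  # RAM-backed filesystem to hold the pool's backing file
  mount -t tmpfs -o size=90G tmpfs /mnt/ramdisk
  # file that will act as the pool's only vdev
  truncate -s 85G /mnt/ramdisk/zpool.img
  # ZFS pool on top of that file, LZ4-compressed, mounted where the RRDs live
  zpool create rrdpool /mnt/ramdisk/zpool.img
  zfs set compression=lz4 rrdpool
  zfs set mountpoint=/opt/observium/rrd rrdpool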
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On July 21, 2018 12:16 AM, Adam Armstrong adama@memetic.org wrote:
https://en.m.wikipedia.org/wiki/Zram I think we might have this in our documentation, I forget. Adam.
On 20 Jul 2018, at 14:32, Markus Klock markus@best-practice.se wrote:
Interesting, how did you set up a compressed RAM disk? And what compression algorithm is used? It would be pretty expensive for this setup though, as I have about 430GB of RRDs :) /Markus
On 20 July 2018 at 21:43, "xomka686" xomka686@protonmail.com wrote:
Hello!
Markus, nice result! Have you considered a compressed RAM disk for the RRDs? A month ago I did another Observium installation with the RRDs in RAM, and it's superfast in both polling and the web UI while keeping the underlying SSDs away from the heavy IO, helping them live longer :) That installation has about 28GB of raw RRDs, compressed by about 6 times in RAM. The only drawback is that the RAM disk takes about 8 minutes to come up on system startup, as it needs to read all the RRDs from the physical drive.
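The preload and write-back themselves are nothing fancy, basically a couple of rsync calls (a sketch; the paths are placeholders, and of course you also want a periodic cron sync back to disk so a crash doesn't cost you a day of data):

  # at boot, after the RAM-backed filesystem is mounted: preload the RRDs
  rsync -a /data/rrd-backup/ /opt/observium/rrd/
  # periodically from cron and again at shutdown: flush the RRDs back to the SSD
  rsync -a --delete /opt/observium/rrd/ /data/rrd-backup/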
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On July 19, 2018 4:28 PM, Tom Laermans tom.laermans@powersource.cx wrote:
I must agree with Adam here, you're still polling all devices in 5 minutes, after each other...
With your setup you're just sometimes leaving gaps between the bursts of high load instead of running under load continuously and then dropping off, but the end result is pretty much the same :-)
But hey, whatever works! The number of devices is damn impressive :-)
On 7/17/2018 11:24 PM, Adam Armstrong wrote:
I do not think you are right, Herr Swe.
Adam.
On 17 Jul 2018, at 12:25, Markus Klock markus@best-practice.se wrote:
The same reason you recommend keeping polling time as close to 300s as possible. Doing it this way, you can run the poller with far fewer threads. Instead of starting every 5 minutes with 128 threads, you can run 32 threads every minute, which means a lot fewer CPU context switches and much lower IO spikes on the system. The system stays under light load all the time instead of one huge spike followed by 150s of idle :) /Markus
2018-07-17 20:33 GMT+02:00 Adam Armstrong adama@memetic.org:
But why though? :D
The poller-wrapper's entire purpose is to do what you're doing here :D
adam.
On 2018-07-12 00:36, Markus Klock wrote:
Yeah, I usually even out the server load by splitting the devices into 5 groups and starting their polling 1 minute apart with cron, like this:

0-59/5 * * * * observium /opt/observium/observium-wrapper polling -i 5 -n 0 >> /dev/null 2>&1
1-59/5 * * * * observium /opt/observium/observium-wrapper polling -i 5 -n 1 >> /dev/null 2>&1
2-59/5 * * * * observium /opt/observium/observium-wrapper polling -i 5 -n 2 >> /dev/null 2>&1
3-59/5 * * * * observium /opt/observium/observium-wrapper polling -i 5 -n 3 >> /dev/null 2>&1
4-59/5 * * * * observium /opt/observium/observium-wrapper polling -i 5 -n 4 >> /dev/null 2>&1

This makes it work with a lot fewer threads, and the load on disk and database is much lower, as only 1/5 of the data needs to be processed at the same time.
/Markus
2018-07-12 7:09 GMT+02:00 Adam Armstrong adama@memetic.org:
You should probably tune the thread count to even out the load. The goal is to be as close to 300 seconds as you can manage, otherwise you'll get spiky IO and MySQL load.
Adam.