That has definitely not been my experience. Load is the main issue on our server, and it seems to be largely due to contention during polls. I've raised vm.dirty_writeback_centisecs to 5000 (the default is 500) to make writeback less frequent (sketched below), and I/O is relatively low. But we monitor a lot of systems across bad ADSL links, which means we have to run a lot of pollers in parallel to finish a poll within 5 minutes (some of our devices actually take more than 300 seconds to poll; they're disabled at the moment). This pushes the load up (we run 5 pollers per core) and makes the system very sluggish. This week I ordered a new server to replace this one, and I ended up going for 2 x 8 cores for a much smaller install than Ciro's:
Devices: 157 total (139 up, 1 down, 4 ignored, 13 disabled)
Ports:   9150 total (918 up, 6 down, 1083 ignored, 7006 shutdown)
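
For reference, the writeback tuning mentioned above boils down to the following (a sketch; 5000 is the value quoted above, the rest is just the standard sysctl mechanics):

    # apply immediately
    sysctl -w vm.dirty_writeback_centisecs=5000
    # persist across reboots
    echo "vm.dirty_writeback_centisecs = 5000" >> /etc/sysctl.conf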
Paul
On 08/24/2013 03:36 AM, Moerman, Maarten wrote:
Btw, do you actually have problems? A high load average on its own is unlikely to be an issue, unless you just don't like high numbers...
On 23 Aug 2013, at 19:21, "Ciro Iriarte" <cyruspy@gmail.com> wrote:
Hi! Has anybody considered scaling out by adding more servers instead of going to a bigger one? Using something like Open Grid Scheduler, for example (rough sketch below).
Currently I'm maxing out our server (2 x Xeon E5-2630, 16 cores) with 37500 ports, 548 devices:

cplanning:~ # uptime
 13:19pm  up 151 days 3:03,  2 users,  load average: 43.37, 43.00, 42.60
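
For scale, that load average works out to about 43 / 16 ≈ 2.7 runnable tasks per core, so the box is well past saturated rather than merely busy.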
Regards, CI.-
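
A rough sketch of what the Open Grid Scheduler idea could look like: instead of forking pollers locally, submit one poller run per device as a grid job. This assumes an Observium-style poller.php that accepts a hostname, a grid queue named pollers.q, and a devices.txt listing hostnames (all three are assumptions for illustration):

    # fan poller runs out across the grid, one job per device
    for device in $(cat devices.txt); do
        qsub -b y -cwd -N "poll-$device" -q pollers.q ./poller.php -h "$device"
    done

One obvious wrinkle: the RRD files would need to live on shared storage (or be written through rrdcached) for the results to land in one place.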