2013/2/14 Adam Armstrong <adama@memetic.org>

On Thu, 14 Feb 2013 01:19:20 -0300, Ciro Iriarte <cyruspy@gmail.com>
wrote:
> 2013/2/14 christopher barry <cbarry@rjmetrics.com>
>
>>
>> Why not RAM? Build a box w/256G or more, and keep them all in tmpfs
>> during operation. (ditch Solaris. It was cool once, but...)
>>
>> Copy them up into tmpfs on boot, then cron an rsync to disk between
>> polling or even occasionally, depending on pain tolerance.
>>
>> -C
>>
>>
> Hmm, well, I could only steal a machine with 64GB of RAM this time, but
> ZFS + SSD cache + regular spindles sounds cleaner than rsyncing 150GB
> every 2 hours...
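
For reference, a minimal sketch of the tmpfs-plus-rsync approach quoted
above; the mount point, tmpfs size, on-disk path, and Linux-style fstab
are assumptions for illustration, not anything settled in this thread:

    # /etc/fstab: a tmpfs large enough to hold the whole RRD tree
    tmpfs  /var/lib/observium/rrd  tmpfs  size=48g  0  0

    # at boot: seed the tmpfs from the on-disk copy
    rsync -a /data/rrd-ondisk/ /var/lib/observium/rrd/

    # crontab: flush the tmpfs back to disk every 2 hours
    0 */2 * * *  rsync -a /var/lib/observium/rrd/ /data/rrd-ondisk/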

Why the SSD cache? Why not just use the SSD?

"SSD disk size" < "rrd directory size" mostly, and I/O to a mid range storage shouldn't hurt either...
 

Having the filesystem layer deal with the caching is likely to add even
more overhead, and you're also relying on the cache to actually cache the
things you're accessing.

I can't imagine anything with more insight into access patterns than the
filesystem... ZFS has a second-level read cache (L2ARC) and a write cache
(the ZIL, which mostly helps with synchronous writes). I'm not saying it's
guaranteed to make a difference in this case, but I'd like to gather stats
in case I finally add those disks...
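
If those SSDs do get added, attaching them to an existing pool is one
command per role; a sketch, assuming a pool named "tank" and Solaris-style
device names, both of which are placeholders:

    # SSD as second-level read cache (L2ARC)
    zpool add tank cache c2t0d0

    # SSD as separate intent log (ZIL/SLOG), helps synchronous writes
    zpool add tank log c2t1d0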

Last time I checked, disk svctime wasn't high, but system CPU time was.
Maybe network delay?...
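
A quick way to re-check that split on Solaris with the stock tools (the
5-second interval is an arbitrary choice):

    iostat -xn 5    # per-device service times (wsvc_t/asvc_t) and %b busy
    mpstat 5        # per-CPU usr/sys split; high sys points away from disk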
 
I would first ditch Solaris and see if the performance changes.

Right now I don't have spare time to do it all again from scratch, sadly. 
 
 
How long does it take to run /one/ instance of the poller?

I'll check the logs again in the morning once I get to the office. 
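
For reference, timing a single device by hand is one way to get that
number; a sketch, assuming the stock Observium poller, the default install
path, and a placeholder hostname:

    cd /opt/observium
    time ./poller.php -h switch01.example.com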
 

adam.
_______________________________________________
observium mailing list
observium@observium.org
http://postman.memetic.org/cgi-bin/mailman/listinfo/observium

Regards,

--
Ciro Iriarte
http://cyruspy.wordpress.com