Yes, I think that was fixed some time ago by Mike.
New CE is not too far out!
Adam.
Sent with AquaMail for Android http://www.aqua-mail.com
On 8 September 2015 19:38:31 "Moore, Cameron" <cmoore@hsutx.edu> wrote:
Yes, we're chasing two separate issues. However, I did apply the "excessive cycles" patch you provided, and I see no excessive cycles. It looks like that issue has been fixed in the latest Professional Edition.
--
Cameron Moore
Manager of Systems & Networks
Hardin-Simmons University, Technology Services
Ph: (325) 670-1506  Fx: (325) 670-1570
-----Original Message-----
From: observium [mailto:observium-bounces@observium.org] On Behalf Of Mark Martinec
Sent: Tuesday, September 08, 2015 11:28 AM
To: observium@observium.org
Subject: Re: [Observium] poller.php burns all available CPU in the last community edition
2015-09-08 Eugene Nechai wrote:
Just in case: I'm running Observium CE 0.15.6.6430 (upgraded from 0.13.10.4585 right after the update became available) and I do not have such issues with CPU usage. It's the same as it was before the upgrade (graph attached).
I haven't seen the logging overhead that Cameron has been fixing, probably because the poller in CE 0.15.6.6430 does not colorize text in its printout. Nor does xdebug with KCacheGrind show any such remaining hotspots in the poller.php run.
Note that the select-on-EOF problem I reported at the head of this topic is unrelated to what Cameron has been chasing. It is quite possible that it does not affect all installations, or that EOF on both pipes is often (but not always) seen close together in time, so the spinning may be short. Or perhaps it has already been fixed in the Professional Edition.
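For anyone who wants to see the effect in isolation, here is a tiny stand-alone test of mine (just an illustration, not Observium code; it assumes a /bin/true binary is available): once the child exits and both pipes are at EOF, stream_select() keeps reporting them as readable, so the loop never blocks and finishes its 100000 iterations almost instantly despite the 5-second timeout:

<?php
// Stand-alone demonstration (hypothetical test script, not part of Observium).
$descriptors = array(1 => array('pipe', 'w'),   // child's stdout
                     2 => array('pipe', 'w'));  // child's stderr
$proc = proc_open('/bin/true', $descriptors, $pipes);  // child exits at once

$t0 = microtime(TRUE);
$spins = 0;
while ($spins < 100000)
{
  $read = array($pipes[1], $pipes[2]);
  $write = NULL; $except = NULL;
  // With both pipes at EOF this returns immediately despite the 5 s timeout.
  stream_select($read, $write, $except, 5);
  foreach ($read as $pipe) { fread($pipe, 8192); }  // always returns ''
  $spins++;
}
printf("%d select() calls in %.3f s, none delivering data\n",
       $spins, microtime(TRUE) - $t0);
proc_close($proc);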
If anyone wants to try it, here is a hackish patch against CE 0.15.6.6430, which prints 'EXCESSIVE IDLE CYCLES' in the output of the poller.php run, if there are any:
--- includes/common.inc.php.ori	2015-09-04 20:29:39.561244285 +0200
+++ includes/common.inc.php	2015-09-08 17:59:13.600049587 +0200
@@ -618,7 +618,9 @@
 {
   $start = microtime(TRUE);
+  $idle_cycles = 0;
   //while ($status['running'] !== FALSE)
   while (feof($pipes[1]) === FALSE || feof($pipes[2]) === FALSE)
   {
+    $any_progress = 0;
     stream_select(
       $read = array($pipes[1], $pipes[2]),
@@ -634,9 +636,13 @@
       if ($pipe === $pipes[1])
       {
-        $stdout .= fread($pipe, 8192);
+        $str = fread($pipe, 8192);
+        if (strlen($str) > 0) { $any_progress = 1; }
+        $stdout .= $str;
       }
       else if ($pipe === $pipes[2])
       {
-        $stderr .= fread($pipe, 8192);
+        $str = fread($pipe, 8192);
+        if (strlen($str) > 0) { $any_progress = 1; }
+        $stderr .= $str;
       }
     }
@@ -670,4 +676,8 @@
       }
     }
+    if (!$any_progress) { $idle_cycles++; }
   }
+
+  if ($idle_cycles > 2)
+    { printf("EXCESSIVE IDLE CYCLES: %s\n", $idle_cycles); }
   if ($status['running'])
In my case there are plenty. The further apart in time the EOF conditions on the two pipes occur, the worse the idle CPU spinning gets:
$ ./poller.php -i 8 -n 1 | fgrep 'EXCESSIVE IDLE CYCLES'
EXCESSIVE IDLE CYCLES: 456
EXCESSIVE IDLE CYCLES: 2075
EXCESSIVE IDLE CYCLES: 2800
EXCESSIVE IDLE CYCLES: 1661
EXCESSIVE IDLE CYCLES: 1301
EXCESSIVE IDLE CYCLES: 295
EXCESSIVE IDLE CYCLES: 2457
EXCESSIVE IDLE CYCLES: 4217
EXCESSIVE IDLE CYCLES: 14354
EXCESSIVE IDLE CYCLES: 732
EXCESSIVE IDLE CYCLES: 11438
EXCESSIVE IDLE CYCLES: 4784
EXCESSIVE IDLE CYCLES: 10939
EXCESSIVE IDLE CYCLES: 589
...
EXCESSIVE IDLE CYCLES: 4782
EXCESSIVE IDLE CYCLES: 84
EXCESSIVE IDLE CYCLES: 518
EXCESSIVE IDLE CYCLES: 2796
EXCESSIVE IDLE CYCLES: 2260
EXCESSIVE IDLE CYCLES: 3591
EXCESSIVE IDLE CYCLES: 7266
EXCESSIVE IDLE CYCLES: 13707
EXCESSIVE IDLE CYCLES: 1509
EXCESSIVE IDLE CYCLES: 4420
EXCESSIVE IDLE CYCLES: 2103
EXCESSIVE IDLE CYCLES: 15664
...
EXCESSIVE IDLE CYCLES: 1495
EXCESSIVE IDLE CYCLES: 3592
EXCESSIVE IDLE CYCLES: 68420
EXCESSIVE IDLE CYCLES: 5430
EXCESSIVE IDLE CYCLES: 5003
EXCESSIVE IDLE CYCLES: 191203
EXCESSIVE IDLE CYCLES: 148976
EXCESSIVE IDLE CYCLES: 20664
EXCESSIVE IDLE CYCLES: 46755
EXCESSIVE IDLE CYCLES: 6071
EXCESSIVE IDLE CYCLES: 109397
...etc
A PHP stream_select() on a pipe which is already at EOF is futile (it always fires as ready), so selecting on such a pipe needs to be avoided even if a particular installation does not run into the problem.
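One way of avoiding it (only a sketch of the idea, not the actual code in either edition; the descriptor setup and example command here are mine, but the loop shape mirrors common.inc.php) is to build the select set from just those pipes that have not yet reached EOF, so stream_select() can actually block:

<?php
// Sketch: select only on pipes that are still open, so the loop blocks
// instead of spinning once a pipe has reached EOF.
$descriptors = array(1 => array('pipe', 'w'), 2 => array('pipe', 'w'));
$proc = proc_open('echo out; echo err 1>&2', $descriptors, $pipes);

$stdout = ''; $stderr = '';
while (feof($pipes[1]) === FALSE || feof($pipes[2]) === FALSE)
{
  $read = array();
  if (feof($pipes[1]) === FALSE) { $read[] = $pipes[1]; }
  if (feof($pipes[2]) === FALSE) { $read[] = $pipes[2]; }
  if (count($read) === 0) { break; }   // both pipes exhausted

  $write = NULL; $except = NULL;
  if (stream_select($read, $write, $except, 1) === FALSE) { break; }

  foreach ($read as $pipe)
  {
    $str = fread($pipe, 8192);         // '' marks EOF; feof() is TRUE afterwards
    if ($pipe === $pipes[1])      { $stdout .= $str; }
    else if ($pipe === $pipes[2]) { $stderr .= $str; }
  }
}
proc_close($proc);
printf("stdout: %sstderr: %s", $stdout, $stderr);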
Mark
_______________________________________________
observium mailing list
observium@observium.org
http://postman.memetic.org/cgi-bin/mailman/listinfo/observium