…
For few fds, this backend is a bit slower than poll and select, but
it scales phenomenally better. While poll and select usually scale
like O(total_fds), where total_fds is the total number of fds (or the
highest fd), epoll scales either O(1) or O(active_fds).

The epoll syscalls are the most misdesigned of the more advanced event
mechanisms: problems include silently dropping fds, requiring a system
call per change per fd (and unnecessary guessing of parameters), problems
with dup and so on. The biggest issue is fork races, however - if a
program forks, then I<both> parent and child processes have to recreate
the epoll set, which can take considerable time (one syscall per fd) and
is of course hard to detect.

Epoll is also notoriously buggy - embedding epoll fds should work, but
of course doesn't, and epoll just loves to report events for totally
I<different> file descriptors (even already closed ones, so one cannot
even remove them from the set) than registered in the set (especially
…

Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible,
i.e. keep at least one watcher active per fd at all times. Stopping and
starting a watcher (without re-setting it) also usually doesn't cause
extra overhead. A fork can result both in spurious notifications and in
libev having to destroy and recreate the epoll object, which can take
considerable time and thus should be avoided.

While nominally embeddable in other event loops, this feature is broken
in all kernel versions tested so far.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
…

It scales in the same way as the epoll backend, but the interface to the
kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never causes
an extra system call as with C<EVBACKEND_EPOLL>, it still adds up to two
event changes per incident. Support for C<fork ()> is very bad (but sane,
unlike epoll) and it drops fds silently in similarly hard-to-detect
cases.

This backend usually performs well under most conditions.

While nominally embeddable in other event loops, this doesn't work
everywhere, so you might need to test for this. And since it is broken
…
might perform better.

On the positive side, with the exception of the spurious readiness
notifications, this backend actually performed fully to specification in
all tests and is fully embeddable, which is a rare feat among the
OS-specific backends (I vastly prefer correctness over speed hacks).

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_ALL>