…
epoll scales either O(1) or O(active_fds).

The epoll mechanism deserves honorable mention as the most misdesigned
of the more advanced event mechanisms: mere annoyances include silently
dropping file descriptors, requiring a system call per change per file
descriptor (and unnecessary guessing of parameters), problems with dup,
returning before the timeout value, resulting in additional iterations
(and only giving 5ms accuracy while select on the same platform gives
0.1ms) and so on. The biggest issue is fork races, however - if a program
forks then I<both> parent and child process have to recreate the epoll
set, which can take considerable time (one syscall per file descriptor)
and is of course hard to detect.

Epoll is also notoriously buggy - embedding epoll fds I<should> work, but
of course I<doesn't>, and epoll just loves to report events for totally
I<different> file descriptors (even already closed ones, so one cannot
even remove them from the set) than registered in the set (especially
on SMP systems). Libev tries to counter these spurious notifications by
employing an additional generation counter and comparing that against the
events to filter out spurious ones, recreating the set when required. Last
but not least, it also refuses to work with some file descriptors which
work perfectly fine with C<select> (files, many character devices...).

Epoll is truly the train wreck analog among event poll mechanisms.

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a system call per such
incident (because the same I<file descriptor> could point to a different
I<file description> now), so it's best to avoid that. Also, C<dup ()>'ed
…
Can be used to make a call to C<ev_run> return early (but only after it
has processed all outstanding events). The C<how> argument must be either
C<EVBREAK_ONE>, which will make the innermost C<ev_run> call return, or
C<EVBREAK_ALL>, which will make all nested C<ev_run> calls return.

This "break state" will be cleared when entering C<ev_run> again.

It is safe to call C<ev_break> from outside any C<ev_run> calls, too.

=item ev_ref (loop)

=item ev_unref (loop)
